Finding Your People

by Dr. Leah Poloskey, Assistant Clinical Professor, Department of Exercise and Rehabilitation Science, Merrimack College, and 

by Dr. Sarah Benes, Associate Clinical Professor, Department of Nutrition and Public Health, Merrimack College

(Post #1: Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

How it all began . . .

Reflecting on the journey that led to a Guest Editor spot and a mini-series of blog posts about metacognition with our colleagues from across campus was a great opportunity to reconnect with the power of community in this work. And it all began with a problem . . .

We had been discussing how challenging it was to engage students in our Health Science classes (Leah teaches in the Exercise and Rehabilitation Department and Sarah teaches in the Nutrition and Public Health Department). We decided to work together to investigate more deeply (rather than just dwelling on the challenge). We applied to host a Teaching Circle, which is an informal structure at Merrimack College that allows faculty and staff to come together around common interests. Teaching Circle facilitators are awarded small stipends for their time and effort in developing and running these opportunities. We believed that the Teaching Circle structure would provide a great opportunity for us to work within existing campus initiatives to enhance collaboration and engagement with faculty and staff across campus.

Our first Teaching Circle was about student engagement. We ended up exploring mindset and the ways that mindset can impact engagement. We then conducted a research study in which we developed a tool that is essentially a measure of metacognitive states (Mandeville et al., 2018). With this tool we could assess a student’s self-appraisal of their learning, which offers a window into the student’s intellectual development, mindset, and metacognition. Now we had a way to assess these constructs, but what next?

We decided to apply for another Teaching Circle with a focus specifically on metacognition. Our idea was approved, and we were able to engage an even larger group of faculty, staff, and administrators, from academic support to the psychology and business departments and more! Everyone in the group was interested in learning more about ways to support metacognition in our students in our various spaces. And this was the beginning of this blog post series!

What We Learned

Each meeting brought together a different group of people, depending on schedules and availability. We had core folks who came each time and then a variety of others who came when they were able. Thinking about it now, we remember every meeting being exciting, dynamic, and invigorating.

We didn’t have set agendas and we didn’t have much reading or preparation (unless people asked for items to read). We really just came together to talk and share our successes and challenges related to supporting students in developing their metacognitive skills and to brainstorm ideas to try in our spaces. This opportunity for informal community gathering and building was a needed breath of fresh air. We always left energized for the work ahead (and we think the other participants did too!).

In fact, as a result of the Metacognition Teaching Circle, we embarked on a whole new project in which we used the MINDS survey (Mandeville et al., 2018) at the beginning of the semester and then created “low touch” interventions to support metacognition and growth mindset depending on how students scored on the scale. From this we learned that many students are not familiar with the concepts of metacognition and mindfulness, that many actually appreciated the tips and strategies we sent them (and some even used them!), and that students felt that more learning on these topics would be beneficial.

This then led us to another study, this time examining faculty perceptions of metacognition. We were excited about this line of inquiry because our experience suggested that folks in certain settings, or with certain backgrounds, are likely to be more familiar with metacognition, and that some faculty may not have the understanding or skills to teach metacognition in their courses. For faculty, understanding metacognition is important because it enables students to become flexible, self-directed learners. Teaching and supporting metacognition in the classroom is impactful: it allows students to become aware of their own thinking and to become proficient in choosing appropriate thinking strategies for different learning tasks. Unfortunately, this line of inquiry did not last long due to COVID-19, but we hope to pick it back up this year, as we feel it is an important area that could be impactful for faculty and students.

While the research ideas and changes to practice are exciting and were impactful benefits of our Teaching Circles, one of our biggest takeaways was the reminder of the importance of finding others who are also doing the work. Sometimes on our campus, and we suspect it is the case at other institutions as well, we get siloed, and often our meetings are with the same folks about the same topics. Being able to facilitate and participate in a cross-campus initiative about a passion topic was an amazing opportunity to meet new people, make new connections, gain different perspectives, and create new ideas and strategies to try. We found many people doing great work with students on our campus across so many different departments and schools, and most importantly, we found “our people” – people you can go to when you are stuck, people you can bounce ideas off of and collaborate with . . . we found our “metacognition people” (some of them at least).

While this is not a “new” or “cutting edge” idea, coming off a year in which we were separated (in so many ways), we were reminded of the power of connections with others to maintain and sustain ourselves as academics and as humans. We wanted to share that in the guest series by not only showcasing some of the work our colleagues are doing but also reminding readers to find their people . . . whether they are on your campus or off, whether you meet in person or virtually – or only via Tweets on Twitter . . . find the people who can help you maintain, sustain, and grow your interest, skills, passion, and joy!

We hope you enjoy reading the work of our colleagues and that it helps you on your journey.

References

Mandeville, D., Perks, L., Benes, S., & Poloskey, L. (2018). The Mindset and Intellectual Development Scale (MINDS): Metacognitive assessment for undergraduate students. International Journal of Teaching and Learning in Higher Education, 30(3).


Revising the Metacognitive Awareness Inventory (MAI) to be More User-Friendly

By: Melissa Terlecki, PhD, Cabrini University, PA

Background

Measuring metacognition, or the awareness of one’s thoughts, is no easy task. Self-report may be limited, and we may overestimate the frequency with which we use that awareness to self-regulate. However, in my quest to assess metacognition, I found the MAI, or the Metacognitive Awareness Inventory (Schraw and Dennison, 1994). The MAI, comparatively speaking, is one of the most widely used instruments. Schraw and Dennison found an alpha coefficient of .91 on each factor of the MAI and .95 for the entire MAI, which indicates strong reliability. Pintrich (2000) agrees that the MAI has external validity, given that MAI scores and students’ academic achievement are highly correlated.
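For readers less familiar with the alpha statistic, here is a minimal sketch (in Python, with made-up 0/1 responses rather than Schraw and Dennison’s data) of how Cronbach’s alpha is computed from a respondents-by-items matrix; values near 1, like the .91 and .95 reported above, indicate high internal consistency.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical true/false (1/0) responses: 6 respondents x 4 items
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
])
print(round(cronbach_alpha(responses), 2))
```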

The Problem

Despite the wide use and application of the MAI, I found the survey’s response scale ill-fitting and constrictive. The survey consists of 52 questions with true or false response options. Some of the behaviors and cognitions measured on the MAI include, “I consider several alternatives to a problem before I answer,” “I understand my intellectual strengths and weaknesses,” “I have control over how well I learn,” and “I change strategies when I fail to understand,” just to name a few (see https://services.viu.ca/sites/default/files/metacognitive-awareness-inventory.pdf).

Though these questions are valid, having to respond dichotomously with an extreme “true” (as in, I always do this) or “false” (as in, I never do this) is problematic. Yes/no responses also make for difficult quantitative analysis: all-or-nothing responses make hypothesis testing (non-parametric testing) challenging. I felt that if the scale were changed to be Likert-type, then participants could more accurately self-report how often they exhibit these behaviors or cognitions, and we could more readily assess variability and change.

The Revised MAI

Thus, I revised the MAI to use a five-point Likert-type rating scale, ranging from “I never do this” to “I do this always” (see Figure 1). Five points also allow a middle rating with two extremes on either side (never/always). It is important to note that the original content of the survey questions has not been altered.
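To make the change concrete, here is a minimal scoring sketch in Python. The endpoint anchor labels come from this post and the item wording is from Schraw and Dennison (1994); the three intermediate anchor labels and the example responses are hypothetical.

```python
# Map the five response options to numeric ratings (endpoint labels from the post;
# the three intermediate labels are assumed here purely for illustration).
LIKERT = {
    "I never do this": 1,
    "I rarely do this": 2,
    "I sometimes do this": 3,
    "I often do this": 4,
    "I do this always": 5,
}

# Hypothetical responses to three original MAI items.
responses = {
    "I consider several alternatives to a problem before I answer": "I often do this",
    "I understand my intellectual strengths and weaknesses": "I sometimes do this",
    "I change strategies when I fail to understand": "I do this always",
}

scores = [LIKERT[answer] for answer in responses.values()]
mean_rating = sum(scores) / len(scores)       # ranges from 1 (never) to 5 (always)
print(f"Mean rating: {mean_rating:.2f}")      # a true/false item yields only 0 or 1
```

Unlike a dichotomous score, the mean rating can shift in small increments between administrations, which is what makes pre/post comparisons of frequency feasible.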

My recent findings (Terlecki & McMahon, 2018; Terlecki & Oluwademilade, in preparation) show the revised MAI to be effective as a pre- and post-test measure for assessing growth due to metacognitive instruction in college students, compared to controls receiving varying levels of instruction.

Figure 1. Revised MAI Likert scale, showing five example MAI items with the new Likert-scale response options (Terlecki & McMahon, 2018). Response scale adapted from Schraw and Dennison (1994) with permission from Sperling (Dennison).

In our longitudinal sample of roughly 500 students, results showed that students exposed to direct metacognitive instruction (across a one-semester term) yielded the greatest improvements on the revised MAI (compared to controls), although maturation (age and level in school) had a moderating effect. Thus, we concluded that students who were deliberately taught metacognitive strategies did exhibit an increase in their metacognitive awareness, as measured by the revised MAI, regardless of initial levels of self-awareness. In other words, the older one is, the greater the likelihood one may be self-aware; however, explicit metacognitive intervention still yields improvements.

These changes might not have been elucidated using the original, dichotomous true/false response options. The revised MAI is a useful tool for measuring such metacognitive behaviors and whether changes in frequency occur over time or with intervention. Likewise, anecdotal evidence from my participants, as well as from researchers, supports the ease of reporting using this Likert scale, in comparison to the frustration of using the two-point bifurcation. Still, usage of the revised MAI in more studies will be required to validate it.

Suggestions for Future Usage of the MAI & Call for Collaboration

The Metacognitive Awareness Inventory (MAI) is a common assessment used to measure metacognition. Quantifying metacognition proves challenging, yet this revised instrument appears promising and has already provided evidence that metacognition can grow over time. The addition of a wider range of response options should make it more useful for drilling down to the frequency of metacognitive behaviors and thinking.

Validation studies on the revised scoring have yet to be conducted; thus, if other researchers and/or authors are interested in piloting the revised MAI, please contact me (* see contact information below). It would be great to collaborate and collect more data using the Likert form, as well as build a larger sample that would allow us to run more advanced statistics on the reliability and validity of the new scaling.

References

Pintrich, P.R. (2000). Issues in self-regulation theory and research. Journal of Mind and Behavior, 21, 213-220.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475.

Terlecki, M. & McMahon, A. (2018). A call for metacognitive intervention: Improvements due to curricular programming and training. Journal of Leadership Education, 17(4), doi:10.12806/V17/I4/R8

Terlecki, M. & Oluwademilade, A. (2020). The effects of instruction and maturity on metacognition (in preparation).

*Contact: If looking to collaborate or validate the revised instrument, please contact Melissa Terlecki at mst723@cabrini.edu.

If you’d like to compare the MAI to other metacognition assessment inventories, please see “A Non-Exhaustive List of Quantitative Tools to Assess Metacognition” by Jessica Santangelo.


Selecting a quantitative measure of metacognition  

by Dr. Jessica Santangelo, Hofstra University

If you are interested in metacognition and promoting the development of metacognitive skills, you may also be interested in measuring metacognition. But how does one assess a person’s metacognitive development?

Metacognitive development can be assessed via quantitative or qualitative measures. Quantitative measures include self-report measures, often using Likert-style survey instruments, while qualitative measures use coding of responses to open-ended prompts (e.g., Stanton et al. 2015). While quantitative measures are generally easier and faster to score, a drawback is that self-report measures are not always accurate (Schunk 2008). Qualitative data can be richer, providing deeper and more nuanced information, but are much more labor-intensive and time-consuming to analyze. Ideally, one uses a combination of quantitative and qualitative data to develop as complete a picture of metacognitive development as possible.

When I set out to assess the metacognitive development of 484 (!) students, I was overwhelmed by the number of quantitative tools available. The focus of the tools varies. Some tools attempt to assess metacognition directly while others assess factors or attributes associated with metacognition (e.g., study skills, self-regulated learning). Some are not explicitly intended to assess metacognition (e.g., LASSI), but are used by some authors as an indicator of metacognitive development (e.g., Downing et al. 2007, 2011). Some have been through many iterations over the years (e.g., ASI, RASI, and ASSIST) while others remain relatively unchanged (e.g., MAI, MSLQ). Some are free while others have a per-student fee. Some are longer (120 items, ILS) and others are shorter (18 items, RASI).

How does one choose the “best” quantitative tool? Unfortunately, there is no easy answer. It depends on the specific question being addressed and the amount of time and money available to administer the tool. I compiled a (non-comprehensive) list of tools I encountered in my search along with some information about each one to assist anyone looking for a quantitative measure of metacognitive development.

For my 484-student project, I chose to use the Metacognitive Awareness Inventory (MAI; Schraw and Dennison 1994) in combination with coding responses to open-ended prompts I created. I chose the MAI because it purports to measure metacognition directly (rather than being a study or learning skills inventory), is free, and is of moderate length (52 items). Others have found correlations between MAI results and other quantitative measures of student success (e.g., GPA and end of course grades), even suggesting using the MAI as a screening tool to identify students who could benefit from metacognition training (Young and Fry 2008). These characteristics fit with the questions I was asking: Can we rapidly (and accurately) assess metacognitive development at the beginning of an introductory course? Does including explicit instruction and implicit practice with metacognitive skills in a course increase student metacognitive development?
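As a rough illustration of the kind of screening analysis suggested by Young and Fry (2008), the sketch below (Python, with randomly generated placeholder data and hypothetical column names, not my actual dataset) totals the 52 item scores per student and correlates that total with GPA; with real data, that correlation is the quantity of interest.

```python
import numpy as np
import pandas as pd

# Placeholder data: one row per student, 52 MAI item columns (scored 0/1) plus GPA.
# Because these values are random, the resulting correlation will be near zero.
rng = np.random.default_rng(0)
item_cols = [f"mai_{i + 1}" for i in range(52)]
df = pd.DataFrame(rng.integers(0, 2, size=(484, 52)), columns=item_cols)
df["gpa"] = rng.uniform(2.0, 4.0, size=484)

df["mai_total"] = df[item_cols].sum(axis=1)    # simple total score across all 52 items
print(df["mai_total"].corr(df["gpa"]))         # Pearson r between MAI total and GPA
```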

While coding the open-ended responses is taking months to complete, it has revealed some clear and interesting patterns. In contrast, the quantitative data from the MAI, though gathered in about 5 minutes running scantron sheets through a machine, show no patterns at all. There does not appear to be any relationship between the quantitative MAI data and the qualitative data or any other measure of student success (GPA, exam and course grades, etc.). I’m not entirely surprised – metacognitive skills are unlikely to be wholly captured by a number generated by a 52-item self-report questionnaire. However, given the results of others (e.g., Sperling et al 2004, Young and Fry 2008) I was hopeful there would be at least some relationship between the quantitative and qualitative results.

This is not to say that rapid assessments via self-report questionnaires are worthless. It is simply a caution not to rely on these quantitative tools as one’s sole measure of metacognitive development. Indeed, I have colleagues who have had more “success” with tools other than the MAI (e.g., with the MSLQ), where success is defined as the quantitative tool reflecting similar patterns or trends as other, more time-consuming qualitative measures.

As with many things in science, there is no easy answer. My hope is that this compilation of available tools makes the choice of which one to use a little easier.

For more in-depth reading on measuring metacognition, I recommend:

Mogashana, D., J. M. Case, and D. Marshall. 2012. What do student learning inventories really measure? A critical analysis of students’ responses to the approaches to learning and studying inventory. Studies in Higher Education 37:783–792.

Schraw, G., and J. Impara, eds. 2000. Issues in the Measurement of Metacognition. Buros Institute of Mental Measurements, Lincoln, NE.

References

Downing, K., F. Ning, and K. Shin. 2011. Impact of problem‐based learning on student experience and metacognitive development. Multicultural Education & Technology Journal 5:55–69.

Downing, K., R. Ho, K. Shin, L. Vrijmoed, and E. Wong. 2007. Metacognitive development and moving away. Educational Studies 33:1–13.

Schraw, G., and R. S. Dennison. 1994. Assessing metacognitive awareness. Contemporary Educational Psychology 19:460–475.

Schunk, D. H. 2008. Metacognition, self-regulation, and self-regulated learning: research recommendations. Educational Psychology Review 20:463–467.

Sperling, R. A., B. C. Howard, R. Staley, and N. DuBois. 2004. Metacognition and self-regulated learning constructs. Educational Research and Evaluation 10:117–139.

Stanton, J. D., X. N. Neider, I. J. Gallegos, and N. C. Clark. 2015. Differences in metacognitive regulation in introductory biology students: when prompts are not enough. CBE-Life Sciences Education 14:ar15.

Young, A., and J. Fry. 2008. Metacognitive awareness and academic achievement in college students. Journal of the Scholarship of Teaching and Learning 8:1–10.

 


Quantifying Metacognition — Some Numeracy behind Self-Assessment Measures

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

Early this year, Lauren Scharff directed us to what might be one of the most influential reports on quantification of metacognition, which is Kruger and Dunning’s 1999 “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” In the 16 years that have since elapsed, a popular belief sprang from that paper that became known as the “Dunning-Kruger effect.” Wikipedia describes the effect as a cognitive bias in which relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than it really is. Wikipedia thus describes a true metacognitive handicap: a lack of ability to self-assess. I consider Kruger and Dunning (1999) seminal because it represents what may be the first attempt to establish a way to quantify metacognitive self-assessment. Yet, as time passes, we always learn ways to improve on any good idea.

At first, quantifying the ability to self-assess seems simple. It appears that comparing a direct measure of confidence to perform taken through one instrument with a direct measure of demonstrated competence taken through another instrument should do the job nicely. For people skillful in self-assessment, the scores on both self-assessment and performance measures should be about equal. Seriously large differences can indicate underconfidence on one hand or “illusory superiority” on the other.
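In code, that naive comparison is just a signed difference; the minimal sketch below (with hypothetical 0-100 scores) is only the starting point that the rest of this post complicates.

```python
def self_assessment_error(confidence: float, competence: float) -> float:
    """Signed gap between self-assessed and demonstrated competence (0-100 scale).
    Positive values suggest overconfidence; negative values suggest underconfidence."""
    return confidence - competence

print(self_assessment_error(confidence=85, competence=60))  # +25: inflated self-assessment
print(self_assessment_error(confidence=55, competence=78))  # -23: underconfidence
```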

The Signal and the Noise

In practice, measuring self-assessment accuracy is not nearly so simple. The instruments of social science yield data consisting of the signal, which expresses the relationship between our actual competency and our self-assessed feelings of competency, plus significant noise generated by human error and inconsistency.

In analogy, consider the signal as your favorite music on a radio station, the measuring instrument as your radio receiver, the noise as the static that intrudes on your favorite music, and the data as the actual mix of noise and signal that you hear. The radio signal may truly exist, but unless we construct suitable instruments to detect it, we will not be able to generate convincing evidence that the radio signal even exists. Such failures can lead to the conclusion that metacognitive self-assessment is no better than random guessing.

Your personal metacognitive skill is analogous to an ability to tune to the clearest signal possible. In this case, you are “tuning in” to yourself—to your “internal radio station”—rather than tuning the instruments that can measure this signal externally. In developing self-assessment skill, you are working to attune your personal feelings of competence to reflect the clearest and most accurate self-assessment of your actual competence. Feedback from the instruments has value because they help us to see how well we have achieved the ability to self-assess accurately.

Instruments and the Data They Yield

General, global questions such as: “How would you rate your ability in math?” “How well can you express your ideas in writing?” or “How well do you understand science?” may prove to be crude, blunt self-assessment instruments. Instead of single general questions, more granular instruments like knowledge surveys that elicit multiple measures of specific information seem needed.

Because the true signal is harder to detect than often supposed, researchers need a critical mass of data to confirm the signal. Pressures to publish in academia can cause researchers to rush to publish results from small databases obtainable in a brief time rather than spending the time, sometimes years, needed to generate the database of sufficient size that can provide reproducible results.

Understanding Graphical Depictions of Data

Some graphical conventions that have become almost standard in the self-assessment literature depict ordered patterns even when the underlying data are random noise. These patterns invite researchers to interpret the order as produced by the self-assessment signal. Graphing nonsense data generated from random numbers in varied graphical formats can reveal what pure randomness looks like when depicted in any graphical convention. Knowing the patterns of randomness provides the numeracy needed to understand self-assessment measurements.
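As one illustration (my own minimal sketch in Python, not the simulations from the paper cited below), generating completely independent random “competence” and “confidence” scores and then averaging self-assessment within competence quartiles reproduces the familiar pattern in which the lowest quartile appears to overestimate and the highest appears to underestimate, even though the data contain nothing but noise.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
competence = rng.uniform(0, 100, n)   # "actual" percentile score: pure random noise
confidence = rng.uniform(0, 100, n)   # "self-assessed" percentile score: independent noise

df = pd.DataFrame({"competence": competence, "confidence": confidence})
df["quartile"] = pd.qcut(df["competence"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Mean self-assessment hovers near 50 in every quartile, so when plotted against
# mean actual score it mimics the classic "unskilled and unaware" staircase.
print(df.groupby("quartile", observed=True)[["competence", "confidence"]].mean().round(1))
```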

Some obvious questions I am anticipating follow: (1) How do I know if my instruments are capturing mainly noise or signal? (2) How can I tell when a database (either my own or one described in a peer-reviewed publication) is of sufficient size to be reproducible? (3) What are some alternatives to single global questions? (4) What kinds of graphs portray random noise as a legitimate self-assessment signal? (5) When I see a graph in a publication, how can I tell if it is mainly noise or mainly signal? (6) What kind of correlations are reasonable to expect between self-assessed competency and actual competency?

Are There Any Answers?

Getting some answers to these meaty questions requires more than a short blog post, but some help is just a click or two away. This post directs readers to “Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency” (Numeracy, January 2016), with acknowledgments to my co-authors Christopher Cogan, Steven Fleisher, Eric Gaze and Karl Wirth for their infinite patience with me on this project. Numeracy is an open-access journal, and you can download the paper for free. Readers will likely see the self-assessment literature in different ways after reading the article.


Metacognitive Awareness and Academic Achievement in College Students

“Schraw and Dennison (1994) developed the Metacognitive Awareness Inventory (MAI) to assess metacognitive knowledge and metacognitive regulation, which they referred to as the knowledge of cognition factor and the regulation of cognition factor.” Young and Fry’s article discusses the correlations between final course grades, GPA, and scores on the MAI (Metacognitive Awareness Inventory). Findings show that scores on the MAI differ significantly between undergraduate and graduate students.

Young, A., & Fry, J. (2008). Metacognitive awareness and academic achievement in college students. Journal of the Scholarship of Teaching and Learning, 8(2), 1-10.


 


The relationship between goals, metacognition, and academic success

In this article Savia Coutinho investigates the relationship between mastery goals, performance goals, metacognition (using the Metacognitive Awareness Inventory), and academic success.

Coutinho, S. (2007). The relationship between goals, metacognition, and academic success. Educate, 7(1), 39-47.


Metacognitive Development in Professional Educators

Stewart, Cooper, and Moulding investigate adult metacognitive development, specifically comparing pre-service teachers and practicing teachers. They used the Metacognitive Awareness Inventory and found that metacognition improves significantly with age and with years of teaching experience, but does not differ by gender or level of teaching (Pre-K through post-secondary).

Stewart, P. W., Cooper, S. S., & Moulding, L. R. (2007). Metacognitive development in professional educators. The Researcher, 21(1), 32-40.


Supports and Barriers to Students’ Metacognitive Development in a Large Intro Chemistry Course

by Ashley Welsh, Postdoctoral Teaching & Learning Fellow, Vantage College

First off, I must admit that this blog post has been a long time coming. I was fortunate enough to meet both John Draeger and Lauren Scharff at the ISSOTL conference in Quebec City in October of 2014. Their “Improving With Metacognition” (IWM) poster was a beacon for someone like me who is engaged with metacognition in both my teaching and research. I was thrilled to know there were individuals creating and contributing to a repository of literature and reflections surrounding metacognition. This past January, John asked me to contribute a blog post to the website; however, I thought it best to defer my writing until after the completion of my PhD this past spring. Thus, here I am. Ready to write.

For the past 7 years I have been actively engaged with undergraduate science education and research at the University of British Columbia (UBC). Within my research and teaching, I have become increasingly aware of students’ concerns with developing and adapting the appropriate study habits/strategies for success in their introductory courses. This concern was echoed by several of my colleagues teaching large (300+ students/section) introductory math and science courses.

This growing concern led me to explore students’ metacognitive development in two sections of a large, second-year introductory organic chemistry course for biological science majors (~245 students/section). Across the literature and at UBC, this course has a reputation as a challenging, cumulative course in which students often fail to develop meaningful learning strategies and fall behind (Grove & Bretz, 2012; Lynch & Trujillo, 2011; Zhao et al., 2014). As a result of its reputation, the instructor with whom I was working designed several formative assessments (e.g. bi-weekly in-class quizzes, written reflections), scaffolded in-class activities (e.g. targeted study strategy readings and discussion), and workshops to improve students’ learning strategies. That is, to improve their ability to control, monitor, evaluate, and plan their learning processes (Anderson & Nashon, 2007; Thomas, 2012). Despite students’ high completion of these targeted activities/homework, many still seemed to be struggling with how to study effectively. As such, we were curious to understand the barriers and supports for students’ metacognitive development in this particular course.

My research adopted an interpretive case study approach (Creswell, 2009; Stake, 1995) with data being collected via a pre/post metacognitive instrument, a student feedback survey, classroom observations, and student interviews. At this point in time I will not get into the nitty gritty details of my thesis, but instead, will draw on a few of the main observations/themes that emerged from my work.

  1. High stakes assessments may overshadow resources designed for metacognitive development: Within this course, students placed considerable emphasis on high stakes assessment as a means for studying, learning, and reflection. Despite students perceiving the formative assessment measures (e.g. in-class quizzes, homework assignments, targeted study strategy activities) as useful to their learning, the majority of them identified the midterm and final examinations as driving their studying and behaviours. The examinations were worth roughly 75% of students’ grades and, as such, students expressed being more concerned with their performance on these high stakes assessments than with their own study strategies. Students indicated that because the formative activities and workshops were only worth about 15% of their grade, they rarely reflected back on these resources or implemented the advised learning strategies. While these resources were designed to provide ongoing feedback on students’ learning strategies and performance, students mentioned that their performance on the first midterm exam was the primary crossroad at which they would explicitly reflect upon their learning strategies. As one student mentioned, “The midterm is the first major point at which you realize you didn’t understand things”. Unfortunately this was often too late in the semester for most students to effectively change their strategies.
  2. The majority of students reported difficulty implementing metacognitive strategies for enhanced learning: While many students were aware of their weaknesses and lack of concentration when studying, they still struggled with effectively monitoring, evaluating and planning their learning. One student mentioned that “while I do study hard, I don’t think I study smart”. Even when students were aware of their issues, implementing change was difficult as they weren’t exactly sure what to do. Despite the instructor modeling effective strategies and providing multiple opportunities for students to reflect on their learning, several students had difficulty with acknowledging, recognizing, or implementing this advice. Students unanimously praised the efforts of the instructor and the multiple resources she created to support their learning, but outside of class, students often struggled with staying on task or changing their behaviours/attitudes. Some students mentioned they were more concerned with getting a question right than with understanding the problem solving process or with implementing the appropriate strategies for learning. The majority of students I spoke to indicated that throughout their education they had rarely received explicit advice about how to study and some even mentioned that despite writing down the advice they received in class, they were “far too lazy to change”. With learning strategies not taking a primary role in their previous and current education, it’s not surprising that most students found it difficult to implement appropriate strategies for learning.
  3. Students emphasized the importance of gaining awareness of oneself as a learner and seeking help from others: While students acknowledged that the demanding course material and high-stakes assessments were barriers to their learning, they also noted the critical influence that their own strategies and abilities as learners had on their experience and performance. Some students viewed their own stubbornness or personal issues as reasons why they were “too lazy to change” or more likely to “stick with what I already know. Like memorizing and cramming”. When asked to provide advice for incoming students, all of the students I interviewed (n=26) mentioned the necessity for students to “know yourself and what suits you best. And change it – experiment with it. Know how you study. Know that.” This comment was echoed by several students who emphasized the need for every student to be aware of their weaknesses as learners and to actively and immediately seek help from others when concerned or confused. Students who exhibited effective learning strategies were more likely to attend office hours, to create study groups, and to implement and evaluate the instructor’s study advice. Furthermore, these students could explicitly articulate the strategies they used for studying and could identify which course resources were most influential to their learning approaches.

The three themes described above are only a snapshot of some of the issues unveiled within my doctoral research. They have led me to consider more research that could explore:

  • How increasing the weight (percentage of the final grade) of the formative assessment/activities relative to the high-stakes examinations might impact students’ learning strategies/behaviours;
  • How to appropriately shift students’ fixations on grades to that of understanding and learning;
  • How we might better support students in seeing value in activities, resources, or low-stakes assessment that have been designed to support them as metacognitive, confident learners; and
  • How we might achieve these assessment and learning goals in large, introductory science courses.

I look forward to any comments/questions you have on this topic!

-Ashley

——————————–

Anderson, D., & Nashon, S. (2007). Predators of knowledge construction: Interpreting students’ metacognition in an amusement park physics program. Science Education, 91(2), 298-320. doi: 10.1002/sce.20176

Creswell, J. W. (2009). Research design, qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.

Grove, N. P., & Bretz, S. L. (2012). A continuum of learning: from rote memorization to meaningful learning in organic chemistry. Chemistry Education Research and Practice, 13, 201-208.

Lynch, D. J., & Trujillo, H. (2011). Motivational beliefs and learning strategies in organic chemistry. International Journal of Science and Mathematics Education, 9(1351- 1365).

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Thomas, G. (2012). Metacognition in science education: Past, present, and future considerations. In B. J. Fraser, K. Tobin & C. J. McRobbie (Eds.), Second International Handbook of Science Education (pp. 131-144): Springer International Handbooks of Education.

Zhao, N., Wardeska, J. G., McGuire, S. Y., & Cook, E. (2014). Metacognition: An effective tool to promote success in college science learning. Journal of College Science Teaching, 43(4), 48-54.


Comprehension Monitoring: The Role of Conditional Knowledge Part 2

by Antonio Gutierrez, Georgia Southern University

In my previous post, I discussed the role conditional knowledge (i.e., the why, when, and where to apply strategies given task demands) plays in learners’ ability to calibrate their judgments of performance against their actual performance. This is in light of debates about the relations between the various dimensions of metacognition. Calibration is a component of one’s ability to monitor comprehension, which is a regulatory function. Conditional knowledge, on the other hand, is part of the knowledge component of metacognition. As a researcher, I often wonder whether, instead of assuming that these various metacognitive functions are related, we should empirically test these assumptions. In metacognitive research it is often assumed that the knowledge and regulation aspects of metacognition are related. From a theoretical perspective, this makes sense. However, to assume that this is the case with all samples and populations may be a stretch, especially given the diversity and individual differences among learners.

In this vein, I am currently seeking ethics approval to conduct research with middle school students because this is an understudied population with respect to metacognition. In this proposed research I plan not only to investigate calibration among middle school students and the influence metacognitive strategy training has on learners’ calibration, but also to empirically assess the associations among the eight dimensions of metacognition (Knowledge: declarative, procedural, and conditional; Regulation: planning, information management, debugging strategies, comprehension monitoring, and evaluation of learning). I will also attempt to test the predictive power of various components of metacognition on learners’ calibration. I am especially interested in empirically measuring the association between conditional knowledge and calibration as well as the predictive power of conditional knowledge on calibration. I expect that metacognitive strategy training will improve learners’ performance, confidence judgments, and calibration. I also suspect that those with greater conditional knowledge will have better calibration, and hence, I expect conditional knowledge to strongly predict calibration.
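For readers who want a concrete sense of how calibration can be quantified, here is a minimal sketch (Python, with hypothetical item-level judgments on a 0-100 scale) of two commonly reported indices, absolute accuracy and bias; these are illustrative examples rather than the exact measures planned for the study described above.

```python
import numpy as np

def calibration_indices(confidence, performance):
    """Two simple calibration indices for item-level judgments scored 0-100.

    absolute_accuracy: mean squared gap between confidence and performance
                       (smaller values = better calibrated)
    bias:              mean signed gap (positive = overconfidence,
                       negative = underconfidence)
    """
    c = np.asarray(confidence, dtype=float)
    p = np.asarray(performance, dtype=float)
    return {"absolute_accuracy": np.mean((c - p) ** 2), "bias": np.mean(c - p)}

# Hypothetical learner: highly confident on three items but correct on only one.
print(calibration_indices(confidence=[90, 80, 40], performance=[100, 0, 0]))
```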

This particular study is one in a series of scientific investigations of the validity of theoretical claims made when researchers discuss metacognition. In my attempt to provide educators a toolkit of domain-general metacognitive strategies they can readily apply in their classrooms, this series of studies will help me provide the empirical evidence necessary to demonstrate the utility and relevance of metacognitive strategies to scholars and practitioners alike. These strategies have been adapted from an unpublished pilot study I conducted prior to my dissertation. This research will help me continue to refine these strategies to better suit adolescents. Moreover, it will shed some light on the link between conditional metacognitive knowledge and calibration, a topic that began with earlier posts. Stay tuned for preliminary results of my first investigation.


Measuring Metacognitive Judgments

In Gregg Schraw’s (2009) chapter, Measuring Metacognitive Judgments, he artfully provides a taxonomy of calibration measures that attempt to assess metacognitive judgments of learning. For more information, follow the hyperlink below.

Schraw, G. (2009). Measuring metacognitive judgments. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education, 415.


Effects of Strategy Training and Incentives on Students’ Performance, Confidence, and Calibration

“This study examined the effect of strategy instruction and incentives on performance, confidence, and calibration accuracy. Individuals (N = 107) in randomly assigned treatment groups received a multicomponent strategy instruction intervention, financial incentives for high performance, or both. The authors predicted that incentives would improve performance, while strategy instruction would improve performance, confidence, and calibration accuracy as a result of better monitoring and self-regulation of learning. The authors compared pre- and posttest items and 20 new posttest-only items. They found significant effects for strategy training on performance, confidence, and calibration accuracy, as well as the interaction between strategy training and time on calibration accuracy. Incentives improved performance and calibration accuracy, either directly, or through an interaction with strategy training. Implications for future research are discussed.” For more information about this article, follow the link below.

Gutierrez, A. P., & Schraw, G. (2014). Effects of Strategy Training and Incentives on Students’ Performance, Confidence, and Calibration. The Journal of Experimental Education, (ahead-of-print), 1-19.


The Motivated Strategies for Learning Questionnaire

The Motivated Strategies for Learning Questionnaire (MSLQ) was developed by Pintrich, Smith, Garcia, and McKeachie (1993). This measure has been cited in over 1600 articles and is a well-established measure of not only metacognition but also motivation. The MSLQ is split into two main scales. The Motivation Scale comprises the Intrinsic Goal Orientation, Extrinsic Goal Orientation, Task Value, Control of Learning Beliefs, Self-Efficacy for Learning and Performance, and Test Anxiety subscales. The Learning Strategies Scale comprises the Rehearsal, Elaboration, Organization, Critical Thinking, Metacognitive Self-Regulation, Time and Study Environment, Effort Regulation, Peer Learning, and Help Seeking subscales. For more information on the MSLQ please refer to the reference and hyperlink below.
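For quick reference, the scale structure described above can be written down as a simple lookup; the subscale names below are those listed in this post, while item-to-subscale assignments live in the MSLQ manual and are not reproduced here.

```python
# MSLQ scales and subscales as listed above (Pintrich, Smith, Garcia, & McKeachie, 1993).
MSLQ_SCALES = {
    "Motivation": [
        "Intrinsic Goal Orientation",
        "Extrinsic Goal Orientation",
        "Task Value",
        "Control of Learning Beliefs",
        "Self-Efficacy for Learning and Performance",
        "Test Anxiety",
    ],
    "Learning Strategies": [
        "Rehearsal",
        "Elaboration",
        "Organization",
        "Critical Thinking",
        "Metacognitive Self-Regulation",
        "Time and Study Environment",
        "Effort Regulation",
        "Peer Learning",
        "Help Seeking",
    ],
}

# Example: count the subscales within each main scale.
for scale, subscales in MSLQ_SCALES.items():
    print(scale, len(subscales))
```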

Pintrich, P. R., Smith, D. A., García, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801-813.


The Need for Cognition Scale

A classic personality measure of metacognition, the Need for Cognition Scale (NCS), was developed by John Cacioppo and Richard Petty (1982). This is a widely used 18-item Likert scale that assesses “the tendency for an individual to engage in and enjoy thinking” (Cacioppo & Petty, 1982, p. 116). The NCS has been cited in over 3000 articles and has well-established psychometric properties. For more information please read the original article by Cacioppo and Petty (1982).

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116.