The Metacognitive Reading Strategies Questionnaire (MRSQ): Cross-Cultural Comparisons

by Roman Taraban, Ph.D. Texas Tech University

When you read, do you ask yourself whether the material is contributing to your knowledge of the subject, whether you should revise your prior knowledge, or how you might use the new knowledge that you are acquiring? Do you highlight information or make notes in the margins to better remember and find information later on? Prior research by Pressley and colleagues (e.g., Pressley & Afflerbach, 1995) suggested that the kinds of metacognition reflected in reading strategies like these are critical for effective reading comprehension.


Inspired by that research, Taraban et al. (2000) conducted a study in which 340 undergraduates reported their use of 35 reading strategies like those suggested by Pressley and colleagues, and found that self-reports of strategy use were significantly associated with grade-point average (GPA). Specifically, students who reported higher use of reading strategies also had higher GPAs. Additionally, responses to open-ended questions showed that students who could name more reading strategies and reading goals also had significantly higher GPAs.

The data in Taraban et al. (2000) overwhelmingly suggested a strong positive relationship between students' knowledge and use of reading goals and strategies and their academic performance. More generally, data from Taraban et al. and others suggest that effective reading depends on metacognitive processing, that is, on directed cognitive effort to guide and regulate comprehension. Skilled readers know multiple strategies and when to apply them. In the remainder of this post, I review subsequent developments associated with metacognitive reading strategies, including cross-cultural comparisons, and raise a question about the relevance of these strategies to present-day text processing and comprehension given widespread technological developments.

Analytic vs. Pragmatic Reading Strategies

In 2004, my students and I created a questionnaire, dubbed the Metacognitive Reading Strategies Questionnaire (MRSQ) (Taraban et al., 2004). The questionnaire drew on the strategies tested earlier in Taraban et al. (2000) and organized them into two subscales through factor analytic methods: analytic strategies and pragmatic strategies. The analytic scale relates to cognitive strategies like making inferences and evaluating the text (e.g., "After I read the text, I consider other possible interpretations to determine whether I understood the text."). The pragmatic scale relates to practical methods for finding and remembering information from the text (e.g., "I try to underline when reading in order to remember the information."). Students respond to these statements using a five-point Likert-type scale: Never Use, Rarely Use, Sometimes Use, Often Use, Always Use.
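For readers who want to see how responses on this scale become subscale scores, here is a minimal scoring sketch in Python. The Likert coding follows the response labels above, but the item-to-subscale assignments and the item counts are illustrative placeholders rather than the published MRSQ key.

```python
# Illustrative sketch of scoring MRSQ-style responses.
# The item-to-subscale assignments below are placeholders, NOT the published key.

LIKERT = {"Never Use": 1, "Rarely Use": 2, "Sometimes Use": 3,
          "Often Use": 4, "Always Use": 5}

ANALYTIC_ITEMS = [1, 2, 3, 4]    # hypothetical inference/evaluation items
PRAGMATIC_ITEMS = [5, 6, 7]      # hypothetical underlining/note-taking items

def subscale_means(responses):
    """responses: dict mapping item number -> Likert label for one student."""
    scores = {item: LIKERT[label] for item, label in responses.items()}
    analytic = sum(scores[i] for i in ANALYTIC_ITEMS) / len(ANALYTIC_ITEMS)
    pragmatic = sum(scores[i] for i in PRAGMATIC_ITEMS) / len(PRAGMATIC_ITEMS)
    return {"analytic": analytic, "pragmatic": pragmatic}

example = {1: "Often Use", 2: "Always Use", 3: "Sometimes Use", 4: "Often Use",
           5: "Rarely Use", 6: "Never Use", 7: "Sometimes Use"}
print(subscale_means(example))   # -> {'analytic': 4.0, 'pragmatic': 2.0}
```

Mean subscale scores like these are what the correlational analyses described below operate on.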

Initial applications of the MRSQ suggested that the two-factor model could aid in better understanding students' use of metacognitive comprehension strategies. Specifically, students' self-reported expected GPA for the coming academic year correlated significantly and positively with analytic strategy use but not with pragmatic strategy use, suggesting that students who reported higher use of analytic strategies also anticipated doing well academically in the coming year.

Cross-Cultural Explorations of Reading Strategies

Vianty (2007) used the MRSQ to explore differences in students' use of metacognitive reading strategies in their native language, Bahasa Indonesia, and their second language, English. Participants were students in a teacher education program who completed the MRSQ in both English and Bahasa Indonesia. Vianty found that students processed language differently in their native language than in a non-native language.

In comparing mean use of analytic strategies when reading in the native language versus English, Vianty found that nearly all means were higher for Bahasa Indonesia. T-tests showed significant differences favoring Bahasa Indonesia for eight of the sixteen analytic strategies. Conversely, four of the six pragmatic strategies were favored when reading English; however, only one difference ("I take notes when reading in order to remember the information") was significant on a t-test. Vianty concluded that students used analytic strategies significantly more in Bahasa Indonesia than in English. Conversely, use of pragmatic strategies was higher when reading in English, but the effect was weak.

Taraban et al. (2013) compared US and Indian engineering undergraduates on their application of analytic and pragmatic strategies. The language of instruction in Indian universities is English; however, this is typically not the students' native language (mother tongue). Reasoning from the findings in Vianty (2007), the researchers therefore predicted lower use of analytic strategies and higher use of pragmatic strategies among Indian students compared to US students. The latter but not the former prediction was supported. In fact, Indian students applied analytic strategies significantly more frequently than US students. Pragmatic strategy use was significantly lower than analytic strategy use for US students but not for Indian students, who applied analytic and pragmatic strategies equally often. Contrary to the findings in Vianty (2007), these findings suggest that students can make significant use of analytic and pragmatic strategies in a non-native language.

The most comprehensive cross-linguistic comparison was conducted recently by Gavora et al. (2019), who compared analytic and pragmatic strategy use, measured with variants of the MRSQ, among 2692 students from Poland, Hungary, Slovakia, and the Czech Republic who were enrolled in education programs, primarily teacher education and counseling. Students in Hungary, Slovakia, and the Czech Republic reported significantly higher use of pragmatic than analytic strategies. Students in Poland showed the converse preference, reporting significantly more frequent use of analytic strategies. Quite striking in the results were the significant correlations of both pragmatic strategy use and analytic strategy use with GPA in all four countries. Specifically, the correlations showed that more frequent use of both pragmatic and analytic strategies was associated with more successful academic performance.

Gavora et al. (2019) suggest that "In order to succeed academically, students direct their reading processes not towards comprehension but to remembering information, which is the core component of the pragmatic strategy" (p. 12). Their recommendation, that "educators' attention should be focused on developing especially analytic strategies in students," is strongly reminiscent of the ardor with which Pressley and colleagues promoted metacognitive reading strategies beginning in the elementary grades.

However, given the significant correlations of both analytic and pragmatic strategy use with GPA, it may be that the predominance of analytic strategies is not what matters, but rather whether applying either type of strategy – analytic or pragmatic – aids students in their academic achievement. The data from Vianty (2007) may be informative in this regard, specifically the finding that those students applied pragmatic strategies more frequently than analytic strategies when the context – reading outside their native language – dictated a more pragmatic approach to reading and comprehension.

A relevant point made by Gavora et al. relates to the samples that have been tested to date and the relevance of context to strategy use. They point out that in fields like engineering (e.g., Taraban et al., 2013), the context may support more analytic thinking and analytic strategy use. The Gavora et al. sample consisted of humanities students, which, on their argument, may have resulted in an overwhelming affirmation of pragmatic strategies. Further comparisons across students in different programs are certainly warranted.

Changing Times: The Possible Influence of Technology on Reading

An additional question comes to mind: the effect of widespread technology in instructional settings. When I, like others, am uncertain about a definition, algorithm, theory, or the like, I find it very easy to simply Google the point or look for a YouTube video, which I need only read or watch for an explanation. This personal observation suggests that the strategies probed by the MRSQ may, at this point, be incomplete and, in some instances, somewhat irrelevant. The next step should be to ask current students what strategies they use to aid comprehension. Their responses may lead to new insights into contemporary student metacognitions that assist them in learning.

In conclusion, there is no doubt that metacognitive strategies are essential to effective information processing. However, there may be room to reconsider and update the strategies that students employ when reasoning and searching for information and insights to guide and expand comprehension and learning. It may be that current technology has made students more pragmatic, and a promising goal for further research would be to uncover the ways in which that pragmatism is expressed through new search strategies.

References

Gavora, P., Vaculíková, J., Kalenda, J., Kálmán, O., Gombos, P., Świgost, M., & Bontová, A. (2019). Comparing metacognitive reading strategies among university students from Poland, Hungary, Slovakia and the Czech Republic. Journal of Further and Higher Education, 1-15.

Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Hillsdale, NJ: Erlbaum.

Taraban, R., Kerr, M., & Rynearson, K. (2004). Analytic and pragmatic factors in college students' metacognitive reading strategies. Reading Psychology, 25(2), 67-81.

Taraban, R., Rynearson, K., & Kerr, M. (2000). College students’ academic performance and self-reports of comprehension strategy use. Reading Psychology, 21, 283–308.

Taraban, R., Suar, D., & Oliver, K. (2013). Information literacy of US and Indian engineering undergraduates. SpringerPlus, 2(1), 244.

Vianty, M. (2007). The comparison of students' use of metacognitive reading strategies between reading in Bahasa Indonesia and in English. International Education Journal, 8(2), 449-460.


Measuring Metacognitive Self-Assessment – Can it Help us Assess Higher-Order Thinking?

by Dr. Ed Nuhfer, California State Universities (retired)

Since 2002, I’ve built my “Developers’ Diary” columns for the National Teaching and Learning Forum (NTLF) around the theme of fractals and six essential components in the practice of college teaching: (1) affect, (2) levels of thinking (intellectual & ethical development), (3) metacognition, (4) content knowledge & skills, (5) pedagogy, and (6) assessment. The first three focus on the internal development of the learner, and the last three focus on the knowledge being learned. All six have interconnections through being part of the same complex neural networks employed in practice.

In past blogs, we noted that affect and metacognition were, until recently, deprecated and maligned by behavioral scientists, with the most deprecated aspect of metacognition being self-assessment. The highest levels of thinking discovered by Perry are heavily affective and metacognitive, so some later developmental models shunned these stages when only cognition seemed relevant to education. However, the fractal model advocates practice that draws on all six components. Thus, metacognition is not merely important for its own merits; we instructors rely on metacognitive reflection to monitor whether we are facilitating students’ learning through attending to all six.

The most maligned components, affect and self-assessment, may offer a key to measuring the overall quality of education and to assessing progress toward the highest levels of thinking. Such measurements have been something of a Grail quest for developers. To date, efforts to make such measures have proven labor intensive and expensive.

Measuring: What, Who, Why, and How?

The manifestation of affect in the highest Perry stages indicates that cognitive expertise and skills eventually connect to affective networks. At advanced levels of development, experts’ affective feelings are informed feelings that lead to rapid decisions for action that are usually effective. In contrast, novices’ feelings are not informed: beginners are tentative and proceed by trial and error rather than taking an efficient path to a solution. By measuring how well students’ affective feelings of self-assessed competence have integrated with their cognitive expertise, we should be able to assess their stage of progress toward high-level thinking.

To assess a group’s (a class, a class rank, or a demographic category) state of development, we can obtain the group’s mean self-assessments of competence on an item-by-item basis from a valid, reliable multiple-choice test that requires some conceptual thinking. We have such a test in the 25-item Science Literacy Concept Inventory (SLCI). We can construct a knowledge survey of this Inventory (KSSLCI) to give us 25 item-by-item self-assessed estimates of competence from each participant.

As demonstrated in 2016 and 2017, item-by-item averages of group responses attenuate the random noise present in individuals’ responses. Thus, assessments done by using aggregate information from groups can provide a clear self-assessment signal that allows us to see valid differences between groups.

If affective self-assessed estimates become increasingly informed as higher level thinking capacity develops, then we should see that the aggregate item-by item paired measures correlate with increasing strength as groups gain in participants who possess higher order thinking skill. We can indeed see this trend.

Picture the Results

For clear understanding, it is useful first to see what graphs of paired measures of random noise (meaningless nonsense) look like (Figure 1A) and how paired measures look when they correlate perfectly (Figure 1B). We produce these graphs by inputting simulated data into our SLCI and KSSLCI instruments (Figure 1).

Random nonsense produces a nearly horizontal line along the mean (“regression to the mean”) of 400 random simulated responses to each of the 25 items on both instruments. The best-fit line has values of nearly zero for both correlation (r) and line slope (Figure 1A).

We use a simulated set of data twice to get the pattern of perfect correlation, in which the participants’ mean SLCI and KSSLCI scores for each item are identical. The best-fit line (Figure 1B) has a correlation (r) and a line slope both of about unity (1). The patterns from actual data (Figure 2) show slopes and correlations that fall between those of random noise and perfect order.
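For readers who want to reproduce these two baseline patterns, a rough simulation sketch follows. The participant count (400), the 25 items, and the idea of correlating item-by-item group means come from the description above; the response scales and other details are assumptions rather than the actual simulation code used for Figure 1.

```python
# Sketch of the two baselines in Figure 1: random noise vs. perfect self-assessment.
# Scales and details are assumptions; only the design (400 simulated participants,
# 25 items, item-by-item group means) follows the text.
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_items = 400, 25

# Panel A analogue: both instruments answered completely at random.
slci_random = rng.integers(0, 2, size=(n_participants, n_items))  # 1 = correct, 0 = wrong
ks_random = rng.uniform(0, 100, size=(n_participants, n_items))   # self-assessed % competence

item_means_slci = slci_random.mean(axis=0) * 100   # item-by-item group means (%)
item_means_ks = ks_random.mean(axis=0)

r_noise = np.corrcoef(item_means_ks, item_means_slci)[0, 1]
slope_noise = np.polyfit(item_means_ks, item_means_slci, 1)[0]
print(f"random noise: r ~ {r_noise:.2f}, slope ~ {slope_noise:.2f}")  # hovers near zero

# Panel B analogue: identical paired item means, i.e., perfect self-assessment.
r_perfect = np.corrcoef(item_means_slci, item_means_slci)[0, 1]
slope_perfect = np.polyfit(item_means_slci, item_means_slci, 1)[0]
print(f"perfect agreement: r = {r_perfect:.2f}, slope = {slope_perfect:.2f}")  # both ~ 1
```

Plotting the paired item means from real participants against these two baselines is what the figures that follow do.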


Figure 1. Modeling correlational patterns with simulated responses to a measure of competence (SLCI) and a measure of self-assessed competence (KSSLCI). A shows correlational pattern if responses are random noise. B shows the pattern if 400 simulated participants perfectly assessed their competence.

Next, we look at the actual data obtained from 768 novices (freshmen and sophomores; Figure 2A). Novices’ self-assessed competence and actual competence have a significant positive correlation: the slope is 0.319 and r is .69. The self-assessment measures explain about half of the variance (r²) in SLCI scores. Even novices do not appear to be “unskilled and unaware of it.” Developing experts (juniors, seniors, and graduate students; N = 831 in Figure 2B) produce a fit line with a slightly steeper slope of 0.326 and a stronger r of .77. Here, the self-assessment measures explain about 60% of the variance in the Inventory scores.

When we examine experts (109 professors in Figure 2C), the fit line steepens to a slope of 0.472, and a correlation of r = .83 explains nearly 70% of the variance in Inventory scores. The trend from novice to expert is clear.

Finally, Figure 2D shows the summative mean SLCI scores and KSSLCI ratings for the four undergraduate ranks plus graduate students and professors. The paired KSSLCI and SLCI values increase with academic rank. The correlation (r) between the paired measures is close to unity, and the slope of 0.87 produces a pattern very close to that of perfect self-assessment (Figure 1B).


Figure 2: Correlations from novice to expert of item-by-item group means of each of the 25 items addressed on the KSSLCI and the SLCI. Panel A contains the data from 768 novices (freshmen and sophomores). B consists of 831 developing experts (juniors, seniors and graduate students). C comes from 109 experts (professors). Panel D employs all participants and plots the means of paired data by academic rank. We filtered out random guessing by eliminating data from participants with SLCI scores of 32% and lower.

Figure 2 supports several conclusions: self-assessments are not random noise; knowledge surveys reflect actual competence; affective development occurs with cognitive development; and a group’s ability to accurately self-assess seems indicative of the group’s general state of intellectual development.

Where might your students fall on the continuum of measures illustrated above? By using the same instruments we employ, your students can get measures of their science literacy and self-assessment accuracy, and you can get an estimate of your class’s present state of intellectual development. The work that led to this blog is under IRB oversight, and getting these measures is free. Contact enuhfer@earthlink.net for further information.


In Remembrance of Dr. Gregg Schraw and Dr. Marty Carr

By Hillary Steiner, Ph.D., Kennesaw State University and Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

In this first blog post of 2018 we remember two educational psychologists with interests in metacognition who recently passed away. Aaron Richmond and Hillary Steiner describe how their personal and professional interactions with these scholars influenced their own work.

From Aaron: In my career as an educational psychologist, I was more than lucky to work with Gregg—I was honored. On September 16th, 2016, Gregg passed away with his sweet wife Lori by his side. Gregg was a prolific researcher in metacognition and in other educational research fields. He published over 90 journal articles, 15 books, and 45 book chapters. He sat on several editorial boards, including the Journal of Educational Psychology, Metacognition and Learning, and Educational Psychology Review. He was an active member of Division C (Learning and Instruction) of the American Educational Research Association and of several regional academic organizations such as the Northern Rocky Mountain Educational Research Association (NRMERA).

Yes, Gregg was a prolific scholar; however, his greatest gift was to his students and colleagues. One of my dear friends and a fellow metacognition researcher, Rayne Sperling at Pennsylvania State University, wrote, “Gregg’s confidence in me and steady, supportive guidance provided the self-efficacy boost I needed in order to believe in myself as a scholar with something to say. As co-chair of my dissertation, mentor throughout my career, and dear friend always, Gregg was a strong, positive force in my life. Now, my own doc students tell me I am a wonderful, supportive mentor, and I always tell them, ‘I am just doing what I was taught; mentoring as I was mentored.’ Gregg taught me this too. His mentoring continues with the students he mentored (and there are a lot of us) who now have students of our own.” (McCrudden, 2016, p. 681)

I had followed Gregg’s career and seen him at conferences—in awe, of course, with a star-struck gaze. For me, Gregg was a research icon, a mega-god of whom I was not worthy. However, when I first met Gregg at NRMERA in the fall of 2003, I was a dewy-eyed graduate student who had plucked up the courage to introduce myself and discuss metacognitive research. I quickly realized that yes—he was a research god, but more importantly he was a kind, generous, supportive, and inclusive person. He listened to my good ideas and listened to my half-cocked ideas that needed serious fine-tuning. After that fateful day in Jackson Hole, Wyoming, I knew that I had gained a mentor in all things. Gregg supported me throughout my career in both research and teaching. We published together, and he was one of my advocates. He advanced my career, as he did for so many others. He doled out sound and sincere professional advice willingly. For example, Gregg, Fred Kuch, and I were working on some metacognition research together, and my students were working quite hard and doing a great job on the project. Mind you, I am at a large state university with no graduate students, so these were undergraduate students. Gregg was so impressed with one of my students (because of the mentorship he and his students had provided me, which I had passed on to my students) that he offered to write her a letter of recommendation for graduate school. I found this simple but powerful gesture astonishing and yet typical of Gregg’s passion for advancing high-quality scholars in the field of metacognition and educational psychology. It was just one example of how Gregg went out of his way to help people and support their goals and pursuits.

In the end, Gregg didn’t have to be my mentor, but he was, as he was for so many others. Gregg, I am indebted to you, and you will truly be missed. The field of metacognition lost a great scholar, mentor, and friend.

From Hillary:

On July 30, 2017, the field of metacognition lost another great. Dr. Martha “Marty” Carr, Professor of Educational Psychology at the University of Georgia, passed away at the young age of 59. A prolific researcher who mentored countless students to become scholars in their own right, Marty combined her interests in metacognition, motivation, giftedness, and mathematics achievement to impact the field of educational psychology in a unique way, asking big questions about how children’s metacognitive strategies influence the gender differences that emerge in mathematics achievement, and how metacognition differs in gifted children.


Marty began her career in developmental psychology at the University of Notre Dame under the tutelage of John Borkowski, followed by a postdoctoral stint at the Max Planck Institute for Psychological Research in Germany, where she quickly made important contributions related to the influence of motivation and metacognition on children’s learning strategy development. After joining the faculty of the University of Georgia in 1989, where she remained for her entire career, she began to cultivate additional interests in giftedness and mathematics strategy development. These varied interests dovetailed throughout the years, as she wrote about metacognition in gifted children, motivational and self-regulatory components of underachievement, and metacognitive influences on gender differences in math. Marty’s work was known for its methodological rigor, its unique application of developmental models and methods to learning processes, and its applicability to the classroom. She was recognized in particular for groundbreaking work on the predictors and influential factors of gender differences in mathematics. Her contributions led to national recognition and leadership, including presidency of the American Psychological Association’s Educational Psychology Division (Division 15), presidency of the Women in Mathematics Education division of the National Council of Teachers of Mathematics, and numerous awards, including the American MENSA Education and Research Foundation Award for Excellence.

As my dissertation advisor in the early 2000s, Marty was the first person to make me feel like a scholar. She recognized my interests in giftedness and cognitive development and provided the perfect combination of support and encouragement that helped me craft a line of research that continues to this day. And I am not alone. At her memorial service, several students commented on how much her mentorship had meant to them. According to student Kellie Templeman, “her skill in striking the balance between technical knowledge, compassionate guidance, and tireless work ethic was what separated her from any other professor I have worked with.” She promoted metacognition in her own students by asking them to reflect constantly on the “why” questions of their individual projects and to remain goal-driven. As another former student noted, Marty pushed us to “keep going, get busy, and keep writing,” learning from our mistakes as we went. Yet, as a devoted mother who had many outside interests, including marathon running and working with animals (especially cats and horses), Marty was also an excellent model of work-life balance.

When I attended the American Educational Research Association conference as a graduate student, Marty introduced me to Gregg Schraw, who was to be my assigned mentor for the week. I was starry-eyed at meeting such a great figure in my field, but later realized that others were equally starry-eyed to meet Marty. Marty and Gregg were truly giants in educational psychology whose contributions have transformed the way we think about metacognition. May we continue to honor their memory in our own work.

References

McCrudden, M. T. (2016). Remembering Gregg Schraw. Educational Psychology Review, 28(4), 673-690.

 


Metacognitive Awareness of Learning Strategies in Undergraduates

This article by Jennifer McCabe presents the results of two studies of metacognitive awareness of learning strategies in undergraduates. Participants were asked to evaluate and predict the outcomes of six educational scenarios describing the strategies of dual-coding, static-media presentations, low-interest extraneous details, testing, spacing, and generation. Study 1 showed low awareness of all strategies except generation, as well as a correlation between scenario prediction accuracy and an independent metacognition scale. Study 2 showed improved prediction accuracy for students who were specifically taught about these principles in college courses. “This research suggests that undergraduates are largely unaware of several specific strategies that could benefit memory for course information; further, training in applied learning and memory topics has the potential to improve metacognitive judgments in these domains.”

McCabe, J. (2011). Metacognitive awareness of learning strategies in undergraduates. Memory & Cognition, 39, 462–476. doi:10.3758/s13421-010-0035-2


Metacognition and Learning: Conceptual and Methodological Considerations

This is the lead article of the first issue of the international journal Metacognition and Learning. The journal provides “a kaleidoscopic view on research into metacognition.” It is a great introduction to metacognition and discusses ten issues “which are by no means exhaustive.”

Veenman, M. V. J., Van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1(1), 3-14.



Metacognition and Self-Regulated Learning Constructs

This article reports findings from several different studies: “Findings indicated convergence of self-report measures of metacognition, significant correlations between metacognition and academic monitoring, negative correlations between self-reported metacognition and accuracy ratings, and positive correlations between metacognition and strategy use and metacognition and motivation.”

Sperling, R. A., Howard, B. C., Staley, R., & DuBois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation: An International Journal on Theory and Practice, 10(2), 117-139.



Self-assessment and the Affective Quality of Metacognition: Part 2 of 2

Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029

In Part 1, we noted that knowledge surveys query individuals to self-assess their abilities to respond to about one hundred to two hundred challenges forthcoming in a course by rating their present ability to meet each challenge. An example can reveal how the writing of knowledge survey items is similar to the authoring of assessable Student Learning Outcomes (SLOs). A knowledge survey item example is:

I can employ examples to illustrate key differences between the ways of knowing of science and of technology.

In contrast, SLOs are written to be preceded by the phrase “Students will be able to….” Further, knowledge survey items always solicit responses about observable, engaged behaviors. Well-written knowledge survey items exhibit two parts: one affective, the other cognitive. The cognitive portion communicates the nature of an observable challenge, and the affective component solicits an expression of felt confidence in the claim “I can….” To be meaningful, an item must make the nature of the challenge explicit to the reader. Broad statements such as “I understand science” or “I can think logically” are not sufficiently explicit. Each response to a knowledge survey item thus offers a metacognitive self-assessment expressed as an affective feeling of self-assessed competency specific to the cognitive challenge delivered by the item.

Self-Assessed Competency and Direct Measures of Competency

Three competing hypotheses exist regarding the relationship of self-assessed competency to actual performance. One asserts that self-assessed competency is nothing more than random “noise” (https://www.koriosbook.com/read-file/using-student-learning-as-a-measure-of-quality-in-hcm-strategists-pdf-3082500/; http://stephenporter.org/surveys/Self%20reported%20learning%20gains%20ResHE%202013.pdf). Two others allow that self-assessment is measurable. Of these, one maintains that, when compared with actual performance, people typically overrate their abilities and generally are “unskilled and unaware of it” (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2702783/). The other, the “blind insight” hypothesis, indicates the opposite: a positive relationship exists between confidence and judgment accuracy (http://pss.sagepub.com/content/early/2014/11/11/0956797614553944).

Suitable resolution of the three requires data acquired from paired instruments of known reliability and validity. Both instruments must be closely aligned so that they collect data addressing the same learning construct. The Science Literacy Concept Inventory (SLCI), a 25-item test administered to over 17,000 participants, produces competency data with a Cronbach’s alpha reliability of .84 and possesses content, construct, criterion, concurrent, and discriminant validity. Participants (N = 1154) who took the SLCI also took a knowledge survey (the KS-SLCI, with a Cronbach’s alpha reliability of .93) that produced a self-assessment measure based on the identical 25 SLCI items. The two instruments are reliable and tightly aligned.

If knowledge surveys register random noise, then data furnished by human subjects will differ little from data generated with random numbers. Figure 1 reveals that data simulated from the random numbers 0, 1, and 2 yield essentially zero reliability, whereas real data consistently show reliability measures greater than R = .9. Whatever quality knowledge surveys register, it is not “random noise”: each person’s self-assessment score is consistent and characteristic.
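For readers unfamiliar with the statistic, a minimal sketch of a split-halves reliability check of this kind is below. The 0-1-2 rating scale, the 25 items, and the 1154 respondents follow the description above; the simulated "consistent responder" data are an illustrative assumption, not the study's dataset.

```python
# Sketch: split-halves reliability (with Spearman-Brown correction) for a 25-item
# knowledge survey, comparing random responders with internally consistent ones.
# Simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_items = 1154, 25

def split_half_reliability(ratings):
    """Correlate odd- and even-item half scores, then apply the Spearman-Brown correction."""
    odd = ratings[:, 0::2].sum(axis=1)
    even = ratings[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Random responders: every 0/1/2 confidence rating chosen at random.
random_ratings = rng.integers(0, 3, size=(n_people, n_items))

# Consistent responders: each person's overall competence drives all 25 ratings,
# plus some item-level noise (an assumption made only for this illustration).
competence = rng.uniform(0, 2, size=(n_people, 1))
consistent_ratings = np.clip(np.rint(competence + rng.normal(0, 0.4, (n_people, n_items))), 0, 2)

print(f"random responders:     R ~ {split_half_reliability(random_ratings):.2f}")      # near zero
print(f"consistent responders: R ~ {split_half_reliability(consistent_ratings):.2f}")  # above .9
```

The contrast between the two printed values mirrors the left and right panels of Figure 1.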


Figure 1. Split-halves reliabilities of 25-item KS-SLCI knowledge surveys produced by 1154 random numbers (left) and by 1154 actual respondents (right). 

The correlation between the 1154 actual performances on the SLCI and the self-assessed competencies from the KS-SLCI is a highly significant r = 0.62. Of the 1154 participants, 41.1% demonstrated good ability to self-assess, estimating their actual performance within ±10%; 25.1% proved to be under-estimators; and 33.8% were over-estimators.
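Operationally, that classification can be computed as in the short sketch below; the ±10% band comes from the text above, while the example score pairs are invented for illustration.

```python
# Sketch: classify participants as good self-assessors, over-estimators, or
# under-estimators by comparing self-assessed (KS-SLCI) and demonstrated (SLCI)
# percentage scores. The example pairs are made up, not the study's data.
from collections import Counter

def classify(self_assessed_pct, actual_pct, band=10.0):
    diff = self_assessed_pct - actual_pct
    if abs(diff) <= band:
        return "good self-assessor"
    return "over-estimator" if diff > 0 else "under-estimator"

pairs = [(80, 76), (55, 72), (90, 60), (68, 64), (30, 55)]   # (KS-SLCI %, SLCI %)
counts = Counter(classify(ks, slci) for ks, slci in pairs)
for label, n in counts.items():
    print(f"{label}: {100 * n / len(pairs):.0f}%")
```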

Because each of the 25 SLCI items poses challenges of varying difficulty, we could also test whether participants’ self-assessments gleaned from the knowledge survey did or did not show a relationship to the actual difficulty of items as reflected by how well participants scored on each of them. The collective self-assessments of participants revealed an almost uncanny ability to reflect the actual performance of the group on most of the twenty-five items (Figure 2), thus supporting the “blind insight” hypothesis. Knowledge surveys appear to register meaningful metacognitive measures, and results from reliable, aligned instruments reveal that people do generally understand their degree of competency.


Figure 2. 1154 participants’ average scores on each of 25 SLCI items correspond well (r = 0.76) to their average scores predicted by knowledge survey self-assessments.

Advice in Using Knowledge Surveys to Develop Metacognition

  • In developing competency in metadisciplinary ways of knowing, furnish a bank of numerous explicit knowledge survey items that scaffold novices into considering the criteria that experts consider to distinguish a specific way of knowing from other ways of thinking.
  • Keep students in constant contact with self-assessing by redirecting them repeatedly to specific blocks of knowledge survey items relevant to tests and other evaluations and engaging them in debriefings that compare their self-assessments with performance.
  • Assign students in pairs to do short class presentations that address specific knowledge-survey items while having the class members monitor their evolving feelings of confidence to address the items.
  • Use the final minutes of the class period to enlist students in teams in creating alternative knowledge survey items that address the content covered by the day’s lesson.
  • Teach students Bloom’s Taxonomy of the Cognitive Domain (http://orgs.bloomu.edu/tale/documents/Bloomswheelforactivestudentlearning.pdf) so that they can recognize both the level of challenge and the feelings associated with constructing and addressing different levels of challenge.

Conclusion: Why use knowledge surveys?

  • Their skillful use offers students many practices in metacognitive self-assessment over the entire course.
  • They organize our courses in a way that offers full transparent disclosure.
  • They convey our expectation standards to students before a course begins.
  • They serve as an interactive study guide.
  • They can help instructors enact instructional alignment.
  • They might be the most reliable assessment measure we have.

 

 


Comprehension Monitoring: The Role of Conditional Knowledge Part 2

by Antonio Gutierrez, Georgia Southern University

In my previous post, I discussed the role that conditional knowledge (i.e., knowing why, when, and where to apply strategies given task demands) plays in learners’ ability to calibrate their judgments of performance against their actual performance. This is in light of debates about the relations between the various dimensions of metacognition. Calibration is a component of one’s ability to monitor comprehension, which is a regulatory function. Conditional knowledge, on the other hand, is part of the knowledge component of metacognition. As a researcher, I often wonder whether, instead of assuming that these various metacognitive functions are related, we should empirically test those assumptions. In metacognition research it is often assumed that the knowledge and regulation aspects of metacognition are related. From a theoretical perspective, this makes sense. However, assuming that this is the case for all samples and populations may be a stretch, especially given the diversity and individual differences among learners.

In this vein, I am currently seeking ethics approval to conduct research with middle school students, an understudied population with respect to metacognition. In this proposed research I plan not only to investigate calibration among middle school students and the influence that metacognitive strategy training has on learners’ calibration, but also to empirically assess the associations among the eight dimensions of metacognition (knowledge: declarative, procedural, and conditional; regulation: planning, information management, debugging strategies, comprehension monitoring, and evaluation of learning). I will also test the predictive power of various components of metacognition on learners’ calibration. I am especially interested in empirically measuring the association between conditional knowledge and calibration, as well as the predictive power of conditional knowledge on calibration. I expect that metacognitive strategy training will improve learners’ performance, confidence judgments, and calibration. I also suspect that those with greater conditional knowledge will have better calibration; hence, I expect conditional knowledge to strongly predict calibration.
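For readers unfamiliar with how calibration is typically quantified, the sketch below computes two indices that are common in the calibration literature: signed bias and absolute accuracy. The post does not specify which indices or scales the planned study will use, so treat these details as assumptions.

```python
# Sketch: two common calibration indices computed from item-by-item confidence
# judgments (0-100%) and actual performance (1 = correct, 0 = incorrect).
# These are standard indices from the calibration literature, not necessarily
# the exact measures planned for the study described above.
def calibration_indices(confidence, correct):
    n = len(confidence)
    conf = [c / 100 for c in confidence]                                    # rescale to 0-1
    bias = sum(c - k for c, k in zip(conf, correct)) / n                    # + means overconfident
    absolute_accuracy = sum(abs(c - k) for c, k in zip(conf, correct)) / n  # unsigned error
    return bias, absolute_accuracy

confidence = [90, 80, 60, 70, 40]   # a learner's confidence per item (%)
correct    = [1,  0,  1,  0,  0]    # actual outcomes on the same items
bias, abs_err = calibration_indices(confidence, correct)
print(f"bias = {bias:+.2f}, absolute accuracy (error) = {abs_err:.2f}")   # -> +0.28, 0.48
```

A training effect on calibration would then show up as indices like these moving toward zero from pretest to posttest.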

This particular study is one in a series of scientific investigations of the validity of theoretical claims made when researchers discuss metacognition. In my attempt to provide educators with a toolkit of domain-general metacognitive strategies they can readily apply in their classrooms, this series of studies will help me provide the empirical evidence necessary to demonstrate the utility and relevance of metacognitive strategies to scholars and practitioners alike. These strategies have been adapted from an unpublished pilot study I conducted prior to my dissertation. This research will help me continue to refine the strategies to better suit adolescents. Moreover, it will shed some light on the link between conditional metacognitive knowledge and calibration, a topic I began discussing in earlier posts. Stay tuned for preliminary results of my first investigation.


Measuring Metacognitive Judgments

In his chapter “Measuring Metacognitive Judgments,” Gregg Schraw (2009) artfully provides a taxonomy of calibration measures that attempt to assess metacognitive judgments of learning. For more information, see the reference below.

Schraw, G. (2009). Measuring metacognitive judgments. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (p. 415). New York, NY: Routledge.


Effects of Strategy Training and Incentives on Students’ Performance, Confidence, and Calibration

“This study examined the effect of strategy instruction and incentives on performance, confidence, and calibration accuracy. Individuals (N = 107) in randomly assigned treatment groups received a multicomponent strategy instruction intervention, financial incentives for high performance, or both. The authors predicted that incentives would improve performance, while strategy instruction would improve performance, confidence, and calibration accuracy as a result of better monitoring and self-regulation of learning. The authors compared pre- and posttest items and 20 new posttest-only items. They found significant effects for strategy training on performance, confidence, and calibration accuracy, as well as the interaction between strategy training and time on calibration accuracy. Incentives improved performance and calibration accuracy, either directly, or through an interaction with strategy training. Implications for future research are discussed.” For more information about this article, follow the link below.

Gutierrez, A. P., & Schraw, G. (2014). Effects of Strategy Training and Incentives on Students’ Performance, Confidence, and Calibration. The Journal of Experimental Education, (ahead-of-print), 1-19.


Metacognitive Strategies: Are They Trainable?

by Antonio Gutierrez, Georgia Southern University

Effective learners use metacognitive knowledge and strategies to self-regulate their learning (Bol & Hacker, 2012; Bjork, Dunlosky & Kornell, 2013; Efklides, 2011; McCormick, 2003; Winne, 2004; Zeidner, Boekaerts & Pintrich, 2000; Zohar & David, 2009). Students are effective self-regulators to the extent that they can accurately determine what they know and use relevant knowledge and skills to perform a task and monitor their success. Unfortunately, many students experience difficulty learning because they lack relevant knowledge and skills, do not know which strategies to use to enhance performance, and find it difficult to sequence a variety of relevant strategies in a manner that enables them to self-regulate their learning (Bol & Hacker, 2012; Grimes, 2002).

Strategy training is a powerful educational tool that has been shown to overcome some of these challenges in academic domains such as elementary and middle school mathematics (Carr, Taasoobshirazi, Stroud & Royer, 2011; Montague, Krawec, Enders & Dietz, 2014), as well as in non-academic skills such as driving and anxiety management (Soliman & Mathna, 2009). An additional benefit of strategy training is that using a flexible repertoire of strategies in a systematic manner not only produces learning gains but also empowers students psychologically by increasing their self-efficacy (Dunlosky & Metcalfe, 2009). Further, a common assumption is that a modest investment of instructional time with younger children produces life-long benefits once strategies are automatized (McCormick, 2003; Palincsar, 1991; Hattie et al., 1996).

Strategy instruction should begin as early as possible and, to provide the greatest benefit to students, should be embedded within all content areas, modeled by teachers and self-regulated students, practiced until automatized, and discussed explicitly in the classroom. Pressley and Wharton-McDonald (1997) recommend that strategy instruction address the periods before, during, and after the main learning episode. Strategies used before learning include setting goals, making predictions, determining how new information relates to prior knowledge, and understanding how the new information will be used. Strategies needed during learning include identifying important information, confirming predictions, monitoring, analyzing, and interpreting. Strategies typically used after learning include reviewing, organizing, and reflecting. Good strategy users should possess some degree of competence in each of these areas to be truly self-regulated.

Additional strategies have been studied by Schraw and his colleagues (Gutierrez & Schraw, in press; Nietfeld & Schraw, 2002). They demonstrated that a repertoire of seven strategies is effective at improving undergraduate students’ learning outcomes and comprehension monitoring, a main component of the regulatory dimension of metacognition. Table 1 contains the seven strategies explicitly taught to students. Moreover, these strategies can function not only in contrived laboratory settings but also in ecologically valid settings, such as classrooms.

Table 1. Summary of Metacognitive Strategies and their Relation to Comprehension Monitoring

 

Each entry below gives the strategy, the learning processes it involves, and its hypothesized influence on comprehension monitoring.

  • Review main objectives of the text and focus on main ideas and overall meaning. Learning processes: review and monitor. Hypothesized influence: enhances calibration through clarifying misunderstandings and tying details to main ideas.
  • Read and summarize material in your own words to make it meaningful; use elaboration and create your own examples. Learning processes: read and relate. Hypothesized influence: enhances calibration by transforming knowledge into something personally meaningful.
  • Reread questions and responses and reflect on what the question is asking; go through and take apart the question, paying attention to relevant concepts. Learning processes: review, relate, and monitor. Hypothesized influence: purposefully slowing information processing allows for a more accurate representation of the problem, thus decreasing errors in judgment.
  • Use contextual cues in the items and responses, e.g., bolded, italicized, underlined, or capitalized words. Learning processes: relate. Hypothesized influence: using contextual cues allows the mind to focus on salient aspects of the problem rather than seductive details, thereby increasing accuracy.
  • Highlight text; underline keywords within the question to remind yourself to pay attention to them; use different colors to represent different meanings. Learning processes: review, relate, and monitor. Hypothesized influence: highlighting and underlining can assist one to focus on main ideas and what is truly important, increasing accuracy; however, relying too much on this can be counterproductive and may potentially increase errors.
  • Relate similar test questions together and read them all before responding to any. Learning processes: relate and monitor. Hypothesized influence: relating information together provides a clearer understanding of the material and may highlight inconsistencies that need to be resolved; it may point to information the learner may have missed, increasing accuracy.
  • Use diagrams, tables, pictures, graphs, etc. to help you organize information. Learning processes: review and relate. Hypothesized influence: these strategies help simplify complex topics by breaking them down to their constituent parts; this increases accuracy by decreasing errors.

Adapted from Gutierrez and Schraw (in press).

However, while the studies by Schraw and colleagues have shown that teachers can effectively use these strategies to improve students’ comprehension monitoring and other learning outcomes, they have not thoroughly investigated why and how these strategies are effective. I argue that the issue is not so much that students are unaware of the metacognitive strategies, but rather that many lack conditional metacognitive knowledge, that is, the where, when, and why to apply a given strategy taking task demands into consideration. Future research should investigate these process questions, namely when, how, and why different strategies are successful.

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013).  Self-regulated learning: Beliefs, techniques and illusions. Annual Review of Psychology, 64, 417-447.

Bol, L. & Hacker, D. J. (2012). Calibration research: where do we go from here? Frontiers in Psychology, 3, 1-6.

Carr, M., Taasoobshirazi, G., Stroud, R., & Royer, J. M. (2011). Combined fluency and cognitive strategies instruction improves mathematics achievement in early elementary school. Contemporary Educational Psychology, 36, 323–333.

Dunlosky, J., & Metcalfe, J. (2009).  Metacognition. Thousand Oaks, CA: Sage Publications.

Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46, 6-25.

Grimes, P. W. (2002). The overconfident principles of economics students: An examination of metacognitive skill. Journal of Economic Education, 1, 15–30.

Gutierrez, A. P., & Schraw, G. (in press). Effects of strategy training and incentives on students’ performance, confidence, and calibration. The Journal of Experimental Education: Learning, Instruction, and Cognition.

Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66, 99-136. doi: 10.3102/00346543066002099

McCormick, C. B. (2003). Metacognition and learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology: Educational psychology (pp. 79-102). Hoboken, NJ: John Wiley & Sons.

Montague, M., Krawec, J., Enders, C., & Dietz, S. (2014). The effects of cognitive strategy instruction on math problem solving of middle-school students of varying ability. Journal of Educational Psychology, 106, 469-481.

Nietfeld, J. L., & Schraw, G. (2002). The effect of knowledge and strategy explanation on monitoring accuracy. Journal of Educational Research, 95, 131-142.

Palincsar, A. S. (1991). Scaffolded instruction of listening comprehension with first graders at risk for academic difficulty. In A. M. McKeough & J. L. Lupart (Eds.), Toward the practice of theory-based instruction (pp. 50–65). Mahwah, NJ: Erlbaum.

Pressley, M., & Wharton-McDonald, R.  (1997).  Skilled comprehension and its development through instruction.  School Psychology Review, 26, 448-466.

Soliman, A. M., & Mathna, E. K. (2009). Metacognitive strategy training improves driving situation awareness. Social Behavior and Personality, 37, 1161-1170.

Winne, P. H. (2004). Students’ calibration of knowledge and learning processes: Implications for designing powerful software learning environments. International Journal of Educational Research, 41, 466-488. doi:http://dx.doi.org/10.1016/j.ijer.2005.08.012

Zeidner, M., Boekaerts, M., & Pintrich, P. R.  (2000).  Self-regulation: Directions and challenges for future research.  In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.),  Handbook of self-regulation (pp. 13-39).  San Diego, CA: Academic Press.

Zohar, A., & David, A. (2009). Paving a clear path in a thick forest: A conceptual analysis of a metacognitive component. Metacognition and Learning, 4(3), 177-195.

 


The effects of distraction on metacognition and metacognition on distraction

Beaman CP, Hanczakowski M and Jones DM (2014) The effects of distraction on metacognition and metacognition on distraction: evidence from recognition memory. Front. Psychol. 5:439. doi: 10.3389/fpsyg.2014.00439

http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.00439/abstract (open access full text)

According to the authors (p. 11), “The results documented in our study with free-report tests also reveal that effects of distraction do not end with impairing memory processes. Auditory distraction has important consequences for how accurate people are in monitoring their memory processes, as revealed by impaired resolution of confidence judgments under distraction. Even more importantly, auditory distraction modifies metacognitive control and thus shapes performance when the ‘don’t know’ option is available in a memory test. Participants seem to be aware that auditory distraction is harmful for memory as they become much less confident in their correct responses when distraction is present (see also Ellermeier and Zimmer, 1997; Beaman, 2005b).”

 


Are Current Metacognition Measures Missing the Target?

by Chris Was, Kent State University

Clearly, there is some agreement as to what metacognition is, or how to define it. In layman’s terms we often hear metacognition described as “thinking about thinking.” It is often defined as knowledge of and control of one’s cognitive processes.

There is also agreement that metacognition is necessary for one to successfully learn from instruction. Models such as Nelson and Narens’ (1990) and the one presented by Tobias and Everson (2009) stress the importance of knowledge of one’s state of knowledge as a key to learning.

In laboratory settings we have a number of “measures” of metacognition. Judgments of knowing, judgments of learning, feelings of knowing, and the like are all research paradigms used to understand individuals’ ability to assess and monitor their knowledge. These measures have been shown to predict differences in study strategies, learning outcomes, and a host of other performance measures. However, individuals in a laboratory do not have the same pressures, needs, motivations, and desires as a student preparing for an exam.

How do we measure differences in students’ ability to monitor their knowledge so that we can help those who need to improve their metacognition? Not in the lab, but in the classroom. Although much of the research I have conducted with colleagues in metacognition has included attempts to both measure and increase metacognition in the college classroom (e.g., Isaacson & Was, 2010; Was, Beziat, & Isaacson, 2014), I am not convinced that we have always successfully measured these differences.

Simple measures of metacognitive knowledge monitoring administered at the beginning of a semester-long course account for significant amounts of variance in end-of-semester cumulative final exams (e.g., Hartwig, Was, Dunlosky, & Isaacson, 2013). However, the amount of variance for which metacognitive knowledge monitoring accounts in these models is typically less than 15%, and often much less. If knowledge monitoring is key to learning, why does it account for so little variance in measures of academic performance? Are the measures of knowledge monitoring inaccurate? Do scores on a final exam depend upon the life circumstances of the student during the semester? The answer to both questions is likely yes. But even more important, it could be that students are aware that their metacognitive monitoring is inaccurate and therefore use other criteria to predict their academic performance.
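For readers unfamiliar with these knowledge-monitoring measures, the sketch below shows one simple, agreement-based way such a measure can be scored, in the spirit of Tobias and Everson's framework: students first judge whether they know each item, are then tested on the same items, and accuracy is the proportion of items on which judgment and outcome agree. The scoring used in the studies cited above may differ.

```python
# Sketch: a simple knowledge-monitoring accuracy index (agreement between
# "know"/"don't know" judgments and subsequent test performance). This is one
# common scheme, not necessarily the exact scoring used in the cited studies.
def knowledge_monitoring_accuracy(judgments, test_correct):
    agree = sum((j == "know") == bool(c) for j, c in zip(judgments, test_correct))
    return agree / len(judgments)

judgments    = ["know", "know", "don't know", "know", "don't know", "don't know"]
test_correct = [1,      0,      0,            1,      1,            0]
kma = knowledge_monitoring_accuracy(judgments, test_correct)
print(f"knowledge-monitoring accuracy = {kma:.2f}")   # -> 0.67 (agreement on 4 of 6 items)
```

An index of this general kind is the sort of knowledge-monitoring measure whose predictive power is discussed above.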

The debate over whether the unskilled are unaware continues (cf. Kruger & Dunning, 1999; Miller & Geraci, 2011). Kruger and Dunning have provided evidence that poor academic performers carry a double burden. First, they are unskilled: they lack the knowledge or skill to perform well. Second, they are unaware: they do not know that they lack the knowledge or skill and therefore tend to be overconfident when predicting future performance.

There is, however, a good deal of evidence that low-performing students are aware that their predictions of examination performance are overconfident. When asked to predict how well they will do on a test, the lowest-performing students often predict scores well above what they eventually achieve; but when asked how confident they are in those predictions, these low-performing students often report little confidence.

So why does a poor-performing student predict that they will perform well on an exam when they are not confident in that prediction? Interestingly, my colleagues and I (as have others) have collected data demonstrating that many students scoring near or above the class average under-predict their scores and are just as uncertain about what their actual scores will be.

An area we are beginning to explore is the relationship between ego-protection mechanisms and metacognition. As I stated earlier, students in a course, be it K-12, post-secondary, or adult education, are dealing with the demands of the course, their own goals and the instructor’s goals, their attributions for success and failure in the course, and a multitude of other personal issues that may influence their performance predictions. The following is an anecdotal example from a student of mine. After several exams (in one of my undergraduate courses I administer 12 exams a semester plus a final exam) for which students were required to predict their test scores, I asked a student why she consistently predicted her score to be 5-10 points lower than the grade she would receive. “Because when I do better than I predict, I feel good about my grade,” was her response.

My argument is that if we examine our students’ metacognition, or try to improve it, in isolation, without attempting to understand the other factors (e.g., motivation) that influence students’ perceptions of their knowledge and future performance, we are not likely to be successful in our attempts.

Isaacson, R., & Was, C. A. (2010). Believing you’re correct vs. knowing you’re correct: A significant difference? The Researcher, 23(1), 1-12.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Miller, T. M., & Geraci, L. (2011). Unskilled but aware: Reinterpreting overconfidence in low-performing students. Journal of Experimental Psychology: Learning, Memory, and Cognition. doi:10.1037/a0021802

Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and some new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125-173). New York: Academic Press.

Tobias, S., & Everson, H. (2009). The importance of knowing what you know: A knowledge monitoring framework for studying metacognition in education. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 107-128). New York, NY: Routledge.

Beziat, T. R. L., Was, C. A., & Isaacson, R. M. (2014). Knowledge monitoring accuracy and college success of underprepared students. The Researcher, 26(1), 8-13.


The Value of Integrative Learning

http://files.eric.ed.gov/fulltext/ED486247.pdf

Mary Huber and Pat Hutchings (2004) consider the value of integrative learning (e.g., becoming a more motivated learner, a more nimble employee, a more responsible citizen, an educated person) and the role metacognition can play in making learning more integrative. They write, “Reflection. Metacognition. Learning how to learn. Whatever the language or lineage, the idea of making students more self-aware and purposeful – more intentional – about their studies is a powerful one, and it is key to fostering integrative learning. Assisting students to develop such capacities poses important challenges for campus reforms around teaching and learning” (p. 7).

Huber, M. T., & Hutchings, P. (2004). Integrative Learning: Mapping the Terrain. The Academy in Transition. Association of American Colleges and Universities.