What Pandemics Can Teach Us about Critical Thinking and Metacognition

by Stephen L. Chew, Ph.D., Samford University (slchew@samford.edu)

Critical thinking leads to fewer errors and better outcomes, fueling personal and societal success (Halpern, 1998; Willingham, 2019). The current view is that critical thinking is discipline specific and arises out of subject expertise. For example, a chess expert can think critically about chess, but that analytical skill does not transfer to non-chess situations. The evidence for general critical thinking skills, and our ability to teach them to students, is weak (Willingham, 2019). But these are strange times that challenge that consensus.

The world is currently dealing with COVID-19, a pandemic unprecedented in our lifetime in scope, virulence, and level of contagion. No comprehensive expertise exists about the most effective policies to combat the pandemic. Virologists understand the virus, but not the epidemiology. Epidemiologists understand models of infection, but not public policy. Politicians understand public policy, but not viruses. We are still discovering the properties of COVID-19, fine-tuning pandemic models, and trying out new policies. As a result, different countries have responded to the pandemic in different ways. Unfounded beliefs and misinformation, ranging from useless to counterproductive and even harmful, have proliferated to fill the void of knowledge.

The Relationship between Metacognition and Critical Thinking

If critical thinking can only occur with sufficient expertise, then virtually no one should be able to think critically about the pandemic, yet I believe that critical thinking can play a vital role. In this essay, I argue that metacognition is a crucial element of critical thinking and, because of this, critical thinking is both a general skill and teachable. While critical thinking is most often seen (and studied) in situations where prior knowledge matters, it is in unprecedented situations like this pandemic where more general critical thinking skills emerge and can make a crucial difference in decision making and problem solving.

I’m building on the work of Halpern (1998) who argued that critical thinking is a teachable, general, metacognitive skill. She states, “When people think critically, they are evaluating the outcomes of their thought processes – how good a decision is or how well a problem is solved” (Halpern, 1998, p. 451). Reflection on one’s own thought processes is the very definition of metacognition. Based on Halpern’s work, we can break critical thinking down into five core components:

  1. Predisposition toward Engaging in Thoughtful Analysis
  2. Awareness of One’s Own Knowledge, Thought Processes and Biases
  3. Evaluation of the Quality and Completeness of Evidence
  4. Evaluation of the Quality of the Reasoning, Decision Making, or Problem Solving 
  5. An Ability to Inhibit Poor and Premature Decision Making

Predisposition toward Engaging in Thoughtful Analysis

Critical thinking involves a personal disposition toward engaging in thoughtful analysis. Strong critical thinkers display this tendency in situations where many people do not see the need, and they engage in more detailed, thorough analysis than many people feel necessary (Willingham, 2019). The variation in the predisposition to think analytically has been on display during the pandemic. Some people simply accept what they hear or read without verifying its validity. In social media, they might pass along information they find interesting or remarkable without distinguishing between valid information, conspiracy, opinion, and propaganda.

A penchant for thoughtful analysis is a habit that can be developed and trained. Our educational system should reinforce the value of detailed analysis in preventing costly errors and should give students extensive practice in carrying it out within whatever field the student is studying.

Awareness of One’s Own Knowledge, Thought Processes and Biases

Critical thinking requires insight into the accuracy of what one knows and the extent and importance of what one doesn’t know. It also involves insight into how one’s biases might influence judgment and decision making (West et al., 2008). Metacognition plays a major role in accurate self-awareness.

Self-awareness is prone to serious error and bias (Bjork et al., 2013; Metcalfe, 1998). Greater confidence is not the same as greater knowledge. Metacognitive awareness can be poor and misleading (McIntosh et al., 2019). The good news, though, is that poor self-awareness can be overcome through proper experience and feedback (Metcalfe, 1998).

In this pandemic, key critical thinking involves understanding the implications of what we know and continue to discover about COVID-19. One example is the exponential growth rate of COVID-19 infection. Responding effectively to exponential growth requires taking aggressive preventative measures before there is any symptomatic evidence of spread, which, intuitively, seems like an overreaction. Confirmation bias made it easy to accept what people wanted to be true as fact and reject what they did not want to be true as unlikely. Thus, people often ignored warnings about distancing and avoiding large gatherings until the pandemic was well underway.
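The counterintuitive force of exponential growth can be illustrated with a small, hypothetical calculation. The starting count and doubling time below are invented for illustration, not epidemiological estimates:

```python
# Hypothetical illustration of exponential growth in infections.
# The initial count and doubling time are made-up numbers, chosen
# only to show how quickly repeated doubling compounds.

def projected_cases(initial_cases: int, doubling_days: int, total_days: int) -> int:
    """Project case counts assuming a fixed doubling time."""
    return initial_cases * 2 ** (total_days // doubling_days)

# With 100 detected cases and a 3-day doubling time, three weeks of
# inaction turns 100 cases into 100 * 2**7 = 12,800 cases.
print(projected_cases(100, 3, 21))  # 12800
```

Because each doubling multiplies everything that came before, measures taken a week early prevent far more cases than the same measures taken a week late, which is why timely intervention can look, at the time, like an overreaction.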

Recognizing one’s own biases and how to avoid them is a general skill that can be developed through education. Students can be taught to recognize the many biases that can undermine rational, effective thinking (Kahneman, 2011). For example, students can learn to seek out disconfirming evidence to counter confirmation bias (Anglin, 2019). To guard against overconfidence, students can learn to assess their understanding against an objective standard (Chew, 2016).

Evaluation of the Quality and Completeness of Evidence

Critical thinkers understand the importance of evaluating the quality and completeness of their evidence, which involves a metacognitive appraisal. Do I have data of sufficient quality, from sufficiently representative samples, to make valid decisions? What data am I missing that I need? The quality of evidence continues to be of immense concern in the U.S. because of the lack of rapid testing for COVID-19. Critical thinkers understand that data vary in reliability, validity, and measurement error. Early in the pandemic, some people believed that COVID-19 was milder than the flu. These people accepted early estimates at face value, without understanding the limitations of the data. What counts as valid data is one aspect of critical thinking that is more discipline specific. Critical thinkers may not be able to evaluate the quality of evidence outside their area of expertise, but they can at least understand that data vary in quality and that quality matters greatly for making decisions.

Non-critical thinkers consider data in a biased manner. They may search only for information that supports their beliefs and ignore or discount contradictory data (Schulz-Hardt et al., 2000). Critical thinkers consider all the available data and are aware if there are data they need but do not have. During the pandemic, there were leaders who dismissed the severity of COVID-19 and waited too long to order a quarantine, and there were leaders who wanted to remove the quarantine restrictions despite the data.  

Evaluation of the Quality of the Reasoning, Decision Making, or Problem Solving

Critical thinking includes evaluating how well the evidence is used to create a solution or make a decision (e.g., Schwartz et al., 2005). There are general metacognitive questions that people can use to evaluate the quality of any argument. Have all perspectives been considered? Have all alternative explanations been explored? How might a course of action go wrong? Like judgments of evidence, judgments of the strength of an argument are fraught with biases (e.g., Gilovich, 2008; Kraft et al., 2015; Lewandowsky et al., 2012). People more readily accept arguments that agree with their views and are more skeptical of arguments they disagree with, instead of considering the strength of the argument. The pandemic has already spawned dubious studies marred by selection bias, the lack of a control group, or the lack of careful controls, but the “findings” of these studies are embraced by people who want them to be true. Furthermore, people persist in beliefs in the face of clear contradictory evidence (Guenther & Alicke, 2008).

Students should learn about the pitfalls of bias and motivated cognition regardless of their major. Critical thinking involves intellectual humility, an openness to alternative views, and a willingness to change beliefs in light of sufficient evidence (Porter & Schumann, 2018).

An Ability to Inhibit Poor and Premature Decision Making

The last component of a critical thinker is resistance to drawing premature conclusions. Critical thinkers know the limitations of their evidence and keep their reasoning and decision making within its bounds (Noone et al., 2016). They resist tempting but premature conclusions. The inhibitory aspect of critical thinking is probably the least well understood of all the components and deserves more research attention.

Metacognition Supports Critical Thinking

Metacognition, the ability to reflect on one’s own knowledge, plays a crucial role in critical thinking. We see it in the awareness of one’s own knowledge (Component 2), awareness of the quality of evidence and possible biases (Component 3) and the evaluation of the strength of an argument (Component 4). If we wish to teach critical thinking, we need to emphasize these metacognitive skills, both as part of a student’s training in a major and as part of general education. The other two components of critical thinking, the predisposition to engage in critical thinking and the inhibition of premature conclusions, are habits that can be trained.

Critical thinking is hard to do. It takes conscious mental effort and requires overcoming powerful human biases. No one is immune to bad decisions. I assert that critical thinking is a general, teachable skill, especially in situations where decisions have to be made in unprecedented conditions. The pandemic shows that critical decisions often have to be made before sufficient evidence is available. Critical thinking leads to better outcomes by making the best use of available evidence and minimizing error and vulnerability to bias. In these situations, critical thinking is a vital skill, and metacognition plays a major role.

References

Anglin, S. M. (2019). Do beliefs yield to evidence? Examining belief perseverance vs. change in response to congruent empirical findings. Journal of Experimental Social Psychology, 82, 176–199. https://doi.org/10.1016/j.jesp.2019.02.004

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444. https://doi.org/10.1146/annurev-psych-113011-143823

Chew, S. L. (2016, February). The Importance of Teaching Effective Self-Assessment. Improve with Metacognition Blog. Retrieved from https://www.improvewithmetacognition.com/the-importance-of-teaching-effective-self-assessment/

Gilovich T. (2008). How We Know What Isn’t So: Fallibility of Human Reason in Everyday Life (Reprint edition). Free Press.

Guenther, C. L., & Alicke, M. D. (2008). Self-enhancement and belief perseverance. Journal of Experimental Social Psychology, 44(3), 706–712. https://doi.org/10.1016/j.jesp.2007.04.010

Halpern, D. F. (1998). Teaching critical thinking for transfer across domains: Disposition, skills, structure training, and metacognitive monitoring. American Psychologist, 53(4), 449–455.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Kraft, P. W., Lodge, M., & Taber, C. S. (2015). Why people ‘don’t trust the evidence’: Motivated reasoning and scientific beliefs. Annals of the American Academy of Political and Social Science, 658(1), 121–133. https://doi.org/10.1177/0002716214554758

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018

McIntosh, R. D., Fowler, E. A., Lyu, T., & Della Sala, S. (2019). Wise up: Clarifying the role of metacognition in the Dunning-Kruger effect. Journal of Experimental Psychology: General, 148(11), 1882–1897. https://doi.org/10.1037/xge0000579

Metcalfe, J. (1998). Cognitive optimism: Self-deception or memory-based processing heuristics? Personality and Social Psychology Review, 2(2), 100–110. https://doi.org/10.1207/s15327957pspr0202_3

Noone, C., Bunting, B., & Hogan, M. J. (2016). Does mindfulness enhance critical thinking? Evidence for the mediating effects of executive functioning in the relationship between mindfulness and critical thinking. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.02043

Porter, T., & Schumann, K. (2018). Intellectual humility and openness to the opposing view. Self and Identity, 17(2), 139–162. https://doi.org/10.1080/15298868.2017.1361861

Schulz-Hardt, S., Frey, D., Lüthgens, C., & Moscovici, S. (2000). Biased information search in group decision making. Journal of Personality and Social Psychology, 78(4), 655–669. https://doi.org/10.1037/0022-3514.78.4.655

Schwartz, D., Bransford, J., & Sears, D. (2005). Efficiency and innovation in transfer. In J. Mestre (Ed.), Transfer of learning from a modern multidisciplinary perspective (pp. 1-51). Greenwich, CT: Information Age Publishing.

West, R. F., Toplak, M. E., & Stanovich, K. E. (2008). Heuristics and biases as measures of critical thinking: Associations with cognitive ability and thinking dispositions. Journal of Educational Psychology, 100(4), 930–941. https://doi.org/10.1037/a0012842

Willingham, D. T. (2019). How to teach critical thinking. Education: Future Frontiers, New South Wales Department of Education.


What I Learned About Metacognition from Cooking Farro

by Stephen Chew, Ph.D., Samford University,  slchew@samford.edu

I like to take my wife out for dinner, but sometimes she insists on going to a place that doesn’t feature a drive-through lane. That’s fine with me because it gives us a chance to see what is trendy in the food world. A few years ago, my wife ordered a salad made with quinoa. We’d vaguely heard about quinoa before, but had never tried it. We really liked it for its nutty taste. If you don’t know, quinoa (typically pronounced KEEN-wah in English and kee-NO-ah in Spanish) is a grain that was first cultivated in the Andes several thousand years ago and has become quite popular for its nutritional value. After we tried it, I decided to buy some and cook it myself. I found it in the store, and next to it was another grain I had only vaguely heard of, farro. Farro (pronounced either FAY-roh or FAR-oh) is also an ancient grain, but it originated in the Mediterranean region around 10,000 years ago. I figured if I was going to try one ancient grain I might as well try another, so I bought them both. Little did I know that cooking them would be an adventure in good and bad metacognition.

First I cooked the quinoa, and that turned out fine. Next, I tried the farro, and that is where I ran into problems. I followed the directions on the package, but then I realized I had no idea how to tell if the farro was properly cooked. Unlike quinoa, I’d never eaten it before and I had no concept of what the desired end result was supposed to look or taste like. Was it supposed to have a mushy, al dente, or crunchy texture? I had no idea. Looking at photos and videos of cooked farro didn’t help much. There was nothing in the instructions about how to tell if it was done. For quinoa, I had already eaten some that was, presumably, expertly prepared. Furthermore, the cooking instructions had the helpful note that cooked quinoa becomes translucent and the germ bursts out in a spiral pattern. I had been able to check for that when I cooked it. No such luck with farro. As a result, my wife and I had to decide if we liked farro based on whatever version of it that I had cooked.

Now how does this story relate to metacognition? For effective metacognition, students must accurately judge how close or far their understanding is from the desired end goal. How can they do that if they have no concept (or an inaccurate concept) of the desired end goal? Consider self-regulated learning, which incorporates metacognition (Zimmerman, 1990). Pintrich (2004) makes explicit the necessity of students understanding the desired outcome for successful learning when he states that all models of self-regulated learning “assume that there is some type of goal, criterion, or standard against which comparisons are made in order to assess whether the learning process should continue as is or if some type of change is necessary” (p. 387). I’ve certainly made the mistake of believing students understood what the desired outcome of an assignment or activity was only to find out later (usually on the exam or final paper) that they did not understand the goal at all. I know what I mean when I tell them to use critical thinking or employ sound research methods or develop sound arguments, but I can’t assume that they know it unless I teach them what I mean and how to recognize when they have achieved it.

Failure to teach the desired level of understanding to students is a consequence of the curse of expertise. Because of our expertise, we tend to overestimate our ability to explain concepts thoroughly (Fisher & Keil, 2016) and underestimate how difficult the concepts are for students to learn (Hinds, 1999). Fortunately, demonstrating to students what the desired understanding or end goal is for a concept is something we can accomplish through formative assessments such as think-pair-shares, “clicker” questions, and worked examples. We can assess their understanding of a concept using a low-stakes activity before the high-stakes assessment and demonstrate both the end result we are looking for and the strategies we use to achieve it. Not only are such formative assessments useful for students to monitor their understanding, they are also useful for helping us calibrate our teaching according to their understanding.

Recently I read the autobiography of Eric Ripert, a renowned chef. He makes the same point about the importance of understanding the desired outcome in recounting his development as a master chef.

Through repetition and determination to be great (or at least better than good), I began to understand the sauces I was preparing. I started to allow myself to feel my way through them, not just assemble them by rote. I knew when a sauce I had made was delicious—perfectly balanced and deeply flavored. (Ripert & Chambers, 2016, p. 215)

We must make sure students know what the desired goal is and how to recognize when they have achieved it in order to enable effective metacognition. It is a lesson I learned from cooking farro that I now apply to my teaching.

References

Fisher, M., & Keil, F. C. (2016). The curse of expertise: When more knowledge leads to miscalibrated explanatory insight. Cognitive Science, 40, 1251-1269. doi: 10.1111/cogs.12280

Hinds, P. J. (1999). The curse of expertise: The effects of expertise and debiasing methods on predictions of novice performance. Journal of Experimental Psychology: Applied, 5, 205-221. doi: 10.1037/1076-898X.5.2.205

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16, 385-407. doi: 10.1007/s10648-004-0006-x

Ripert, E., & Chambers, V. (2016). 32 yolks: From my mother’s table to working the line. New York: Random House.

Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25, 3-17. doi: 10.1207/s15326985ep2501_2


The Importance of Teaching Effective Self-Assessment

by Stephen Chew, Ph.D., Samford University,  slchew@samford.edu

Say we have two students who are in the same classes. For sentimental reasons, I’ll call them Goofus and Gallant[i]. Consider how they each react in the following scenarios.

In General Psychology, their teacher always gives a “clicker question” after each section. The students click in their response and the results are projected for the class to see. The teacher then explains the correct answer. Gallant uses the opportunity to check his understanding of the concept and notes the kind of question the teacher likes to use for quizzes. Goofus thinks clicker questions are a waste of time because they don’t count for anything.

In their math class, the teacher always posts a practice exam about a week before every exam. A day or two before the exam, the teacher posts just the answers, without showing how the problems were solved. Gallant checks his answers and if he gets them wrong, he finds out how to solve those problems from the book, the teacher, or classmates. Goofus checks the answers without first trying to work the problem. He tries to figure out how to work backwards from the answer. He considers that good studying. He memorizes the exact problems on the practice exam and is upset if the problems on the exam don’t match them.

In history class, the teacher returns an essay exam along with the grading rubric. Both boys were marked off for answers the teacher did not find sufficiently detailed and comprehensive. Gallant compares his answer to answers from classmates who scored well on the exam to figure out what he did wrong and how to do better next time. Goofus looks at the exam and decides the teacher gives higher scores to students who write more and use bigger words. For the next exam, he doesn’t change how he studies, but he gives long, repetitive answers and uses fancy words even though he isn’t exactly sure what they mean.

In each case, the teacher offers opportunities for improving metacognitive awareness, but the reactions of the two boys are markedly different. Gallant recognizes the opportunity and takes advantage of it, while Goofus fails to see the usefulness of these opportunities and, when given feedback about his performance, fails to take advantage of it. Just because teachers offer opportunities for improving metacognition does not mean that students recognize the importance of the activities or know how to take advantage of them. What is missing is an understanding of self-assessment, which is fundamental to developing effective metacognition.

For educational purposes, self-assessment occurs when students engage in an activity in order to gain insight into their level of understanding. The activity can be initiated either by the student or the teacher. Furthermore, to qualify as self-assessment, the student must understand and utilize the feedback from the activity. In summary, self-assessment involves students learning the importance and utility of self-assessments, teachers or students creating opportunities for self-assessment, and students learning how to use the results to improve their learning (Kostons, van Gog, & Paas, 2012).

Self-assessment is similar to formative assessment, which refers to any low-stakes activity designed to reveal student learning, but there are key differences (Angelo & Cross, 1993). First, students may undergo a formative assessment without understanding that it is an important learning opportunity. In self-assessment, the student understands and values the activity as an aid to learning. Second, students may not appreciate or use feedback from the formative assessment to improve their learning (Karpicke, Butler, & Roediger, 2009). Successful self-assessment involves using the feedback to identify misconceptions and knowledge gaps, and to hone learning strategies (Kostons et al., 2012). Third, even high-stakes, summative assessments can be used for self-assessment. For example, students can use the results of an exam to evaluate how successful their learning strategies were and make modifications in preparation for the next exam. Fourth, formative assessments are usually administered by the teacher. Self-assessment can be initiated by either teachers or students. For example, students may take advantage of chapter review quizzes to test their understanding. If students do not understand the importance of self-assessment and how to do it effectively, they will not take advantage of formative assessment opportunities, and they will fail to use feedback to improve their learning.

The importance of learning effective self-assessment is grounded in a sound empirical and theoretical foundation. Teaching students to conduct self-assessment will help them to become aware of and correct faulty metacognition, which in turn should contribute to more successful self-regulated learning (see Pintrich, 2004). Self-assessment also involves student recall and application of information, facilitating learning through the testing effect (see Roediger & Karpicke, 2006, for a review). The proper use of feedback has also been shown to improve student learning (Hattie & Yates, 2014). Finally, self-assessment activities can also provide feedback to teachers on the student level of understanding so that they can adjust their pedagogy accordingly.

Teachers play a critical role both in designing rich activities for self-assessment and in teaching students how to recognize valuable opportunities for self-assessment and take advantage of them. Some activities are more conducive to self-assessment than others. In the psychology class example above, Goofus doesn’t understand the purpose of the clicker question nor the importance of the feedback. The teacher could have used a richer activity with the clicker questions to promote self-assessment (e.g., Crouch & Mazur, 2001). In the math class scenario, the teacher gives a practice exam, but only gives the correct answers for feedback. Richer feedback would model the reasoning needed to solve the problems (Hattie & Yates, 2014) and support self-assessment. And even when feedback is given, students need to learn how to use the feedback effectively and avoid misconceptions, such as in the history class example where Goofus wrongly concludes the teacher wants longer answers with fancy words.

I believe effective self-assessment is a critical link between assessment activities and improved metacognition. It is a link that we teachers often fail to acknowledge. I suspect that effective teachers teach students how to carry out self-assessment on their understanding of course content. Less effective teachers may provide self-assessment opportunities, but these are either not effectively designed, or students may not recognize the importance of these opportunities or know how to take advantage of them.

There is not a lot of research on how to teach effective self-assessment. The existing research tends to focus mainly on providing self-assessment opportunities and not on how to get students to make use of them. I believe research on self-assessment would be highly valuable for teachers. Some of the key research questions are:

  • How can students be convinced of the importance of self-assessment?
  • Can self-assessment improve metacognition and self-regulation?
  • Can self-assessment improve student study strategies?
  • Can self-assessment improve long-term learning?
  • What are the best ways to design and implement self-assessments?
  • When and how often should opportunities for self-assessment be given?
  • What kind of feedback is most effective for different learning goals?
  • How can students be taught to use the feedback from self-assessments effectively?

Two fundamental learning challenges for college students, especially first-year students, are poor metacognitive awareness and poor study strategies (Kornell & Bjork, 2007; McCabe, 2011). The two problems are connected because using a poor study strategy increases false confidence without increasing learning (Bjork, Dunlosky, & Kornell, 2013). Improving both metacognitive awareness and study strategies of students is difficult to do (Susser & McCabe, 2013). I believe a promising but little studied intervention is to teach students the importance and the means of conducting effective self-assessment.

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. Jossey-Bass.

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444.

Crouch, C. H., & Mazur, E. (2001). Peer Instruction: Ten years of experience and results. American Journal of Physics, 69, 970-977.

Hattie, J. A. C., & Yates, G. C. R. (2014). Using feedback to promote learning. In V. Benassi, C. E. Overson, & C. M. Hakala (Eds.). Applying the science of learning in education: Infusing psychological science into the curriculum. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/asle2014/index.php.

Karpicke, J. D., Butler, A. C., & Roediger, H. L. III. (2009). Metacognitive strategies in student learning: Do students practise retrieval when they study on their own? Memory, 17, 471-479.

Kornell, N., & Bjork, R. A. (2007). The promise and perils of self-regulated study. Psychonomic Bulletin & Review, 14, 219-224.

Kostons, D., van Gog, T., Paas, F. (2012). Training self-assessment and task-selection skills: A cognitive approach to improving self-regulated learning. Learning and Instruction, 22, 121-132.

McCabe, J. (2011). Metacognitive awareness of learning strategies in undergraduates. Memory & Cognition, 39, 462-476.

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16, 385-407.

Roediger, H. L., III., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181-210.

Susser, J. A., & McCabe, J. (2013). From the lab to the dorm room: Metacognitive awareness and use of spaced study. Instructional Science, 41, 345-363.

[i] Goofus and Gallant are trademarked names by Highlights for Children, Inc. No trademark infringement is intended. I use the names under educational fair use. As far as I know, Goofus and Gallant have never demonstrated good and poor metacognition.


Metacognition and Scaffolding Student Learning

by Dr. Stephen Chew, Samford University, slchew@samford.edu

Scaffolding learning involves providing instructional support for students so that they can develop a greater understanding of a topic than they could on their own. The concept of scaffolding originated with the work of Vygotsky and was later developed by Bruner. Scaffolding is not simply giving students the answers, but helping students understand the chain of reasoning or evidence that leads to an answer. I argue that metacognition plays a crucial role in effective scaffolding. Without metacognitive awareness, attempts at scaffolding may only create overconfidence in students without any learning. Let’s examine a common scaffolding activity, review sessions for exams.

Early in my career I used to give review sessions until I realized that they weren’t being helpful to the students who needed them most. I gave students old exams to try to answer for their review. Since I change textbooks regularly, there were questions on the old exams on topics that weren’t covered in the current class. I thought the discrepancy would be obvious when students got to those questions, but only the very best students noticed. Most students answered the questions, basically by guessing, completely unaware that we had never covered the topic. In addition, many students would simply read the question and then check the answer to see if they had guessed correctly without trying to reason through the question or using it as an indicator of their degree of understanding. I realized that students hadn’t studied the material before the review session. They were using the session as a substitute for actually studying. Just going through the review session increased their (false) confidence that they had studied without increasing their learning. It was my first encounter with poor metacognition. The issue with a lot of the struggling students wasn’t the content, but their metacognition and study skills, which my review sessions weren’t addressing. So I stopped doing them.

In recent years, though, I’ve thought about bringing them back with changes to address poor metacognition. First, we know that students who most need review sessions are least likely to think they need them, so I would somehow require participation. This is one reason why I believe that brief formative assessments in class, where everyone has to participate, are better than separate, voluntary review sessions. If I were to reinstate a separate review session, I might make participation worth a small portion of the exam grade. Second, I would somehow require that students had done their best to study for the exam BEFORE coming to the review session so it is truly a review. Third, the review session would have to induce students to use good study strategies, such as self-testing with feedback and reflection, or interleaving. I might require students to generate and post three good questions they want to know about the material as their entry ticket to the review session. This would require students to review material before the session, and question generation is itself an effective learning strategy. Finally, I would require students to utilize the feedback from the review to recognize the level of their understanding and what they need to do to improve. I might have them predict their exam grade based on their review performance. All of these changes should increase student metacognition. I’m sure I’d have to experiment with the format to figure it out, and my solution may not work for other classes or faculty. It’s never a simple matter of whether an activity such as a review session is a good or bad idea; it’s how it is implemented.

Without metacognitive awareness, scaffolding can backfire. Consider how poor metacognition can undermine other scaffolding activities such as releasing PowerPoint slides of lectures, guided note taking, allowing a formula “cheat sheet” in STEM classes, and allowing students to discard a certain number of exam items they think they got wrong. If students lack metacognition, each of these activities can actually be counterproductive for student learning.