Knowledge Surveys Part 1 — Benefits of Knowledge Surveys to Student Learning and Development

by Karl Wirth, Macalester College,
Ed Nuhfer, California State Universities (retired)
Christopher Cogan, Memorial University
McKensie Kay Phillips, University of Wyoming

Introduction

Knowledge surveys (KSs) present challenges like exam questions or assignments, but respondents do not actually answer them. Instead, they rate their felt ability to address each challenge with their present knowledge. Knowledge surveys focus on self-assessment, which is a special kind of metacognition.

Overall, metacognition is a self-imposed internal dialogue that distinguishes "expert learners" regardless of discipline (e.g., Ertmer & Newby, 1996). Because not all students begin college equally aware of and able to think about their learning, instructors must direct students to keep them in constant contact with their metacognition. Paul Pintrich, a pioneer in metacognition, stressed that "instruction about metacognition must be explicit." Knowledge surveys enable what Ertmer & Newby and Pintrich advocate in any class, in any subject.

[Image: road sign pointing from "data" to "information" to "knowledge," with "learning" above. Image by Gerd Altmann from Pixabay]

Knowledge surveys began in 1992 during a conversation about annual reviews between the guest editor and a faculty member who stated: "They never ask about what I teach." Upon hearing this, the guest editor created a 200-item form to survey student ratings of their mastery of detailed content in his geology course at the start and end of the class. The items were simply an array of test and quiz questions, ordered in the sequence students would encounter them during the course. Students responded to each item on a 3-point scale at the start and end of the course.

The information from this first knowledge survey proved so valuable that the guest editor described it in 1996 in a geology journal as a formative assessment. As a result, geoscience faculty elsewhere took the lead in researching knowledge surveys and describing more benefits.

In 2003, U.S. Air Force Academy physics professor Delores Knipp and the guest editor published the first peer-reviewed paper on knowledge surveys for multiple disciplines (Nuhfer and Knipp, 2003). If you are new to knowledge surveys, follow the link to that paper now and read at least the first page to gain a conceptual understanding of the instrument.

Self-assessment, Metacognition, and Knowledge Surveys

Becoming educated is a process of understanding self and the phenomena that one experiences. Knowledge surveys structure practices in understanding both. 

Our series’ earlier entries revealed the measurable influence of self-assessment on dispositions such as self-efficacy, mindset, and intellectual and ethical development that prove indispensable to the lifelong process of becoming educated. The entries on bias and privilege revealed that the privilege of having the kind of education that renders the unconscious conscious may determine the collective quality of a society and how well we treat one another within it.

Knowledge surveys prompt self-assessment reflections while students learn every aspect of the content. Over a baccalaureate education, cumulative, repetitive practice can significantly improve understanding of one's present knowledge and one's self-assessment accuracy.

Improving Learning

Knowledge surveys’ original purpose was to improve student learning (e.g., Nuhfer & Knipp, 2003; Wirth et al., 2016, 2021). Providing students with a knowledge survey at the beginning of a course or unit of instruction offered an interactive roadmap for an entire course that overtly disclosed the instructor’s intentions for learning to students.

Early on, users recognized that knowledge surveys might offer a measure of the changes in learning produced by a unit of instruction. Demonstrating the validity of such self-assessed competence measures was crucial; that demonstration was finally achieved in 2016 and 2017.

Deeper Reading

Students quickly learned the value of prioritizing knowledge by engaging with the knowledge survey before and during their reading. The structure of the KSs enabled reading with the purpose of illuminating known learning objectives. It also primed students to understand concepts by using the reading to clarify the connections between knowledge survey items.

Rather than just sitting down to "complete a reading," students began reading assignments with appropriate goals and strategies, a characteristic of "expert readers" (Paris et al., 1996). When they encountered difficult concepts, they expended increasing effort to understand the topics identified as essential to each concept. Further, knowledge surveys facilitated mentoring: when students did not understand the material, they proved more likely to follow up with a colleague or instructor to complete their understanding.

Facilitating the Acquisition of Self-Regulation

Well-constructed knowledge surveys are detailed products of instructor planning and thinking. They communicate instructor priorities and coordinate the entire class to focus on specific material in unison. The near disappearance from classroom conversations of student comments that they "didn't know that would be on the exam" cannot be appreciated enough.

Replacing scattered class-wide guessing of what to study allowed a collective focus on “How will we learn this material?” That reframing led to adopting learning strategies that expert learners employ when they have achieved self-regulation. Students increasingly consulted with each other or the instructor when they sensed or realized their current response to a knowledge survey item was probably inadequate. 

Levels and Degrees of Understanding

In preparing a knowledge survey for a course, the instructor carefully writes each survey item and learning objective so that learning addresses the desired mastery at the intended Bloom level (Krathwohl, 2002). Making students aware of Bloom levels, and reinforcing that awareness throughout a course, clarifies the deep understanding required to teach the content at the required Bloom level to another person. Whereas it may be sufficient to remember or comprehend some content, demonstrating higher cognitive processes by explaining to another how to apply, synthesize, or evaluate a course's central concepts feels different because it is different.

Knowledge surveys can address all Bloom levels and, like no other instrument, provide the practice needed to pair knowing with the "feeling of knowing." Including the higher Bloom levels, combined with explicitly setting the advanced degree of understanding at the level of "teaching" or "explaining" to others, builds self-assessment skills and fosters well-justified self-confidence. A student with such awareness can better focus effort on the knowledge in which they recognize their weakness.

Building Skills with Feedback

The blog entries by Fleisher et al. in this series stressed the value of feedback in developing healthy self-assessments. Knowledge survey items that address the same learning outcomes as quizzes, exams, assignments, and projects promote instructional alignment. Such alignment allows explicit feedback from the demonstrated competence measures to calibrate the accuracy of self-assessments of understanding. Over time, knowledge surveys confer awareness that appropriate feedback builds both content mastery and better self-assessment skills.

A robust implementation directs students to complete the relevant portions of a knowledge survey after studying for an exam but before taking it. After the teacher grades the exams, students receive their self-assessed (knowledge survey score) and demonstrated (graded exam score) competence in a single package. From this information, the instructor can direct students to compare the two scores and to seek mentoring from the instructor when there is a large discrepancy (>10 points) between them.

Generally, a significant discrepancy from a single knowledge survey-exam pair comparison is not as meaningful as longer-term trends illuminated by cumulative data. Instructors who use KSs skillfully mentor students to become familiar with their trends and tendencies. When student knowledge survey responses consistently over- or under-estimate their mastery of the content, the paired data reveal this tendency to the student and instructor and open the opportunity for conversations about the student’s habitually favored learning strategies.
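This pairing of self-assessed and demonstrated scores can be sketched in a few lines. The function below is an illustration, not the authors' implementation: it assumes both scores are on a 0-100 scale, flags any pair whose gap exceeds the >10-point threshold mentioned above, and averages the gaps across exams so the longer-term tendency (habitual over- or under-estimation) is visible alongside any single pair.

```python
# Sketch (illustrative, not the authors' method): pairing self-assessed
# (knowledge survey) scores with demonstrated (exam) scores to flag
# mis-calibration and reveal a longer-term tendency.

def calibration(pairs, threshold=10):
    """pairs: list of (ks_score, exam_score) tuples, one per exam, 0-100 scale."""
    report = []
    for ks, exam in pairs:
        gap = ks - exam  # positive = overestimation, negative = underestimation
        report.append({"ks": ks, "exam": exam, "gap": gap,
                       "mentoring": abs(gap) > threshold})
    # A trend across exams is more meaningful than any single pair.
    mean_gap = sum(r["gap"] for r in report) / len(report)
    return report, mean_gap

# Hypothetical record for one student across four exams:
report, tendency = calibration([(85, 70), (80, 72), (78, 70), (75, 68)])
print(tendency)  # consistently positive: a habit of overestimating mastery
```

Here only the first pair crosses the mentoring threshold, yet the consistently positive mean gap exposes the habitual overestimation that a single comparison would miss.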

A variant implementation adds an easy opportunity for self-assessment feedback. Here, instructors have students estimate their score on an assignment or exam when they first engage with it and again after completing it, prior to submission. These paired pre-post self-assessments help students focus on their feelings of knowing and further adjust toward greater self-assessment accuracy.

Takeaways

Knowledge surveys are unique in their utility for supporting student mastery of disciplinary knowledge, developing accurate feelings of knowing, and improving students' skills as expert learners. Extensive data show that instructors' skillful construction of knowledge surveys as part of class design elicits deeper thinking and produces higher quality classes. After construction, class use facilitates mutual monitoring of progress and success by students and instructors. In addition to supporting student learning of disciplinary content, knowledge surveys keep students in constant contact with their metacognition and develop their capacity for lifelong learning.

In Part 2, we draw on our more recent investigations of (1) more robust knowledge survey design, (2) the temporal qualities of becoming educated, (3) student authoring of knowledge surveys, and (4) mentoring students with large mis-calibrations in self-assessed competence toward greater self-assessment accuracy.


Improving Metacognition By Understanding Cognitive Bias

by Dana Melone, Cedar Rapids Kennedy High School

This school year, I had the unique opportunity to continue to teach the Psychological Sciences as well as a course on Brain-Based Study Techniques. As part of the psychology curriculum, I have always had a unit that taught students the various cognitive biases and how those biases impact their lives. In talking with my Study Technique students this year and reading their in-class reflections on our lessons and on how they apply them to their classes, I noticed a common trend: they were making these cognitive errors in their own thinking, and it was hurting their studying and learning.

I took this bit of anecdotal evidence and had students examine their own biases after their quizzes, exams, and course interactions. My hope was that this process would help them develop awareness of the biases in their thinking and, in turn, help guide their future thinking and behaviors. It is not enough just to be metacognitive; students must also be aware of when they might be relying on a biased interpretation of their studying and learning. The hope is that students will learn to recognize when bias has influenced their thinking and make adjustments as needed.

[Image: eyeglasses through which text looks clear while surrounding text is blurred]

Targeted Cognitive Biases

There are over 50 cognitive biases that psychologists consider when they examine thinking, but I chose three for my students to examine:

  1. Cognitive Dissonance: Cognitive dissonance occurs when we hold a belief and then do something that goes against it. That produces internal cognitive discomfort, so we develop an excuse to make ourselves feel better. Excuses can be truthful, but they can also be untruths we tell ourselves to get over the discomfort. The most common way I saw this occurring with my students was after they did poorly on an exam. The students knew they should get help when needed and study in the right way. When they did not, they often claimed that the teacher disliked them, that they did not have time to use better study techniques, or that the test was so hard no one could have passed it.
  2. Self-Serving Bias: This occurs when we attribute the good things that happen to us to our own actions and the negative things to external causes. I saw this with my students in our discussions as well. When students succeeded on a quiz or test, they almost always attributed it to their study method, their understanding of the material, or the ease of the test. When they did poorly, they talked about the test covering things they did not study, the teacher purposely creating hard exams, and so on.
  3. Overconfidence: This bias occurs when we think we know more than we really do; we overestimate our ability at something. It can occur in multiple ways, but my focus was post-exam. Students finish a quiz or test convinced they are about to get a great score; instead, the score comes back much lower than they expected. I would often see students talk about this in their reflections. They really thought they knew the material and were shocked that they did not score well on the exam.

Overcoming the Biases

Once I realized that students were making these types of cognitive errors, I taught a lesson on the errors, and we went over various examples. I then added an analysis of these biases to their weekly reflections on their classwork, test, and quiz outcomes.

  1. Cognitive Dissonance: Think about cognitive dissonance and the three phases: I have a belief, I do something that goes against that belief, I develop an excuse to relieve the discomfort. This class is all about using correct learning, studying, and communicating techniques to improve our learning outcomes. This knowledge about effective learning represents our beliefs. In reflecting on your week, did you engage in behaviors that did not align with those beliefs and then fall victim to this bias? If so, explain. Then describe how you will help yourself overcome this in the future. If you feel you did not fall victim to this bias, provide commentary on how you overcame it with an example from your week.
  2. Self-Serving Bias: Examine your reflection of the week. Choose a positive element from your reflection and explain how others helped to contribute to your success. Choose a negative element from your reflection and explain what you personally could have done differently to help change the outcome.
  3. Overconfidence: Did you have any tests or quizzes this week that produced a lower score than you were expecting? If so, what could you have done differently in your preparation that may have helped you avoid overconfidence?

The goal of adding these questions was to help students think about ways their own cognitive errors may be contributing to their studying, learning, and assessment scores. Metacognition is best when we also incorporate awareness of possible bias and errors in our cognitions. My hope is that students will think about these biases as they move through high school and life, and that in turn they will use that thinking to become better learners in all their courses.


Metacognition and the Fish in the Water

by Steven J. Pearlman, Ph.D. The Critical Thinking Initiative

As the saying goes, you cannot ask a fish about water. Having had no other environmental experience as a counter reference, the fish cannot understand what water is because the fish has never experienced what water isn’t.

Cognition—broadly meaning that the mind is working—is to homo sapiens as water is to fish. So steeped are we in the water of our own cognitive processing that we cannot recognize it. Even though we all possess an extensive list of examples of other people failing to think well, we nevertheless lack an internal reference point for being devoid of thought; every consideration we might make of what it would be like to not-think can only happen through the process of thinking about it.

[Image: cartoon drawing of a goldfish on a blue background]

This is the difference between metacognition, which is being intentionally self-aware of what we are doing when we are thinking, and critical thinking, which, loosely speaking, is the capacity to reason through problems and generate ideas. In one sense, we seem to do just fine thinking critically without that metacognitive awareness. We solve problems. We invent the future. We cure diseases. We build communities. But in another all-too-real sense, we struggle, for if we lack the metacognitive acumen to understand what critical thinking is, then we equally lack the capacity to improve our capability to do it and to monitor and evaluate our progress.

The Problem and the Need

Case in point, even though research shows that critical thinking is typically listed among necessary outcomes at educational institutions, “it is not supported and taught systematically in daily instructions” because “teachers are not educated in critical thinking” (Astleitner, 2002). Worse than that, one study of some 30 educators found that not a single one could provide “a clear idea of critical thinking” (Choy & Cheah, 2009). Thus, even though “one would be hard pressed to construct a serious counterargument to the claim that we would like to see students become careful, rigorous thinkers as an outcome of the education we provide them. … By most accounts, we remain far from achieving it” (Kuhn, 1999).

But we do need to achieve that rigorous thinking, because nothing is arguably more important than improving our overall capacity to think. To do so, we must seek to understand the relationship between critical thinking and metacognition, for though interrelated, they’re not the same. In fact, we can think critically without being metacognitive, but we cannot be metacognitive without thinking critically. And that might make metacognition the seminal force of true critical thinking development.

Some Classroom Examples

Consider, for example, asking a student, “What is your thinking about the assigned readings on the Black Lives Matter movement?” Were the student to respond with anything substantive, then we could loosely say the student exercised at least some critical thinking, such as some analysis of the sources and some evaluation of their usefulness. For example, were the student to state that “by referencing statistics on black arrests, source A made a more compelling argument than source B,” then we could rightly say that the student generated some critical thinking. But we cannot say that the student engaged in any metacognitive effort.

But what if the student responded, “Because of its use of statistics on black arrests, Source A changed my thinking about the Black Lives Matter movement because I was previously unaware of the disparities between white arrest rates and black arrest rates”? Is that metacognitive? Not truly. Even though the student was aware of a change in their own thoughts, they expressed no self-awareness of the internal thinking process that catalyzed that change. There is not necessarily a meaningful distinction between what that student did and someone who says that they had not liked mashed potatoes until they tried these mashed potatoes. They recognized a shift in thought, but not necessarily the underlying mechanism of that shift.

However, if the student responded as follows, we would begin to see metacognition on top of critical thinking: “I realized upon reading Source A that I held a tacit bias about the issue, one that was framed from my own experience being white. I had been working under the assumption that race didn’t matter, and it wasn’t until the article presented the statistics that my thinking was impacted enough for me to become aware of my biases and change my position.” In that sentence, we see the student metacognitively recognizing an aspect of their own thinking process, namely their personal biases and the relationship between those biases and new information. As Madhavi (2014) said, “Metacognitive thoughts do not spring from a person’s immediate external reality; rather, their source is tied to the person’s own internal mental representations of that reality, which can include what one knows about that internal representation, how it works, and how one feels about it.” And that’s what this example demonstrates: the student’s self-awareness of “internal mental representations of … reality.”

The Value of Metacognition to Critical Thinking

When metacognition is present, all thinking acts are critical because they are by nature under reflection and scrutiny. While one could interpret a love poem without being metacognitive, one could not be metacognitive about why they interpret a poem a certain way—such as in considering one’s biases about “love” from their personal history—without thinking critically. Since metacognition can only happen when we are monitoring our thinking about something, the metacognition inherently makes the thinking act critical.

Yet, even though metacognition infuses some measure of criticality to thinking, metacognition nevertheless isn’t synonymous with critical thinking. Metacognition alone does not successfully critique existing ideas, analyze the world, develop meaningful questions, produce new solutions, etc. So, we can think without being metacognitive, but if we want to improve our thinking—if we want to understand and enhance the machinations of our mind—then we must seek and attain the metacognitive skills that reveal what our mind is doing and why it is doing it.

Accomplishing that goal requires an introspective humility. It means embracing the premise that our own thinking process is at best always warped, if not often mortally wounded, by our biases, predispositions, and measures of ignorance. It means that we often cannot efficiently solve problems unless we first solve for ourselves, and that’s not easy to do for a bunch of fish who are steeped in the waters cognitive.

References

Astleitner, H. (2002). Teaching Critical Thinking Online. Journal of Instructional Psychology, 29(2), 53-76.

Choy, S.C. & Cheah, P.K. (2009). Teacher perceptions of critical thinking among students and its influence on higher education. International Journal of Teaching and Learning in Higher Education, 20(2), 198-206.

Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28(2), 16-25.

Madhavi, M. (2014). An Overview: Metacognition in Education. International Journal of Multidisciplinary and Current Research, 2.


Pandemic Metacognition: Distance Learning in a Crisis

By Jennifer A. McCabe, Ph.D., Center for Psychology, Goucher College

The college “classroom” certainly looks different these days. Due to campus closures in the wake of the COVID-19 pandemic, we no longer travel to a common space to learn together in physical proximity. Though most of us have transitioned to online instruction, there was insufficient time to prepare for this new model – instead, we are in the midst of “emergency distance learning,” with significant implications for teacher and student metacognition.

[Image: person at a computer under an emergency red light]

New Demands for Self-regulation

Now that certain overt motivators are no longer present, self-regulated learning is more critical than ever (e.g., Sperling et al., 2004; Wolters, 2003). Students are no longer required to hand in work during class, to engage in in-person class discussions about learned material, or to come face-to-face with instructors who know whether students are keeping up with the course. Instead they must figure out how to engage in the work of learning (and to know it is, indeed, still supposed to be work), away from the nearby guidance of instructors, other on-campus support sources, and peers. What are the effects of isolation on student metacognition? We can only find out as the situation evolves, and it will surely prove to be a complex picture. Though some will continue to succeed and even find new sources of motivation and revised strategies during this unusual time, others may experience a decline in metacognitive accuracy in the absence of typically available sources of explicit and implicit feedback on learning.

What metacognitive and motivational challenges face students who began the semester in a traditional in-person classroom, and now log in to a device to “go to class?” When I invited my (now online) students to report their experiences in preparing for our first web-based exam, many reported that the learning strategies themselves do not feel different as implemented at home, but that they are especially struggling with motivation and time management. Though these are common issues for college students even in the best of (face-to-face) circumstances, it seems they may be magnified by the current situation. For example, distractions look very different at home. Even if students already had figured out a system to manage distractions, and to channel their motivation to find focused time to implement effective learning strategies, this campus-based skill set may not translate to their current settings. Students need to recognize barriers to learning in this new context, and should be supported in developing (perhaps new or at least tweaked) strategies for academic success.

Regarding time management, online course deadlines may be timed differently – perhaps more flexibly or perhaps not – on different days of the week (instead of in a class meeting), late at night (or early in the morning), or over the weekend. Students must strategically allocate their time in a manner different from traditional classroom learning. This is compounded by the fact that some courses meet synchronously, some are completely asynchronous, and some are a hybrid. Managing this new schedule requires the metacognitive skill of recognizing how long different types of learning will take, applying the appropriate strategies, and – oh yes – fitting all that in with other non-academic demands that may change day to day. Planning is especially challenging – and anxiety-provoking – with so much unknown about the future.

Stretched Too Thin to Think Well

Looming over the learning, we cannot forget, is the actual threat of the virus, and the myriad ways it is impacting students’ mental and physical health. In my cognition classes, we discuss the implications of cognitive load, or the amount of our limited attentional resources (and therefore working memory capacity) being used for various tasks in a given moment; this current load determines how much is left over for tasks central to learning and performance goals (e.g., Paas et al., 2003). If working memory is consumed with concerns about one’s own health or the health of loved ones, financial concerns, caregiving needs, food availability, or even basic safety, it is no surprise that the ability to focus on coursework would be compromised. Intrusive worries or negative thoughts may be particularly troublesome right now, and again leave fewer resources available for learning new information. Instructors may want to consider evidence-based educational interventions – such as writing about worries to manage anxiety – that have been effective in clearing ‘space’ in mental load for learning tasks (Ramirez & Beilock, 2011).

Most importantly, we all need to understand (and accept) the limitations of our cognitive system, the implications of having limited attentional resources, and how to most effectively manage this shifting load. To better support students in metacognitive awareness, instructors across disciplines can incorporate information about cognitive load management and self-regulated learning strategies as part of their courses.

Teachers should also think carefully about the line between desirable difficulties – those learning conditions that are challenging, slow, and error-prone, but lead to stronger long-term retention – and undesirable difficulties – those challenges that are simply hard but do not result in better learning (e.g., Yan et al., 2017). When faced with a choice to add work or effort, consider whether it is part of the learning that relates to the core learning outcomes for the class. If it does not, given the current uniquely high-load circumstances we find ourselves in, drop it.

Further, be explicit and transparent with students about why assignments were retained or changed (ideally connecting these to those core objectives), and share with them your thought process about course-related design and assessment decisions. Most of all, communicate early and often with students about expectations and assessments to help them with motivation, scheduling, and cognitive load. Acknowledge that this is a highly atypical situation, show compassion, allow flexibility as you can, and let them know we are all learning together.

Imperative Explicitness

Metacognition in the time of COVID-19 must be brought even more intentionally from the implicit “hidden curriculum” of college to the explicit. Factors important to student metacognition, including self-regulated learning, should be named as a skill set central to academic (and life) success. Help students better understand their own learning and memory processes, and how strategies may need to evolve in changing circumstances, which for now means “emergency distance learning.” Perhaps a silver lining is that this investment in metacognitive flexibility will pay off in supporting students’ future endeavors. For teachers, this unexpected transition just might help us improve our student-centered approaches – wherever our classrooms may exist in the future.

Suggested References

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4. https://doi.org/10.1207/S15326985EP3801_1

Ramirez, G., & Beilock, S. L. (2011). Writing about testing worries boosts exam performance in the classroom. Science, 331(6014), 211-213. https://doi.org/ 10.1126/science.1199427

Sperling, R. A., Howard, B. C., Staley, R., & DuBois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation, 10(2), 117–139. doi:10.1076/edre.10.2.117.27905

Wolters, C. A. (2003). Regulation of motivation: Evaluating an underemphasized aspect of self-regulated learning. Educational Psychologist, 38(4), 189–205. doi:10.1207/S15326985EP3804_1

Yan, V. X., Clark, C. M., & Bjork, R. A. (2017). Memory and metamemory considerations in the instruction of human beings revisited: Implications for optimizing online learning. In J. C. Horvath, J. Lodge, & J. A. C. Hattie (Eds.), From the Laboratory to the Classroom: Translating the Learning Sciences for Teachers (pp. 61-78). Routledge.


The First Instinct Fallacy: Metacognition Helps You Decide to Stick With It or Revise Your Answer

By Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

When giving guidance to students on how to take tests in your class, do you tell them always to go with their first answer (go with their gut), always to revise their answers, or that it depends on the question? Because many of you are fans of metacognition, you likely are wise and choose the latter (it depends), and you would be correct. However, most students and many teachers would choose “go with your gut instinct,” otherwise known as the First Instinct Fallacy (Kruger, Wirtz, & Miller, 2005). In this well-known article, Kruger and colleagues found (in four separate experiments) that when students change their answers, they typically change from incorrect to correct; that students underestimate the number of changes from incorrect to correct; and that they overestimate the number of changes from correct to incorrect. Ironically, but not surprisingly, because students like to go with their gut, they also tend to be very hesitant to switch their answers and regret doing so, even when switching yields the correct answer. However, what Kruger and colleagues did not investigate was the role that metacognition may play in the First Instinct Fallacy.

The [First] Instinct Fallacy: The Metacognition of Answering and Revising During College Exams

In two recent studies, Couchman et al. (2016) investigated the mediating effects that metacognition may have on the First Instinct Fallacy. In both studies, students completed a normal multiple-choice exam, indicated their confidence in each answer (whether they knew it or guessed), and indicated whether or not they had changed their initial answer. Consistent with Kruger et al.'s (2005) results, Couchman and colleagues found that students more often change their initial response from incorrect to correct than the reverse. What was interesting is that when students thought they knew the answer and did not change it, they were significantly more likely to be correct (indicating higher metacognition). When students guessed and did not change their answer, they were significantly more likely to be incorrect (indicating low metacognition). Moreover, when students revised answers they had guessed, they chose the correct answer significantly more often than when they left those guesses unchanged. In other words, revising paid off on guessed questions, whereas sticking with the first answer paid off on questions students were confident they knew. These results suggest that students were using the metacognitive construct of cognitive monitoring to deliberately choose, on a question-by-question basis, when to revise their answers and when to stick with their gut.

Moral of the Story: Real-Time Metacognitive Monitoring is Key to Avoiding the First-Instinct Fallacy

As Couchman and colleagues' results demonstrate, when students metacognitively monitor their knowledge and performance on a question-by-question basis, they perform better. Metcalfe (2002) called this adaptive control: focusing on processes you can control in order to improve performance. Koriat et al. (2004) suggest that, instead of reflecting on performance in general, in-the-moment, item-by-item assessment of performance may be more productive and effective.

So, you were correct in telling your students that "it depends." But as a practitioner, what do you do to facilitate students' metacognitive skills of adaptive control and monitoring? Couchman and colleagues suggested that teachers instruct their students to simply indicate a judgment of confidence for each question on the test (either a categorical judgment such as low vs. medium vs. high confidence, or a 0-100 confidence scale). Then, if students are low in confidence, instructors should encourage them to change or revise their answer; if confidence is high, they should consider keeping their first answer. Importantly, this must be done in real time, because if students make confidence judgments post-assessment (i.e., at a later time), they tend to be overconfident and inaccurate in their ratings. Thus, the answer to the First Instinct Fallacy is, like most things, complicated. But don't just respond with a simple "it depends," even though that advice is correct. Go a step further: explain and demonstrate how to improve adaptive control and cognitive monitoring.
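For instructors who build computer-administered quizzes, the decision rule above can be sketched in a few lines of code. This is a hypothetical illustration only (the function name, confidence labels, and advice strings are my own, not from Couchman et al.); it simply maps each in-the-moment confidence judgment to the revise-or-keep advice described in the paragraph above.

```python
# Hypothetical sketch of the real-time confidence heuristic:
# low confidence -> encourage revision; high confidence -> keep first answer.

def revision_advice(confidence: str) -> str:
    """Return per-question advice from an in-the-moment confidence
    judgment ('low', 'medium', or 'high')."""
    if confidence == "low":
        return "consider revising your answer"
    if confidence == "high":
        return "consider keeping your first answer"
    return "re-read the question before deciding"

# Example: confidence ratings a student records while taking the test
ratings = ["high", "low", "medium"]
for i, c in enumerate(ratings, start=1):
    print(f"Q{i} ({c} confidence): {revision_advice(c)}")
```

The key design point, following Couchman and colleagues, is that the judgment is captured per question as the student answers, not retrospectively at the end of the exam.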

References

Couchman, J. J., Miller, N. E., Zmuda, S. J., Feather, K., & Schwartzmeyer, T. (2016). The instinct fallacy: The metacognition of answering and revising during college exams. Metacognition and Learning, 11(2), 171-185. doi:10.1007/s11409-015-9140-8

Koriat, A., Bjork, R. A., Sheffer, L., & Bar, S. K. (2004). Predicting one's own forgetting: The role of experience-based and theory-based processes. Journal of Experimental Psychology: General, 133(4), 643–656.

Kruger, J., Wirtz, D., & Miller, D. T. (2005). Counterfactual thinking and the first instinct fallacy. Journal of Personality and Social Psychology, 88(5), 725–735.

Metcalfe, J. (2002). Is study time allocated selectively to a region of proximal learning? Journal of Experimental Psychology: General, 131(3), 349–363.