Democratizing a Classroom: The Challenge of Student Engagement and Metacognition

by Dr. Sonya Abbye Taylor, Associate Professor of Education and Chair of Education Division

INTRODUCTION: DEMOCRATIZING A CLASSROOM AND CREATING A COMMUNITY OF LEARNERS

In 1917, John Dewey introduced the concept of student engagement, envisioning classroom democracies as “communities … permeated throughout with the spirit of art, history, and science”: environments where students become self-directed, committed to service, and engaged in experiences reflective of society. Dewey believed in relevant curriculum, meaningful to students’ lives, in which students are thoroughly engaged with content among a community of learners functioning as and in society. I believe that a community of learners relies upon student engagement and that metacognition is intrinsically connected to being engaged. Student engagement forms the core of democratic classrooms, whose role includes promoting and educating toward diversity, equity, and inclusion.

THE CHALLENGE OF METACOGNITION AND STUDENT ENGAGEMENT

I prepare teacher candidates to work with diverse populations, including students with special needs. I have rarely had complaints regarding grades. Students could predict their grades because they had processed feedback and reflected on their work throughout the semester. Yet last spring I received a complaint. A student whose grades averaged within the “A” range questioned why she received an “A-.” She asked whether her grade was related to participation, for which she received 35 of 75 points; she was correct. This experience required me to reflect on Dewey’s concepts of engagement in relation to class participation and on the connections between learning and engagement. Aligning Dewey’s concept of a “community of learners” with student engagement, I see how metacognition—the student’s and my own—underlies this pedagogical struggle. What was the student’s understanding of engagement and participation? How did her understanding compare with my own?

During this course, held synchronously online, this student was invisible except for two presentations for which visibility was required. However, when reviewing the videos, I found times when she responded in chat and discussions: brief responses, one-word comments or a single sentence, emanating from a black box on the screen.

My expectations for students in online environments are comparable to those for in-person classes. Classes include discussions that require sharing ideas, asking questions, analyzing, and deliberating. The 75 points allocated place value on those discussions. If a student is silent throughout a semester, they have not contributed to the community. Yet this student perceived herself to have been engaged, to have demonstrated engagement, and therefore to be eligible for full credit.

As I thought about this contradiction the following questions emerged:

What do we learn when we:

  • learn in isolation?
  • interact with individuals within a learning community?
  • delve into content while engaging within a learning community?

Most importantly, what is the relationship between learning, engagement, and learning within the community, in preparing students to be teachers for the democratic society Dewey describes?

For teacher educators and teacher candidates, answers to these questions provide more fodder for thought. We can learn in isolation. We can memorize information alone, anywhere, or engage with a text while reading thoughtfully, asking silent questions of the text and author. We can learn sitting silently in a classroom. We CAN learn in isolation, but WHAT do we learn? What opportunities are missed when we function in this manner?

A silent classroom has always been troublesome to me. Student engagement and class participation are synonymous to me. It is joyful to hear students discussing content, actively listening, asking questions, and sharing opinions based on their frame of reference.

  • Isn’t that the vision of democracy Dewey conveys?
  • Don’t these interactions provide practice communicating, and enable students to see another’s point of view, to understand perspectives and experiences of individuals from other cultures?
  • Aren’t these the interactions that enable us to understand and appreciate one another?

Students can learn content without opportunities and expectations for engagement with others. However, in vibrant learning communities, students ask questions and question themselves; they think and rethink concepts because of the various influences. They use metacognitive processes to evolve as learners.

I changed the student’s grade because she perceived that her actions constituted participation. Though the student received an “A,” I did her no favor. Informed by Dewey, I believe the “A” she received was not as valuable as the “B’s,” “A-’s,” or “A’s” earned by other students who demonstrated engagement. Those students were visible: the expressions on their faces illuminated their thoughts. Posing thoughtful questions and adding to discussions, those students enriched the experience for us all. They responded while they were processing. When their responses were askew, conversation ensued that brought thought and understanding to higher levels. They demonstrated that their commitment to learning and to being part of a community was more important than having the “right” answer. They earned my respect and my confidence: I know that their classrooms will be full of engaged learners, that they will encourage their students to think, and that they will engage them in metacognitive practices.

CONCLUSION

I agonized over this grade, and by doing so, I know I served myself and my future students well. I will redouble my efforts to define and explain the value of engagement in relation to “community,” and I will create more activities, pose more questions, and provide collaborative assignments that make engagement palpable. I will do a better job of calling attention to student engagement. I will continue to honor askew responses because of the thought processes they illuminate. Metacognition has brought me to these conclusions, and it will be the catalyst for changes I will incorporate in my instruction and for the emphasis I will place on my students’ becoming self-directed.

In processing and metacognitively rethinking ideas for my classes and revisions for this paper, I reflected further on Dewey’s perception of engagement for a community of learners reflective of society, and I have come to this conclusion: we teach classes with diverse students; therefore, we have the opportunity and the obligation to create classroom environments where students can be productive members of the classroom society. We must create environments where students can function effectively and develop the skills and behaviors desirable in a democratic society. In these environments, students are respected and comfortable taking risks; they listen to each other, communicate effectively, accept each other’s opinions, settle disagreements, celebrate each other’s successes, and support each other through difficult times. By creating these environments in teacher education classrooms, perhaps we can reach consensus as to the quality and value of engagement.

Works Cited

Dewey, John. Democracy and Education. Simon and Brown, 2012.


Being humble with a little help from metacognition

By John Draeger, PhD  (SUNY Buffalo State)

As a political philosopher, I worry about our deeply divided world and the need to find the wherewithal to interact with those with whom we disagree. I am interested in the role humility plays in civil discourse. I argue that being humble, or being aware that we don’t have all the answers, can open the door to more respectful dealings with others and offer the prospect of more productive dialogue. Being humble isn’t easy, but metacognition can help us stay on track. It can, for example, encourage us to check in on whether we’re actually listening to what others have to say or lapsing into dismissive name-calling.

Metacognition focuses attention on a process in hopes of evaluating what’s working, what’s not, and what needs adjusting. In this case, metacognition can help us check in on the process of being humble. If humility involves understanding the ways we can be prone to bias, prejudice, and blind spots, then metacognition can help us identify those times when we lapse into those errors and make the appropriate adjustments. This post explores the relationship between metacognition and humility.

Humility

There’s a long tradition in philosophy on character traits, such as humility, that promote good living. However, I’ve recently become interested in the work of social psychologist Daryl Van Tongeren (2022). If you’ve come across the vast literature on problematic ways of thinking (such as bias, self-deception, and blind spots) and wondered how to avoid such things, then scholars like Van Tongeren are exploring humility as a potential answer.

Van Tongeren highlights the fact that humble people are open to what they don’t know. They learn to tolerate uncertainty, and they are on the lookout for times when they are in the grips of illusory (or mistaken) forms of thinking. They also accept their own strengths. Humility, as an approach to the world, prompts us to look inward to assess what we might be missing instead of quickly concluding that someone else is wrong, foolish, or worse.

Without humility, relationships can degenerate into people selfishly putting their own needs over others, being insecure and distrustful, and being toxically defensive at the mere whiff of feedback. Humility, in contrast, invites a spirit of openness to change, to feedback, and to the perspectives of others. This offers the prospect of more authentic relationships and greater satisfaction.

An illustration: Road trip anyone?

Suppose you and I are going on a road trip. I happen to be driving and you happen to notice that we seem to be turned around. Humility would nudge me to at least consider that I’m driving in the wrong direction, especially when the GPS, the map, the road signs, and even the sun confirm that we are off course. If humble, then I might respond with a “yup, my bad. Where’s the best place to turn around?” If not, then I might get defensive by questioning the authority of the map, appealing to some “special shortcut” that only I know about, angrily changing the topic of conversation, and then silently (though stubbornly) driving on. If we find ourselves in this situation, then humility, as a process of openness towards the world, has broken down. Enter metacognition.

Metacognition can prompt me to check in on my process (humility). Why am I behaving this way? Am I being defensive because I am embarrassed? Am I annoyed because I didn’t want to take the trip anyway? Am I flummoxed because I want the trip to go perfectly and I fear that I’ve messed things up? Or am I frustrated because my bad back is acting up and I am so uncomfortable that I can’t think straight or manage anything going wrong? Metacognition reminds me to check in on whether I’m being open to evidence or being hijacked by some other factor. Once alerted, I can recommit to humility and adjust my course.

More generally, metacognition can prompt me to notice that I tend to be open to criticisms about my cooking (because my identity is not tied up with it) and those offered by my close friends (because I trust their judgment). However, feedback from certain family members and any feedback about my teaching has the tendency to put me on edge. In these cases, metacognition can alert me to those contexts where I’m more likely to be humble and those where I’m more likely to be closed.

Making the connection: Humility and metacognition

Neither humility nor metacognition can guarantee good thinking, good feeling, or good action (whatever that means). But humility reminds us to be open to our own foibles and to the ways we often miss the mark. Metacognition encourages us to check in on our humility and become aware of how we might get back on track.

Applied to civil discourse, neither humility nor metacognition can solve contentious disagreements in a polarized political environment, but they can help set the stage for progress. A willingness to check in on why and how we think, feel, and act as we do can position us for dialogue with those with whom we deeply disagree (even those who question our most cherished beliefs about god or human rights). Humility, for example, encourages us to appreciate the points of view of those with whom we disagree and to suspend judgment until the evidence is in. Van Tongeren argues that humble people recognize that it is not all about us. Other people know things that we don’t. Others bring experiences to the table that can be hard for us to imagine. Humility holds space for those possibilities. Metacognition reminds us to check in on our presence in that space. If we’re not there, then an adjustment is in order.

References

Van Tongeren, D. (2022). Humble: Free yourself from the traps of a narcissistic world. The Experiment.


Guest Edited Series on Self-Assessment: Synthesis

by Ed Nuhfer, California State Universities (retired)

Self-assessment is a metacognitive skill that employs both cognitive competence and affective feelings. After more than two decades in which scholars misunderstood, misrepresented, and deprecated self-assessment’s value, recognition of self-assessment as valid, measurable, valuable, and connected to a variety of other beneficial behavioral and educational properties is finally occurring. The opportune time to educate for strengthening that ability is now. We synthesize this series into four concepts to address when teaching self-assessment.


Teach the nature of self-assessment

Until recently, decades of peer-reviewed research popularized a misunderstanding of self-assessment as described by the Dunning-Kruger effect. The effect portrayed the natural human condition as one in which most people overestimate their abilities and lack the ability to recognize that they do so, the most incompetent are the most egregious offenders, and only the most competent possess the ability to self-assess accurately.

From its founding to the present, that promotion relied on mathematics that statisticians and mathematicians now recognize as specious. Behavioral scientists can no longer argue for “the effect” by invoking the unorthodox quantitative reasoning used to propose it. Any salvaging of “the effect” requires different mathematical arguments to support it.

Quantitative approaches confirm that a few percent of the populace are “unskilled and unaware of it,” as described by “the effect.” However, these same approaches affirm that most adults, even when untrained in self-assessment accuracy, are generally capable of recognizing their competence or lack thereof. Further, they overestimate and underestimate with about the same frequency.

Like the development of higher-order or “critical” thinking, the capacity for self-assessment accuracy develops slowly with practice, more slowly than specific content can be learned, and through more practice than a single course can provide. Proficiency in higher-order thinking and self-assessment accuracy seems best achieved through prolonged experiences across several courses.

During pre-college years, a deficit of relevant experiences produced by conditions of lesser privilege disadvantages many new college entrants relative to those raised in privilege. However, both the Dunning-Kruger studies and our own (https://books.aosis.co.za/index.php/ob/catalog/book/279, Chapter 6) confirm that self-assessment accuracy is indeed learnable. Those undeveloped in self-assessment accuracy can become much more proficient through mentoring and practice.

Teach the importance of self-assessment

As a nation that must act to address severe threats to well-being, such as healthcare, homelessness, and climate change, we have rarely been so incapacitated by polarization and bias. Two early entries on bias in this guest-edited series explained bias as a ubiquitous survival mechanism in which individuals relinquish self-assessment to engage in modern forms of tribalism that marginalize others in our workplaces, institutions, and societal cultures. Marginalizing others prevents the consensus-building conversations between diverse groups that bring creative solutions and needed action.

Relinquishing metacognitive self-assessment to engage in bias obscures one’s perception of the impacts and consequences of what one does. Developing the skill to exercise self-assessment and use evidence, even under peer pressure not to do so, seems a way to retain one’s perception and ability to act wisely.

Teach the consequences of devaluing self-assessment

The credibility “the effect” garnered as “peer-reviewed fact” helped rationalize the public’s tolerating bias and supporting hierarchies of privilege. A quick Google® search of the “Dunning-Kruger effect” reveals widespread misuse to devalue and taunt diverse groups of people as ignorant, unskilled, and inept at recognizing their deficiency.

Underestimating and disrespecting other people’s abilities is not simply innumerate and dismal; it cripples learning. Subscribing to the misconception disposes the general populace to distrust themselves and others who merit trust, and to dismiss implementing or even respecting effective practices developed by others presumed to be inferiors. It discourages reasoning from evidence and promotes unfounded deference to “authority.” Devaluing self-assessment encourages individuals to relinquish their autonomy to self-assess, which weakens their ability to resist being polarized by demagogues into embracing bias.

Teach self-assessment accuracy

As faculty, we have frequently heard the proclamation “Students can’t self-assess.” Sadly, we have yet to hear that statement confronted by, “So, what are we going to do about it?”

Opportunities exist to design learning experiences that develop self-assessment accuracy in every course and subject area. Knowledge surveys, assignments with required self-assessments, and post-evaluation tools like exam wrappers offer straightforward ways to design instruction to develop this accuracy.

Given the current emphasis on the active learning structures of groups and teams, teachers easily mistake these for the sole domains of active learning and deprecate solitary study. The interactive engagements are generally superior to the conventional structure of lecture-based classes for cognitive mastery of content and skills. However, these structures seldom empower learners to develop affect or to recognize the personal feelings of knowing that come with genuine understanding. Those feelings differ from feelings that rest on shallow knowledge, which often launch the survival mechanism of bias at critically inopportune times.

Interactive engagement for developing cognitive expertise differs from the active engagement in self-assessment needed to empower individuals to direct their lifelong learning. When students employ quiet reflection time alone to practice self-assessment, enlisting their understanding of content to engage in knowing self, this too is active learning. The ability to distinguish the feeling of deep understanding requires repeated practice in such reflection. We contend that active learning design that attends to both cognition and affect is superior to design that attends to only one of these.

To us, John Draeger was particularly spot-on in his IwM entry, recognizing that instilling cognitive knowledge alone is insufficient as an approach to educating students or stakeholders within higher education institutions. Achieving successful outcomes depends on educating for proficiency in both cognitive expertise and metacognition. In becoming proficient at controlling bias, “thinking about thinking” must include attention to affect in order to recognize the reactive feelings of dislike that often arise when confronting the unfamiliar. These reactive feelings are probably unhelpful to the further engagement required to achieve understanding.

The ideal educational environment seems one in which stakeholders experience the happiness that comes from valuing one another during their journey to increase content expertise while extending the knowing of self.


Knowledge Surveys Part 2 — Twenty Years of Learning Guiding More Creative Uses

by Ed Nuhfer, California State Universities (retired)
Karl Wirth, Macalester College
Christopher Cogan, Memorial University
McKensie Kay Phillips, University of Wyoming
Matthew Rowe, University of Oklahoma

Early adopters of knowledge surveys (KSs) recognized the dual benefits of the instrument to support and assess student learning produced by a course or program. Here, we focus on a third benefit: developing students’ metacognitive awareness through self-assessment accuracy.

Communicating self-assessed competence

Initially, we just authored test and quiz questions as the KS items. After the importance of the affective domain became more accepted, we began stressing affect’s role in learning and self-assessment by writing each knowledge survey item with an overt affective self-assessment root such as “I can…” or “I am able to…” followed by a cognitive content outcome challenge. When explaining the knowledge survey to students, we focus their attention on the importance of these affective roots for when they rate their self-assessed competence and write their own items later.

We retain the original three-item response scale expressing relative competence as no competence, partial competence, and high competence. Research reveals three-item scales to be as valid and reliable as longer ones, and our attraction to the shorter scale remains because it promotes addressing KS items well. Once participants comprehend the meaning of the three choices and realize that the choices are identical for every item, they can focus on each item and rate their authentic feeling about meeting the cognitive challenge without distraction by more complex response choices.
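For readers who want the mechanics, here is a minimal sketch of how such a survey might be scored, assuming the common convention of coding the three responses 0, 1, and 2 and reporting the total as a percentage of the maximum possible; the function name and example ratings are illustrative, not part of the original.

```python
# Scoring a three-point knowledge survey (assumed coding):
# 0 = no competence, 1 = partial competence, 2 = high competence.

def ks_score(ratings: list[int]) -> float:
    """Return a knowledge survey score as a percentage (0-100)."""
    if not ratings:
        raise ValueError("no ratings supplied")
    return 100 * sum(ratings) / (2 * len(ratings))

# A 6-item survey rated 2, 1, 0, 2, 2, 1 scores 66.7%.
print(round(ks_score([2, 1, 0, 2, 2, 1]), 1))
```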


We find that the most crucial illumination for a student’s self-assessment dilemma, “How do I know when I can rate that I can do this well?”, is “When I know that I can teach another person how to meet this challenge.”

Backward design

We favor backward design to construct topical sections within a knowledge survey by starting with the primary concept students must master when finally understanding that topic. Then, we work backward to build successive items that support that understanding by constantly considering, “What do students need to know to address the item above?” and filling in the detail needed. Sometimes we do this down to the definitions of terms needed to address the preceding items.

Such building of more detail and structure than we sensed might be necessary, especially for introductory-level undergraduates, is not “handing out the test questions in advance.” Instead, this KS structure uses examples to show how seemingly disconnected observations and facts, once students reach to make connections among them, yield an understanding of the unifying meaning of “concept.” Conceptual thinking enables transferability and creativity when habits of mind develop that dare to attempt “outrageous connections.”

The feeling of knowing and awareness of metadisciplinary learning

Students learn that convergent challenges that demand right-versus-wrong answers feel different from divergent challenges that require reasonable-versus-unreasonable responses. Consider learning “What is the composition of pyrite?” and “Calculate the area of a triangle with a height of 50 meters and a base of 10 meters.” Then, contrast the feeling required to learn, “What is a concept?” or “What is science?”
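For the convergent example, a single right answer follows directly from the area formula (reading the 50-meter dimension as the triangle’s height, an assumption made explicit in the rewording above):

```latex
A = \frac{1}{2}\,b\,h = \frac{1}{2}\,(10\ \mathrm{m})(50\ \mathrm{m}) = 250\ \mathrm{m}^2
```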

The “What is science?” query is especially poignant. Teaching specialty content in units of courses, with the courses’ accompanying college textbooks, essentially bypasses teaching the significant metadisciplinary ways of knowing of science, humanities, social science, technology, arts, and numeracy. Instructors like Matt Rowe design courses to overcome the bypassing and strive to focus on this crucial conceptual understanding (see the video at 25:01–29:05).

Knowledge surveys written to overtly provoke metadisciplinary awareness aid in designing and delivering such courses. For example, a 300-item general geology KS opened with ten metadisciplinary items, two of which follow.

  1. I can describe the basic methods of science (repeated experimentation, historical science, and modeling) and provide one example of each method’s application in geological science.
  2. I can provide two examples of testable hypothesis statements and one example of an untestable hypothesis.

Students learned that they would develop the understanding needed to address these ten items throughout the course. The presence of the items in the KS ensured that the instructor did not forget to support that understanding. For ideas about varied metadisciplinary outcomes, examine this poster.

Illuminating temporal qualities

Because knowledge surveys establish baseline data and collect detailed information through an entire course or program, they are practical tools from which students and instructors can gain an understanding of qualities they seldom consider. Temporal qualities include magnitudes (How great?), rates (How quickly?), duration (How long?), order (What sequence?), frequency (How often?), and patterns (What kind?).

More specifically, knowledge surveys reveal magnitudes (How great were the changes in learning?), rates (How quickly did we cover material relative to how well we learned it?), duration (How long was needed to gain an understanding of specific content?), order (What learning should precede other learning?), and patterns (Does all understanding come slowly and gradually, or does some arrive as punctuated “aha moments”?).

Knowledge survey patterns reveal how easily we underestimate the effort needed to do the teaching that makes significant learning change. A typical pattern from item-by-item arrays of pre-post knowledge surveys reveals a high correlation between the two. Instructors may find it challenging to produce the changes by which the troughs of lowest confidence on a pre-course knowledge survey become peaks of high confidence on the post-course survey. Success requires attention to frequency (repetition with take-home drills), duration (extending assignments that address difficult content with more time), order (optimizing the sequence of learning material), and likely switching to more active learning modalities, including students authoring their own drills, quizzes, and KS items.

Studies in progress by author McKensie Phillips showed that students were more confident with the material at the end of the semester than at the end of each individual unit. This observation held even for early units, where researchers expected confidence to decrease given the time elapsed between the end of the unit and the post-semester KS. The results indicate that certain knowledge mastery is cumulative: students intertwine material from unit to unit and practice metacognition by re-engaging with the KS to deepen understanding over time.

Student-authored knowledge surveys

Introducing students to KS authoring must start with a class knowledge survey authored by the instructor, so that students have an example and a disclosure of the kinds of thinking used to construct a KS. Author Chris Cogan routinely tasks teams of 4-5 students with summarizing the content at the end of the hour (or week) by writing their own survey items for it. Typically, this requires about 10 minutes at the end of class. The instructor compiles the student drafts, looks for potential misconceptions, and posts the edited summary version back to the class.

Student-authored items from beginners often tend to be brief, too vague to answer, or too focused on the lowest Bloom levels. However, weekly feedback from the instructor has an impact: students become more able to write helpful survey items and, more importantly, better acquire knowledge from the class sessions. Authoring items begins to improve thinking, self-assessment, and justified confidence.

Recalibrating for self-assessment accuracy

Students with large miscalibrations in self-assessment accuracy should wonder, “What can I do about this?” The pre-exam knowledge survey data enables some sophisticated post-exam reflection through exam wrappers (Lovett, 2013). With the responses to their pre-exam knowledge survey and the graded exam in hand, students can do a “deep dive” into the two artifacts to understand what they can do.

Instructors can coach students to gain awareness of what their KS responses indicate about their mastery of the content. If large discrepancies exist between the responses to the knowledge survey and the graded exam, instructors query for some introspection on how these arose. Did students use their KS results to inform their actions (e.g., additional study) before the exam? Did different topics or sections of the exam produce different degrees of miscalibration? Were there discrepancies in self-assessment accuracy by Bloom level?

Most importantly, after conducting the exam wrapper analysis, students with significant miscalibration errors should each articulate one thing they will do differently to improve performance. Reminding students to revisit their post-exam analysis well before the next exam is helpful. IwM editor Lauren Scharff noted that her knowledge surveys and tests revealed that most psychology students gradually improved their self-assessment accuracy across the semester and increasingly used knowledge surveys as an ongoing learning tool rather than a last-minute knowledge check.

Takeaways

We construct and use surveys differently than when we began two decades ago. For readers, we provide a downloadable example of a contemporary knowledge survey that covers this guest-edited blog series and an active Google® Forms online version.

We have learned that mentoring for metacognition can measurably increase students’ self-assessment accuracy as it supports growing their knowledge, skills, and capacity for higher-order thinking. Knowledge surveys offer a powerful tool for instructors who aim to direct students toward understanding the meaning of becoming educated, becoming learning experts, and understanding themselves through metacognitive self-assessment. There remains much to learn.

 


Knowledge Surveys Part 1 — Benefits of Knowledge Surveys to Student Learning and Development

by Karl Wirth, Macalester College,
Ed Nuhfer, California State Universities (retired)
Christopher Cogan, Memorial University
McKensie Kay Phillips, University of Wyoming

Introduction

Knowledge surveys (KSs) present challenges like exam questions or assignments, but respondents do not answer them. Instead, they express their felt ability to address the challenges with their present knowledge. Knowledge surveys focus on self-assessment, which is a special kind of metacognition.

Overall, metacognition is a self-imposed internal dialogue that is a distinguishing feature of “expert learners,” regardless of the discipline (e.g., Ertmer & Newby, 1996). Because not all students begin college equally aware of and capable of thinking about their learning, instructors must direct students so as to keep them in constant contact with their metacognition. Paul Pintrich, a pioneer in metacognition, stressed that “instruction about metacognition must be explicit.” Knowledge surveys enable what Ertmer & Newby and Pintrich advocate in any class in any subject.


Knowledge surveys began in 1992 during a conversation about annual reviews between the guest editor and a faculty member who stated: “They never ask about what I teach.” Upon hearing this, the guest editor created a 200-item form to survey student ratings of their mastery of detailed content for his geology course at the start and end of the class. The items were simply an array of test and quiz questions, ordered in the sequence students would encounter them during the course. The students responded to each item on a 3-point scale at the start and end of the course.

The information from this first knowledge survey proved so valuable that the guest editor described it in 1996 in a geology journal as a formative assessment. As a result, geoscience faculty elsewhere took the lead in researching knowledge surveys and describing more benefits.

In 2003, U.S. Air Force Academy physics professor Delores Knipp and the guest editor published the first peer-reviewed paper on knowledge surveys for multiple disciplines (Nuhfer and Knipp, 2003). If you are new to knowledge surveys, click the hotlink to that paper now and read at least the first page to gain a conceptual understanding of the instrument.

Self-assessment, Metacognition, and Knowledge Surveys

Becoming educated is a process of understanding self and the phenomena that one experiences. Knowledge surveys structure practices in understanding both. 

Our series’ earlier entries revealed the measurable influence of self-assessment on dispositions such as self-efficacy, mindset, and intellectual and ethical development that prove indispensable to the lifelong process of becoming educated. The entries on bias and privilege revealed that the privilege of having the kind of education that renders the unconscious conscious may determine the collective quality of a society and how well we treat one another within it.

Knowledge surveys prompt self-assessment reflections during the learning of every aspect of the content. Over a baccalaureate education, cumulative, repetitive practice can significantly improve both the understanding of one’s present knowledge and self-assessment accuracy.

Improving Learning

Knowledge surveys’ original purpose was to improve student learning (e.g., Nuhfer & Knipp, 2003; Wirth et al., 2016, 2021). Providing students with a knowledge survey at the beginning of a course or unit of instruction offered an interactive roadmap for an entire course that overtly disclosed the instructor’s intentions for learning to students.

Early on, users recognized that knowledge surveys might offer a measure of changes in learning produced by a unit of instruction. Demonstrating the validity of such self-assessed competence measures was crucial and was finally achieved in 2016 and 2017.

Deeper Reading

Students quickly learned the value of prioritizing knowledge by engaging with the knowledge survey prior to and during reading. The structure of the KSs enabled reading with the purpose of illuminating known learning objectives. The structure also primed students to understand concepts by using the reading to clarify the connectedness between knowledge survey items.

Rather than just sitting down to “complete a reading,” students began reading assignments with appropriate goals and strategies, a characteristic of “expert readers” (Paris et al., 1996). When they encountered difficult concepts, they displayed increasing effort to improve their understanding of the topics identified as essential to understanding the concept. Further, knowledge surveys facilitated mentoring: when students did not understand the material, they proved more likely to follow up with a colleague or instructor to complete their understanding.

Facilitating Acquiring Self-Regulation

Well-constructed knowledge surveys are detailed products of instructor planning and thinking. They communicate instructor priorities and coordinate the entire class to focus on specific material in unison. The near disappearance from classroom conversations of student comments such as “I didn’t know that would be on the exam” can hardly be appreciated enough.

Replacing scattered class-wide guessing of what to study allowed a collective focus on “How will we learn this material?” That reframing led to adopting learning strategies that expert learners employ when they have achieved self-regulation. Students increasingly consulted with each other or the instructor when they sensed or realized their current response to a knowledge survey item was probably inadequate. 

Levels and Degrees of Understanding

In preparing a knowledge survey for a course, the instructor carefully writes each survey item and learning objective so that learning addresses the desired mastery at the intended Bloom level (Krathwohl, 2002). Making students aware of Bloom levels and reinforcing that awareness throughout a course clarifies the deep understanding required to teach the content at the required Bloom level to another person. Whereas it may be sufficient to remember or comprehend some content, demonstrating higher cognitive processes by having to explain to another how to apply, synthesize, or evaluate the central concepts and content of a course feels different because it is different.

Knowledge surveys can address all Bloom levels and provide the practices needed to enable the paired understanding of knowing and the “feeling of knowing” like no other instrument. Including the higher Bloom levels, combined with the explicitly stated advanced degree of understanding as the level of “teaching” or “explaining” to others, builds self-assessment skills and fosters the development of well-justified self-confidence. A student with such awareness can better focus efforts on extending the knowledge in which they recognize their weakness.

Building Skills with Feedback

The blog entries by Fleisher et al. in this series stressed the value of feedback in developing healthy self-assessments. Knowledge survey items that address the same learning outcomes as quizzes, exams, assignments, and projects promote instructional alignment. Such alignment allows explicit feedback from the demonstrated competence measures to calibrate the accuracy of self-assessments of understanding. Over time, knowledge surveys confer awareness that appropriate feedback builds both content mastery and better self-assessment skills.

A robust implementation directs students to complete the relevant portions of a knowledge survey after studying for an exam but before taking it. After the teacher grades the exams, students receive their self-assessed competence (knowledge survey score) and demonstrated competence (graded exam score) in a single package. From this information, the instructor can direct students to compare their two scores and to receive mentoring when there is a large discrepancy (>10 points) between them.
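A minimal sketch of that comparison step follows, assuming both scores are expressed as percentages; the data structure, names, and example values are illustrative, and only the >10-point threshold comes from the text.

```python
# Pair each student's self-assessed (KS) and demonstrated (exam) scores,
# then flag large discrepancies for mentoring.

def flag_for_mentoring(pairs: dict[str, tuple[float, float]],
                       threshold: float = 10.0) -> dict[str, float]:
    """Return students whose |KS score - exam score| exceeds threshold.

    pairs maps student -> (knowledge_survey_score, exam_score), both 0-100.
    """
    flagged = {}
    for student, (ks, exam) in pairs.items():
        discrepancy = ks - exam  # positive = overestimate, negative = underestimate
        if abs(discrepancy) > threshold:
            flagged[student] = discrepancy
    return flagged

scores = {"Ana": (92.0, 78.0), "Ben": (70.0, 74.0), "Cal": (55.0, 81.0)}
print(flag_for_mentoring(scores))  # {'Ana': 14.0, 'Cal': -26.0}
```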

Generally, a significant discrepancy from a single knowledge survey-exam pair comparison is not as meaningful as longer-term trends illuminated by cumulative data. Instructors who use KSs skillfully mentor students to become familiar with their trends and tendencies. When student knowledge survey responses consistently over- or under-estimate their mastery of the content, the paired data reveal this tendency to the student and instructor and open the opportunity for conversations about the student’s habitually favored learning strategies.

A variant implementation adds an easy opportunity for self-assessment feedback. Here, instructors assign students to estimate their score on an assignment or exam at the start of the work and again after completing it, prior to submission. These paired pre-post self-assessments help students focus on their feelings of knowing and further adjust toward greater self-assessment accuracy.

Takeaways

Knowledge surveys are unique in their utility for supporting student mastery of disciplinary knowledge, developing their affect toward accurate feelings of knowing, and improving their skills as expert learners. Extensive data show that instructors’ skillful construction of knowledge surveys as part of class design elicits deeper thinking and produces higher quality classes. After construction, class use facilitates mutual monitoring of progress and success by students and instructors. In addition to supporting student learning of disciplinary content, knowledge surveys keep students in constant contact with their metacognition and develop their capacity for lifelong learning. 

In Part 2, we follow up with our more recent investigations on (1) more robust knowledge survey design, (2) learning about the temporal qualities of becoming educated, (3) student authoring of knowledge surveys, and (4) mentoring students with large miscalibrations in self-assessed competence toward greater self-assessment accuracy.


Metacognition and Mindset for Growth and Success: Part 2 – Documenting Self-Assessment and Mindset as Connected

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

Self-assessment measures and mindset categorization both employ self-reported metacognitive responses that produce noisy data. Interpreting noisy data poses difficulties and generates peer-reviewed papers with conflicting results. Some published peer-reviewed works question the legitimacy and value of self-assessment and mindset.

Yeager and Dweck (2020) communicate frustration when other scholars deprecate mindset and claim that it makes no difference under what mindset students pursue education. Indeed, that seems similar to arguing that enjoyment of education and students’ attitudes toward it make no difference in the quality of their education.

We empathize with that frustration when we recall our own, from seeing in class after class that our students were not “unskilled and unaware of it” and reporting those observations while the dominant consensus that “students can’t self-assess” proliferated. The fallout that followed our advocacy in our workplaces (mentioned in Part 2 of the entries on privilege) came with opinions that since “the empiricists have spoken,” there was no reason for us to study self-assessment further. Nevertheless, we found good reason to do so. Some of our findings might serve as an analogy for demonstrating the value of mindsets despite the criticisms being leveled against them.

How self-assessment research became a study of mindset

In the summer of 2019, the guest editor and the first author of this entry taught two summer workshops on metacognition and learning at CSU Channel Islands to nearly 60 Bridge students about to begin their college experience. We employed a knowledge survey for the weeklong program, and the students also took the paired-measures Science Literacy Concept Inventory (SLCI). Students had the option of furnishing an email address if they wanted a feedback letter. About 20% declined feedback, and their mean score was 14 points lower (significant at the 99.9% confidence level) than that of those who requested feedback.
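As a hedged illustration of the kind of comparison reported above (not the study’s actual data or necessarily its analysis), a two-sample test on simulated scores shows how such a mean difference is checked for significance:

```python
# Illustrative only: compare mean scores of feedback requesters vs. decliners.
# Group sizes, means, and spreads are invented; Welch's t-test is one
# reasonable choice, not necessarily the analysis the authors used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
requesters = rng.normal(72, 12, 48)  # ~80% of 60 students requested feedback
decliners = rng.normal(58, 12, 12)   # ~20% declined; mean ~14 points lower

t, p = stats.ttest_ind(requesters, decliners, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# A p below 0.001 corresponds to the 99.9% confidence level reported above.
```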

In revisiting our national database, we found that every campus revealed a similar significant split in performance. It mattered not whether the institution was open admissions or highly selective; the mean score of the majority who requested feedback (about 75%) was always significantly higher than that of those who declined feedback. We wondered whether the responses served as an unconventional diagnostic of mindset preference as described by Dweck.

Conventional mindset diagnosis employs a battery of agree-disagree queries to determine mindset inclination. Co-author Michael Roberts suggested we add a few mindset items to the SLCI, and Steven Fleisher selected three items from Dweck’s survey battery. After a few hundred student participants revealed only a marginally definitive relationship between mindset diagnosed by these items and SLCI scores, Steve increased our items to five.

Who operates in fixed, and who operates in growth mindsets?

The personal act of choosing to receive or avoid feedback on a concept inventory offers a delineator for classifying mindset preference that differs from the usual method of doing so through a survey of agree-disagree queries. We compare here the mindset preferences of 1734 undergraduates from ten institutions using (a) feedback choice and (b) the five agree-disagree mindset survey items that are now part of Version 7.1a of the SLCI. That version has been in use for about two years.

We start by comparing the two groups’ demonstrable competence as measured by the SLCI. Both methods of sorting participants into fixed or growth mindset preferences confirmed a highly significant (99.9% confidence) greater cognitive competence in the growth mindset disposition (Figure 1A). As shown in the figure, feedback choice created two groups of fixed and growth mindsets whose mean SLCI competency scores differed by 12 percentage points (ppts). In contrast, the agree-disagree survey items defined the two groups’ means as separated by only 4 ppts. Moreover, the two methods split the student populace differently: feedback choice determined that about 20% of the students operated in the fixed mindset, whereas the agree-disagree items approach determined that nearly 50% were operating in that mindset.

We next compare the mean self-assessment accuracy of the two mindsets. In a graph, it is easy to compare mean skills between groups by comparing the scatter shown by one standard deviation (1 sigma) above and below the mean of each group (Figure 1B). The group members’ scatter in overestimating or underestimating their actual scores reveals a group’s developed capacity for self-assessment accuracy. Groups of novices show larger scatter in their miscalibrations than do groups with better self-assessment skills (see Figure 3 of the resource at this link).


Figure 1. A. Comparisons of competence (SLCI scores) of 1734 undergraduates between growth mindset participants (color-coded blue) and fixed mindset participants (color-coded red) as deduced by two methods: (left) agree-disagree survey items and (right) acceptance or opting out of receiving feedback. B displays the spreads of one standard deviation (1 sigma) in demonstrated competence for the growth (blue) and fixed mindset (red) groups as deduced by the two methods. The thin black line marks perfect self-assessment (a miscalibration of 0), above which lie overconfident estimates and below which lie underconfident estimates. The smaller the standard deviation revealed by the height of the rectangles in B, the better the group’s ability to self-assess accurately. The differences shown of 4 and 12 ppts in A and of 2.3 and 3.5 ppts in B are differences between means.
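A minimal sketch of the calibration measure behind Figure 1B follows, assuming miscalibration is computed as self-assessed score minus demonstrated score; the function name and toy data are illustrative.

```python
# A group's 1-sigma spread of miscalibrations (self-assessed minus
# demonstrated score) indexes its self-assessment skill: smaller = better.
import numpy as np

def miscalibration_spread(self_assessed, demonstrated):
    """Return (mean, std) of per-person miscalibration for a group."""
    errors = np.asarray(self_assessed, float) - np.asarray(demonstrated, float)
    return errors.mean(), errors.std(ddof=1)

growth = miscalibration_spread([70, 64, 81, 58], [68, 66, 78, 60])
fixed = miscalibration_spread([75, 50, 90, 40], [60, 62, 70, 55])
print(growth, fixed)  # the group with the smaller std is better calibrated
```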

On average, students classified as operating in a growth mindset have better-calibrated self-assessment skills (less spread of over- and underconfidence) than those operating in a fixed mindset by either classification method (Figure 1B). However, the difference between fixed and growth was greater and more statistically significant when mindset was classified by feedback choice (99% confidence) rather than by the agree-disagree questions (95% confidence).

Overall, Figure 1 supports Dweck and others advocating for the value of a growth mindset as an asset to learning. We urge contextual awareness by referring readers to Figure 1 of Part 1 of this two-part thematic blog on self-assessment and mindset. We have demonstrated that choosing to receive or decline feedback is a powerful indicator of cognitive competence and at least a moderate indicator of metacognitive self-assessment skills. Still, classifying people into mindset categories by feedback choice addresses only one of the four tendencies of mindset shown in that Figure. Nevertheless, employing a more focused delineator of mindset preference (e.g., choice of feedback) may help to resolve the contradictory findings reported between mindset type and learning achievement.

At this point, we have developed the connections between self-assessment, mindset, and feedback we believe are most valuable to the readers of the IwM blog. Going deeper is primarily of value to those researching mindset. For them, we include an online link to an Appendix to this Part 2 after the References, and the guest editor offers access to SLCI Version 7.1a to researchers who would like to use it in parallel with their investigations.

Takeaways and future direction

Studies of self-assessment and mindset inform one another. The discovery of one’s mindset and gaining self-assessment accuracy require knowing self, and knowing self requires metacognitive reflection. Content learning provides the opportunity for developing the understanding of self by practicing for self-assessment accuracy and acquiring the feeling of knowing while struggling to master the content. Learning content without using it to know self squanders immense opportunities.

The authors of this entry have nearly completed a separate stand-alone article for a follow-up in IwM that focuses on using metacognitive reflection by instructors and students to develop a growth mindset.

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487


Metacognition and Mindset for Growth and Success: APPENDIX to Part 2 – Documenting Self-Assessment and Mindset as Connected

by Ed Nuhfer, Guest Editor, California State University (retired)
Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy

This Appendix stresses numeracy and employs a dataset of 1734 participants from ten institutions to produce measures of cognitive competence, self-assessed competence, self-assessment accuracy, and mindset categorization. The database is sufficient to address essential issues introduced in our blogs.

Finding replicable relationships in noisy data employs groups from a database collected with instruments proven to produce high-reliability measures (see Figure 10 at this link). If we assemble groups, say groups of 50 as shown in Figure 1B, we can attenuate the random noise in individuals’ responses (Fig. 1A) and produce a clearer picture of the signal hidden within the noise (Fig. 1B).


Figure 1. Raw person-by-person data on over 9800 participants (Fig. 1A) show a highly significant correlation between measures of actual competence from SLCI scores and postdicted self-assessed competence ratings. Aggregating the data into over 180 groups of 50 (Fig. 1B) reduces random noise and clarifies the relationship.
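The aggregation device is simple to reproduce. Below is a minimal sketch under stated assumptions: pairs are sorted on one measure, binned into groups of 50, and the group means are then correlated; the variable names and simulated data are illustrative, not the study’s data.

```python
# Bin sorted paired measures into groups of n and compare correlations
# before and after aggregation: group means attenuate random noise.
import numpy as np

def group_means(x, y, n=50):
    """Sort pairs by x, bin into groups of n, return arrays of group means."""
    order = np.argsort(x)
    xs, ys = np.asarray(x)[order], np.asarray(y)[order]
    k = len(xs) // n  # drop any incomplete final group
    return xs[:k * n].reshape(k, n).mean(1), ys[:k * n].reshape(k, n).mean(1)

rng = np.random.default_rng(1)
slci = rng.uniform(20, 95, 9800)             # simulated competence scores
selfassess = slci + rng.normal(0, 15, 9800)  # noisy self-assessments
gx, gy = group_means(slci, selfassess)
print(np.corrcoef(slci, selfassess)[0, 1])   # individual-level r
print(np.corrcoef(gx, gy)[0, 1])             # group-level r, much higher
```

The same regrouping device recurs below when the participants are randomized, sorted by class rank, and aggregated into groups of 100 for Table 2.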

Random noise is not simply an inconvenience. In certain graphic types, random noise generates patterns that do not intuitively appear random. Researchers easily interpret these noise patterns as products of a human-behavior signal. The “Dunning-Kruger effect” appears built on many researchers doing exactly that for over twenty years.

Preventing the confusion of noise with signal requires knowing what randomness looks like. Researchers can achieve this by ensuring that the surveys and test instruments used in any behavioral science study have high reliability and then constructing a simulated dataset by completing these instruments with random-number responses. The simulated population should equal that of the participants in the research study, and graphing the simulated data should employ the same graphics in which researchers intend to present the participants’ data in a publication.
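Here is a sketch of that simulation protocol, under the assumption of a dichotomously scored 25-item test reported on a 0-100 scale and a 0-100 self-assessment rating (both instrument details invented for illustration):

```python
# Build a null (pure-noise) dataset the same size as the real study, then
# graph it exactly as the participant data will be graphed. Any "pattern"
# it shows is an artifact to discount, not a behavioral signal.
import numpy as np

rng = np.random.default_rng(2)
n_participants, n_items = 1734, 25  # population matches the study; test length assumed

random_test = rng.integers(0, 2, (n_participants, n_items)).mean(1) * 100
random_selfassess = rng.uniform(0, 100, n_participants)

print(np.corrcoef(random_test, random_selfassess)[0, 1])  # near zero
```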

The 1734 participants addressed in Parts 1 and 2 of this blog’s theme pair on mindset are part of the larger dataset represented in Figure 1. The number is smaller than 9800 because we only recently added mindset questions. 

The blog entry containing the link to this Appendix showed the two methods of classifying mindset to be consistent in designating a growth mindset as associated with higher scores on cognitive measures and more accurate self-assessments. However, this finding does not directly test how the two classification methods relate to one another. The fact, noted in the blog entry, that the two methods classified people differently gave reason to anticipate that the two might not prove directly statistically related.

We need to employ groups to attenuate noise, and ideally we want large groups with good prospects of a spread of values. We first picked the groups associated with furnishing information about privilege (Table 1) because these groups are large enough to attenuate random noise. Further, the groups displayed highly significant statistical spreads when we looked at self-assessed and demonstrable competence within these categories. Note well: we are not trying to study aspects of privilege here. Our objective, for now, is to understand the relationship between mindset defined by agree-disagree items and mindset defined by requests for feedback.

We have aggregated our data in Table 1 from four parameters to yield eight paired measures and are ready to test for relationships. Because we already know the relationship between self-assessed competence and demonstrated competence, we can verify whether our existing dataset of 1734 participants, presented in eight paired-measures groups, is sufficient to deduce the relationship we already know. Looking at self-assessment serves as a calibration to help answer, “How good is our dataset likely to be for distinguishing the unknown relationships we seek about mindset?”


Table 1. Mindset and self-assessment indicators by large groups. The table reveals each group’s mindset composition derived from both survey items and feedback and the populace size of each group.

Figure 2 shows that our dataset in Table 1 proved adequate for capturing the known significant relationship between self-assessed competence and demonstrated competence (Fig. 2A). The fit-line slope and intercept in Figure 2A reproduce the relationship established from much larger amounts of data (Fig. 1B). However, the dataset did not confirm a significant relationship between the results generated by the two methods of categorizing people into mindsets (Fig. 2B).

In Figure 2B, there is little spread. The plotted points and the correlation are close to significant. Nevertheless, the points clustered so tightly that we are apprehensive that the linear relationship would replicate in a future study of a different populace. Because we chose categories with a large populace and large spreads, more data entered into these categories probably would not change the relationships in Figure 2A or 2B. More data might bump the correlation in Figure 2B into significance. However, this could be more a consequence of the spread of the categories chosen for Table 1 than a product of a tight direct relationship between the two methods employed to categorize mindset. We can resolve this by doing something analogous to producing the graph in Figure 1B above.


Figure 2. Relationships between self-assessed competence and demonstrated competence (A) and growth mindset diagnosed by survey items and requests for feedback (B). The data graphed is from Table 1.

We next place the same participants from Table 1 into different groups, thereby removing the spread advantages conferred by the groups in Table 1. We randomize the participants to get a good mix of the populace from the ten schools, sort the randomized data by class rank to be consistent with the process used to produce Figure 1B, and aggregate them into groups of 100 (Table 2).


Table 2. 1700 students are randomized into groups of 100, and the means are shown for four categories for each group.

The results employing different participant groupings appear in Figure 3. Figure 3A confirms that the different groupings in Table 2 attenuate the spread introduced by the groups in Table 1.

Figure 3. The data graphed is from Table 2. Relationships between self-assessed competence and demonstrated competence appear in (A). In (B), plotting mindset classified by agree-disagree survey items versus mindset classified by requesting or opting out of feedback fails to replicate the pattern shown in Figure 2B.

In Figure 3A, the matched pairs of self-assessed competence and demonstrable competence continue to reproduce a consistent line fit that, despite a diminished correlation, still attains significance, as in Figures 1B and 2A.

In contrast, the ability to show replication between the two methods for categorizing mindsets has completely broken down. Figure 3B shows a very different relationship from that displayed in Figure 2B. A direct relationship between the two methods of categorizing mindset proves not to be replicable across different groupings.

For readers who may wish to try different groupings, we have provided the raw dataset used for this Appendix; it can be downloaded from https://profcamp.tripod.com/iwmmindsetblogdata.xls.

Takeaways

The two methods of categorizing mindset, in general, designate growth mindset as associated with higher scores on tests of cognitive competence and, to a lesser extent, with better self-assessment accuracy. However, the two methods do not show a direct relationship with each other. This indicates that the two address different dimensions of the multidimensional character of "mindsets."


Metacognition and Mindset for Growth and Success: Part 1 – Understanding the Metacognitive Connections between Self-Assessment and Mindset

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

When I first entered graduate school, I was flourishing. I was a flower in full bloom. My roots were strong with confidence, the supportive light from my advisor gave me motivation, and my funding situation made me finally understand the meaning of "make it rain." But somewhere along the way, my advisor's support became only criticism; where there was once warmth, there was now a chill, and the only light I received came from bolts of vindictive denigration. I felt myself slowly beginning to wilt. So, finally, when he told me I did not have what it takes to thrive in academia, that I wasn't cut out for graduate school, I believed him… and I withered away. (actual co-author experience)

[Image: schematic of a person with a band-aid and flowers growing, facing other people. Image by Moondance from Pixabay]

After reading the entirety of this two-part blog entry, return and read the shared experience above once more. You should find that you have an increased ability to see the connections there between seven elements: (1) affect, (2) cognitive development, (3) metacognition, (4) self-assessment, (5) feedback, (6) privilege, and (7) mindset. 

The study of self-assessment as a valid component of learning, educating, and understanding opens up fascinating areas of scholarship for new exploration. This entry draws on the same paired-measures research described in the previous blog entries of this series. Here we explain how measuring self-assessment informs understanding of mindset and feedback. Few studies connect self-assessment with mindset, and almost none rest on a sizeable validated data set. 

Mindset, self-assessment, and privilege

Mindset theory proposes that individuals lean toward one of two mindsets (Dweck, 2006) that differ based on internalized beliefs about intelligence, learning, and academics. According to Dweck and others, people fall along a continuum that ranges from a fixed mindset, defined by a core belief that intelligence and thinking abilities remain fixed and effort cannot change them, to a growth mindset, defined by the belief that, through effort, people can expand and improve their abilities to think and perform (Figure 1).

Indeed, a growth mindset has support in the stages of intellectual, ethical, and affective development discovered by Bloom & Krathwohl and William Perry mentioned earlier in this series. However, mindset theory has evolved into making broader claims, advocating that being in a state of growth mindset also enhances performance in high-stakes functions such as leadership, teaching, and athletics.

Figure 1. Fixed versus growth mindset tendencies in how people view effort, challenge, failure, and feedback. (From https://trainugly.com/portfolio/growth-mindset/)

Do people choose their mindset, or do their experiences place them in their positions on the mindset continuum? Our Introduction to this series disclosed that people's experiences from degrees of privilege influence their positioning along the self-assessment accuracy continuum, and self-assessment has some commonalities with mindset. However, a focused, evidence-based study of privilege's role in determining mindset inclination seems lacking.

Our Introduction to this series indicated that people do not choose their positions along the self-assessment continuum. People’s cumulative experiences place them there. Their positions result from their individual developmental histories, where degrees of privilege influence the placement through how many experiences an individual has that are relevant and helpful to building self-assessment accuracy. The same seems likely for determining positions along the mindset continuum.

Acting to improve equity in educational success

Because development during the pre-college years occurs spontaneously, by chance rather than by design, people are rarely conscious of how everyday experiences form their dispositions. College students are unlikely even to know their positions on either continuum unless they receive a diagnostic measure of their self-assessment accuracy or their tendency toward a growth or a fixed mindset. Few get either diagnosis anywhere during their education.

Adopting a more robust growth mindset and acquiring better self-assessment accuracy first require recognizing that these dispositions exist. After that, devoting systematic effort to consciously enlisting metacognition while learning disciplinary content seems essential. Changing the dispositions takes longer than just learning some factual content. However, the time required to see measurable progress can be significantly reduced by a mentor/coach who directs metacognitive reflection and provides feedback.

Teaching self-assessment to lower-division undergraduates by providing numerous relevant experiences and prompt feedback is a way to alleviate some of the inequity produced by differential privilege in pre-college years. The reason to do this early is to allow students time in upper-level courses to ultimately achieve healthy self-efficacy and graduate with the capacity for lifelong learning. A similar reason exists for teaching students the value of affect and growth mindset by providing awareness, coaching, and feedback. Dweck describes how achieving a growth mindset can mitigate the adverse effects of inequity in privilege.

Recognizing good feedback

Dweck places high value on feedback for achieving the growth mindset. Figure 1 in our guest series' Introduction also emphasizes the importance of feedback in developing self-assessment accuracy and self-efficacy during college.

Depending on their beliefs about their skill for addressing a particular challenge, people respond in predictable ways when a skill requires effort, when it seems challenging, when effort affects performance, and when feedback informs performance. Those with a fixed mindset expect that feedback will indicate imperfections, which they take as indictments of their fixed ability rather than as guidance for growing their ability. To them, feedback shames them for their imperfections, and it hurts. They see learning environments as places where stressful competitions occur between their own and others' fixed abilities. Affirmations of success rest in grades rather than in growing intellectual ability.

Those with a growth mindset value feedback as illuminating the opportunities for advancing quickly in mastery during learning. Sharing feedback with peers in their learning community is a way to gain pleasurable support from a network that encourages additional effort. There is little doubt which mindset promotes the most enjoyment, happiness, and lasting friendships and generates the least stress during the extended learning process of higher education.

Dweck further stresses the importance of distinguishing feedback that is helpful from feedback that is damaging. Our lead paragraph above revealed a devastating experience that would influence any person to fear feedback and seek to avoid it. A formative influence that disposes us to accept or reject feedback likely lies in the nature of feedback that we received in the past. A tour through traits of Dweck’s mindsets suggests many areas where self-perceptions can form through just a single meaningful feedback event. 

Australia's John Hattie has devoted his career to improving education, and feedback is his specialty area. Hattie concluded that feedback is "…the most powerful single moderator that enhances achievement" and noted in this University of Auckland newsletter that it is "…arguably the most critical and powerful aspect of teaching and learning."

Hattie and Timperley (2007) synthesized many years of studies to determine what constitutes feedback helpful to achievement. In summary, valuable feedback focuses on the work process, but feedback that is not useful focuses on the student as a person or their abilities and communicates evaluative statements about the learner rather than the work. Hattie and Dweck independently arrived at the same surprising conclusion: even praise directed at the person, rather than focusing on the effort and process that led to the specific performance, reinforces a fixed mindset and is detrimental to achievement.

Professors seldom receive mentoring on how to provide feedback that would promote growth mindsets. Likewise, few students receive mentoring on how to use peer feedback in constructive ways to enhance one another’s learning. 

Takeaways

Scholars visualize both mindset and self-assessment as linear continuums with respective dispositions at the ends: growth and fixed mindsets, and perfectly accurate and wildly inaccurate self-assessments. In this Part 1, we suggest that self-assessment and mindset have surprisingly close connections that scholars have scarcely explored.

Increasing metacognitive awareness seems key to tapping the benefits of skillful self-assessment, mindset, and feedback and allowing effective use of the opportunities they offer. Feedback seems critical in developing self-assessment accuracy and learning through the benefits of a growth mindset. We further suggest that gaining benefit from feedback is a learnable skill that can influence the success of individuals and communities. (See Using Metacognition to Scaffold the Development of a Growth Mindset, Nov 2022.)

In Part 2, we share findings from our paired measures data that partially explain the inconsistent results that researchers have obtained between mindset and learning achievement. Our work supports the validity of mindset and its relationship to cognitive competence. It allows us to make recommendations for faculty and students to apply this understanding to their advantage.

 

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Heft, I., & Scharff, L. (2017). Aligning best practices to develop targeted critical thinking skills and habits. Journal of the Scholarship of Teaching and Learning, 17(3), 48–67. http://josotl.indiana.edu/article/view/22600

Isaacson, R. M., & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning: Academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning, 6(1), 39–55. Retrieved from https://eric.ed.gov/?id=EJ854910

Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794

 


Metacognitive Self-assessment in Privilege and Equity – Part 2: Majority Privilege in Scientific Thinking

by Ed Nuhfer, California State University (Retired)
Rachel Watson, University of Wyoming
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Being in the majority carries the privilege of empowerment to set the norms for acceptable beliefs. Minority status for any group invites marginalization by the majority simply because the group appears different from the familiar majority. Here, we explore why this survival mechanism (bias) also operates when a majority perceives an idea as different and potentially threatening to established norms.

Young adult learners achieve comfort in ways of thinking and explaining the world from the experiences obtained during acculturation. Our Introduction stressed how these experiences differ in majority and minority cultures and produce measurable effects. Education disrupts established states of comfort by introducing ideas that force reexamination of earlier beliefs established from experience.

Even the kind of college training that promotes only growing cognitive expertise is disruptive; more critically, research verifies that the disruptions are felt. While discovering the stages of intellectual development, William Perry Jr. found that, for some learners, the feelings experienced during transitions toward certain higher stages of thinking were so discomforting that the students ceased trying to learn and withdrew. Currently, about a third of first-year college students drop out before their sophomore year.

Educating for self-assessment accuracy to gain control over bias

We believe that the same survival mechanisms that promote prejudice and suppress empathizing with and understanding different demographic groups also cripple understanding in encounters with unfamiliar or contrarian ideas. In moments that introduce ideas disruptive to beliefs or norms, unfamiliar ideas become analogous to unfamiliar groups—easily marginalized and thoughtlessly devalued in snap judgments. Practice in doing self-assessment when new learning surprises us should be valuable for gaining control over the mechanism that triggers our own polarizing bias.

[Image: a maze on a black background, with each branch of the maze showing words such as "response," "meaning," "bias," and "memory." Credit: John Hain, Pixabay]

Earlier (in the Part 2 entry on bias), we recommended teaching students to frequently self-assess: "What am I feeling that I want to be true, and why do I have that feeling?" That assignment ensures that students encounter disruptive surprises mindfully by becoming aware of the affective feelings involved in triggering their bias. Awareness gives the greater control over self that is needed to prevent being captured by a reflex to reject unfamiliar ideas out of hand or to marginalize those who are different.

Routinely employing self-assessment in teaching provides the prolonged, relevant practice with feedback required for understanding self. Educating for self-assessment accuracy constitutes a change from training students to "know stuff" to educating students to know how they can think to understand both "stuff" and self.

When a first encounter with something or someone produces apprehension, those who have gained self-assessment accuracy through practice can exercise more control over their learning by recognizing the feeling that accompanies the incipient activation of bias in reaction to discomfort. Such self-awareness allows a pause for reflecting on whether enlisting this vestigial survival mechanism serves understanding, and it can prevent bias from terminating our learning and inducing us to speak or act in ways that do not serve understanding.

Affect, metacognition, and self-assessment: minority views of contrarian scholars

We address three areas of scholarship relevant to this guest-edited series to show how brain survival mechanisms act to marginalize ideas that contradict an established majority consensus.

Our first example area involves the marginalization of the importance of affect by the majority of behavioral scientists. Antonio Damasio (1999, p. 39) briefly described this collective marginalization:

There would have been good reason to expect that, as the new century started, the expanding brain sciences would make emotion part of their agenda…. But that…never came to pass. …Twentieth Century science…moved emotion back into the brain, but relegated it to the lower neural strata associated with ancestors whom no one worshipped. In the end, not only was emotion not rational, even studying it was probably not rational.

A past entry in Improve with Metacognition (IwM) also noted the chilling prejudice against valuing affect during the 20th Century. Benjamin Bloom’s Taxonomy of the Affective Domain (Krathwohl et al. 1964) received an underwhelming reception from educators who had given unprecedented accolades to the team’s earlier volume on Taxonomy of the Cognitive Domain (Bloom, 1956). Also noted in that entry was William G. Perry’s purposeful avoidance of referring to affect in his landmark book on intellectual and ethical development (Perry, 1999). The Taxonomy of the Affective Domain also describes a developmental model that maps onto the Perry model of development much better than Bloom’s Taxonomy of the Cognitive Domain.

Our second example involves resistance against valuing metacognition. Dunlosky and Metcalfe (2009) traced this resistance to the French philosopher Auguste Comte (1798-1857), who held that an observer trying to observe self was engaged in an impossible task, like an eye trying to see itself by looking inwardly. In the 20th Century, the behaviorist school of psychology gave new life to Comte's views by professing that individuals' ability to do metacognition, if such an ability existed, held little value. According to Dunlosky and Metcalfe (2009, p. 20), the behaviorists held "…a stranglehold on psychology for nearly 40 years…." until the mid-1970s, when the work of John Flavell (see Flavell, 1979) made the term and concept of metacognition acceptable in academic circles.

Our third example area involves people’s ability to self-assess. “The Dunning-Kruger effect” holds that most people habitually overestimate their competence, with those least competent holding the most overly inflated views of their abilities and those with real expertise revealing more humility by consistently underestimating their abilities by modest amounts. Belief in “the effect” permeated many disciplines and became popular among the general public. As of this writing, a Google search brought up 1.5 million hits for the “Dunning Kruger effect.” It still constitutes the majority view of American behavioral scientists about human self-assessment, even after recent work revealed that the original mathematical arguments for “the effect” were untenable. 

Living a scholar's minority experience

Considering prejudice against people and bias against new ideas as manifestations of a common, innate survival mechanism obviates fragmenting them into separate problems addressed through unrelated educational approaches. Perceiving that all biases are related makes evident that the tendency to marginalize a new idea will certainly extend to marginalizing the idea's proponents.

Seeing all bias as related through a common mechanism supports using metacognition, particularly self-assessment, for gaining personal awareness and control over the thoughts and feelings produced as the survival mechanism starts to trigger them. Thus, every learning experience that provides discomfort, in any subject, offers an opportunity for self-assessment practice to gain conscious control over the instinct to react with bias.

Some of the current blog series authors experienced firsthand the need for higher education professionals to acquire such control. When publishing early primary research in the early 1990s, we were naively unaware of majority consensus, had not yet considered bias as a survival reaction, and had not anticipated marginalization. Suggesting frequent self-assessments as worthwhile teaching practices in the peer-reviewed literature brought reactions that jolted us from complacency into a new awareness.

Scholars around the nation, several of them other authors of this blog series, read the guest editor's early work, introduced self-assessment in classes, and launched self-assessment research of their own. Soon after, many of us discovered that disparagements followed at the departmental, college, and university levels, and even at professional meetings. Some disparagements led to damaged careers and work environments.

The bias imparted by marginalization led to our doubting ourselves. For a time, our feelings were like those of the non-binary gender group presented in Figure 1 of the Part 1 entry on privilege: we "knew our stuff," but our feelings of competence in our knowledge lagged. Thanks to the feedback from the journal peer reviewers of Numeracy, we now live with less doubt in ourselves. Those of us who weathered the storm emerged with greater empathy for minority status and minority feelings and a greater valuing of self-assessment.

Self-assessment, a type of metacognition employing affect, seems to be undergoing a paradigm change that recapitulates the history of affect and metacognition. Our Numeracy articles have achieved over 10,000 downloads, and psychologists in Europe, Asia, and Australia now openly question "the effect" (Magnus and Peresetsky, 2021; Kramer et al., 2022; Hofer et al., 2022; Gignac, 2022) in psychology journals. The Office for Science and Society at McGill University in Canada reached out to the lay public (Jarry, 2020) to warn how new findings require reevaluating "the effect." We recently discovered that paired measures could even unearth unanticipated stress indicators among students (view the section at time 21:38 to 24:58) during the turbulent times of COVID and civil disruption.

Takeaways

Accepting the teaching of self-assessment as good educational practice and accepting self-assessment measures as valid assessments open avenues for research that are indeed rational to study. After one perceives bias as having a common source, developing self-assessment accuracy seems a way to gain control over the personal bias that triggers hostility against people and ideas that are not threatening, just different.

“Accept the person you are speaking with as someone who has done amazing things” is an outstanding practice stressed at the University of Wyoming’s LAMP program. Consciously setting one’s cognition and affect to that practice erases all opportunities for marking anyone or their ideas for inferiority.

References

Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives, handbook 1: Cognitive domain. New York, NY: Longman.

Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: a new area of cognitive-developmental inquiry. American Psychologist 34, 906-911.

Gignac, G. E. (2022). The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect. Personality and Individual Differences, 184, 111224. https://doi.org/10.1016/j.paid.2021.111224

Hofer, G., Mraulak, V., Grinschgl, S., & Neubauer, A.C. (2022). Less-Intelligent and Unaware? Accuracy and Dunning–Kruger Effects for Self-Estimates of Different Aspects of Intelligence. Journal of Intelligence, 10(1). https://doi.org/10.3390/jintelligence10010010

Kramer, R. S. S., Gous, G., Mireku, M. O., & Ward, R. (2022). Metacognition during unfamiliar face matching. British Journal of Psychology, 00, 1–22. https://doi.org/10.1111/bjop.12553

Krathwohl, D.R., Bloom, B.S. and Masia, B.B. (1964) Taxonomy of Educational Objectives: The Affective Domain. New York: McKay.

Magnus, J. R., & Peresetsky, A. (2021). A statistical explanation of the Dunning-Kruger effect. Tinbergen Institute Discussion Paper 2021-092/III. http://dx.doi.org/10.2139/ssrn.3951845

Nicholas-Moon, K. (2018). Examining science literacy levels and self-assessment ability of University of Wyoming students in surveyed science courses using the Science Literacy Concept Inventory with expanded inclusive demographics (Master's thesis, University of Wyoming).

Perry, W. G. Jr. (1999). Forms of Ethical and Intellectual Development in the College Years. San Francisco, CA: Jossey-Bass (a reprint of the original 1968 work with minor updating).

Tarricone, P. (2011). The Taxonomy of Metacognition (1st ed.). Psychology Press. 288p. https://doi.org/10.4324/9780203830529


Metacognitive Self-assessment in Privilege and Equity – Part 1: Conceptualizing Privilege and its Consequences

by Rachel Watson, University of Wyoming
Ed Nuhfer, California State University (Retired)
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Demographics of metacognition and privilege

The Introduction to this series asserted that lives of privilege in the K-12 years confer relevant experiences advantageous for acquiring the competence required for lifelong learning and for entry into professions that require college degrees. Healthy self-efficacy is necessary to succeed in college. Such self-efficacy comes only after acquiring self-assessment accuracy through practice in using the relevant experiences to attune feelings of competence with demonstrable competence. We concur with Tarricone (2011) in her recognition of affect as an essential component of the self-assessment (or awareness) component of metacognition: the "'feeling of knowing' that accompanies problem-solving, the ability to distinguish ideas about which we are confident…."

A surprising finding from our paired measures is how closely the mean self-assessments of performance of groups of people track with their actual mean performances. According to the prevailing consensus of psychologists, mean self-assessments of knowledge are supposed to confirm that people, on average, overestimate their demonstrable knowledge. According to a few educators, self-reported knowledge is supposed to be just random noise with no meaningful relationship to demonstrable knowledge. Data published in 2016 and 2017 in Numeracy from two reliable, well-aligned instruments revealed that such is not the case. Our reports in Numeracy shared earlier on this blog (see Figures 2 and 3 at this link) confirm that people, on average, self-assess reasonably well. 

In 2019, by employing the paired measures, we found that particular groups' average competence varied measurably, and their average self-assessed competence closely tracked their demonstrable competence. In brief, different demographic groups, on average, not only performed differently but also felt differently about their performance, and their feelings were accurate.

Conceptualizing privilege and its consequences

Multiple systems (structural, attitudinal, institutional, economic, racial, cultural, etc.) produce privilege, and all individuals and groups experience privilege and disadvantage in some aspects of their lives. We visualize each system as a hierarchical continuum, along which those systematically marginalized/minoritized lie at one end and those afforded the most advantages lie at the other. Because people live and work within multiple systems, each person likely operates at different positions along different continuums.

Those favored by privilege are often unaware of their part in maintaining a hierarchy that exerts its toll on those of lesser privilege. In our studies of the effects of differing statuses of privilege, we discovered that instruments pairing measures of cognitive competence with self-assessments of that competence offer richer assessments than competency scores alone. They also inform us about how students feel and how accurately they self-assess their competence. Students' histories of privilege seem to influence how effectively they can initially do the kinds of metacognition conducive to furthering intellectual development when they enter college.

Sometimes a group’s hierarchy results from a lopsided division into some criterion-based majority/minority split. There, advantages, benefits, status, and even acceptance, deference, and respect often become inequitably and systematically conferred by identity on the majority group but not on the underrepresented minority groups. 

Being a minority can invite being marked as "inferior," with an unwarranted negative bias from the majority toward the minority, presuming the latter have inferior cognitive competence and even lower capacity for feeling than the majority. Perpetual exposure to such bias can influence the minority group to doubt themselves and unjustifiably underestimate their competence and capacity to perform. By employing paired measures, Wirth et al. (2021, p. 152, Figs. 6.7 & 6.8) recently found that undergraduate women, who are the less represented binary gender in science, consistently underestimated their actual abilities in science literacy relative to men (the majority).

We found that in the majority ethnic group (white Caucasians), both binary genders, on average, significantly outperformed their counterparts in the minority group (all other self-identified ethnicities combined) in both the competence scores of science literacy and the mean self-assessed competency ratings (Figure 1). 

Figure 1. Graph of gender performance on measures of self-assessed competence ratings and demonstrated competence scores across ethnic majority/minority categories. This graph represents ten years of data collection of paired measures; because we began to collect non-binary gender data only within the last year, this group is sparsely represented. Horizontal colored lines coded to the colored circles' legend mark the positions of the means of scores and ratings in percent at the 95% confidence level.

Notably, in Figure 1, the non-binary gender groups, majority or minority, were the strongest academic group of the three gender categories based on SLCI scores. Still, relative to their performance, the non-binary groups felt that they performed less well than they actually did.  

On a different SLCI dataset with a survey item on sexual preference rather than gender, researcher Kali Nicholas Moon (2018) found the same degree of diminished self-assessed competence relative to demonstrated competence for the small LGBT group (see Fig. 7 p. 24 of this link). Simply being a minority may predispose a group to doubt their competence, even if they “know their stuff” better than most.

The mean differences in performance shown in Figure 1 are immense. For perspective, pre-post measures in a GE college course or two in science rarely produce mean differences of more than a couple of percentage points on the SLCI. In both majority and minority groups, females, on average, underestimated their performance, whereas males overestimated theirs.

If a group receives constant messages that their thinking may be inferior, it is hardly surprising that they internalize feelings of inferiority that are damaging. Our evidence above from several groups verifies such a tendency. We showed that lower feelings of competence parallel significant deficit performance on a test of understanding science, an area relevant to achieving intellectual growth and meeting academic aspirations. Whether this signifies a general tendency of underconfidence in minority groups for meeting their aspirations in other areas remains undetermined.

Perpetuating privilege in higher education

Academe nurtures many hierarchies. Across institutions, "Best Colleges" rating lists perpetuate a myth that institutions that make the list are, in all ways and for all students, "better than" those not on the list. Some state institutions actively promote a "flagship reputation," implying that the state's other schools are "inferior." Being in a community of peers that reinforces such hierarchical valuing confers damaging messaging of inferiority on those attending the "inferior" institutions, much as an ethnic majority casts negative messages to the minority.

Within institutions, different disciplines are valued differently, and people experience differential privileges across the departments and programs that support different disciplines. The consequences here in stress, alienation, and physical endangerment are minor compared to those experienced by socially marginalized/minoritized groups. Nevertheless, advocating for any change in an established hierarchy in any community is perceived as disruptive by some and can bring consequences of diminished privilege. National communities of academic research often prove no exception.

Takeaways

Hierarchies usually define privilege, and the majority group often supports hierarchies detrimental to the well-being of minority groups. Although test scores are the prevalent measures of learning mastery, paired measures of cognitive competence and self-assessed competence provide additional information about students' affective feelings about content mastery and their developing capacity for accurate self-assessment. This information helps reveal inequity across groups and monitors how well students can employ the higher education environment to advance their understanding of specialty content and of self. Paired measures confirm that groups of varied privilege fare differently in employing that environment to meet their aspirations.


Understanding Bias in the Disciplines: Part 2 – the Physical and Quantitative Sciences 

by Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St Edwards University
Simone Mcknight (Simone Erchov), Global Systems Technology

In Part 1, we summarized psychologists’ current understanding of bias. In Part 2, we connect conceptual reasoning and metacognition and show how bias challenges clear reasoning even in “objective” fields like science and math.

Science as conceptual

College catalogs’ explanations of general education (GE) requirements almost universally indicate that the desired learning outcome of the required introductory science course is to produce a conceptual understanding of the nature of science and how it operates. Focusing only on learning disciplinary content in GE courses squeezes out stakeholders’ awareness that a unifying outcome even exists. 

Wherever a GE metadisciplinary requirement (for example, science) specifies a choice of a course from among the metadiscipline’s different content disciplines (for example, biology, chemistry, physics, geology), each course must communicate an understanding of the way of knowing established in the metadiscipline. That outcome is what the various content disciplines share in common. A student can then understand how different courses emphasizing different content can effectively teach the same GE outcome.

The guest editor led a team of ten investigators from four institutions and separate science disciplines (biology, chemistry, environmental science, geology, geography, and physics). Their original proposal was to investigate ways to improve learning in GE science courses. While articulating what they held in common in professing the metadiscipline of "science," the investigators soon recognized that the GE courses they took as students had focused on disciplinary content but scarcely used that content to develop an understanding of science as a way of knowing. After confronting the issue of teaching with such a unifying emphasis, they later turned to the problem of assessing success in producing this different kind of understanding.

Upon discovering no suitable off-the-shelf assessment instrument to meet this need, they constructed the Science Literacy Concept Inventory (SLCI). This instrument later made possible this guest-edited series and the confirmation of knowledge surveys as valid assessments of student learning.

Concept inventories test understanding of the concepts that form the supporting framework for larger overarching blocks of knowledge or thematic ways of thinking or doing. The SLCI tests nine concepts specific to science and three more related to the practice of science and to connecting science's way of knowing with contributions from other requisite GE metadisciplines.

Self-assessment’s essential role in becoming educated

Self-assessment is partly cognitive (the knowledge one has) and partly affective (what one feels about the sufficiency of that knowledge to address a present challenge). Self-assessment accuracy confirms how well a person can align both when confronting a challenge.
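As a simplified illustration of what "aligning both" can mean numerically: when a self-assessed competence rating and a demonstrated competence score are both expressed in percent, one crude indicator of self-assessment accuracy is their signed difference. The sketch below assumes that convention; the instruments and scoring actually used are described in the Numeracy articles cited in this series.

```python
# Simplified sketch: signed miscalibration between what a person feels
# they know (rating, in percent) and what they demonstrate (score, in percent).
def miscalibration(self_assessed_pct: float, demonstrated_pct: float) -> float:
    """Positive = overestimate; negative = underestimate; near zero = accurate."""
    return self_assessed_pct - demonstrated_pct

print(miscalibration(80, 72))   # 8   -> modest overestimate
print(miscalibration(60, 75))   # -15 -> underestimate
```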

Developing good self-assessment accuracy begins with an awareness that deeper understanding starts to feel different from merely having the surface knowledge needed to pass a multiple-choice test. The ability to accurately feel when deep learning has occurred reveals to the individual when sufficient preparation for a challenge has, in fact, been achieved. We can increase learners' capacity for metacognition by requiring frequent self-assessments that give them the practice needed to develop self-assessment accuracy. No place needs such teaching of metacognition more than the introductory GE courses.

Regarding our example of science, the 25 items on the SLCI that test understanding of the twelve concepts derive from actual cases and events in science. Their connection to bias lies in learning that when things go wrong in doing or learning science, some concept is unconsciously being ignored or violated. Violations are often traceable to a bias that hijacked the ability to use available evidence.

We often say: “Metacognition is thinking about thinking.” When encountering science, we seek to teach students to “think about” (1) “What am I feeling that I want to be true and why do I have that feeling?” and (2) “When I encounter a scientific topic in popular media, can I articulate what concept of science’s way of knowing was involved in creating the knowledge addressed in the article?”

Examples of bias in physical science

“Misconceptions research” constitutes a block of science education scholarship. Schools do not teach the misconceptions. Instead, people develop preferred explanations for the physical world from conversations that mostly occur in pre-college years. One such explanation addresses why summers are warm and winters are cold. The explanation that Earth is closer to the sun in summer is common and acquired by hearing it as a child. The explanation is affectively comfortable because it is easy, with the ease coming from repeatedly using the neural network that contains the explanation to explain the seasonal temperatures we experience. We eventually come to believe that it is true. However, it is not true. It is a misconception.

When a misconception becomes ingrained in our brain neurology over many years of repeated use, we cannot easily break our habit of invoking the neural network that holds the misconception until we can bypass it by constructing a new network that holds the correct explanation. Still, the latter will not yield a network that is more comfortable to invoke until usage sufficiently ingrains it. Our bias tendency is to invoke the most ingrained explanation because doing so is easy.

Even when individuals learn better, they often revert to invoking the older, ingrained misconception. After physicists developed the Force Concept Inventory (FCI) to assess students’ understanding of conceptual relationships about force and motion, they discovered that GE physics courses only temporarily dislodged students’ misconceptions. Many students soon reverted to invoking their previous misconceptions. The same investigators revolutionized physics education by confirming that active learning instruction better promoted overcoming misconceptions than did traditional lecturing.

The pedagogy that succeeds seemingly activates a more extensive neural network (through interactive discussing, individual and team work on problem challenges, writing, visualizing through drawing, etc.) than was activated to initially install the misconception (learning it through a brief encounter).

Biases reinforced by a desire to believe something is true or untrue are especially difficult to dislodge. An example of the power of bias with emotional attachment comes from geoscience.

Nearly all school children in America today are familiar with the plate tectonics model, moving continents, and ephemeral ocean basins. Yet, few realize that the central ideas of plate tectonics were once scorned as "Germanic pseudoscience" in the United States. That happened because a few prominent American geoscientists wanted so much to believe their established explanations were true that affect hijacked these experts' ability to perceive the evidence. These geoscientists also exercised enough influence in the U.S. to keep plate tectonics out of American introductory-level textbooks. American universities introduced plate tectonics in introductory GE courses only years later than did their European counterparts.

Example of Bias in Quantitative Reasoning

People usually cite mathematics as the most dispassionate discipline and the least likely for bias to corrupt. However, researchers Dan Kahan and colleagues demonstrated that bias also disrupts people's ability to use quantitative data and think clearly.

Researchers asked participants to resolve whether a skin cream effectively treated a skin rash. Participants received data for subjects who did or did not use the skin cream. Among users, the rash got better in 223 cases and got worse in 75 cases. Of subjects who did not use the skin cream, the rash got better in 107 cases and worse in 21 cases.

Participants then used the data to select from two choices: (A) people who used the cream were more likely to get better, or (B) people who used the cream were more likely to get worse. More than half of the participants (59%) selected the answer not supported by the data. This query was primarily a numeracy test of deducing the meaning of numbers.
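A quick arithmetic check shows why choice (B) is the one the data support, assuming the intended comparison is between improvement rates in the two groups:

```python
# Rates implied by the study's two-by-two table of outcomes.
better_cream, worse_cream = 223, 75   # used the skin cream
better_none, worse_none = 107, 21     # did not use the skin cream

rate_cream = better_cream / (better_cream + worse_cream)  # 223/298 ≈ 0.748
rate_none = better_none / (better_none + worse_none)      # 107/128 ≈ 0.836

print(f"improved with cream:    {rate_cream:.1%}")  # 74.8%
print(f"improved without cream: {rate_none:.1%}")   # 83.6%
# The cream group improved at a lower rate, so the data support (B):
# cream users were relatively more likely to get worse.
```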

Then, using the same numbers, the researchers added affective bait. They replaced the skin cream query with a query about the effects of gun control on crime in two cities. One city allowed concealed gun carry, and another banned concealed gun carry. Participants had to decide whether the data showed that concealed carry bans increased or decreased crime.

Self-identified conservative Republicans and liberal Democrats responded with the desire to believe acquired from their party affiliations. The results were even more erroneous than in the skin cream case. Republicans greatly overestimated increased crime from gun bans, but no more than Democrats overestimated decreased crime from gun bans (Figure 1). When operating from "my-side" bias planted by either party, citizens significantly lost their ability to think critically and use numerical evidence. This was true whether the self-identified partisans had low or high numeracy skills.

[Figure: graph comparing responses from participants with low and high numeracy skills. High-numeracy participants always showed better accuracy (smaller variance around the mean). On the non-partisan topic, both skill groups showed little directional bias; on the partisan topic, low-numeracy participants showed strong bias and high-numeracy participants showed some bias.]

Figure 1. Effect of bias on interpreting simple quantitative information (from Kahan et al. 2013, Fig. 8). Numerical data needed to answer whether a cream effectively treated a rash triggered low bias responses. When researchers employed the same data to determine whether gun control effectively changed crime, polarizing emotions triggered by partisanship significantly subverted the use of evidence toward what one wanted to believe.

Takeaway

Decisions and conclusions that appear to be based solely on objective data rarely are. Increasing metacognitive capacity produces awareness of the prevalence of bias.


Understanding Bias in the Disciplines: Part 1 – the Behavioral Sciences 

by Simone Mcknight (Simone Erchov), Global Systems Technology
Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St Edwards University

Bias as conceptual

Bias arises from human brain mechanisms that process information in ways that make decision-making quicker and more efficient at the cognitive/neural level. Bias is an innate human survival mechanism, and we all employ it.

Bias is a widely known and commonly understood psychological construct. The common understanding of bias is “an inclination or predisposition for or against something.” People recognize bias by its outcome—the preference to accept specific explanations or attributions as true.

In everyday conversation, discussions about bias occur in the preferences and notions people have on various topics. For example, people know that biases may influence the development of prejudice (e.g., ageism, sexism, racism, tribalism, nationalism) and of political or religious beliefs.

A deeper look reveals that some of these preferences are unconscious. Nevertheless, they derive from a related process called cognitive bias, a propensity to use preferential reasoning to assess objective data in a biased way. This entry introduces the concept of bias, provides an example from the behavioral sciences, and explains why metacognition can be a valuable tool to counteract bias. In Part 2, which follows this entry, we provide further examples from hard science, field science, and mathematics.

Where bias comes from

Biases develop from the mechanisms by which the human brain processes information as efficiently as possible. These unconscious and automatic mechanisms make decision-making more efficient at the cognitive/neural level. Most mechanisms that help the human brain make fast decisions are credited to adaptive survival. Like other survival mechanisms, bias loses value and can be a detriment in a modern civilized world where threats to our survival are infrequent challenges. Cognitive biases are subconscious errors in thinking that lead to misinterpreting future information from the environment. These errors, in turn, impact the rationality and accuracy of decisions and judgments.

When we frame unconscious bias within the context of cognitive bias and survival, it is easier to understand how all of us have inclinations to employ bias and why any discipline that humans manage is subject to bias. Knowing this makes it easier to account for the frequent biases affecting the understanding and interpreting of diverse kinds of data.

People easily believe that bias only exists in “subjective” disciplines or contexts where opinions and beliefs seem to guide decisions and behavior. However, bias manifests in how humans process information at the cognitive level. Although it is easier to understand bias as a subjective tendency, the typical way we process information means that bias can pervade all of our cognition.

Intuitively, disciplines relying on tangible evidence, logical arguments, and natural laws of the physical universe would seem factually based and less influenced by feelings and opinion. After all, “objective disciplines” do not predicate their findings on beliefs about what “should be.” Instead, they measure tangible entities and gather data. However, even in the “hard science” disciplines, the development of a research question, the data collected, and the interpretations of data are vulnerable to bias. Tangible entities such as matter and energy are subject to biases as simple as differences in perception of the measured readings on the same instrument. In the behavioral sciences, where investigative findings are not constrained by natural law, bias can be even harder to detect. Thus, all scientists carry bias into their practice of science, and students carry bias into their learning of it.

Metacognition can help counter our tendencies toward bias because it involves bringing relevant information about a process (e.g., conducting research, learning, or teaching) into awareness and then using that awareness to guide subsequent behaviors.

Consequences of bias

Bias impacts individual understanding of the world, the self, and how the self navigates the world – our schemas. These perceptions may impact elements of identity or characterological elements that influence the likelihood of behaving in one way versus another.

Bias should be assumed to be a potentially influential factor in any human endeavor. Sometimes a bias develops for an explanation after we hear it in childhood and then invoke it for years. Even after seeing the evidence against such a bias, our initial explanations are difficult to replace with ones better supported by evidence because we remain anchored to that initial knowledge. Adding a personal emotional attachment to an erroneous explanation makes replacing it even more difficult. Scientists can have emotional attachments to particular explanations of phenomena, especially their own explanations. Then, it becomes easy to selectively block out or undervalue evidence that modifies or contradicts the favored explanation (also known as confirmation bias).

Self-assessment, an example of long-standing bias in behavioral science

As noted in the introduction, this blog series focuses on our team’s work related to self-assessment. Our findings countered results from scores of researchers who replicated and verified the testing done in a seminal paper by Kruger and Dunning (1999). Their research asserted that most people were overconfident about their abilities, and the least competent people had the most overly optimistic perceptions of their competence. Researchers later named the phenomenon the “Dunning-Kruger effect,” and the public frequently deployed “the effect” as a label to disparage targeted groups as incompetent. “The effect” held attraction because it seemed logical that people who lacked competence also lacked the skills needed to recognize their deficits. Quite simply, people wanted to believe it, and replication created a consensus with high confidence in concluding that people, in general, cannot accurately self-assess.

While a few researchers did warn about likely weaknesses in the seminal paper, most behavioral scientists selectively ignored the warnings and repeatedly employed the original methodology. This trend of replication continued in peer-reviewed behavioral science publications through at least 2021.

Fortunately, the robust information storage and retrieval system that characterizes the metadiscipline of science (which is a characteristic distinguishing science from technology as ways of knowing) makes it possible to challenge a bias established in one discipline by researchers from another. Through publications and open-access databases, the arguments that challenge an established bias then become available. In this case, the validity of “the effect” resided mainly in mathematical arguments and not, as presumed, arguments that resided solely within the expertise of behavioral scientists.

No mathematics journal had ever hosted an examination of the numeracy of the arguments that established and perpetuated belief in "the effect." However, mathematics journals offered the benefit of reviewers who specialized in quantitative reasoning and were not emotionally attached to any consensus established in behavioral science journals. These reviewers agreed that the long-standing arguments supporting the Dunning-Kruger effect were mathematically flawed.

In 2016 and 2017, Numeracy published two articles from our group that detailed the mathematical arguments that established the Dunning-Kruger effect conclusions and why these arguments are untenable. When examined by methods the mathematics reviewers verified as valid, our data indicated that people were generally good at self-assessing their competence and confirmed that there were no marked tendencies toward overconfidence. Experts and novices proved as likely to underestimate their abilities as to overestimate them. Further, the percentage of those who egregiously overestimated their abilities was small, in the range of about 5% to 6% of participants. However, our findings confirmed a vital conclusion of Kruger and Dunning (1999): experts self-assess better than novices (variance decreases as expertise increases), and self-assessment accuracy is attainable through training and practice.
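To see what such a tally looks like computationally, here is a minimal sketch. The ±30-percentage-point cutoff for "egregious" miscalibration is an illustrative assumption, not necessarily the criterion used in the Numeracy articles.

```python
# Sketch: classify participants by signed miscalibration
# (self-assessed percent minus demonstrated percent).
# The +/-30-point "egregious" cutoff is an illustrative assumption.
def classify(miscalibrations: list, cutoff: float = 30.0) -> dict:
    """Fractions of participants who egregiously over- or underestimate."""
    n = len(miscalibrations)
    return {
        "egregious overestimators": sum(m >= cutoff for m in miscalibrations) / n,
        "egregious underestimators": sum(m <= -cutoff for m in miscalibrations) / n,
        "within cutoff": sum(-cutoff < m < cutoff for m in miscalibrations) / n,
    }

sample = [8, -15, 35, -2, 5, -40, 12, 0]  # made-up miscalibration values
print(classify(sample))
```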

By 2021, the information released in Numeracy began to penetrate the behavioral science journals. This blog series, our earlier posts on this site, and archived presentations to various audiences (e.g., the National Numeracy Network, the Geological Society of America) further broadened awareness of our findings.

Interim takeaways

Humans construct their learning from mentally processing life experiences. During such processing, we simultaneously construct some misconceptions and biases. The habit of drawing on a misconception or bias to explain phenomena ingrains it and makes it difficult to replace with correct reasoning. Affective attachments to any bias make overcoming the bias extremely challenging, even for the most accomplished scholars.

It is essential to realize that we can reduce bias by employing metacognition to recognize bias originating from within us at the individual level and by considering bias that influences us but originates from or is encouraged by groups. In the case above, we explained the bias within the behavioral science disciplines by showing how repeatedly mistaking mathematical artifacts for products of human behavior produced a consensus that held the understanding of self-assessment captive for over two decades.

Metacognitive self-assessment seems necessary for initially knowing self and later for recognizing one’s own personal biases. Self-assessment accuracy is valuable in using available evidence well and reducing the opportunity for bias to hijack our ability to reason. Developing better self-assessment accuracy appears to be a very worthy objective of becoming educated.


Introduction: Why self-assessment matters and how we determined its validity 

By Ed Nuhfer, Guest Editor, California State University (retired)

There are few exercises of thinking more metacognitive than self-assessment. For over twenty years, behavioral scientists accepted that the "Dunning-Kruger effect," which portrays most people as "unskilled and unaware of it," correctly described the general nature of human self-assessment. Supposedly, only people with significant expertise in a topic were capable of self-assessing accurately, while those with the least expertise held highly overinflated views of their abilities.

The authors of this guest series have engaged in a collaborative effort to understand self-assessment for over a decade. They documented how the “Dunning-Kruger effect,” from its start, rested on specious mathematical arguments. Unlike what the “effect” asserts, most people do not hold overly inflated views of their competence, regardless of their level of expertise. We summarized some of our peer-reviewed work in earlier articles in “Improve with Metacognition (IwM).” These are discoverable by using “Dunning-Kruger effect” in IwM’s search window. 

Confirming that people, in general, are capable of self-assessing their competence affirms the validity of self-assessment measures. The measures inform efforts in guiding students to improve their self-assessment accuracy. 

This introduction presents commonalities that unify the series' entries to follow. In the entries, we hotlink references available as open access within the blogs' text and place all other references cited at the end.

Why self-assessment matters

After an educator becomes aware of metacognition's importance, teaching practice should evolve beyond finding the best pedagogical techniques for teaching content and assessing student learning. The "place beyond" focuses on teaching the student how to develop a personal association with content as a basis for understanding self and exercising higher-order thinking. Capturing, in a written teaching/learning philosophy, how content expertise and knowledge of self develop together expedites understanding how to achieve both. Self-assessment could be the most valuable of all the varieties of metacognition that we employ to deepen our understanding.

Visualization is conducive to connecting the essential themes in this series of blogs that stress becoming better educated through self-assessment. Figure 1 depicts the role and value of self-assessment from birth at the top of the figure to becoming a competent, autonomous lifelong learner by graduation from college at the bottom.

[Figure 1: diagram of the components that combine to promote lifelong learning: choices & effort through experiences; self-assessment; self-assessment accuracy; self-efficacy; self-regulation]

Figure 1. Relationship of self-assessment to developing self-regulation in learning. 

Let us walk through this figure, beginning with early life Stage #1 at the top. This stage occurs throughout the K-12 years, when our home, local communities, and schools provide the opportunities for choices and efforts that lead to experiences that prepare us to learn. In studies of Stage 1, John A. Ross made the vital distinction between self-assessment (estimating immediate competence to meet a challenge) and self-efficacy (perceiving one’s personal capacity to acquire competence through future learning). Developing healthy self-efficacy requires considerable practice in self-assessment to develop consistent self-assessment accuracy.

Stage 1 is a time of great inequity of privilege. Growing up in a home with a college-educated parent, attending schools that offer rich opportunities taught in one's native language, and living in a community of peers from well-educated homes provide choices, opportunities, and experiences relevant to preparing for higher education. Over 17 or 18 years, these relevant self-assessments sum to significant advantages for those living in privilege when they enter college.

However, these early-stage self-assessments occur by chance. The one-directional black arrows through Stage 2 communicate that nearly all the self-assessments are occurring without any intentional feedback from a mentor to deliberately improve self-assessment accuracy. Sadly, this state of non-feedback continues for nearly all students during college-level learning too. Thus, higher education largely fails to mitigate the inequities of privilege established before students arrive.

The red two-directional arrows at Stage 3 begin what the guest editor and authors of this series advocate as a very different kind of educating from that commonly practiced in American institutions of education. We believe education could and should provide self-assessments by design, hundreds in each course, all followed by prompt feedback, using the disciplinary content to intentionally improve self-assessment accuracy. Prompt feedback begins to allow the internal calibration needed for improving self-assessment accuracy (Stage #4).

One reason to deliberately incorporate self-assessment practice and feedback is to educate for social justice. Our work indicates that strengthening students' self-assessment accuracy can enable the healthy self-efficacy needed to succeed in the kinds of thinking and professions that require a college education, making up for the years of accumulated relevant self-assessments missing from the backgrounds of the less privileged.

By encouraging attention to self-assessment accuracy, we seek to develop students' felt awareness of surface learning changing toward the higher competence characterized by deep understanding (Stage #5). Awareness of how attaining deep understanding feels enables better judgment of when one has adequately prepared for a test or produced an assignment of high quality that is ready for submission.

People attain Stage #6, self-regulation, when they understand how they learn, can articulate it, and can begin to coach others on how to learn through effort, using available resources and accurate self-assessment. At that stage, a person has not only developed the capacity for lifelong learning but also the capacity to spread good habits of mind by mentoring others. Thus the arrows on each side of Figure 1 lead back to the top; they signify both the reflection needed to realize how one's privileges were relevant to one's learning success and the cycling of that awareness to a younger generation in home, school, and community.

A critical point to recognize is that programs that do not develop students’ self-assessment accuracy are less likely to produce graduates with healthy self-efficacy or the capacity for lifelong learning than programs that do. We should not just be training people to grow in content skills and expertise but also educating them to grow in knowing themselves. The authors of this series have engaged for years in designing and doing such educating.

The common basis of investigations

The aspirations expressed above have a basis in hard data from assessing the science literacy of over 30,000 students and “paired measures” on about 9,000 students with peer-reviewed validated instruments. These paired measures allowed us to compare self-assessed competence ratings on a task and actual performance measures of competence on that same task. 

Knowledge surveys serve as the primary tool through which we can give "…self-assessments by design, hundreds in each course, all followed by prompt feedback." Well-designed knowledge surveys develop each concept with detailed challenges that align well with the assessment of actual mastery of the concept. Ratings (measures of self-assessed competence) expressed on knowledge surveys and scores (measures of demonstrated competence) expressed on tests and assignments are scaled from 0 to 100 percentage points and are directly comparable.

When the difference between the paired measures is zero, there is zero error in self-assessment. When the difference (self-assessed minus demonstrated) is a positive number, the participant tends toward overconfidence. When the difference is negative, the participant has a tendency toward under-confidence.
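To make the arithmetic concrete, the sketch below (illustrative only, with invented numbers; it is not our research code) walks through the paired-measures subtraction for a few hypothetical participants:

```cpp
#include <iostream>
#include <vector>

// Illustrative only: hypothetical paired measures for three participants.
// Both values sit on the same 0-100 percentage scale, so the difference
// (self-assessed minus demonstrated) is the self-assessment error.
struct PairedMeasure {
    double selfAssessed;  // e.g., a knowledge-survey rating
    double demonstrated;  // e.g., a test or SLCI score
};

int main() {
    std::vector<PairedMeasure> participants = {
        {85.0, 82.0},  // +3: tends toward overconfidence
        {70.0, 78.0},  // -8: tends toward under-confidence
        {90.0, 90.0},  //  0: zero self-assessment error
    };

    for (const auto& p : participants) {
        double error = p.selfAssessed - p.demonstrated;
        std::cout << "self-assessment error = " << error
                  << (error > 0 ? " (overconfident)"
                      : error < 0 ? " (under-confident)"
                                  : " (accurate)")
                  << '\n';
    }
    return 0;
}
```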

In our studies that established the validity of self-assessment, our demonstrated competence data in our paired measures came mainly from the validated instrument, the Science Literacy Concept Inventory or "SLCI." Our self-assessed competence data came from knowledge surveys and global single queries tightly aligned with the SLCI. Our team members incorporate self-created knowledge surveys of course content into their higher education courses. Knowledge surveys have proven to be powerful research tools and classroom tools for developing self-assessment accuracy.

Summary overview of this blog series

IwM is one of the few places where the connection between bias and metacognition has been directly addressed (e.g., see a fine entry by Dana Melone). The initial two entries of this series address metacognitive self-assessment's relation to the concept of bias.

Later contributions to this series consider privilege and the roles of affect, self-assessment, and metacognition in educating to mitigate the disadvantages of lesser privilege. Other entries will explore the connections between self-assessment, participant use of feedback, and mindset, and metacognition's role in supporting the development of a growth mindset. Near the end of this series, we will address knowledge surveys, the instruments that incorporate the disciplinary content of any college course to improve learning and develop self-assessment accuracy.

We will conclude with a final wrap-up entry to help readers see that what students should "think about" when they "think about thinking" provides a map for reaching a deeper understanding of what it means to become educated and to acquire the capacity for lifelong learning.


Metacognitive Discourse—Final Course Presentations that Foster Campus Conversations about Learning

by Gina Burkart, Ed.D., Learning Specialist, Clarke University

Prior to the pandemic and since returning to campus, there has been a shift in students' use of group study and their ability to learn and work in groups. When I began my position as Learning Specialist 10 years ago, it was not uncommon to find 30 students at group study sessions at 9 p.m. Now, one group study session remains, and 2-3 students might attend (unless the sessions are required by athletic coaches for their teams). Colleagues have also shared in conversations that they find it problematic that students avoid interacting with one another in the classroom and are not able to work and learn in physical groups. Further, in my learning resource center's year-end reports, data have shown a steady decline in group study attendance and a steady increase in students relying on support from me, the Learning Specialist. They want to work one-on-one with adults. In conversations with students and in online discussion blogs, students have identified a lack of inter- and intrapersonal communication skills as affecting their ability to work with their peers. In simple terms, overuse of electronic communication before and during the pandemic has left them unable to communicate and interact with their classmates. This is problematic for a variety of reasons. In terms of learning, pedagogy is clear: learning is social (Bandura, 1977).

An Assignment to Reinforce Social Learning and Metacognition

In response, this semester, to reinforce social learning and metacognition, I changed the final assessment for the College Study Strategy course to a final presentation that embedded metacognition and social discourse. The College Study Strategy course is metacognitive in nature in that it begins by having students reflect on their prior learning experiences, assess themselves and their skills, and set goals for the semester. It is a 1-credit course open to any student below 90 credits and can be retaken. However, in the second semester, it is almost entirely filled with students placed on academic probation or warning who are required to take the course. The curriculum includes theorists such as Marzano (2001), Bandura (1994), Duckworth (2013), Dweck (2014), and Covey (2004) and requires students to begin applying new motivation, emotional intelligence, learning, reading, time management, study, note-taking, and test-taking strategies to their courses. In the past, students created a portfolio that demonstrated the use of their new strategies and presented their growth to me in a midterm and final conference. This year, I wanted them to share their growth with more than me: I wanted them to share it with the entire campus community.

Changing the final project to be more outward-facing kept the assignment metacognitive in nature: it still required students to reflect on past learning, show how they adjusted their learning and applied new methods and strategies, share in conversation how they made those adjustments, and finally explain how they will continue applying strategies and growing with their new knowledge. But it also required students to share with more than me. They would need to envision a larger audience and its needs, the entire campus community (administrators, students, athletic coaches, staff, professors, recruits), and create a presentation that could be adjusted to that audience. They would practice inter- and intrapersonal skills as they adjusted their presentation over the course of 2 hours while they remained at a station in the library, prepared to present as members of the campus community approached. This also allowed the campus community to benefit from the students' new knowledge and growth over the semester. And, being on a small scale, it re-introduced students to the art of in-person, face-to-face conversation and to the value of seeking information from one another, something that has been eroding with the heavy use of electronic communication and the isolated learning of the pandemic.

Students were introduced to this assignment in week one of the semester. They were told that in week 6 they would choose any topic from the course curriculum that they felt they needed to focus on more intently, based on their semester goals. Once they had chosen a focus (e.g., motivation, reading, procrastination, time management, studying, growth mindset), they would research a different article related to that topic each week (weeks 6-12) and apply the critical reading strategy taught in class to create journal entries that would be used to prepare content for the final presentation. In week 14 or 15, they would present in the library at a table (poster-session style), during a two-hour block of their choosing, to the campus community about their topic. The presentation needed to include some type of visual, and the content needed to include all of the following metacognitive information about the topic:

  • past struggles
  • reasons for choosing the topic
  • strategies learned in class
  • information learned in their research
  • recommendations for other students struggling
  • strategies for continued growth

Positive Impact and Take-Aways

While students were nervous and hesitant before the presentations, during and after them they admitted to having fun sharing their growth and learning. Staff, faculty, and students were appreciative of the presentations and made a point of attending. Some future students/recruits even attended while touring. Not surprisingly, most students chose to present about motivation, time management, and procrastination. A few chose growth mindset, Bloom's Taxonomy as a study strategy, and reading. A surprising take-away was that, through the metacognitive process of the presentation, many students connected improved reading strategies to increased motivation and reduced procrastination.

While observing the presentations, it was encouraging to see students learn to adapt as people approached. Since they were stationed at a table for two hours, they needed to present the material many times to different types of audiences, and they had to field questions. As they presented and re-presented, they learned how to interact and present differently based on the needs of the audience. This adaptation required metacognition and rhetorical analysis, and it built inter- and intrapersonal communication skills. The assignment also came at a good time in the semester, as students were authentically seeking many of these strategies and skills to prepare for finals, conclude the semester, and look ahead to the next one. Many presenters had friends, team members, coaches, and faculty come to hear them (I had advertised the presentations to the campus in advance). In conclusion, metacognitive presentations that engage the entire campus community in discourse about learning may be a helpful step toward rebuilding learning communities post-pandemic. Next semester, I will continue this assignment. I am also working on embedding group reading labs into targeted courses to improve learning and motivation and reduce procrastination in the classroom.

 


Writing metacognitive learning objectives for metacognitive training that supports student learning

by Patrick Cunningham, Ph.D., Rose-Hulman Institute of Technology

Teaching through the COVID-19 pandemic has highlighted disparities in how students approach their learning. Some have continued to excel with hybrid and online instruction while others, and more than usual, have struggled. Compounding these struggles, these students also find themselves behind, or with notable gaps in the prerequisite knowledge for subsequent courses. A significant component of these struggles may be due to not having developed independence in their learning. Engaging in explicit metacognitive activities directly addresses this disparity, improving students' abilities to overcome these struggles. Given the present challenges of living through COVID-19, this is more important now than ever. However, creating activities with a metacognitive focus is likely unfamiliar, and there are few resources to guide their development. Here I seek to demonstrate an accessible approach, an entry point, for supporting students' growth as more skillful and independent learners, grounded in metacognition.

Cognitive Learning Objectives are Just the Start

Creating explicit learning objectives is one means by which educators commonly try to support students' independence in learning. Typically, learning objectives focus on the cognitive domain, often based on Bloom's Taxonomy. The cognitive domain refers to how we think about or process information. Bloom's taxonomy for the cognitive domain comprises Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating (Krathwohl, 2002). Each of these gives an indication of how a student is expected to engage with or use the material we are teaching. For constructing learning objectives, there are lists of action verbs associated with each Bloom category.

Consider this cognitive learning objective for a computer programming course.

Students will be able to create and implement functions with inputs and an output in C++ programs to accomplish a specified task on an Arduino board with a prewired circuit.

This learning objective is specific to a lesson and targets the Apply level of Bloom’s taxonomy. (The approach I am presenting could equally apply to broader course-level learning objectives, but I think the specificity here makes the example more tangible.) This objective uses good action verbs (bolded) and has a prescribed scope and context. But is it adequate for guiding student learning if they are struggling with it?
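To ground the example, here is one hypothetical sketch of the kind of program a student meeting this objective might write; the task, pin choice, and names are invented for illustration and are not drawn from the course:

```cpp
// Hypothetical student solution: a function with inputs and an output,
// used to drive a prewired LED circuit on an Arduino (pin 9 is invented).
const int LED_PIN = 9;

// Inputs: a raw sensor reading (0-1023) and a scale factor.
// Output: a blink duration in milliseconds.
int blinkDuration(int sensorValue, int scaleFactor) {
    return (sensorValue / 4) * scaleFactor;  // map the reading to a delay
}

void setup() {
    pinMode(LED_PIN, OUTPUT);
}

void loop() {
    int duration = blinkDuration(analogRead(A0), 2);
    digitalWrite(LED_PIN, HIGH);
    delay(duration);
    digitalWrite(LED_PIN, LOW);
    delay(duration);
}
```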

Metacognitive Learning Objectives can Direct Learning Activities


Cognitive learning objectives point students to what they should be able to do with the information but do not usually provide guidance for how they should go about developing their ability to do so. Metacognition illuminates the path to developing our cognitive abilities. As a result, metacognitive training can support students’ attainment of cognitive learning objectives. Such training requires metacognitive learning objectives.

Metacognitive learning objectives focus on our awareness of the different ways we process information and on how we regulate and refine that processing. Metacognitive knowledge includes knowledge of how people (and we as individuals) process information, strategies for processing information and monitoring our thinking, and knowledge of the cognitive demands of specific tasks (Cunningham, et al., 2017). As we engage in learning, we draw on this knowledge and regulate our thinking processes by planning our engagement, monitoring our progress and processes, adjusting or controlling our approaches, and evaluating the learning experience (Cunningham, et al., 2017). Metacognitive monitoring and evaluation feed back into our metacognitive knowledge, reinforcing, revising, or adding to it.

Example Implementation of Metacognitive Learning Objectives

Considering our example cognitive learning objective, how could we focus metacognitive training to support student attainment of it? Two possibilities include 1) focusing on improving students’ metacognitive knowledge of strategies to practice and build proficiency with writing functions or 2) supporting students’ accurate self-assessment of their ability to demonstrate this skill. Instructors can use their knowledge of their students’ current strategies to decide which approach (or both) to take. For example, if it appears that most students are employing limited learning strategies, such as memorizing examples by reviewing notes and homework, I might focus on teaching students about a wider range of effective learning strategies. The associated metacognitive learning objective could be:

Students will select and implement at least two different elaborative learning strategies and provide a rationale for how they support greater fluency with functions.

The instructional module could differentiate categories of learning strategies (e.g., memorization, elaboration, and organization), demonstrate a few examples, and provide a more complete list of elaborative learning strategies (Seli & Dembo, 2019). Then students could pick one strategy to try in class and one to try as homework. If, on the other hand, it appears that most students are struggling to self-assess their level of understanding, I might focus on teaching students how to better monitor their learning. The associated metacognitive learning objective could be:

Students will compare their function written for a specific application, and completed without supports, to a model solution, using this as evidence to defend and calibrate their learning self-assessment.

Here the instructional module could be a prompt for students to create and implement a function, from scratch without using notes or previously written code. After completing their solutions, students would be given access to model solutions. In comparing their solution to the model, they could note similarities, differences, and errors. Then students could explain their self-assessment of their level of understanding to a neighbor or in a short paragraph using the specific comparisons for evidence. These examples are metacognitive because they require students to intentionally think about and make choices about their learning and to articulate their rationale and assessment of the impact on their learning. I believe it is important to be explicit with students about the metacognitive aim – to help them become more skillful learners. This promotes transfer to other learning activities within the class and to their learning in other classes.
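For concreteness, a model solution of the sort handed out after the attempt might look like the hypothetical sketch below (the task and names are invented); the comments flag the kinds of comparisons students can cite as evidence when calibrating their self-assessment:

```cpp
// Hypothetical model solution for a from-scratch exercise: average several
// readings from an analog sensor. The comments mark common points of
// difference students can note when comparing their own attempt.
const int SENSOR_PIN = A0;  // pin choice is invented for the example

// Compare: did your function take the sample count as an input and
// return the result, rather than printing inside the function?
float averageReading(int samples) {
    long total = 0;
    for (int i = 0; i < samples; i++) {
        total += analogRead(SENSOR_PIN);
        delay(10);  // compare: did you space out the readings?
    }
    return (float)total / samples;  // compare: integer vs. float division
}

void setup() {
    Serial.begin(9600);
}

void loop() {
    Serial.println(averageReading(5));
    delay(1000);
}
```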

Implementing and Supporting Your Metacognitive Outcomes

In summary, to create actionable metacognitive learning objectives I recommend:

  • clarifying the cognitive learning objective(s) you aim to support
  • investigating and collecting evidence for what aspect(s) of learning students are struggling with
  • connecting the struggle(s) to elements of metacognition
  • drafting metacognitive learning objective(s) that address the struggle(s)

Armed with your metacognitive learning objectives, you can then craft metacognitive training to implement and assess them. Share them with a colleague or someone from your institution's teaching and learning center to refine them further. You may want to explore further resources on metacognition and learning, such as Nilson's (2013) Creating Self-Regulated Learners, Seli and Dembo's (2019) Motivation and Learning Strategies for College Success, and Svinicki's GAMES© survey (Svinicki, 2004). Or you could watch my Skillful Learning YouTube video, What is Metacognition and Why Should I Care?

If metacognition is less familiar to you, avoid being overwhelmed by choosing one element of metacognition at a time. For example, beyond the examples above, you could focus on metacognitive planning to support students in navigating an open-ended project. Or you could help students better articulate what it means to learn something, or let them experience the myth of multitasking (we are task switchers); both pertain to metacognitive knowledge of how people process information. Learn about that element of metacognition, develop a metacognitive learning objective for it, create the training materials, and implement them with your students. You will be supporting your students' development as learners generally, while also promoting deeper learning of your course's cognitive learning objectives. Over time, you will develop a library of metacognitive learning objectives and training, which you could have students explore and self-select from based on their needs.

Acknowledgements

This blog post is based upon metacognition research supported by the National Science Foundation under Grant Nos. 1932969, 1932958, and 1932947. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

References

Cunningham, P. J., Matusovich, H. M., Hunter, D. A., Williams, S. A., & Bhaduri, S. (2017). Beginning to Understand Student Indicators of Metacognition. In the proceedings of the American Society for Engineering Education (ASEE) Annual Conference & Exposition, Columbus, OH.

Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4), 212-218.

Nilson, L. (2013). Creating self-regulated learners: Strategies to strengthen students' self-awareness and learning skills. Stylus Publishing, LLC.

Seli, H., & Dembo, M. H. (2019). Motivation and learning strategies for college success: A focus on self-regulated learning. Routledge.

Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. Anker Publishing Company.


Promoting Learning Integrity Through Metacognition and Self-Assessment

by Lauren Scharff, Ph.D., U. S. Air Force Academy*

When we think of integrity within the educational realm, we typically think about "academic integrity" and instances of cheating and plagiarism. While there is plenty of reason for concern, I believe that in many cases these instances are an unfortunate end result of more foundational "learning integrity" issues rather than deep character flaws reflecting a lack of moral principles and virtues.

Learning integrity occurs when choices for learning behaviors match a learner's goals and self-beliefs. Integrity in this sense is more like a state of wholeness or integrated completeness. It's hard to imagine this form of integrity without self-assessment; one needs to self-assess in order to know oneself. For example, are one's actions aligned with one's beliefs? Are one's motivations aligned with one's goals? Metacognition is a process by which we gain awareness (self-assess) and use that awareness to self-regulate. Thus, through metacognition, we can more successfully align our personal goals and behaviors, enhancing our integrity.

Metacognitive Learning and Typical Challenges

When students are being metacognitive about their learning, they take the time to think about (bring into awareness) what an assignment or task will require for success. They then make a plan for action based on their understanding of that assignment as well as their understanding of their abilities and current context. After that, they begin to carry out that plan (self-regulation). As they do so, they pause to reflect on whether or not their plan is working (self-awareness/self-assessment). Based on that interim assessment, they potentially shift their plan or learning strategies to better support their success at the task at hand (further self-regulation).

That explanation of metacognitive learning may sound easy, but if it were, we would see it happening more consistently. As a quick example, imagine a student reading a text who realizes that they are several pages into the assignment and don't remember much of what they've read (awareness). If they are being metacognitive, they will come up with a different strategy to better engage with the text and then use that alternate strategy (self-regulation). Instead, many students simply keep reading as they had been (just to get the assignment finished), essentially wasting their time and short-cutting their long-term goals.

Why don’t most students engage in metacognition? There are several meaningful barriers to doing so:

  • Pausing to self-assess is not a habitual behavior for them
  • It takes time to pause and reflect in order to build awareness
  • They may not be aware of effective alternate strategies
  • They may avoid alternate strategies because they perceive them to take more time or effort
  • They are focused on “finishing” a task rather than learning from it
  • They don’t realize that some short-term reinforcements don’t really align with their long-term goals

These barriers prevent many students from engaging in metacognition, which makes it more likely that their learning choices are 1) not guided by awareness of their learning state and 2) not aligned with their learning goals and/or the learning expectations of the instructor. This misalignment can then lead to a breakdown of learning integrity with respect to the notion of "completeness" or "wholeness."

For example, students often claim that they want to develop expertise in their major to support success in their future careers. They want to be "good students." But they take short-cuts with their learning, such as cramming or leaning on the worked-out steps of example problems, both of which lead to illusions of learning rather than deep learning and long-term retention. These actions are often rewarded in the short term by good grades on exams and homework assignments. Unfortunately, if students short-cut their learning consistently enough, then when long-term learning is expected or assessed, some might end up feeling desperate and engage in blatant cheating.

Promoting Learning Integrity by Providing Support for Self-Assessment and Metacognition

Promoting learning integrity involves more than simply encouraging students to pause, self-reflect, and practice self-regulation, i.e., engage in metacognition. As the list of barriers above suggests, being metacognitive requires effort, which implies that learning integrity requires effort. Like many other self-improvement behaviors, metacognition requires multiple opportunities for practice before it develops into a way of doing things.

Fortunately, as instructors we can help provide regular opportunities for reflection and self-assessment, and we can share possible alternative learning strategies. Together these should promote metacognition, leading to alignment of goals and behaviors and to increased learning integrity. The Improve with Metacognition website offers many suggestions and examples used by instructors across the disciplines and educational levels.

To wrap up this post, I highlight knowledge surveys as one way by which to promote the practice and skill of self-assessment within our courses. Knowledge surveys are shared with students at the start of a unit so students can use them to guide their learning and self-assess prior to the summative assessment. Well-designed knowledge survey questions articulate granular learning expectations and are in clear alignment with course assessments. (Thus, their implementation also supports teaching integrity!)

When answering the questions, students rate their ability to answer each question (similar to a confidence rating) rather than fully writing out the answer. Comparisons can then be made between the confidence ratings and actual performance on an exam or other assessment (self-assessment accuracy). For a more detailed example of incorporating knowledge surveys into a course, along with student and instructor reflections, see "Supporting Student Self-Assessment with Knowledge Surveys" (Scharff, 2018).

By making the knowledge surveys a meaningful part of the course (some points assigned, regular discussion of the questions, and sharing of students’ self-assessment accuracy), instructors support the development of self-assessment habits, which then provide a foundation to metacognition, and in turn, learning integrity.

———————————————–

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


Change Instead of Continuity: Using Metacognition to Enhance Student Motivation for Learning

by Benjamin A. Johnson, Ph.D., Utah Valley University

“New occasions teach new duties”
James Russell Lowell

In 2020, as we start a new decade, continuity appears to have taken a backseat to change. While change rapidly spreads through and disrupts such areas as health (including a viral pandemic), education, culture, the economy, and technology, continuity offers stability: knowledge gained from the past about the human condition and people's capacity to flourish. In this current climate of change, the expression "new occasions teach new duties" resonates well (Lowell, 1890, p. 184) and appears to have a double meaning for our situation: not only do the events, or "new occasions," require us to "teach new duties," but the "new occasions" can actually teach us "new duties." As students navigate the many disruptions in our schools and communities, they urgently need professors who encourage metacognitive strategies to assess and enhance student motivation for engagement and learning.

It seems that student resistance, the motivation not to engage, is often driven by a desire to maintain continuity, to stay in safe territory, to avoid anxiety. Students are often motivated to do what is familiar, such as engaging in surface approaches to learning: memorizing, reciting, or doing the minimum. As students advance through their majors, they may become more intrinsically motivated because they see the courses as relevant to their career goals. Even then, they can become overwhelmed at exams and may have other priorities that push them toward continuity. On the other hand, effective teaching and engaged learning are all about change. Considering the current pandemic and the sudden transition to online and hybrid learning, faculty and students must be metacognitively aware as never before.

A 2-Step Framework for Change

Some students will not come prepared to class, will not engage at a high level with the assignments, and will not take ownership of their own learning. This can be extremely challenging for professors. Rather than staying frustrated, a professor can cultivate this two-step framework to build on student strengths and encourage positive change:

Step 1: Become more broadly aware of student resistance behaviors, including types and contributing factors.

Step 2: Create self-assessment opportunities for students to become more self-aware of their own openness to change. 

Step 1: Identifying Student Resistance to Change

The Forms of Student Resistance Matrix (Tolman & Kremling, 2017) below offers insight into how students may reveal their motives for resistance to learning. The matrix categorizes fundamental forms of resistance, each with different emotional foundations. It shows that students in the accommodation/anxiety ("Preserving Self") category want continuity, education delivered in the way they find comfortable and familiar, while those in the anger/frustration ("Asserting Autonomy") category seek change and validation.


Once we have identified types of student resistance, we can effectively focus our energy on helping students think metacognitively about their own learning needs and goals, and help them evaluate their own contributions to obstacles to their success (such as use of relatively ineffective learning strategies). As they better understand their own needs and challenges through self-assessment, they are more likely to decrease their resistance to learning.

Forms of Student Resistance Matrix
Adapted from Tolman & Kremling, 2017

Asserting Autonomy (Seeking Change)
Pushing against external influence. Emotions: anger, frustration, resentment.

Active Resistance
  • Arguing or disagreeing with professor in the classroom
  • Repeatedly asking for the rationale for assignments
  • Saying they paid for the class and want it taught how they like
  • Inciting other students to rebel or not collaborate; disrupting class activities
  • Complaining to higher authority

Passive Resistance
  • Refusing to come to class
  • Refusing to participate during in-class exercises (does not get into groups, does not comply with assignment tasks)
  • Does not turn in assignments at all or is consistently late
  • Complaining about the professor to other students

Preserving Self (Seeking Continuity)
Trying to accommodate to external influence. Emotions: anxiety, fear.

Active Resistance
  • Repeatedly asking for detailed clarification of grading criteria
  • Taking over group assignments to ensure an adequate grade
  • Arguing with the professor over grades received, seeking additional points or consideration
  • Focus on surface approach to learning

Passive Resistance
  • Expressing concerns about working with others
  • Avoidance of conflicts and refusing to resolve situations or bring them to the professor’s awareness
  • Minimal participation in class (withdrawn, doesn’t speak or give feedback, lets others make all decisions)

Step 2: Student Self-Assessment—Helping Students Recognize Stages of Change         

Helping students assess their own openness to adopting new behaviors is key to supporting their learning. Self-assessment helps them recognize that their own attitudes and choices shape their educational outcomes and that their learning is not simply a product of their professors' work. One assessment tool that can help students self-assess and self-regulate is the TTM Learning Survey (Tolman & Kremling, 2017), based on Prochaska and DiClemente's (1983) Transtheoretical Model (TTM). Most often utilized in clinical settings, the TTM provides a useful model for understanding a person's path toward adopting new behaviors. The TTM readiness-to-change stages are precontemplation, contemplation, preparation, action, and maintenance.

Assigning the TTM-LS along with a Personal Learning Plan (PLP; a reflective follow-up exercise that asks the individual to identify how to improve and to plan for the semester) may help students work metacognitively to identify their readiness to change (Tolman & Kremling, 2017). Part of the PLP requires the student to evaluate their stage of readiness to change and then describe how they can move forward and overcome their own forms of resistance. I have observed that using these instruments two to three times during the semester improves student metacognition, as evidenced by student reflections.

Additionally, by working through these self-reflective activities, students become more intentional or mindful about their own motivation to change, gaining multiple opportunities to think through their own behavior and learning. Though students may advance through the stages of change, they may also revert to a previous stage. Professors should help them view this regression as developmental and invite them to persist.

Surveys and reflective assignments like the TTM-LS and PLP help students think more about why they might not be willing to adopt new behaviors and help them acknowledge their own reluctance to change their learning strategies (Tolman & Kremling, 2017; Yanqun, 2019).

Using the Forms of Student Resistance Matrix above (Tolman & Kremling, 2017), an actively resisting, autonomy-asserting student may acknowledge in their PLP that when they become frustrated, they demand that the professor be available regularly to explain the rationale for assignments. On the other hand, an actively resisting, self-preserving (anxious) student might respond with fear, conforming to the expectations of the professor while arguing for a better grade. Self-assessment helps students better understand their own motivations, fears, and goals, so they can then move forward more deliberately.

This focus on intentionality is a core aspect of genuine metacognitive thinking because it can help students accept their own role in learning—understanding that what they choose to do shapes how they learn and that the main responsibility for learning resides with them, not with the professor. Once students begin to recognize their patterns of resistance and strategies for overcoming this obstacle, they will take more ownership of their learning. Assessments like these help students to shift their education from something that is being forced on them externally (by parents, society, employers) to something that they can personally control (Perry, et al., 2019).

Asking Students to Think Metacognitively Requires Change

Due to changes in this pandemic year, we can also invite students to become more metacognitive about:

  • technology use: their feelings toward new technology used for virtual or hybrid class settings and their level of mastery of that technology
  • learning on their own (less interaction in the classroom) – what works and what doesn’t
  • monitoring their own progress and anxieties in this dynamic environment

The more we support their metacognitive skills in these areas, the more willing they may be to intentionally make this shift. For example, as universities phase into more online learning, students who are motivated not to engage in new learning strategies may struggle to adapt. They may resist actively participating in online learning in its many forms. Professors can implement surveys such as the TTM-LS and a PLP to open the door to key conversations about students’ goals, what they hope to achieve from the class, and especially, how they might need to adapt to become successful in the new modality.

Learning in itself is a process of change, and as students use metacognition to accept rather than resist learning as an individual responsibility, their motivation can shift for the better. At its core, metacognition is about being open to seeing new possibilities and being willing to change (moving from the precontemplation to contemplation stage, for instance). As students practice self-assessment, they can accept the need for change and embrace “new duties.”

References

  • Lowell, J. R. (1890). Poems. Riverside Press. (Original poem published 1844)
  • Perry, J., Lundie, D., & Golder, G. (2019). Metacognition in schools: What does the literature suggest about the effectiveness of teaching metacognition in schools? Educational Review, 71(4), 483-500.
  • Prochaska, J. O., & DiClemente, C.C. (1983). Stages and processes of self-change of smoking: toward an integrative model of change. Journal of Consulting and Clinical Psychology, 51(3), 390-395.
  • Tolman, A. O., & Kremling, J. (2017). Why students resist learning: A practical model for understanding and helping students. Stylus.
  • Yanqun, Z. (2019). The Significance and Instruction of Metacognition in Continuing Education. International Forum of Teaching & Studies 15(1), 29-37.

How Metacognition Can Foster Inclusivity in the Classroom

by Christopher Lee, Snow College

Kelly Field (2018) reports that “A growing body of research suggests that students who feel they belong at their college are more likely to remain there [and] that first-generation and minority students are less likely to feel a connection to their colleges” (para. 27). As an instructor at a 2-year college, I recognize the important role that my institution plays in functioning as a bridge to further educational opportunities, particularly for underrepresented students. Crucial to this mission is ensuring that I do my part to facilitate a classroom environment in which these students feel valued and included.

Inclusivity means working to ensure that curricula and teaching practices don’t exclude marginalized minority students and help to close existing achievement gaps. It means not only valuing diversity but creating a space for diverse groups of students to actually feel included. It entails serious introspection from faculty (before we even enter the classroom) about implicit biases we may hold toward others, opportunities for privileged students to examine their attitudes about underprivileged peers, and opportunities for minority students to critically reflect on their own academic abilities. An inclusive classroom, then, is contingent on honest metacognitive reflection from both faculty and students.


Faculty: Holding Up the Mirror

Inclusivity requires holding the mirror up to ourselves as instructors and asking how our behaviors, teaching practices, and curriculum choices may confirm or exacerbate student feelings of exclusion. As we strive for an inclusive classroom – in relation to race, class, gender identity, sexual orientation, religious affiliation, age, culture, or ideology – it’s critical that we examine the hidden biases we may hold about certain groups of students and recognize how these biases manifest in the classroom.

It’s one thing to acknowledge that we may have negative biases, but can we actually identify and control them? Patricia Devine’s research (cited in Nordell, 2017) suggests that it is possible to identify and mitigate biases, noting that they can be overridden, but not overwritten. In other words, completely removing our biases doesn’t seem to be a realistic goal, but we can moderate them once they are recognized. Devine offers a model for faculty that incorporates key components of metacognitive thinking.

First, we must become aware of our own implicit biases. Although there’s no silver bullet, the Implicit Association Test at Harvard’s Project Implicit can be a useful resource. Second, we must become concerned about the implications and outcomes of our biases, acknowledging that there are very real and harmful consequences to holding unchecked biases. Finally, we must work to replace biases with more productive attitudes that align with our conscious or aspirational values. Subsequently, we can design strategies to monitor and assess our progress.

Metacognitive Practices for Students

The work of creating an inclusive, “decolonized” classroom (Seward, 2019) can’t be reduced to a short and simple list; however, these three practical suggestions can be implemented in any course to harness the benefits of metacognition for increasing inclusivity.

  • Assign Reflective Exercises: Start students reflecting on their thinking processes and assumptions early in the semester, particularly in relation to their abilities and potentially flawed preconceptions about themselves, others, and college. I have students write a short essay about their writing and thinking processes, previous experiences with English courses, including negative internalized experiences, and their expectations about our current class. Students can choose to share their thoughts and experiences openly with each other, demystifying the idea that there’s one “correct” (i.e. white, male, middle class, etc.) way to approach writing, thinking, and other academic skills. Previous negative experiences aren’t necessarily exclusive to them individually and won’t act as permanent barriers to their educational goals.

With opportunities to metacognitively reflect, students are more likely to feel included in the classroom environment, early on, if they see a variety of effective approaches to learning tasks. With this understanding, they need not feel pressure to conform to the norms of a hidden curriculum (Margolis, 2001).

  • Invite Former Underrepresented Students to Speak: Former students, particularly those who are underrepresented, can be a powerful reference point and model for current students, both bolstering the self-efficacy of underrepresented students and busting negative minority stereotypes held among other students. Encourage students who have successfully navigated your course to candidly discuss successes, failures, and effective learning strategies. This could be followed up with a quick one-minute reflection paper in which current students acknowledge their own struggles and make plans for addressing them.

Although we need to be careful not to inappropriately spotlight students (which usually results from us “volunteering” students), this can help underrepresented students to feel more represented and included. We can also use underrepresented student work as models, particularly work that reinforces the idea that there can be multiple ways to reach course goals.

  • Engage Students in High Impact Practices: Design projects that allow for greater engagement. Opportunities to participate in undergraduate research, for instance, require students to design, monitor, and adjust their work with faculty mentoring and peer feedback. I incorporate such research opportunities in my freshman research writing courses to various degrees. As Draeger (2018) notes, “undergraduate research allows students the opportunity to become co-inquirers within an existing scholarly conversation” (para. 4). Actively contributing to an existing academic conversation, rather than passively reporting, requires a number of metacognitive skills: identifying and working to mitigate existing biases about topics, assessing what they already know or think they know, weighing and prioritizing information (including where research gaps exist in the broader conversation), and adjusting a research question when source material presents new and often contradictory evidence. I scaffold assignments with reflective components to serve as individual checkpoints along this path.

First-generation and other minority students, in particular, have been shown to benefit from undergraduate research because of increased interactions with faculty and the institution, closer relationships with peers, and the opportunity to challenge existing knowledge and power structures with their own primary research contributions (Charity Hudley et al., 2017). These outcomes help to alleviate some of the most prominent barriers to an inclusive classroom.

Practices like these, in addition to reflecting on our own potentially excluding attitudes and behaviors, can aid us in shaping our classroom spaces to be more inclusive and, ideally, help further serve the mission of colleges and universities in recruiting, retaining, and advancing minority students.

References

About Us. (2011). Project Implicit. Retrieved July 7, 2020, from https://implicit.harvard.edu/implicit/aboutus.html

Draeger, J. (2018, June 22). Metacognition supports HIP undergraduate research. Improve with Metacognition. https://www.improvewithmetacognition.com/metacognition-supports-hip-undergraduate-research/

Field, K. (2018, June 3). A third of your freshmen disappear. How can you keep them? The Chronicle of Higher Education. https://www.chronicle.com/article/A-Third-of-Your-Freshmen/243560

Charity Hudley, A.H., Dickter, C.L., & Franz, H.A. (2017). The indispensable guide to undergraduate research: Success in and beyond college. New York: Teachers College Press.

Margolis, E. (2001). The hidden curriculum in higher education. New York: Routledge.

Nordell, J. (2017, May 7). Is this how discrimination ends? The Atlantic. https://www.theatlantic.com/science/archive/2017/05/unconscious-bias-training/525405/

Seward, M. (2019, April 11). Decolonizing the Classroom: Step 1. National Council of Teachers of English. https://ncte.org/blog/2019/04/decolonizing-the-classroom/


How to Use Post-Test Analysis to Facilitate Metacognition in the College Classroom

by Gina Burkart, Ed.D., Learning Specialist, Clarke University

Pedagogy for Embedding Strategies into Classes

The transition to college is difficult. Students quickly discover that their old strategies from high school do not serve them well in college when they fail their first exam. As the Learning Specialist, I guide these students in modifying strategies and behaviors and in finding new strategies. This also involves helping them move away from a fixed mindset, in which they believe some students are just born smarter than others, and toward a growth mindset, in which they reflect on habits and strategies and on how to set goals and make changes to achieve desired outcomes. Reflective metacognitive discussion and exercises that develop a growth mindset are necessary for this type of triaging with students (Dweck, 2006; Masters, 2013; Efklides, 2008; VanZile-Tamsen & Livingston, 1999; Livingston, 2003).

As the Learning Specialist at the University, I work with students who are struggling, and I also work with professors in developing better teaching strategies to reach students. When learning is breaking down, I have found that oftentimes the most efficient and effective method of helping students find better strategies is to collaborate with the professor and facilitate strategy workshops in the classroom tailored to the course curriculum. This allows me to work with several students in a short amount of time—while also supporting the professor by demonstrating teaching strategies he or she might integrate into future classes.


An example of a workshop that works well when learning is breaking down in the classroom is the post-test analysis workshop. The post-test analysis workshop (see activity details below) often works well after the first exam. Since most students are stressed about their test results, the metacognitive workshop de-escalates anxiety by guiding students in a strategic reflection on the exam. The reflection demonstrates how to analyze the results of the exam so that students can form new habits and behaviors in an attempt to learn and perform better on the next exam. The corrected exam is an effective tool for fostering metacognition because it shows students where errors occurred in their cognitive processing (Efklides, 2008). The activity also increases self-awareness, imperative to metacognition, as it helps students connect past actions with future goals (Vogeley, Kurthen, Falkai, & Maier, 1999). This is an important step in helping students take control of their own learning and increasing motivation (Livingston & VanZile-Tamsen, 1999; Palmer & Goetz, 1988; Pintrich & DeGroot, 1990).

Post-Test Analysis Activity

When facilitating this activity, I begin by having the professor hand back the exams. I then take the students through a series of prompts that engage them in a metacognitive analysis of their performance on the exam. Since metacognitive experiences also require an awareness of feeling (Efklides, 2008), it works well to have students begin by recalling how they felt after the exam:

  • How did you feel?
  • How did you think you did?
  • Were your feelings and predictions accurate?

The post-test analysis then prompts the students to connect their feelings with how they prepared for the exam:

  • What strategies did you use to study?
    • Bloom’s Taxonomy—predicting and writing test questions from book and notes
    • Group study
    • Individual study
    • Concept cards
    • Study guides
    • Created concept maps of the chapters
    • Synthesized notes
    • Other methods?

Students are given 1-3 minutes to reflect in journal writing upon those questions. They are then prompted to analyze where the test questions came from (book, notes, PowerPoint, lab, supplemental essay, online materials, etc.). It may be helpful to have students work collaboratively on this.

An Analysis of the Test—Where the Information Came From

  • For each question identify where the test question came from:
    • Book (B)
    • In-class notes (C)
    • Online materials (O)
    • Supplemental readings (S)
    • Not sure (?)

After identifying where the test information came from, students are prompted to reflect in journal writing on the questions they missed and on how they might study differently, given where those questions came from. For example, a student may realize that he or she missed all of the questions that came from the book. That student might then set a goal of synthesizing class notes with material from the book within 30 minutes after class, and then using note reduction to create a concept map to study for the next test.

Another student might realize that he or she missed questions because of test-taking errors. For example, she did not read an entire question carefully and chose the wrong response. To resolve this issue, she decided to underline key words in each question in an attempt to slow down while reading. She also realized that she had changed several responses that were originally correct; on the next test, she will resist the urge to overthink her choices and change her answers.

Next, students are taught about Bloom’s Taxonomy and how professors use it to write exams. In small groups, students then use Bloom’s Taxonomy to identify question types; this takes about 20-30 minutes, depending upon the length of the test. For example, students would identify the following test question as a comprehension-level question: Which of the following best describes positive reinforcement? In contrast, the following would be identified as an application-level question: Amy’s parents give her a lollipop every time she successfully uses the toilet. What type of reinforcement is this?

Question Type: Identify Which Level of Bloom’s Taxonomy Each Test Question Is Assessing

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

Students sometimes struggle to distinguish the different levels of questions, so it is helpful to have the small groups share their identified questions with the large group, along with how they determined each question’s level. The professor is also a helpful resource in this discussion.

After discussion of the question types, students return to individual reflection: they count the number of questions they missed at each level of Bloom’s Taxonomy and reflect on what new study strategies they will use based on this new awareness. (A minimal sketch of this tally follows the list below.)

Adding It All Up

  • Count the number of questions missed in each level of Bloom’s Taxonomy.
  • Which types of questions did you miss most often?
  • Compare this with your study methods.
  • What adjustments might you make in how you study and learn the class material based on this information? Which levels of Bloom’s Taxonomy do you need to focus on more in your studying?
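
For readers who want to see the arithmetic of this tally made concrete, here is a minimal sketch in Python. It is purely illustrative, not part of the classroom activity: the missed-question records, source tags, and Bloom’s levels below are invented for the example, and students would normally do this tally on paper or in a simple spreadsheet.

```python
# A minimal, hypothetical tally for the "Adding It All Up" step.
# Each record pairs a missed question's source tag (B = book,
# C = in-class notes, O = online materials, S = supplemental
# readings) with the Bloom's level the question was judged to assess.
from collections import Counter

missed_questions = [
    ("B", "comprehension"),
    ("B", "application"),
    ("C", "application"),
    ("B", "analysis"),
    ("O", "knowledge"),
]

# Count misses by source and by Bloom's level.
by_source = Counter(source for source, _ in missed_questions)
by_level = Counter(level for _, level in missed_questions)

print("Missed by source:", dict(by_source))  # {'B': 3, 'C': 1, 'O': 1}
print("Missed by level:", dict(by_level))

# The most-missed categories suggest where to adjust first: here,
# book-based questions at the application level point toward closer
# textbook work aimed at higher levels of Bloom's Taxonomy.
print("Most-missed source:", by_source.most_common(1)[0][0])  # 'B'
print("Most-missed level:", by_level.most_common(1)[0][0])    # 'application'
```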

Finally, students are asked to use their in-class reflections and the post-test analysis to create a new learning plan for the course. (See the learning plan in my previous post, Facilitating Metacognition in the Classroom: Teaching to the Needs of Your Students.) Creating the learning plan could be a graded assignment that students complete outside of class and turn in, and students could be referred to the Academic Resource Center on campus for additional support in formulating it. Additionally, a similar post-test analysis could be assigned outside of class, for a point value, after subsequent exams. This would allow for ongoing metacognitive reflection and self-regulated learning.

This type of Cognitive Strategy Instruction (Scheid, 1993), embedded into the classroom, offers students a chance to become more aware of their own cognitive processes, to learn strategies for improving learning, and to practice using cognitive and metacognitive processes in assessing their success (Livingston, 2003). Importantly, these types of reflective assignments move students away from a fixed mindset and toward a growth mindset (Dweck, 2006). As Masters (2013) pointed out, “Assessment information of this kind provides starting points for teaching and learning.” Additionally, because the post-test analysis offers students greater self-efficacy, control of their own learning, and a sense of purpose, and because it emphasizes the learning rather than the test score, it also positively affects motivation (VanZile-Tamsen & Livingston, 1999).

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Ballantine Books.

Efklides, A. (2008). Metacognition: Defining its facets and levels of functioning in relation to self-regulation and co-regulation. European Psychologist, 13(4), 277-287. Retrieved from https://www.researchgate.net/publication/232452693_Metacognition_Defining_Its_Facets_ad_Levels_of_Functioning_in_Relation_to_Self-Regulation_and_Co-regulation

Livingston, J. A. (2003). Metacognition: An overview. Retrieved from https://files.eric.ed.gov/fulltext/ED474273.pdf

Masters, G. N. (2013). Towards a growth mindset assessment. Retrieved from https://research.acer.edu.au/cgi/viewcontent.cgi?article=1017&context=ar_misc

Palmer, D. J., & Goetz, E. T. (1988). Selection and use of study strategies: The role of studier’s beliefs about self and strategies. In C. E. Weinstein, E. T. Goetz, & P. A. Alexander (Eds.), Learning and study strategies: Issues in assessment, instruction, and evaluation (pp. 41-61). San Diego, CA: Academic.

Pintrich, P. R., & DeGroot, E. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40.

VanZile-Tamsen, C., & Livingston, J. A. (1999). The differential impact of motivation on the self-regulated strategy use of high- and low-achieving college students. Journal of College Student Development, 40(1), 54-60. Retrieved from https://www.researchgate.net/publication/232503812_The_differential_impact_of_motivation_on_the_self-regulated_strategy_use_of_high-_and_low-achieving_college_students

Vogeley, K., Kurthen, M., Falkai, P., & Maier, W. (1999). Essential functions of the human self model are implemented in the prefrontal cortex. Consciousness and Cognition, 8, 343-363.