Understanding Bias in the Disciplines: Part 2 – the Physical and Quantitative Sciences 

by Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St. Edward's University
Simone McKnight (Simone Erchov), Global Systems Technology

In Part 1, we summarized psychologists’ current understanding of bias. In Part 2, we connect conceptual reasoning and metacognition and show how bias challenges clear reasoning even in “objective” fields like science and math.

Science as conceptual

College catalogs’ explanations of general education (GE) requirements almost universally indicate that the desired learning outcome of the required introductory science course is to produce a conceptual understanding of the nature of science and how it operates. Focusing only on learning disciplinary content in GE courses squeezes out stakeholders’ awareness that a unifying outcome even exists. 

Wherever a GE metadisciplinary requirement (for example, science) specifies a choice of a course from among the metadiscipline’s different content disciplines (for example, biology, chemistry, physics, geology), each course must communicate an understanding of the way of knowing established in the metadiscipline. That outcome is what the various content disciplines share in common. A student can then understand how different courses emphasizing different content can effectively teach the same GE outcome.

The guest editor led a team of ten investigators from four institutions and separate science disciplines (biology, chemistry, environmental science, geology, geography, and physics). Their original proposal was to investigate ways to improve learning in GE science courses. While articulating what they, as professors of the metadiscipline of “science,” held in common, the investigators soon recognized that the GE courses they took as students had focused on disciplinary content but scarcely used that content to develop an understanding of science as a way of knowing. After confronting the issue of teaching with such a unifying emphasis, they later turned to the problem of assessing success in producing this different kind of understanding.

Upon discovering no suitable off-the-shelf assessment instrument to meet this need, they constructed the Science Literacy Concept Inventory (SLCI). This instrument later made possible this guest-edited series and the confirmation of knowledge surveys as valid assessments of student learning.

Concept inventories test understanding of the concepts that form the supporting framework for larger overarching blocks of knowledge or thematic ways of thinking or doing. The SLCI tests nine concepts specific to science and three more related to the practice of science, connecting science’s way of knowing with contributions from other requisite GE metadisciplines.

Self-assessment’s essential role in becoming educated

Self-assessment is partly cognitive (the knowledge one has) and partly affective (what one feels about the sufficiency of that knowledge to address a present challenge). Self-assessment accuracy reflects how well a person aligns the two when confronting a challenge.

Developing good self-assessment accuracy begins with an awareness that deeper understanding feels different from the surface knowledge needed merely to pass a multiple-choice test. The ability to accurately feel when deep learning has occurred reveals to the individual when sufficient preparation for a challenge has, in fact, been achieved. We can increase learners’ capacity for metacognition by requiring frequent self-assessments that give them the practice needed to develop self-assessment accuracy. Nowhere is such teaching of metacognition more needed than in introductory GE courses.

Regarding our example of science, the 25 items on the SLCI that test understanding of the twelve concepts derive from actual cases and events in science. Their connection to bias lies in learning that when things go wrong in doing or learning science, some concept is unconsciously being ignored or violated. Violations are often traceable to bias that hijacked the ability to use available evidence.

We often say: “Metacognition is thinking about thinking.” When encountering science, we seek to teach students to “think about” (1) “What am I feeling that I want to be true and why do I have that feeling?” and (2) “When I encounter a scientific topic in popular media, can I articulate what concept of science’s way of knowing was involved in creating the knowledge addressed in the article?”

Examples of bias in physical science

“Misconceptions research” constitutes a block of science education scholarship. Schools do not teach the misconceptions. Instead, people develop preferred explanations for the physical world from conversations that mostly occur in pre-college years. One such explanation addresses why summers are warm and winters are cold. The explanation that Earth is closer to the sun in summer is common and acquired by hearing it as a child. The explanation is affectively comfortable because it is easy, with the ease coming from repeatedly using the neural network that contains the explanation to explain the seasonal temperatures we experience. We eventually come to believe that it is true. However, it is not true. It is a misconception.

When a misconception becomes ingrained in our brain neurology over many years of repeated use, we cannot easily break our habit of invoking the neural network that holds the misconception until we can bypass it by constructing a new network that holds the correct explanation. Still, the latter will not yield a network that is more comfortable to invoke until usage sufficiently ingrains it. Our bias tendency is to invoke the most ingrained explanation because doing so is easy.

Even when individuals learn better, they often revert to invoking the older, ingrained misconception. After physicists developed the Force Concept Inventory (FCI) to assess students’ understanding of conceptual relationships about force and motion, they discovered that GE physics courses only temporarily dislodged students’ misconceptions. Many students soon reverted to invoking their previous misconceptions. The same investigators revolutionized physics education by confirming that active learning instruction better promoted overcoming misconceptions than did traditional lecturing.

The pedagogy that succeeds seemingly activates a more extensive neural network (through interactive discussion, individual and team work on problem challenges, writing, visualizing through drawing, etc.) than was activated to install the misconception initially (learning it through a brief encounter).

Biases reinforced by a desire to believe that something is true or untrue are especially difficult to dislodge. An example of the power of bias with emotional attachment comes from geoscience.

Nearly all school children in America today are familiar with the plate tectonics model, moving continents, and ephemeral ocean basins. Yet, few realize that the central ideas of plate tectonics were once scorned as “Germanic pseudoscience” in the United States. That happened because a few prominent American geoscientists wanted so strongly to believe their established explanations were true that their affect hijacked these experts’ ability to perceive the evidence. These geoscientists also exercised enough influence in the U.S. to keep plate tectonics out of American introductory-level textbooks. American universities introduced plate tectonics in introductory GE courses only years after European universities did.

Example of bias in quantitative reasoning

People usually cite mathematics as the most dispassionate discipline and the one least likely to be corrupted by bias. However, researchers Dan Kahan and colleagues demonstrated that bias also disrupts people’s ability to use quantitative data and think clearly.

Researchers asked participants to resolve whether a skin cream effectively treated a skin rash. Participants received data for subjects who did or did not use the skin cream. Among users, the rash got better in 223 cases and got worse in 75 cases. Of subjects who did not use the skin cream, the rash got better in 107 cases and worse in 21 cases.

Participants then used the data to select from two choices: (A) people who used the cream were more likely to get better, or (B) people who used the cream were more likely to get worse. More than half of the participants (59%) selected the answer not supported by the data. This query was primarily a numeracy test: deducing meaning from numbers.
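Working the ratios rather than comparing raw counts shows why the intuitive reading fails. Here is a minimal sketch in C++ using only the counts given above (the variable names are our own):

    #include <iostream>

    int main() {
        // Counts from the skin-cream vignette described above
        const double betterWithCream = 223.0, worseWithCream = 75.0;
        const double betterWithout   = 107.0, worseWithout   = 21.0;

        // Each group's improvement rate, not the raw counts, answers the question
        const double rateWithCream = betterWithCream / (betterWithCream + worseWithCream); // ~74.8%
        const double rateWithout   = betterWithout   / (betterWithout   + worseWithout);   // ~83.6%

        std::cout << "Improved with cream:    " << 100.0 * rateWithCream << "%\n";
        std::cout << "Improved without cream: " << 100.0 * rateWithout  << "%\n";
    }

Because 223 dwarfs every other count, a count-based reading suggests the cream worked; the rates show that subjects who skipped the cream improved more often, so the data support answer (B).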

Then, using the same numbers, the researchers added affective bait. They replaced the skin cream query with a query about the effects of gun control on crime in two cities. One city allowed concealed gun carry, and another banned concealed gun carry. Participants had to decide whether the data showed that concealed carry bans increased or decreased crime.

Self-identified conservative Republicans and liberal Democrats responded with a desire to believe acquired from their party affiliations. The results were even more erroneous than in the skin cream case. Republicans greatly overestimated increased crime from gun bans, but no more than Democrats overestimated decreased crime from gun bans (Figure 1). When operating from “my-side” bias planted by either party, citizens significantly lost their ability to think critically and use numerical evidence. This was true whether the self-identified partisans had low or high numeracy skills.

[Figure: Graph comparing responses from participants with low and high numeracy skills. Those with high numeracy always have better accuracy (smaller variance around the mean). When the topic was nonpartisan, the means for both groups were roughly the same and showed little bias in the direction of error. When the topic was partisan, those with lower skill showed strong bias, and those with higher skill showed some bias.]

Figure 1. Effect of bias on interpreting simple quantitative information (from Kahan et al. 2013, Fig. 8). Numerical data needed to answer whether a cream effectively treated a rash triggered low bias responses. When researchers employed the same data to determine whether gun control effectively changed crime, polarizing emotions triggered by partisanship significantly subverted the use of evidence toward what one wanted to believe.

Takeaway

Decisions and conclusions that appear to be based solely on objective data rarely are. Increasing metacognitive capacity produces awareness of the prevalence of bias.


Understanding Bias in the Disciplines: Part 1 – the Behavioral Sciences 

by Simone McKnight (Simone Erchov), Global Systems Technology
Ed Nuhfer, California State University (Retired)
Eric Gaze, Bowdoin College
Paul Walter, St. Edward's University

Bias as conceptual

Bias arises from human brain mechanisms that process information in ways that make decision-making quicker and more efficient at the cognitive/neural level. Bias is an innate human survival mechanism, and we all employ it.

Bias is a widely known and commonly understood psychological construct. The common understanding of bias is “an inclination or predisposition for or against something.” People recognize bias by its outcome—the preference to accept specific explanations or attributions as true.

In everyday conversation, discussions about bias concern the preferences and notions people hold on various topics. For example, people know that biases may influence the development of prejudice (e.g., ageism, sexism, racism, tribalism, nationalism) and of political or religious beliefs.

the words "Bias in the Behavioral Sciences" on a yellow backgroundA deeper look reveals that some of these preferences are unconscious. Nevertheless, they derive from a related process called cognitive bias, a propensity to use preferential reasoning to assess objective data in a biased way. This entry introduces the concept of bias, provides an example from the behavioral sciences, and explains why metacognition can be a valuable tool to counteract bias. In Part 2, which follows this entry, we provide further examples from hard science, field science, and mathematics.

Where bias comes from

Biases develop from the mechanisms by which the human brain processes information as efficiently as possible. These unconscious and automatic mechanisms make decision-making more efficient at the cognitive/neural level. Most mechanisms that help the human brain make fast decisions are credited to adaptive survival. Like other survival mechanisms, bias loses value and can become a detriment in a modern civilized world where threats to our survival are infrequent. Cognitive biases are subconscious errors in thinking that lead to misinterpreting subsequent information from the environment. These errors, in turn, impact the rationality and accuracy of decisions and judgments.

When we frame unconscious bias within the context of cognitive bias and survival, it is easier to understand how all of us have inclinations to employ bias and why any discipline that humans manage is subject to bias. Knowing this makes it easier to account for the frequent biases affecting the understanding and interpreting of diverse kinds of data.

People easily believe that bias only exists in “subjective” disciplines or contexts where opinions and beliefs seem to guide decisions and behavior. However, bias manifests in how humans process information at the cognitive level. Although it is easier to understand bias as a subjective tendency, the typical way we process information means that bias can pervade all of our cognition.

Intuitively, disciplines relying on tangible evidence, logical arguments, and natural laws of the physical universe would seem factually based and less influenced by feelings and opinion. After all, “objective disciplines” do not predicate their findings on beliefs about what “should be.” Instead, they measure tangible entities and gather data. However, even in the “hard science” disciplines, the development of a research question, the data collected, and the interpretations of data are vulnerable to bias. Tangible entities such as matter and energy are subject to biases as simple as differences in perception of the measured readings on the same instrument. In the behavioral sciences, where investigative findings are not constrained by natural law, bias can be even harder to detect. Thus, all scientists carry bias into their practice of science, and students carry bias into their learning of it.

Metacognition can help counter our tendencies toward bias because it involves bringing relevant information about a process (e.g., conducting research, learning, or teaching) into awareness and then using that awareness to guide subsequent behaviors.

Consequences of bias

Bias impacts individual understanding of the world, the self, and how the self navigates the world – our schemas. These perceptions may impact elements of identity or characterological elements that influence the likelihood of behaving in one way versus another.

Bias should be assumed to be a potentially influential factor in any human endeavor. Sometimes a bias toward an explanation develops after we hear it in childhood and then invoke it for years. Even after seeing the evidence against that bias, our initial explanations are difficult to replace with ones better supported by evidence because we remain anchored to that initial knowledge. Adding a personal emotional attachment to an erroneous explanation makes replacing it even more difficult. Scientists can have emotional attachments to particular explanations of phenomena, especially their own explanations. Then, it becomes easy to selectively block out or undervalue evidence that modifies or contradicts the favored explanation (also known as confirmation bias).

Self-assessment, an example of long-standing bias in behavioral science

As noted in the introduction, this blog series focuses on our team’s work related to self-assessment. Our findings countered results from scores of researchers who replicated and verified the testing done in a seminal paper by Kruger and Dunning (1999). Their research asserted that most people were overconfident about their abilities, and the least competent people had the most overly optimistic perceptions of their competence. Researchers later named the phenomenon the “Dunning-Kruger effect,” and the public frequently deployed “the effect” as a label to disparage targeted groups as incompetent. “The effect” held attraction because it seemed logical that people who lacked competence also lacked the skills needed to recognize their deficits. Quite simply, people wanted to believe it, and replication created a consensus with high confidence in concluding that people, in general, cannot accurately self-assess.

While a few researchers did warn about likely weaknesses in the seminal paper, most behavioral scientists selectively ignored the warnings and repeatedly employed the original methodology. This trend of replication continued in peer-reviewed behavioral science publications through at least 2021.

Fortunately, the robust information storage and retrieval system that characterizes the metadiscipline of science (a characteristic distinguishing science from technology as ways of knowing) makes it possible to challenge a bias established in one discipline by researchers from another. Through publications and open-access databases, the arguments that challenge an established bias become widely available. In this case, the validity of “the effect” rested mainly on mathematical arguments and not, as presumed, on arguments residing solely within the expertise of behavioral scientists.

No mathematics journal had ever examined the numeracy of the arguments that established and perpetuated belief in “the effect.” However, mathematics journals offered the benefit of reviewers who specialized in quantitative reasoning and were not emotionally attached to any consensus established in behavioral science journals. These reviewers agreed that the long-standing arguments supporting the Dunning-Kruger effect were mathematically flawed.

In 2016 and 2017, Numeracy published two articles from our group that detailed the mathematical arguments that established the Dunning-Kruger effect conclusions and why these arguments are untenable. When examined by methods the mathematics reviewers verified as valid, our data indicated that people were generally good at self-assessing their competence and confirmed that there were no marked tendencies toward overconfidence. Experts and novices proved as likely to underestimate their abilities as to overestimate them. Further, the percentage of those who egregiously overestimated their abilities was small, in the range of about 5% to 6% of participants. However, our findings confirmed a vital conclusion of Kruger and Dunning (1999): experts self-assess better than novices (variance decreases as expertise increases), and self-assessment accuracy is attainable through training and practice.

By 2021, the information released in Numeracy began to penetrate the behavioral science journals. This blog series, our earlier posts on this site, and archived presentations to various audiences (e.g., the National Numeracy Network, the Geological Society of America) further broadened awareness of our findings.

Interim takeaways

Humans construct their learning from mentally processing life experiences. During such processing, we simultaneously construct some misconceptions and biases. The habit of drawing on a misconception or bias to explain phenomena ingrains it and makes it difficult to replace with correct reasoning. Affective attachments to any bias make overcoming the bias extremely challenging, even for the most accomplished scholars.

It is essential to realize that we can reduce bias by employing metacognition to recognize bias originating within us at the individual level and by considering bias that influences us but originates from or is encouraged by groups. In the case above, we were able to explain the bias within the behavioral sciences by showing how repeatedly mistaking mathematical artifacts for products of human behavior produced a consensus that held understanding of self-assessment captive for over two decades.

Metacognitive self-assessment seems necessary for initially knowing self and later for recognizing one’s own personal biases. Self-assessment accuracy is valuable in using available evidence well and reducing the opportunity for bias to hijack our ability to reason. Developing better self-assessment accuracy appears to be a very worthy objective of becoming educated.


Introduction: Why self-assessment matters and how we determined its validity 

By Ed Nuhfer, Guest Editor, California State University (retired)

There are few exercises of thinking more metacognitive than self-assessment. For over twenty years, behavioral scientists accepted that the “Dunning-Kruger effect,” which portrays most people as “unskilled and unaware of it,” correctly described the general nature of human self-assessment. Only people with significant expertise in a topic were supposedly capable of self-assessing accurately, while those with the least expertise held highly overinflated views of their abilities.

The authors of this guest series have engaged in a collaborative effort to understand self-assessment for over a decade. We documented how the “Dunning-Kruger effect,” from its start, rested on specious mathematical arguments. Unlike what the “effect” asserts, most people do not hold overly inflated views of their competence, regardless of their level of expertise. We summarized some of our peer-reviewed work in earlier articles in “Improve with Metacognition” (IwM). These are discoverable by using “Dunning-Kruger effect” in IwM’s search window.

Confirming that people, in general, are capable of self-assessing their competence affirms the validity of self-assessment measures. The measures inform efforts in guiding students to improve their self-assessment accuracy. 

This introduction presents commonalities that unify the series’ entries to follow. In the entries, we hotlink the references available open-access within the blogs’ text and place all other cited references at the end.

Why self-assessment matters

After an educator becomes aware of metacognition’s importance, teaching practice should evolve beyond finding the best pedagogical techniques for teaching content and assessing student learning. The “place beyond” focuses on teaching the student how to develop a personal association with content as a basis for understanding self and exercising higher-order thinking. Capturing, in a written teaching/learning philosophy, how content expertise and self-understanding develop together expedites achieving both. Self-assessment could be the most valuable of all the varieties of metacognition that we employ to deepen our understanding.

Visualization is conducive to connecting essential themes in this series of blogs that stress becoming better educated through self-assessment. Figure 1 depicts the role and value of self-assessment from birth at the top of the figure to becoming a competent, autonomous lifelong learner by graduation from college at the bottom.

[Figure: components that come together to promote lifelong learning: choices and effort through experiences; self-assessment; self-assessment accuracy; self-efficacy; self-regulation]

Figure 1. Relationship of self-assessment to developing self-regulation in learning. 

Let us walk through this figure, beginning with early life Stage #1 at the top. This stage occurs throughout the K-12 years, when our home, local communities, and schools provide the opportunities for choices and efforts that lead to experiences that prepare us to learn. In studies of Stage 1, John A. Ross made the vital distinction between self-assessment (estimating immediate competence to meet a challenge) and self-efficacy (perceiving one’s personal capacity to acquire competence through future learning). Developing healthy self-efficacy requires considerable practice in self-assessment to develop consistent self-assessment accuracy.

Stage 1 is a time that confers much inequity of privilege. Growing up in a home with a college-educated parent, attending schools that support rich opportunities taught in one’s native language, and living in a community of peers from homes of the well-educated provide choices, opportunities, and experiences relevant to preparing for higher education. Over 17 or 18 years, the relevant self-assessments these experiences afford sum to significant advantages for those living in privilege when they enter college.

However, these early-stage self-assessments occur by chance. The one-directional black arrows through Stage 2 communicate that nearly all the self-assessments occur without any intentional feedback from a mentor to deliberately improve self-assessment accuracy. Sadly, this state of non-feedback continues for nearly all students experiencing college-level learning too. As a result, higher education largely fails to mitigate the inequities rooted in differences of privilege.

The red two-directional arrows at Stage 3 begin what the guest editor and authors of this series advocate as a very different kind of educating from that commonly practiced in American institutions of education. We believe education could and should provide self-assessments by design, hundreds in each course, all followed by prompt feedback, to utilize the disciplinary content for intentionally improving self-assessment accuracy. Prompt feedback begins to allow the internal calibration needed for improving self-assessment accuracy (Stage #4).

One reason to deliberately incorporate self-assessment practice and feedback is to educate for social justice. Our work indicates that strengthening the self-assessment accuracy of students can enable the healthy self-efficacy needed to succeed in the kinds of thinking and professions that require a college education, making up for the years of accumulated relevant self-assessments missing from the backgrounds of the less privileged.

By encouraging attention to self-assessment accuracy, we seek to develop students’ felt awareness of surface learning changing toward the higher competence characterized by deep understanding (Stage #5). Awareness of the feeling characteristic of attaining deep understanding enables better judgment of when one has adequately prepared for a test or produced an assignment of high quality that is ready for submission.

People attain Stage #6, self-regulation, when they understand how they learn, can articulate it, and can begin to coach others on how to learn through effort, using available resources, and accurately self-assessing. At that stage, a person has not only developed the capacity for lifelong learning but has also developed the capacity to spread good habits of mind by mentoring others. Thus the arrows on each side of Figure 1 lead back to the top and signify both the reflection needed to realize how one’s privileges were relevant to one’s learning success and the cycling of that awareness to a younger generation in home, school, and community.

A critical point to recognize is that programs that do not develop students’ self-assessment accuracy are less likely to produce graduates with healthy self-efficacy or the capacity for lifelong learning than programs that do. We should not just be training people to grow in content skills and expertise but also educating them to grow in knowing themselves. The authors of this series have engaged for years in designing and doing such educating.

The common basis of investigations

The aspirations expressed above have a basis in hard data from assessing the science literacy of over 30,000 students and “paired measures” on about 9,000 students with peer-reviewed validated instruments. These paired measures allowed us to compare self-assessed competence ratings on a task and actual performance measures of competence on that same task. 

Knowledge surveys serve as the primary tool through which we can give “…self-assessments by design, hundreds in each course all followed by prompt feedback.” Well-designed knowledge surveys develop each concept with detailed challenges that align well with the assessment of actual mastery of the concept. Ratings (measures of self-assessed competence) expressed on knowledge surveys and scores (measures of demonstrated competence) expressed on tests and assignments are both scaled from 0 to 100 percentage points and are directly comparable.

When the difference between the paired measures is zero, there is zero error in self-assessment. When the difference (self-assessed minus demonstrated) is a positive number, the participant tends toward overconfidence. When the difference is negative, the participant has a tendency toward under-confidence.
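As a minimal sketch with hypothetical numbers, the paired-measures arithmetic reduces to one subtraction per participant:

    #include <iostream>

    // Self-assessment error: knowledge-survey rating minus demonstrated score,
    // both expressed on the same 0-100 percentage-point scale described above.
    double selfAssessmentError(double rating, double score) {
        return rating - score;
    }

    int main() {
        // Hypothetical participant: rated their competence at 82, scored 74
        const double error = selfAssessmentError(82.0, 74.0);
        if (error > 0.0)
            std::cout << "Overconfident by " << error << " points\n";
        else if (error < 0.0)
            std::cout << "Underconfident by " << -error << " points\n";
        else
            std::cout << "Zero self-assessment error\n";
    }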

In our studies that established the validity of self-assessment, the demonstrated competence data in our paired measures came mainly from the validated instrument, the Science Literacy Concept Inventory or “SLCI.” Our self-assessed competence data came from knowledge surveys and global single queries tightly aligned with the SLCI. Our team members incorporate self-created knowledge surveys of course content into their higher education courses. Knowledge surveys have proven to be powerful research tools and classroom tools for developing self-assessment accuracy.

Summary overview of this blog series

IwM is one of the few places where the connection between bias and metacognition has been directly addressed (e.g., see a fine entry by Dana Melone). The initial two entries of this series will address metacognitive self-assessment’s relation to the concept of bias.

Later contributions to this series consider privilege and understanding the roles of affect, self-assessment, and metacognition when educating to mitigate the disadvantages of lesser privilege. Other entries will explore the connection between self-assessment, participant use of feedback, mindset, and metacognition’s role in supporting the development of a growth mindset. Near the end of this series, we will address knowledge surveys, the instruments that incorporate the disciplinary content of any college course to improve learning and develop self-assessment accuracy. 

We will conclude with a final wrap-up entry to aid readers’ awareness that what students “think about” when they “think about thinking” ought to provide a map for reaching a deeper understanding of what it means to become educated and to acquire the capacity for lifelong learning.


Writing metacognitive learning objectives for metacognitive training that supports student learning

by Patrick Cunningham, Ph.D., Rose-Hulman Institute of Technology

Teaching through the COVID-19 pandemic has highlighted disparities in how students approach their learning. Some have continued to excel with hybrid and online instruction while others, and more than usual, have struggled. Compounding these struggles, such students also find themselves behind or with notable gaps in the prerequisite knowledge for subsequent courses. A significant component of these struggles may be due to not having developed independence in their learning. Engaging in explicit metacognitive activities directly addresses this disparity, improving students’ abilities to overcome these struggles. Given the present challenges of living through COVID-19, this is more important now than ever. However, creating activities with a metacognitive focus is likely unfamiliar, and there are not many resources to guide their development. Here I seek to demonstrate an accessible approach, an entry point, for supporting students’ growth as more skillful and independent learners, grounded in metacognition.

Cognitive Learning Objectives Are Just the Start

Creating explicit learning objectives is one means by which educators commonly try to support students’ independence in learning. Typically, learning objectives focus on the cognitive domain, often based on Bloom’s Taxonomy. The cognitive domain refers to how we think about or process information. Bloom’s taxonomy for the cognitive domain comprises Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating (Krathwohl, 2002). Each of these gives an indication of how a student is expected to engage with or use the material we are teaching. For constructing learning objectives, there are lists of action verbs associated with each Bloom category.

Consider this cognitive learning objective for a computer programming course.

Students will be able to create and implement functions with inputs and an output in C++ programs to accomplish a specified task on an Arduino board with a prewired circuit.

This learning objective is specific to a lesson and targets the Apply level of Bloom’s taxonomy. (The approach I am presenting could equally apply to broader course-level learning objectives, but I think the specificity here makes the example more tangible.) This objective uses good action verbs (create, implement) and has a prescribed scope and context. But is it adequate for guiding student learning if they are struggling with it?
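To make the target performance concrete, here is a minimal sketch of the kind of solution the objective asks for; the LED task and pin number are hypothetical stand-ins for whatever the prewired circuit actually provides:

    // Hypothetical Arduino (C++) solution sketch for the objective above.
    // Assumes the prewired circuit drives an LED on PWM pin 9.
    const int LED_PIN = 9;

    // A function with inputs and an output: converts a 0-100 percent
    // brightness request into the 0-255 duty cycle analogWrite() expects.
    int percentToPwm(int percent) {
        if (percent < 0)   percent = 0;    // clamp out-of-range requests
        if (percent > 100) percent = 100;
        return (percent * 255) / 100;
    }

    void setup() {
        pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
        analogWrite(LED_PIN, percentToPwm(40));  // hold the LED at 40% brightness
    }

The metacognitive question that follows is whether students can judge their own readiness to produce such a function unaided.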

Metacognitive Learning Objectives Can Direct Learning Activities


Cognitive learning objectives point students to what they should be able to do with the information but do not usually provide guidance for how they should go about developing their ability to do so. Metacognition illuminates the path to developing our cognitive abilities. As a result, metacognitive training can support students’ attainment of cognitive learning objectives. Such training requires metacognitive learning objectives.

Metacognitive learning objectives focus on our awareness of the different ways we process information and how we regulate and refine how we process information. Metacognitive knowledge includes knowledge of how people (and we as individuals) process information, strategies for processing information and monitoring our thinking, and knowledge of the cognitive demands of specific tasks (Cunningham, et al., 2017). As we engage in learning we draw on this knowledge and regulate our thinking processes by planning our engagement, monitoring our progress and processes, adjusting or controlling our approaches, and evaluating the learning experience (Cunningham, et al., 2017). Metacognitive monitoring and evaluation feed back into our metacognitive knowledge, reinforcing, revising, or adding to it.

Example Implementation of Metacognitive Learning Objectives

Considering our example cognitive learning objective, how could we focus metacognitive training to support student attainment of it? Two possibilities include 1) focusing on improving students’ metacognitive knowledge of strategies to practice and build proficiency with writing functions or 2) supporting students’ accurate self-assessment of their ability to demonstrate this skill. Instructors can use their knowledge of their students’ current strategies to decide which approach (or both) to take. For example, if it appears that most students are employing limited learning strategies, such as memorizing examples by reviewing notes and homework, I might focus on teaching students about a wider range of effective learning strategies. The associated metacognitive learning objective could be:

Students will select and implement at least two different elaborative learning strategies and provide a rationale for how they support greater fluency with functions.

The instructional module could differentiate categories of learning strategies (e.g., memorization, elaboration, and organization), demonstrate a few examples, and provide a more complete list of elaborative learning strategies (Seli & Dembo, 2019). Then students could pick one to do in class and one to do as homework. If, on the other hand, it appears that most students are struggling to self-assess their level of understanding, I might focus on teaching students how to better monitor their learning. The associated metacognitive learning objective could be:

Students will compare their function written for a specific application, and completed without supports, to a model solution, using this as evidence to defend and calibrate their learning self-assessment.

Here the instructional module could be a prompt for students to create and implement a function, from scratch without using notes or previously written code. After completing their solutions, students would be given access to model solutions. In comparing their solution to the model, they could note similarities, differences, and errors. Then students could explain their self-assessment of their level of understanding to a neighbor or in a short paragraph using the specific comparisons for evidence. These examples are metacognitive because they require students to intentionally think about and make choices about their learning and to articulate their rationale and assessment of the impact on their learning. I believe it is important to be explicit with students about the metacognitive aim – to help them become more skillful learners. This promotes transfer to other learning activities within the class and to their learning in other classes.

Implementing and Supporting Your Metacognitive Outcomes

In summary, to create actionable metacognitive learning objectives I recommend,

  • clarifying the cognitive learning objective(s) you aim to support
  • investigating and collecting evidence for what aspect(s) of learning students are struggling with
  • connecting the struggle(s) to elements of metacognition
  • drafting a metacognitive learning objective(s) that address the struggle(s)

Armed with your metacognitive learning objectives you can then craft metacognitive training to implement and assess them. Share them with a colleague or someone from your institution’s teaching and learning center to further refine them. You may want to explore further resources on metacognition and learning such as Nilson’s (2013) Creating Self-Regulated Learners, Seli and Dembo’s (2019) Motivation and Learning Strategies for College Success, and Svinicki’s GAMES© survey in Svinicki (2004). Or you could watch my Skillful Learning YouTube video, What is Metacognition and Why Should I Care?

If metacognition is less familiar to you, avoid overwhelm by choosing one element of metacognition at a time. For example, beyond the above examples, you could focus on metacognitive planning to support students in better navigating an open-ended project. Or you could help students better articulate what it means to learn something or experience the myth of multitasking (we are task switchers), elements pertaining to metacognitive knowledge of how people process information. Learn about that element of metacognition, develop a metacognitive learning objective for it, create the training materials, and implement them with your students. You will be supporting your students’ development as learners generally, while you also promote deeper learning of your cognitive course learning objectives. Over time, you will have developed a library of metacognitive learning objectives and training, which you could have students explore and self-select from based on their needs.

Acknowledgements

This blog post is based upon metacognition research supported by the National Science Foundation under Grant Nos. 1932969, 1932958, and 1932947. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

References

Cunningham, P. J., Matusovich, H. M., Hunter, D. A., Williams, S. A., & Bhaduri, S. (2017). Beginning to Understand Student Indicators of Metacognition. In the proceedings of the American Society for Engineering Education (ASEE) Annual Conference & Exposition, Columbus, OH.

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212-218.

Nilson, L. (2013). Creating self-regulated learners: Strategies to strengthen students’ self-awareness and learning skills. Stylus Publishing, LLC.

Seli, H., & Dembo, M. H. (2019). Motivation and learning strategies for college success: A focus on self-regulated learning. Routledge.

Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. Anker Publishing Company.


Building Emotional Regulation and Metacognition through Academic Entrepreneurship

by Traci McCubbin, M.A., Director of the Promise Program, Merrimack College

(Post #3 Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

I teach a required academic study skills course for undergraduate students who have been placed on academic probation. Students share a variety of reasons that have led to their academic predicament, including but not limited to: underdeveloped academic and/or study skills, social and emotional difficulties, time management flaws, and economic challenges.

After digging a bit deeper with students, I found a common trend in addition to the reasons they shared: they lacked positive coping strategies for regulating their emotions. These emotions could be related to difficulties experienced both inside and outside of the classroom. For example, I had students report that they had not been able to cope with the crushing emotions of a close friendship ending. They had either stopped attending class or could not focus in class for weeks.


As you may guess, their poor academic performance was hindering their academic confidence, and their mindset was more fixed than growth. This blog post shares my creation of self-regulation and metacognition development activities that parallel steps that might be taken when professionals create a business plan. Hence the course title, Academic Entrepreneurship.

Motivating Question: How could I even begin to teach academic strategies or have students reflect on their metacognition, if I couldn’t address their emotional state?

Drawing on Literature and Personal Experience

To begin to answer this question, I turned to the research and published work of Mary Helen Immordino-Yang, Emotions, Learning, and the Brain, and Carol Dweck, Mindset: The New Psychology of Success. Immordino-Yang’s (2016) research reveals that emotions must be present for learning to occur and that strong social emotions, both positive and negative, have the power to motivate our decisions and actions, including educational decisions and actions (Immordino-Yang, 2016, pp. 107, 171). Dweck’s (2006) studies consistently show the positive power of a growth mindset and the disruptive power of a fixed mindset. Growth mindset is the idea that intelligence and abilities can be developed over time with hard work and persistence, while fixed mindset is the belief that intelligence is predetermined or set (Dweck, 2006).

Through my own reflection on my academic journey, I began to understand how my emotions both positively and negatively impacted my learning. During my middle school days, I struggled with math. My mindset was fixed, and I believed that I was not capable of being successful in this subject area. It was as if every time a new concept was taught, I could feel a metal fortress of walls close around my brain to prevent any helpful information from penetrating. Despite this struggle, I did finally master fractions and some introductory algebra concepts.

As one might expect of a student with a fixed mindset, my frustrations with math and my feelings of defeat followed me from middle school to high school. My high school math teacher started our class off with a review of fractions; immediately, I felt my heart race, my palms get sweaty, and the metal walls beginning to close in. It was in this moment of panic that I decided to take a few deep breaths, which allowed me to gain clarity. I reminded myself that I already knew how to handle fractions and that I was capable of learning. That moment was life-changing: I had adopted a growth mindset. I began to apply this strategy to my fixed-mindset areas, including but not limited to: running, science, and drumming. Over time, I began to take more advanced math courses, and my overall high school GPA began to climb. I have demonstrated both a growth and a fixed mindset in different areas of my academic, professional, and personal life. I believe the same must be true for most people as well as for my students.

My personal experiences, combined with the literature, led me to incorporate key components into my study skills course: emotional regulation practices, regular activities to incorporate mindfulness and mindset, and an overarching course theme of entrepreneurship.

Academic Entrepreneurship Class Context

I decided to provide my students with the opportunity to practice coping skills for regulating their emotions, better understand their mindset, and explore the power of growth mindset. Throughout the semester, we opened the start of each class with a 5-minute-or-less mindfulness meditation or a meditative activity such as mindfulness coloring or progressive relaxation. Students were then given time to reflect on the activity and share how they could apply this strategy in their personal lives and/or in the classroom when they felt overwhelmed or highly energized. Mindset was introduced through a series of video clips and case studies. Students were given multiple opportunities throughout the semester to reflect on their mindset and identify opportunities to challenge their mindset.

Concurrent with the self-regulation activities, students were asked to view their academic approach through the lens of an entrepreneur to enhance their metacognitive perspective. The idea is that by building their personal academic business plan, students are empowered to take ownership of their academic experience through a series of metacognitive reflections, exploration of new study skill strategies, and opportunities to practice new and strengthen pre-existing academic skillsets. Students were asked to focus on four areas of a business plan:

  • Company Descriptions: Students create their description by engaging in activities and reflections designed to help them identify their interests, personal values, previous academic experiences, activities that bring them joy, and areas of struggle.
  • Projections: Instead of setting financial projections, students are introduced to SMART Goals and set 4-5 goals with benchmarks for tracking their progress. Students are encouraged to set 2 goals related to their academic progress, one for health and wellness, and one for professional discovery.
  • SWOT Analysis: Students work through motivational interviewing to help each other identify their strengths and successes, areas of weakness, opportunities, and threats. They are also challenged to address their weaknesses and threats by applying their strengths and resources.
  • Marketing Plan: Through a series of activities and reflections, students create a plan to sell their Academic Success Business by identifying skills that they strengthen over the semester, resources they accessed, strategies they incorporated, and how these steps translate to leadership.

[Schematic with three components: 1) fixed mindset and emotional dysregulation; 2) practicing emotional regulation skills, identifying mindset, and working toward a growth mindset; 3) positive student development outcomes]

Figure 1. Academic Entrepreneurship Course Process

Concluding Question: Was I able to help my students practice and implement coping skills for managing their emotions, take ownership of their academic experience, develop a growth mindset, and think critically about their own thinking and learning?

Yes, somewhat, and no… the answer is a bit more complicated and dependent on the student.

Students did proactively engage in the mindfulness meditations and activities of their own accord. They always had the option to remain respectfully quiet and not participate in the meditations or activities. When prompted by an anonymous poll in class about their recent meditative experience, the majority of students requested that we allow for longer practices and activities. They also proactively engaged in dialogues on how they could use these techniques during study breaks, stressful parts of a test, or when dealing with their roommates.

Students landed in very different places when it came to taking ownership of their academic experience, development of a growth mindset, and metacognitive thinking. By the end of the semester a few students had fully taken ownership of their academic experience, were thinking critically and questioning their learning approach and actions, were working towards developing a growth mindset, and could identify when a fixed mindset was starting to develop.

The majority of the students made progress in one area and less progress in the other areas, or only made progress in one area. A few did not make progress outside of practicing their emotional regulation activities.

Though results were mixed, I still believe it is important to teach emotional regulation techniques, provide space for practice, and give students the time to explore and understand their mindset and metacognitive perspective. If students are more aware of their emotional state and able to exercise regulation strategies, they will be better equipped for reflecting on their mindset and metacognitive perspective. This understanding will help them implement a potential shift in perspective and targeted strategies for success. Development takes time and cannot always occur in the framework of a semester. I believe the seeds have been planted and can be nurtured by the student when they are ready to tend to their garden.

References

Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.

Immordino-Yang, M. H. (2016). Emotions, learning, and the brain: Exploring the educational implications of affective neuroscience. W. W. Norton & Company.

Resources

TEDx Manhattan Beach. (2011). Mary Helen Immordino-Yang – Embodied Brains, Social Minds. Retrieved from https://www.youtube.com/watch?v=RViuTHBIOq8

Trevor Ragan. (2016). Growth Mindset Introduction: What it is, How it Works, and Why it Matters. Retrieved from: https://www.youtube.com/watch?v=75GFzikmRY0

Trevor Ragan. (2014). Carol Dweck – A Study on Praise and Mindsets. Retrieved from: https://www.youtube.com/watch?v=NWv1VdDeoRY#action=share


Helping students become self-directed writers

Dr. Christina Hardway, Professor, Department of Psychology, Merrimack College

(Post #2: Integrating Metacognition into Practice Across Campus, Guest Editor Series Edited by Dr. Sarah Benes)

Helping students to become self-directed learners is, arguably, one of the most important outcomes of education. Self-directed learning is proposed as a circular (and iterative) process. It involves making a plan, monitoring one’s progress, and then making changes or adapting as needed. These behaviors occur within the context of one’s beliefs about learning and one’s abilities to succeed (see figure, adapted from Ambrose et al., 2010).

[Figure: elements of self-directed learning, adapted from Ambrose et al. (2010): assessing the assignment; evaluating personal resources; planning accordingly; applying the plan and monitoring progress; reflecting and adjusting if needed]

Helping students to build better metacognitive skills during their regular coursework is important (see Education Endowment Foundation, 2020). This is, perhaps, because metacognitive knowledge (i.e., cognition about cognition) is a relatively abstract concept. Learning theorists like Jean Piaget suggest that learning concrete concepts occurs before learning abstract principles. For this reason, I believe that it is important to provide students with explicit tasks embedded in their courses so that they can practice these skills in order to build this more abstract and flexible set of metacognitive competencies.

This blog post shares activities and suggestions to help students build more metacognitive skills and become better self-directed learners as they complete a challenging, semester-long writing assignment.      

Beliefs and Assumptions

I have taught a writing intensive research methodology course for many years, and the work in this course lends itself to an embedded approach to teaching metacognitive skills. It also presents an opportunity to help students examine their implicit attitudes toward learning and writing. Students come to the classroom with ideas about themselves as writers and may labor under notions like, “I am not a good writer” or “I have to wait until the last minute to start, because that is when I do my best work.” It is within this context that teaching students explicit and concrete ways to self-regulate their learning of the writing process is helpful. Providing activities throughout the semester helps students adjust these beliefs and build better writing practices, which can help them to not only convey their ideas, but also learn from that writing process.

Additionally, the kind of writing required in research courses is often novel for undergraduate students. Many students enrolled in the course are in their second or third semester of college and have never written a long research proposal. Their assumptions about how to approach this task are, therefore, not always aligned with the requirements. Many students also experience anxiety when faced with an assignment like writing an extensive research paper for the first time. As a result, the assignment of writing a long research proposal, as they are asked to do in this course, provides an opportunity to practice the emotional regulation skills required to successfully manage their intellectual endeavors.

Activities to guide the process of self-directed learning

For each phase of this self-directed learning cycle, I include prompts to guide students to explicitly consider their (often) implicit assumptions about the way they work. Each of these activities gives students the opportunity to reflect on their understanding of the writing process and build better metacognitive skills. Sometimes, these activities are presented in a free-writing exercise, and I commonly divide students into smaller groups to discuss their responses and then report back to the group. This sharing allows students to see that their peers often experience the same struggles during the writing process, and they can offer one another advice or support.

Assessing the assignment. With the permission of previous students, I provide examples of completed work to new students, together with my own annotations, highlighting places where and how requirements were met. This gives them a concrete understanding of what to accomplish. Additionally, I provide a detailed rubric that I review with students multiple times so they can continually compare their progress with the final expectations of the assignment.

Evaluating personal resources. I prompt students to evaluate their personal resources as writers early in the course. To accomplish this, I ask them to reflect on their approach to writing by responding to questions like: “Please tell me a bit about your writing process and a few ways you would like to improve as a writer” (adapted from Dunn, 2011). This reflection invites them to step back from the immediate tasks and see their work as connected to their development as scholars, writers, and learners.

Planning. To help students make appropriate plans for completing a long multi-step assignment, I ask them to develop a concrete work-plan, as well as to discuss these plans with others. Two kinds of conversations can facilitate this process. One set of prompts gives students a chance to make specific plans to complete their work, including questions like “Identify times you can complete this work” and “How much work will you complete at each time?” The other set of prompts is designed to scaffold their intellectual development. Through small-group conversations, students describe their research ideas to other students, with instructions like this: “Please describe your research interest. This is an opportunity to discuss your research ideas with someone else. Talking through your ideas is a good way to not only receive feedback, but also, it gives you a sense about which things are clear to you and which concepts need more clarification.”

Applying & Monitoring. I also ask students to write drafts of sections of this larger paper and to visit a writing fellow in our College Writing Center to discuss them. To help students monitor their progress, I have asked them to complete reflective activities after tutorial sessions, including questions like, “Please describe what you learned about the writing process in your meeting” and “Please describe AT LEAST three specific revisions for your paper, based on your meeting with the Writing Fellow.”

Reflecting & Adjusting. Several reflective opportunities embedded in the course help students to adjust their approach to writing.

  1. Peer review reflections: At the more immediate level, I ask students to engage in an intensive peer-review process, whereby they read each other’s papers to provide specific feedback. This process of helping others to improve their writing often prompts them to reflect more broadly on the writing process. I ask students to use the paper’s grading rubric, as well as a series of questions that help them think about ways to evaluate whether the paper under review meets the criteria. For example, I ask them to notice if they need to re-read a passage to understand the author’s point, as this might indicate revision is warranted. After peer review, students engage in conversations about what they have learned from the process, and I also ask them to identify at least three specific changes to their papers they should focus on next. In providing this feedback, students must step back and think about what makes writing successful, and our subsequent discussions facilitate the development of metacognitive knowledge.
  2. Personal growth reflections: A second set of reflective activities was suggested by our Writing Center and is designed to help students consider the broader ways in which they have changed as writers. These include questions like, “Please consider the different phases of this assignment and discuss what you have learned about writing” and “What are the ways you have improved as a writer? What are some ways that you would like to improve in the future?” This combination of fine-grained, detail-oriented questions and bigger-picture questions is intended to help students develop fundamental metacognitive skills as well as a more nuanced understanding of metacognition for their identity as learners and writers.

The self-directed learning cycle is a circular process whereby students bring the skills they learn in one course to their next endeavors. Through this process of sharing and reflecting, they build their metacognitive skills and become more comfortable with their inchoate ideas and compositions. Hopefully, students are then able to transfer these skills into future courses and into their lives outside of academics as well.

References

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. Jossey-Bass.

Dunn, D. S. (2011). A short guide to writing about psychology (3rd ed.). Pearson Longman.

Education Endowment Foundation (2020). Metacognition and Self-Regulated Learning: Guidance Report. Retrieved on July 7, 2021 from https://educationendowmentfoundation.org.uk/public/files/Publications/Metacognition/EEF_Metacognition_and_self-regulated_learning.pdf


Metacognition and First-Year Students

by Megan Morrissey, Assistant Director of Student Success, Mount Saint Mary College

MY OWN INTRODUCTION TO METACOGNITION

“But, Meg, how am I supposed to remember this stuff?”

I heard this question quite frequently throughout my first year in 2012-2013 as a part-time Academic Coach, a new position for me in higher education. The core values of my job included:

  • developing holistic relationships with my students
  • assisting them in feeling more confident as college students, both academically and personally
  • aiding in their overall transition to college life

Armed with questions and exercises I thought would help my students open up to me, I used an intake form that posed logistical questions, like their contact information and intended major, as well as questions that spoke to their interests and self-awareness. However, what became apparent to me was that this generation of students was craving skills that would help them retain information in more meaningful ways, and my focus became supporting them in becoming more metacognitive.

For my student meetings, my toolkit included study techniques and a number of inventories that assessed students’ learning styles. Although these strategies might have worked for an initial exam, giving students a good place to start, they were not enough to help them fully understand key concepts they would see again and again throughout the semester and the rest of their college careers. Students used the skills I gave them to cram information and facts in for that first test, and then they would push all of it aside to do the same thing for the next exam, never truly immersing themselves in the material and understanding the concepts themselves.


THE FEAR OF ASKING QUESTIONS

[In high school] “I didn’t have to study. I paid attention and got good grades.”

Prevalent in secondary education, the “teach to the test” mentality that some educators have is understandable: evaluated by standardized test scores, teachers and administrators feel the need to show their students exactly what to expect. However, what happens when these students get to college and suddenly the answers to the exam are not so black and white? When they need to defend an answer instead of just memorizing a PowerPoint slide? When professors want them to immerse themselves in the material? What scared my students the most was their faculty encouraging them to ask questions in class and to share their informed opinions about the material.

On top of feeling intimidated by new faculty, many students come to college with real imposter syndrome and feel as if they do not truly belong there. My students have told me that they “don’t want to bother their professors” or are afraid of asking “dumb questions” and having faculty look at them in a negative way. My students also struggled with figuring out how to word questions to faculty so as to get the clarification they needed. To help them with this task, I would ask them in our meetings to explain what they might be having trouble with in class, asking my own questions to ensure I understood what they needed. Then, we would do a role-play:

  • My students play their professor, and I play the student.
  • They give me the absolute worst things that they think their faculty might say and I, in turn, show them how to navigate the situation and get their questions answered.
  • We then switch roles so that they can practice and anticipate their own reactions and responses.

HOW DO HIGH-ACHIEVING STUDENTS DO IT?

The role-playing exercises I used with my students to ease their anxiety in relating to their faculty led me to think more about how high-achieving students are able to perform at such a caliber. A study from Iowa State University investigated the relationships among academic achievement, achievement goals, and beliefs about learning and study strategies. The researchers concluded that competence can be found both through performance, i.e., the results of an exam or quiz, and through reflection, by comparing the actual results achieved to one’s own expectations (Geller et al., 2018). Four patterns characterize how these students develop competence:

  • They cultivate a personal sense of having learned the material.
  • They create the greatest link with metacognitive skills.
  • They monitor their own progress.
  • They adjust their study habits accordingly.

To achieve more, successful students engaged in metacognition: assessing the material they had already successfully retained, creating questions to more accurately understand the material they had yet to master, and adjusting as needed. They also relied on study skills that included self-testing and planning out their study schedules to avoid procrastinating and cramming.

SUPPORTING STUDENTS WITH ANXIETY

In order to address a rise in the number of students with anxiety-related health issues, we take a reverse-design approach to developing several strategies to help them cope. I assist them in the following tasks and activities:

  • creating a structured study schedule, working backward from the exam date and breaking down how much material they feel they can handle each day (a minimal scheduling sketch follows this list), and
  • rehearsing, i.e., going to the actual classroom when it is empty, creating a practice test (using questions from their professors, textbooks, and/or the internet), sitting in their usual seat, and taking the practice test in the time they are usually allotted. This activity not only facilitates comprehension of the test material but also lets them rehearse the coping mechanisms they will use in case they get anxious, e.g., deep breathing, repeating a mantra they have created, and scanning the test to see which answers they absolutely know. This process of focusing awareness on their state of mind (specifically looking at when, where, how, and why their anxiety peaks) and then using that awareness to adjust their behaviors is another form of metacognition.
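
For readers who like to see the backward-scheduling step made concrete, here is a minimal sketch in Python; the coaching sessions themselves involve no code, and the function name backward_plan, the chapter labels, and the two-topics-per-day pace are all hypothetical illustrations of the first bullet above.

    from datetime import date, timedelta

    def backward_plan(exam_day, topics, per_day):
        """Chunk topics into daily sessions, counting back from the day before the exam."""
        chunks = [topics[i:i + per_day] for i in range(0, len(topics), per_day)]
        first_day = exam_day - timedelta(days=len(chunks))  # last session lands the day before the exam
        return {first_day + timedelta(days=i): chunk for i, chunk in enumerate(chunks)}

    # Example: five chapters at two per day, for a (made-up) exam on May 10.
    for day, chunk in backward_plan(date(2024, 5, 10), ["ch. 1", "ch. 2", "ch. 3", "ch. 4", "ch. 5"], 2).items():
        print(day, chunk)

The only point the sketch makes is the one from the bullet: the plan is anchored to the exam date and sized to how much material the student feels they can handle each day.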

HOW CAN FACULTY HELP?

“If metacognition is the answer to being a more engaged and high achieving learner, what strategies can be utilized in class to better assist them in engaging in metacognition?”

Instructors can be powerful influencers by incorporating strategies into their courses and explicitly encouraging metacognitive practices. A study by Wilson and Bai (2010) at the University of Central Florida concluded that educators need to make metacognition a priority in their lessons and demonstrate the flexibility of these learning strategies in order to show students that they need to reflect and think about how they are retaining information. These reflections can include the following:

  • active discussions and think-alouds
  • asking students to hand in questions anonymously before class—concepts, ideas, and points of information they may not have understood from the previous lesson and/or homework assignment
  • incorporating reflective writing at the end of each class session to guide students in making connections across what they have been learning

CONCLUSION

My students often come into my office during the first few days of their new journey at our institution. Their emotions are raw, and they are terrified of making any type of mistake. In bridging reflective practices with the development of students’ metacognitive skills, the power, for me, lies in asking purposeful, thoughtful questions and thus guiding students as they confront their fear of asking questions and learn to ask them on their own. Metacognitive skills assist them in building self-confidence in and out of the classroom.

WORKS CITED

Geller, Jason, et al. “Study strategies and beliefs about learning as a function of academic achievement and achievement goals.” Memory (2018).

Wilson, Nancy S., and Haiyan Bai. “The relationships and impact of teachers’ metacognitive knowledge and pedagogical understandings of metacognition.” Metacognition and Learning (2010).


Pandemic Metacognition: Distance Learning in a Crisis

By Jennifer A. McCabe, Ph.D., Center for Psychology, Goucher College

The college “classroom” certainly looks different these days. Due to campus closures in the wake of the COVID-19 pandemic, we no longer travel to a common space to learn together in physical proximity. Though most of us have transitioned to online instruction, there was insufficient time to prepare for this new model – instead, we are in the midst of “emergency distance learning,” with significant implications for teacher and student metacognition.


New Demands for Self-regulation

Now that certain overt motivators are no longer present, self-regulated learning is more critical than ever (e.g., Sperling et al., 2004; Wolters, 2003). Students are no longer required to hand in work during class, to engage in in-person class discussions about learned material, or to come face-to-face with instructors who know whether students are keeping up with the course. Instead they must figure out how to engage in the work of learning (and to know it is, indeed, still supposed to be work), away from the nearby guidance of instructors, other on-campus support sources, and peers. What are the effects of isolation on student metacognition? We can only find out as the situation evolves, and it will surely prove to be a complex picture. Though some will continue to succeed and even find new sources of motivation and revised strategies during this unusual time, others may experience a decline in metacognitive accuracy in the absence of typically available sources of explicit and implicit feedback on learning.

What metacognitive and motivational challenges face students who began the semester in a traditional in-person classroom and now log in to a device to “go to class”? When I invited my (now online) students to report their experiences in preparing for our first web-based exam, many reported that the learning strategies themselves do not feel different as implemented at home, but that they are especially struggling with motivation and time management. Though these are common issues for college students even in the best of (face-to-face) circumstances, it seems they may be magnified by the current situation. For example, distractions look very different at home. Even if students had already figured out a system to manage distractions, and to channel their motivation to find focused time to implement effective learning strategies, this campus-based skill set may not translate to their current settings. Students need to recognize barriers to learning in this new context, and should be supported in developing (perhaps new or at least tweaked) strategies for academic success.

Regarding time management, online course deadlines may be timed differently – perhaps more flexibly or perhaps not – on different days of the week (instead of in a class meeting), late at night (or early in the morning), or over the weekend. Students must strategically allocate their time in a manner different from traditional classroom learning. This is compounded by the fact that some courses meet synchronously, some are completely asynchronous, and some are a hybrid. Managing this new schedule requires the metacognitive skill of recognizing how long different types of learning will take, applying the appropriate strategies, and – oh yes – fitting all that in with other non-academic demands that may change day to day. Planning is especially challenging – and anxiety-provoking – with so much unknown about the future.

Stretched Too Thin to Think Well

Looming over the learning, we cannot forget, is the actual threat of the virus, and the myriad ways it is impacting students’ mental and physical health. In my cognition classes, we discuss the implications of cognitive load, or the amount of our limited attentional resources (and therefore working memory capacity) being used for various tasks in a given moment; this current load determines how much is left over for tasks central to learning and performance goals (e.g., Paas et al., 2003). If working memory is consumed with concerns about one’s own health or the health of loved ones, financial concerns, caregiving needs, food availability, or even basic safety, it is no surprise that the ability to focus on coursework would be compromised. Intrusive worries or negative thoughts may be particularly troublesome right now, and again leave fewer resources available for learning new information. Instructors may want to consider evidence-based educational interventions – such as writing about worries to manage anxiety – that have been effective in clearing ‘space’ in mental load for learning tasks (Ramirez & Beilock, 2011).

Most importantly, we all need to understand (and accept) the limitations of our cognitive system, the implications of having limited attentional resources, and how to most effectively manage this shifting load. To better support students in metacognitive awareness, instructors across disciplines can incorporate information about cognitive load management and self-regulated learning strategies as part of their courses.

Teachers should also think carefully about the line between desirable difficulties – those learning conditions that are challenging, slow, and error-prone, but lead to stronger long-term retention – and undesirable difficulties – those challenges that are simply hard but do not result in better learning (e.g., Yan et al., 2017). When faced with a choice to add work or effort, consider whether it is part of the learning that relates to the core learning outcomes for the class. If it does not, given the current uniquely high-load circumstances we find ourselves in, drop it.

Further, be explicit and transparent with students about why assignments were retained or changed (ideally connecting these to those core objectives), and share with them your thought process about course-related design and assessment decisions. Most of all, communicate early and often with students about expectations and assessments to help them with motivation, scheduling, and cognitive load. Acknowledge that this is a highly atypical situation, show compassion, allow flexibility as you can, and let them know we are all learning together.

Imperative Explicitness

Metacognition in the time of COVID-19 must be even more intentionally brought from the implicit “hidden curriculum” of college to the explicit. Factors important to student metacognition, including self-regulated learning, should be named as a skill set central to academic (and life) success. Help students better understand their own learning and memory processes, and how strategies may need to evolve in changing circumstances, which for now means “emergency distance learning.” Perhaps a silver lining is that this investment in metacognitive flexibility will pay off in supporting students’ future endeavors. For teachers, this unexpected transition just might help us improve our student-centered approaches – wherever our classrooms may exist in the future.

Suggested References

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4. https://doi.org/10.1207/S15326985EP3801_1

Ramirez, G., & Beilock, S. L. (2011). Writing about testing worries boosts exam performance in the classroom. Science, 331(6014), 211-213. https://doi.org/10.1126/science.1199427

Sperling, R. A., Howard, B. C., Staley, R., & DuBois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation, 10(2), 117–139. https://doi.org/10.1076/edre.10.2.117.27905

Wolters, C. A. (2003). Regulation of motivation: Evaluating an underemphasized aspect of self-regulated learning. Educational Psychologist, 38(4), 189–205. https://doi.org/10.1207/S15326985EP3804_1

Yan, V. X., Clark, C. M., & Bjork, R. A. (2017). Memory and metamemory considerations in the instruction of human beings revisited: Implications for optimizing online learning. In J. C. Horvath, J. Lodge, & J. A. C. Hattie (Eds.), From the Laboratory to the Classroom: Translating the Learning Sciences for Teachers (pp. 61-78). Routledge.


Wrapping up Metacognition: Pre- and Post-Exam Interventions

By Jennifer A. McCabe, Ph.D., Goucher College

Multiple studies have demonstrated that college students report using less-than-optimal learning strategies when preparing for exams. Without explicit instruction on effective techniques, along with guidance on how to engage in metacognitive monitoring and evaluation of their learning processes, it is not clear how this situation will improve. One of the many ways in which this goal could be achieved is through a specific technique called “exam wrappers.”

An exam wrapper (also known as a “cognitive wrapper”; Bowen, 2017) is a brief activity in which students complete a form to assess their recent exam performance, describe and reflect on how they prepared, and make a strategic plan for future improvement. It would typically be given to students upon receiving their exam grades, with the goal of shifting the focus from course content and exam outcome (grade) to the learning process itself. Since Marsha Lovett introduced the tool in 2013, educators have been encouraged to use it to improve student metacognition and, ultimately, performance on exams and assignments.
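
For readers who think in code, the three-part structure just described can be captured in a short, purely hypothetical Python sketch; the prompt wording below is paraphrased from this paragraph, not taken from Lovett’s or Bowen’s instruments.

    # A minimal exam-wrapper form: one reflection prompt per goal of the wrapper.
    WRAPPER_PROMPTS = {
        "assess": "How did your exam score compare with what you expected?",
        "reflect": "How did you prepare for this exam, and roughly how many hours?",
        "plan": "What will you do differently when preparing for the next exam?",
    }

    def run_wrapper():
        """Collect a student's answer to each wrapper prompt (interactively)."""
        return {part: input(prompt + " ") for part, prompt in WRAPPER_PROMPTS.items()}

Whatever the medium, the design choice is the same: the form is deliberately short, and every prompt points at the learning process rather than the grade.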

There is surprisingly little well-controlled research on exam wrappers, and the several studies that have evaluated their impact are lacking in statistical power, internal validity, and/or generalizability. Raechel Soicher and Regan Gurung note this issue at the start of their 2017 article, in which they report the results of an exam-wrapper intervention in introductory psychology. They compared an exam wrapper (modeled on Lovett, 2013) to a “sham wrapper” condition, in which students evaluated their incorrect answers and connected each to a relevant course topic, and to a true control condition, in which students simply reviewed their exams without explicit instruction. Results showed no differences among conditions in final grades (even when controlling for pre-intervention metacognition scores), nor on any of the exams, nor on metacognition subscale scores. The authors suggest that exam wrappers may be more successful when used across multiple classes, and that it may also help to make them more interesting and engaging for students. As I suggest below, perhaps having students complete exam wrappers in the context of having learned about effective study strategies would also improve the benefit of implementing them after exams.

Another recent study, published in 2017 by Patricia Chen and colleagues, reported on outcomes from an exam-wrapper-like activity called a “Strategic Resource Use” (SRU) intervention. Students in an introductory statistics course were randomly assigned to the SRU intervention or to a control condition that experienced many parts of the activity except the focused metacognitive components. This approach differs from traditional exam wrappers in two ways: (1) it was self-administered and fully online; and, more importantly, (2) there were both pre- and post-exam components. In the 7-10 days prior to taking the exam, all students completed an online survey in which they reported their predicted exam grade, motivation level, the importance of achieving that grade, and their confidence in reaching their performance goal. Those in the SRU condition also answered questions about the upcoming exam format, the types of resources available to them during preparation time, why each would be useful, and their plan for using each one. From a checklist of class resources, SRU students provided elaborated answers on usefulness and strategic planning. After the exam, students reported which resources they had used, their perceived usefulness, and how much self-reflection they had engaged in with regard to learning course material. Results showed that in comparison to the control condition, SRU students had higher course grades (by about a third of a letter grade), lower self-reported negative affect toward exams, and higher perceived control over exam performance.

It is interesting that Chen and colleagues do not make the connection to the exam wrapper idea or literature. Both interventions described above have similar implementation and goals surrounding exams – to improve undergraduates’ self-regulated learning by focusing their attention on how they currently learn, how the quality and/or quantity of preparation map on to exam performance, and how they can use various strategies to improve for next time. Both interventions are based on the idea that highlighting the essential metacognitive processes of reflecting and adjusting supports student learning.

What to do with this mixed evidence and varying models for implementing this metacognitive “wrapper” tool? I have personally been using post-exam wrappers (modeled on Lovett) in my Cognitive Psychology course for several years. Though I have not collected empirical data on their effectiveness, based on student comments and my own observations I believe they help, and I plan to continue using them. After considering Soicher and Gurung’s methods and results, I think that my implementation may be especially poised for single-course success because, unlike in the two studies discussed above, my exam wrappers are administered on the heels of learning about and engaging in practice with evidence-based learning strategies such as elaboration and frequent, effortful, and distributed (spaced) retrieval practice.

In addition to incorporating these elements into my course structure to provide students with multiple tools for durable learning, I have my students read the book “Make It Stick” (Brown, Roediger, and McDaniel, 2014) early in the semester and engage in writing and peer discussion about effective ways to learn, as described in my 2017 blog post Make It Stick in Cognitive Psychology. Thus, when my students complete the post-exam wrapper by reporting strategies they used, and those they will try to increase for future exams, they are doing so in the context of this metacognitive knowledge and accompanying motivation to learn. I am planning to add a pre-exam wrapper component, similar to the SRU model, the next time I teach this course, and given Chen et al.’s promising results, I hope it will further support my students’ metacognitive development, learning, and, yes, course performance.

I explicitly communicate my perspective on exams to students, early and often: tests are learning events. By incorporating exam wrappers, I am reinforcing this message, and my students see that I care about their learning and I genuinely want them to improve. This also connects to a chapter in “Make It Stick” on the benefits of having what Carol Dweck calls a growth mindset – believing that intelligence is malleable and can be enhanced through practice and strategic effort. I encourage my students to adopt this mindset in multiple ways, and one way I can explicitly support this is to provide opportunities to learn from their experiences, including course exams.

Suggested References

Bowen, J. A. (2017). Teaching naked techniques: A practical guide to designing better classes. San Francisco, CA: Jossey-Bass.

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: The Belknap Press of Harvard University Press.

Chen, P., Chavez, O., Ong, D. C., & Gunderson, B. (2017). Strategic resource use for learning: A self-administered intervention that guides self-reflection on effective resource use enhanced academic performance. Psychological Science, 28(6), 774-785. https://doi.org/10.1177/0956797617696456

Lovett, M. C. (2013). Make exams worth more than the grade: Using exam wrappers to promote metacognition. In M. Kaplan, N. Silver, D. LaVaque-Manty, & D. Meizlish (Eds.), Using reflection and metacognition to improve student learning: Across the disciplines, across the academy (pp. 18-52). Stylus Publishing.

Soicher, R. N., & Gurung, R. A. R. (2017). Do exam wrappers increase metacognition and performance? A single course intervention. Psychology Learning & Teaching, 16(1), 64-73. https://doi.org/10.1177/1475725716661872


Investigating Students’ Beliefs about Effective Study Strategies

By Sabrina Badali, B.S., Weber State University
Cognitive Psychology PhD student starting Fall ‘19, Kent State University

As an undergraduate, I became familiar with the conversations that took place after a major test. My classmates frequently boasted about their all-nighters spent reviewing textbooks and notes. Once grades were released, however, another conversation took place. The same students were confused and felt their scores did not reflect the time they had spent preparing. My classmates were using relatively ineffective study strategies, most likely because they did not understand or appreciate the benefits of more effective alternatives.

Some of the most commonly reported study strategies include rereading a textbook and reviewing notes (Karpicke, Butler, & Roediger, 2009). However, those strategies are associated with lower memory performance than other strategies, such as testing oneself while studying, spreading out study sessions, and interleaving or “mixing” material while learning (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). Getting students to change their study habits can prove difficult. An effective way to start, perhaps, is getting students to change their beliefs about these strategies.

Before a learner will independently choose to implement a more effective study strategy (e.g., spreading out study sessions), they need to appreciate the benefits of the strategy and realize it will lead to improved performance. It seems this is often where the problem lies. Many students lack a metacognitive awareness of the benefits of these effective strategies. It is common for students to believe that strategies such as rereading a textbook or cramming are more beneficial than strategies such as testing oneself while learning or spacing out study sessions, a belief that does not match actual memory performance.

Researching Interleaving as a Study Strategy

This underappreciation of the benefits of these effective study strategies was something I recently investigated. In my research project, undergraduate participants completed two category learning tasks – learning to recognize different species of butterflies and learning artists’ painting styles. For each learning task, half of the butterfly species and half of the artists were assigned to the massed study condition. In the massed condition, all images of a category would be presented consecutively before moving on to the next species or artist. For example, all four images of one butterfly species would be presented back-to-back before moving on to images of the next species. The remaining half of the categories were assigned to the interleaved study condition. In the interleaved condition, images from a category were spread throughout the learning task and two images from the same category were never presented consecutively. For example, the first image of the “Tipper” butterfly may be shown early on, but the remaining three images would be distributed throughout the learning task such that participants viewed several other species before viewing the second image of the “Tipper”.  

Figure: images illustrating massed presentation (left: all butterflies from the same category) and interleaved presentation (right: butterflies from four different categories).
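
To make the two presentation schedules concrete, here is a minimal sketch in Python. It illustrates the ordering logic only, not the study’s actual materials or software: the species names other than “Tipper” are invented, and a simple round-robin stands in for the pseudo-random spacing used in the experiment.

    def massed_order(categories):
        """Show every exemplar of one category back-to-back before the next category."""
        return [image for category in categories for image in category]

    def interleaved_order(categories):
        """Cycle across categories so exemplars of each category are spread out
        and no two images from the same category appear consecutively."""
        n_exemplars = len(categories[0])
        return [category[i] for i in range(n_exemplars) for category in categories]

    # Four hypothetical butterfly species with four images each.
    categories = [[f"{name}-{i}" for i in range(1, 5)]
                  for name in ("Tipper", "Skimmer", "Glasswing", "Dusky")]
    print(massed_order(categories)[:8])       # Tipper-1 ... Tipper-4, Skimmer-1 ...
    print(interleaved_order(categories)[:8])  # Tipper-1, Skimmer-1, Glasswing-1, Dusky-1, Tipper-2 ...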

After completing these tasks and a final memory assessment, participants were given a brief explanation of the difference between the massed method of presentation and the interleaved method. After this explanation, participants provided a metacognitive judgment about their performance in the study. They were asked whether they thought they had performed better on massed items, on interleaved items, or the same on both.

Misalignment of Evidence and Beliefs

I found that 63% of the participants thought they performed better on massed items, even though actual memory performance showed that 84% of participants performed better on interleaved items. There was a clear disconnect between what the student participants thought was beneficial (massing) versus what was actually beneficial (interleaving). Participants did not realize the benefits of interleaving material while learning. Instead, they believed that the commonly utilized, yet relatively ineffective, strategy of massing was the superior choice. If students’ judgments showed they thought interleaving was less effective than massing, how could we expect these students to incorporate interleaving into their own studying? Metacognition guides students’ study choices, and, at least in this example, students’ judgments were steering them in the wrong direction. This poses a problem for researchers and instructors who are trying to improve students’ study habits.

Using these effective study strategies, such as interleaving, makes learning feel more effortful. Unfortunately, students commonly believe it is a bad thing if the learning process feels difficult. When learning feels difficult, our judgments about how well we will perform tend to be lower than when something feels easy. However, memory performance shows a different pattern. When learning is easy, the material is often quickly forgotten. Alternatively, when learning is more difficult, it tends to lead to improved longer-term retention and higher memory performance (Bjork, 1994). While this difficulty is good for learning outcomes, it can be bad for the accuracy of metacognitive judgments. Before we can get students to change their study habits, it seems we need to change their thoughts about these strategies. If we can get students to associate effortful learning with superior memory performance, we may be able to help them choose these strategies over others.

When teaching these study strategies, explaining how to use the strategy is a vital component, but this instruction could also include an explanation of why the strategies are beneficial to help convince students they are a better choice. Part of this explanation could address the notion that these strategies will feel more difficult, but this difficulty is part of the reason why they are beneficial. If students can accept this message, their metacognitive judgments may start to reflect actual performance and students may become more likely to implement these strategies during their own studying.

References

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe and A. Shimamura (Eds.). Metacognition: Knowing about Knowing (pp. 185-205). Cambridge, MA: MIT Press.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Karpicke, J. D., Butler, A. C., & Roediger, H. L. (2009). Metacognitive strategies in student learning: Do students practise retrieval when they study on their own? Memory, 17(4), 471-479.


Helping Students Feel Responsible for Their Learning

by Patrick Cunningham, Ph.D., Rose-Hulman Institute of Technology

“Dr. C, you really expect your students to do a lot!” I quickly replied, “Yes!” We then engaged in a discussion of things only students can do for their learning. How can we help more of our students recognize their responsibility for their learning? Three strategies I employ include explicit and direct instruction, questioning for self-discovery, and in-class opportunities to practice new learning strategies. Each of these strategies directs students’ focus to things under their control.

Helping our students recognize and embrace their responsibility for their learning requires metacognitive activity. Specifically, it requires building metacognitive knowledge of persons and strategies and engaging in metacognitive regulation through planning for and monitoring learning experiences. Direct instruction and in-class learning strategy practice can expand metacognitive knowledge. Questioning for self-discovery can facilitate students’ metacognitive monitoring and planning for subsequent learning experiences.

For explicit and direct instruction, I start a discussion within the first two days of class by asking, “What does it mean to learn something?” Most responses include applying and explaining concepts. Good answers, but I press for more depth. In turn I respond, “Apply to what? Explain to whom?” Learning something, they say, means being able to apply concepts to real circumstances. My engineering students also come up with a variety of people or groups of people to explain things to: their grandmother, family members, a cross-functional design team, a boss, peer engineers, marketing/sales professionals, or even customers. These answers are good operational definitions of learning. Next, I talk to my students about the knowledge frameworks that underlie these abilities.

Illustration of Knowledge Frameworks

In order to apply concepts to real and diverse circumstances, and to explain concepts effectively to a range of audiences, we must have many routes to and between the elements of our knowledge, along with a logical structure for the information. That is, our knowledge frameworks must be well populated, richly interconnected, and meaningfully organized (Ambrose et al., 2010). However, as novices in an area, we start with sparsely populated and isolated knowledge frameworks. I then share with students that they are the only ones who can construct their knowledge frameworks. The population and interconnection of elements depend on what they individually do with the material, in class and out of class. As the instructor, I can create opportunities and experiences for them, but I cannot build their knowledge frameworks for them. Students are responsible for the construction work.

For self-discovery, I use guiding questions to help students articulate learning goals, combat the Illusion of Comprehension, and make cause-and-effect linkages between their learning behaviors and outcomes. I may ask, “What goals do you have for your homework/study sessions?” Students often focus on getting assignments done or being “ready” for exams, but these are not directly learning goals. It is helpful here to ask what they want or need to be able to do with the information, eliciting responses such as: “Apply ____ to ____. Create a ____ using ____. Explain ____.” Now we can ask students to put the pieces together. How does just “getting the homework done” help you know if you can apply/create/explain? We are seeking to help students surface incongruities in their own behavior, and these incongruities are easier to face when you discover them yourself rather than being told they are there.

A specific incongruity that many students struggle with is the Illusion of Comprehension (Svinicki, 2004), which occurs when students confuse familiarity with understanding. It often manifests itself after exams as, “I knew the material, I just couldn’t show you on the exam.” My favorite question for this is, “How did you know you knew the material?” Common responses include looking over notes or old homework, working practice exams, reworking examples and homework problems. But what does it mean to “look over” prior work? How did you work the practice exam? How did you elaborate around the concepts so that you weren’t just reacting to cues in the examples and homework problems? What if the context of the problem changes? It is usually around this point that students begin to realize the mismatch between their perceptions of deep understanding and the reality of their surface learning.

Assignment or exam wrappers are also good tools to help students work out cause-and-effect linkages between what they do to learn material and how they perform. In general, these “wrappers” ask students to reflect on what they did to prepare for the assignment or exam, process instructor feedback or errors, and adjust future study plans.

It is important, once we encourage students to recognize these incongruities, that we also help direct them back to what they can do to make things better. I direct conversations with my students to a variety of learning strategies they can employ, slanted towards elaborative and organizational strategies. We talk about such things as making up problems or questions on their own, explaining solutions to friends, annotating their notes to summarize key points, or doing recall and reviews (retrieval practice).

However, I find that telling them about such strategies often isn’t enough. We trust what is familiar and comfortable – even ineffective and inefficient learning strategies that we have practiced over years of prior educational experiences and for which we have been rewarded. So I incorporate these unfamiliar but effective and efficient strategies into my teaching. I want my students to know how to do them and to realize that they can use them in their outside-of-class study time as well.

One way I engage students with new strategies is through constructive review prior to exams. We start with a recall and review exercise. I have students recall as many topics as they can, in as much detail as they can, for a few minutes – without looking anything up. Then I have students open their notes to add to and refine their lists. After collectively capturing the key elements, I move to having pairs of students construct potential questions or problems for each topic. I also create a discussion forum for students to share their problems and solutions – separately. As they practice with each other’s problems, they can also post responses and any necessary corrections.

In concert, direct instruction, questioning for self-discovery, and in-class opportunities to practice new learning strategies can develop our students’ sense of responsibility for their learning. It can even empower them by giving them the tools to direct their future learning experiences. In the end, whether they recognize it or not, students are responsible for their learning. Let’s help them embrace this responsibility and thrive in their learning!

References

Ambrose, S., Bridges, M., DiPietro, M., Lovett, M., & Norman, M. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.

Svinicki, M. (2004). Learning and Motivation in the Postsecondary Classroom. San Francisco, CA: John Wiley & Sons.

Acknowledgements

This blog post is based upon metacognition research supported by the National Science Foundation under Grant Nos. 1433757 & 1433645. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Next Blog-post:

Overcoming student resistance to engaging in their metacognitive development.


It shouldn’t be Top Secret – Bloom’s Taxonomy

By Lauren Scharff, Ph.D.,  U. S. Air Force Academy *

Across the past year or so, I have been reminded several times of the following fact: most students are not aware of Bloom’s Taxonomy, and even if they are aware, they have no clue how or why their awareness of it might benefit them and their learning. Most instructors have heard of at least one version of Bloom’s Taxonomy, and some keep it in mind when designing learning activities and assessments. But rarely do instructors even mention it to their students.

Why don’t instructors share Bloom’s Taxonomy with their students? Is it a top secret, for instructors only? No! In fact, awareness and use of Bloom’s taxonomy can support metacognitive learning, so students should be let in on the “secret.”

What were the key experiences that led me to this strong stance? Let me share….

In May of 2016, I was fortunate to attend a keynote by Dr. Saundra McGuire at High Point University. In her keynote address and in her book, Teach Students How to Learn (2015), McGuire shared stories of interactions with students as they became aware of Bloom’s Taxonomy and applied it to their learning. She also shared data showing how this awareness, coupled with a variety of other metacognitive strategies, led to large increases in student academic success. Her work served as the first “ah ha” moment for me, and I realized that I needed to start discussing Bloom’s Taxonomy more explicitly with my students.

An additional way to highlight Bloom’s Taxonomy and support student metacognitive learning was shared this past October (2017), when Dr. Karl Wirth led a workshop as part of our 9th Annual Scholarship of Teaching and Learning (SoTL) Forum at the U. S. Air Force Academy. In his workshop he shared examples of knowledge surveys, along with data supporting their use as a powerful learning tool. Knowledge surveys are collections of questions that support student self-assessment of their knowledge, understanding, and skills. When answering the questions, students rate themselves on their ability to answer each question (similar to a confidence rating) rather than fully answering it. Research shows that most students are able to self-assess accurately (confidence ratings correlate strongly with actual performance; Nuhfer, Fleisher, Cogan, Wirth, & Gaze, 2017). However, most students do not take the time to carefully self-assess their knowledge and abilities without formal guidance and encouragement to do so. To be effective, knowledge surveys need to ask targeted, granular questions rather than global questions. Importantly, knowledge survey questions can span the full range of Bloom’s Taxonomy, and Dr. Wirth incorporates best practices by taking the time to explain Bloom’s Taxonomy to his students and explicitly share how his knowledge survey questions target different levels.

Sharing Bloom’s Taxonomy in our classes is a great first step, but ultimately we hope that students use the taxonomy on their own, applying it to assignments across all their courses. However, just telling them about the taxonomy, or explaining how aspects of our course tap into its different levels, may not be enough to support their use of the taxonomy beyond our classrooms. In response to this need, and as part of an ongoing Scholarship of Teaching and Learning (SoTL) project at my institution, one of my student co-investigators (Leslie Perez, graduated May 2017) created a workshop handout that walks students through a series of questions that help them apply Bloom’s as a guide for their learning and academic efforts. This handout was also printed in a larger poster format and is now displayed in the student dorms and the library. Students use the handout by starting in the middle and asking themselves questions about their assignments. Based on their answers, they walk through a path that helps them determine what level of Bloom’s Taxonomy they likely need to target for that assignment. It should help them become more explicitly aware of the learning expectations for their various assignments and support their informed selection of learning strategies, i.e., help them engage in metacognitive learning.

Figure 1. Snapshot of the handout we use to guide students in applying Bloom’s Taxonomy to their learning.

As someone who is a strong proponent of metacognitive learning, I have become increasingly convinced that instructors should more often and more explicitly share this taxonomy, and perhaps even more importantly, share how it can be applied by students to raise their awareness of learning expectations for different assignments and guide their choice of learning strategies. I hope this post motivates instructors to share Bloom’s Taxonomy (and other science of learning information) with their students. Feel welcome to use the handout we created.

————

McGuire, S. (2015). Teach Students How to Learn. Stylus Publishing, LLC, Sterling, VA.

Nuhfer, E., Fleisher, S., Cogan, C., Wirth, K., & Gaze, E. (2017). How random noise and a graphical convention subverted behavioral scientists’ explanations of self-assessment data: Numeracy underlies better alternatives. Numeracy, 10(1), Article 4. DOI: http://dx.doi.org/10.5038/1936-4660.10.1.4

* Disclaimer: The views expressed in this document are those of the author and do not reflect the official policy or position of the U. S. Air Force, Department of Defense, or the U. S. Govt.


The GAMES Survey: A Tool to Scaffold Metacognitive Practices

by Lauren Scharff, U. S. Air Force Academy

As many of us educators know, an unfortunately large number of students, at both the K-12 and college levels, do not give much thought to how and why they try to learn the way they do, much less demonstrate strong habits of metacognition. Talking in general about metacognition might garner some students’ interest, but without concrete guidance on how to engage in behaviors that support metacognition, students are less likely to develop such practices.

Thus, while re-reading Marilla Svinicki’s excellent book, Learning and Motivation in the Postsecondary Classroom, as part of a book group at my institution, I was pleased to rediscover the GAMES survey, a self-assessment tool she created. GAMES stands for:

  • Goal-oriented studying
  • Active studying
  • Meaningful and memorable studying
  • Explain to understand
  • Self-monitor

For each component of the survey, there are five to ten behaviors, and students indicate how likely they are to perform each one on a 5-point scale ranging from “Never” to “Always.” These behaviors are distinct, tangible actions such as:

  • Analyze what I have to do before beginning to study. (Goal-oriented studying)
  • Ask myself questions before, during, and after studying. (Active studying)
  • Make connections between what I am studying and past classes or units. (Meaningful and memorable studying)
  • Discuss the course content with anyone willing to listen. (Explain to understand)
  • Keep track of things I don’t understand and note when they finally become clear and what made that happen. (Self-monitor)

Marilla suggests that the use of such an instrument can help students become more aware of the possibility of self-regulating their learning behaviors. This combination of awareness and self-regulation is key to metacognition, and is what is prompting this blog post.

Through the process of completing the GAMES survey, students are introduced to more than 30 specific behaviors that holistically will support metacognition about learning. Students can easily observe areas where they might show stronger or weaker engagement, and they can focus their efforts where they are weaker, using the list of specific, tangible behaviors as a scaffold to help them target their activity.
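
As a concrete (and hypothetical) illustration of how the survey surfaces weaker areas, here is a minimal scoring sketch in Python. The component names follow Svinicki’s acronym and the 5-point Never-to-Always scale described above, but the intermediate scale labels, item counts, and responses are invented; the instrument itself involves no code.

    # Map the 5-point scale to numbers (the intermediate labels are assumed).
    RATINGS = {"Never": 1, "Rarely": 2, "Sometimes": 3, "Often": 4, "Always": 5}

    def component_means(responses):
        """Average the ratings within each GAMES component."""
        return {component: sum(RATINGS[r] for r in items) / len(items)
                for component, items in responses.items()}

    # Invented responses, a few per component rather than the survey's five to ten.
    responses = {
        "Goal-oriented studying": ["Often", "Sometimes", "Rarely"],
        "Active studying": ["Sometimes", "Never", "Rarely"],
        "Meaningful and memorable studying": ["Often", "Often", "Sometimes"],
        "Explain to understand": ["Always", "Often", "Often"],
        "Self-monitor": ["Sometimes", "Rarely", "Never"],
    }

    means = component_means(responses)
    weakest = min(means, key=means.get)
    print(f"Focus area: {weakest} (mean {means[weakest]:.1f})")

A per-component average like this is exactly the kind of summary a student could use to pick one weak behavior to work on, as in the follow-on activities described below.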

At my institution, the U. S. Air Force Academy, we plan to use the GAMES survey in a current Science of Learning workshop series for students, led by students. Most of the attendees are students who are struggling academically, but we are advertising that, by “studying smarter, not only harder,” students at all levels of academic achievement can improve their learning. We believe that the GAMES survey will help students target specific behaviors that have been shown to support deeper learning.

We are not the only institution that has seen value in disseminating the GAMES survey to students. For example, several years ago, Georgia Tech encouraged its use across all sections of their first-year seminar. Importantly, they didn’t simply ask students to complete the survey and that was it. They encouraged instructors to help students use the results in a meaningful way, such as by picking a weak behavior and striving to improve it over a 2-week time period, or by having students journal about changes they made and how those changes seemed to impact their academic performance.

This survey tool is appropriate across the disciplines and takes only a few minutes for students to complete. Its use, plus a short follow-on activity to encourage meaningful application, would not add a great burden to a faculty member or take much time from normal course activities. But the payoff could be large for individual students, both in that course and in others if they transfer the principles into new contexts. It’s definitely worth a try!

——————

Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. Bolton, MA: Anker Publishing Co.

If you do not have access to Marilla Svinicki’s book, you can read a short online overview of GAMES on the Association for Psychological Science website (2006), and obtain a pdf copy of the survey online.


Metacognition in STEM courses: A Developmental Path

by Roman Taraban, Ph.D., Texas Tech University

There is a strong focus in science, technology, engineering, and math (STEM) courses on solving problems (Case & Marshall, 2004). Does problem solving in STEM involve metacognition? I argue that the answer must surely be ‘yes’, because metacognition involves monitoring the effectiveness of learning and problem-solving strategies and using metacognitive knowledge to regulate behavior (Draeger, 2015). But when does metacognition become part of problem solving, and how does it come about? Can we discern development in metacognitive monitoring and regulation? In this post, I will present some qualitative data from a study on problem solving in order to reflect on these questions. The study I draw from was not about metacognition per se; however, it may provide some insights into the development of metacognition.

The study I conducted involved freshman engineering majors. These students were asked to solve typical problems from the course in mechanics in which they were currently enrolled (Taraban, 2015). Not surprisingly, students varied in how they began each problem and how they proceeded towards a solution. To gain some insight into their problem-solving strategies, I asked students, after they had solved the problems, simply to state why they started with the equation they chose and not some other equation.

Students’ responses fell into at least three types, using labels from Case and Marshall (2004): surface, algorithmic, and deep conceptual. When asked why they started with their first equation, some students responded:

  • “I don’t know, it’s just my instinct.”
  • “No special reason. I’m just taking it randomly.”
  • “It’s just habit.”
  • “The first thing that came to my mind.”

Of interest here, these students did not appear to reflect on the specific problem or show evidence of modulating their behavior to it. Their responses fit a surface learning approach: “no relationships sought out or established, learn by repetition and memorization of formulae” (Case & Marshall, 2004, p. 609).

Other students’ responses reflected an algorithmic approach to learning — “identifying and memorizing calculation methods for solving problems” (Case & Marshall, 2004, p. 609):

  • “I am getting three variables in three unknowns so I can solve it.”

Here the student verbally expresses a more structured approach to the problem. The student believes that he needs three equations involving three unknowns and uses that as a goal. Students who take an algorithmic approach appear to be more reflective and strategic about their solutions to problems, compared to surface problem solvers.

Case and Marshall (2004) regarded both the surface and algorithmic pathways as part of development towards a deeper understanding of domain concepts and principles, which they labeled the conceptual deep approach to learning: “relating of learning tasks to their underlying concepts or theory” with the intention “to gain understanding while doing this” (p. 609). Basically, their suggestion is that at some point students recognize that a goal of learning is to understand the material more deeply, and that this recognition guides how they learn. Case and Marshall’s description of conceptual deep learning fits Draeger’s (2015) suggestion that monitoring the effectiveness of learning and regulating one’s behavior is characteristic of metacognitive thinking. Once students reach this level, we should be able to more readily observe their intentions to understand the material and their overt attempts to grasp it through explicit reflection and reasoning. Examples of this type of reflection from my study could be gleaned from those students who did not jump directly to writing equations without first thinking about the problem:

  • “If I choose the moment equation first, then directly I am getting the value of F. So in the other equations I can directly put the value of F.”

As students progress from surface to algorithmic to deep conceptual processing, there is certainly development. However, in the present examples that track that development, it is difficult to partial out students’ thinking about the problem content from their thinking-about-thinking, that is, their metacognitions. Draeger (2015) helps here by distinguishing between metacognition and critical thinking. The latter often requires domain-specific knowledge. Draeger suggests that “many students are able to solve complex problems, craft meaningful prose, and create beautiful works of art without understanding precisely how they did it” (p. 2). Basically, critical thinking is about methodology within a domain – e.g., the person knows how to format a narrative or select an appropriate statistical procedure, without necessarily reflecting on the effectiveness of those choices, that is, without metacognition. In the examples I provided above from my work with undergraduates on problem solving, there is invariably a mix of critical thinking and metacognition. Draeger’s distinction signals a need to better decouple these two distinct kinds of cognitive processes in order to better clarify the developmental trajectory of metacognitive processing in problem solving.

Finally, why do we observe such wide variance in students’ approaches to problem solving and, relatedly, to metacognition? One reason is that instructors may emphasize assessment and grades (Case & Marshall, 2004). As a consequence, students may focus on gaining points for the correct answer rather than on the process. Welsh (2015) suggested that course structure can act as a barrier to deeper learning: “high stakes assessments may overshadow resources designed for metacognitive development” (p. 2). Welsh found that students were more concerned with test performance than with reflecting on their study strategies and implementing the learning strategies recommended by the instructor.

How are we to understand this discord between concern with test performance and metacognition? At some level, when students set goals to do well on tests, they are regulating their behavior. Metacognitive resources from the instructor may be competing with students’ perceived resources (e.g., access to old tests, study buddies, cramming the night before). The instructor can facilitate change, but the leap from surface and algorithmic learner to deep conceptual learner must be made by the student.

Passion and commitment to a topic are strong motivators to find the means to access and acquire deeper conceptual understanding. One measure of teacher success is class test performance, but another can be found in student comments. Here is one that I recently received and found encouraging: “Despite the fact that I was a bit uninterested in the subject matter, this was one of my favorite classes. By the end of the semester, not only was I interested in the subject matter, I was fascinated by it.” Perhaps as instructors we need to facilitate good metacognitive practices but also nurture interest in what we teach in order to motivate students to pursue it more deeply through more effective metacognitive practices.

References

Case, J., & Marshall, D. (2004). Between deep and surface: Procedural approaches to learning in engineering education contexts. Studies in Higher Education, 29(5), 605-615.

Draeger, J. (2015). Two forms of ‘thinking about thinking’: Metacognition and critical thinking. Retrieved from https://www.improvewithmetacognition.com/two-forms-of-thinking-about-thinking-metacognition-and-critical-thinking/

Taraban, R. (2015, November). Transition from means-ends to working-forward problem solving. Paper presented at the 56th Annual Conference of the Psychonomic Society, Chicago, IL.

Welsh, A. (2015). Supports and barriers to students’ metacognitive development in a large intro chemistry course. Retrieved from https://www.improvewithmetacognition.com/supports-and-barriers-to-students-metacognitive-development-in-a-large-intro-chemistry-course/


Forging connections with students through metacognitive assignments

by Diane K. Angell, St. Olaf College

We have all likely shared the experience, early in our teaching careers, of a gaggle of students gathering at our office door after an exam. “I studied for so many hours!” “I came to class every day.” “I always did well in high school.” Students also seemed to struggle ahead of exams as they tried to learn and master scientific material. “What should I study?” “Why can’t you just give us a study guide?” I was often perplexed by these frustrations. I wondered, and tried to recall, how I had learned material and strategized as a science student preparing for the inevitable exams in larger introductory college courses.

That same month, I found myself at a conference, the Associated Colleges of the Midwest’s Teagle Collegium on Student Learning. The focus was very much on metacognition. Although, as a biologist, I struggled to understand the details of several presentations, it all sounded very familiar. Perhaps this was what my students were missing? I appreciated the intellectual points and took copious notes, until my mind began to wander. I needed to prepare to host a large group for Thanksgiving in the coming days. How should I start? What did I need to purchase, and where would I get it? What needed to be prepared and cooked when, so that all the different dishes were ready and warm when it was actually time to sit down and eat? I began to get anxious.

I quickly realized two things. First, focusing back on my students, I immediately appreciated the degree to which preparing a Thanksgiving meal and preparing to take an exam are both complex metacognitive tasks. I could finally imagine what my students were feeling and understand the metacognitive challenges exams present to them. Students need to evaluate what they know, what they don’t know, and how best to approach any material they are uncertain of. And unlike cooking and meal preparation, there is no clear, simple set of directions for how to approach the task of taking a typical college classroom exam. Second, my own pre-Thanksgiving mental preparation check made me realize that I had likely been using such metacognitive skills since I was a student but had simply not been aware I was using them. Perhaps I did have some wisdom to share, and upon returning to campus I committed to using a metacognitive approach to help students prepare for exams.

Introductory college biology courses are an excellent place to begin engaging students with a metacognitive approach to exam preparation. These classes will probably always have exams, and as students move on in biology they are likely to face even more challenging ones. To engage students in metacognitive practices, I came up with a series of straightforward metacognitive prompts that I emailed to students before each exam. They included simple questions such as: How do you think you will start studying? What techniques will you use while studying? What was the most difficult topic in this section of the course, and why was it difficult? How will you approach the material you do not yet understand?

I found their responses fascinating. Some clearly wrote as little as possible, but most wrote quite extensively, sharing with me precise details of how they had studied (or not studied) for the exam. Many responses were surprisingly sincere and confessional. The assignments brought home two points that have left a lasting impression. First, I was reminded of the importance of establishing a connection with students, and of the importance of that connection to student learning. Their emailed responses helped me get to know them in a way that was very different from the public arena of class or lab. They let me in on their personal narratives of test preparation; I sometimes felt as if I were reading a secret diary. They were honest with me in their emails about what their studying experiences had been, perhaps even more so than if they had come to see me in person. Perhaps the proliferation of email, texting, and Facebook has made students more comfortable conversing with faculty through a keyboard than face to face. After responding to the emailed questions, many did eventually come in to chat with me about study strategies and the differences they were noticing between high school and college. They seemed to think they knew me better and that I knew them better. Upon arriving in my office, they would frequently refer back to their emailed responses, even though I sometimes struggled to remember exactly who had emailed me what. The emails seemed to prompt a unique relationship: students saw me as someone interested in them as individuals, an attitude that likely helped them feel part of the learning community in the classroom.

I also came to understand that the task of mastering material to prepare for an exam has become more complicated. In the past, we had a textbook and we had notes from class. That was it. Today the task is fraught with complex decisions. Students in college classrooms are less likely to be taking notes in a traditional lecture format; they are more likely to be engaged during class in small-group discussions and problem-based learning activities. They have access to, and are rightly encouraged to use, the online resources that come with their texts, as well as other resources on the web. They are also frequently encouraged to form study groups to discuss their understanding of topics outside of class. These are great ways for students to engage with material and prepare for exams. This diverse learning landscape can be a lifesaver for some students, but for others, when it comes time to prepare for an exam, the variety of options can be overwhelming and paralyzing. As we have opened up new ways of teaching and learning, we may have left students with many different resources at their fingertips but failed to help them think metacognitively about what works for them as they master knowledge for a summative exam.

Both the stronger connections I made with my students and my better understanding of the diverse exam-preparation choices they must make have helped me feel better prepared to mentor and advise students as they navigate their introductory biology course. By engaging students metacognitively in emails about their exam preparation, I gained a deeper understanding of how students were learning in my class. Their sincere and thoughtful responses provided a window on their world and, in interesting ways, their metacognitive thoughts rounded out my efforts to metacognitively assess my course. As faculty, we are often reminded to step back and reflect on our goals for our class and for student learning. We need to consider what is working in our course and what is not. It finally became clear to me that a full metacognitive consideration of my course required regular reflective feedback from my students and an understanding of what they were struggling with. Although I had always solicited such feedback, students seemed much more likely to be thinking about their learning, and willing to share their assessment of it, in an email just before an exam. Ultimately, their honest metacognitive feedback has meant that I have gained as much as, or more than, the students I was initially trying to help.



Assessing Metacognition and Self-Regulated Learning

This article “provides an overview of the conceptual and methodological issues involved in developing and evaluating measures of metacognition and self-regulated learning.” Sections in this article discuss the components of metacognition and self-regulated learning as well as the assessment of metacognition.

Pintrich, P. R., Wolters, C. A., & Baxter, G. P. (2000). Assessing metacognition and self-regulated learning. Issues in the Measurement of Metacognition, Paper 3.



Metacognition as Part of a Broader Perspective on Learning

This article presents six instructional strategies that promote self-regulation, along with ways that motivational, cognitive, and metacognitive skills can be enhanced using these strategies.

Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education: Metacognition as part of a broader perspective on learning. Research in Science Education, 36(1-2), 111-139.




Student Motivation and Self-Regulated Learning in the College Classroom

This chapter discusses problems in students’ motivation to learn and shows how a self-regulated learning perspective can offer insight into questions such as: Why do students care more about their grades than about learning the disciplinary content of their courses? Why do students wait until the last minute to fulfill course obligations such as studying for an exam or writing a paper?

Pintrich, P. R., & Zusho, A. (2007). Student motivation and self-regulated learning in the college classroom. In R. P. Perry & J. C. Smart (Eds.), The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective (pp. 731-810).



Metacognition and Self-Regulated Learning Constructs

This article reports findings from several studies: “Findings indicated convergence of self-report measures of metacognition, significant correlations between metacognition and academic monitoring, negative correlations between self-reported metacognition and accuracy ratings, and positive correlations between metacognition and strategy use and metacognition and motivation.”

Sperling, R. A., Howard, B. C., Staley, R., & DuBois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation: An International Journal on Theory and Practice, 10(2), 117-139.
