Democratizing a Classroom: The Challenge of Student Engagement and Metacognition

by Dr. Sonya Abbye Taylor, Associate Professor of Education and Chair of Education Division

INTRODUCTION: DEMOCRATIZING A CLASSROOM AND CREATING A COMMUNITY OF LEARNERS

In 1917, John Dewey introduced the concept of student engagement. He envisioned classroom democracies as “communities … permeated throughout with the spirit of art, history, and science”: environments where students become self-directed, committed to service, and engaged in experiences reflective of society. Dewey believed in relevant curriculum, meaningful to students’ lives, in which students are thoroughly engaged with content among a community of learners functioning as, and in, society. I believe that a community of learners relies upon student engagement and that metacognition is intrinsically connected to being engaged. Student engagement provides the core of democratic classrooms, which carry the role of promoting and educating toward diversity, equity, and inclusion.

THE CHALLENGE OF METACOGNITION AND STUDENT ENGAGEMENT

I prepare teacher candidates to work with diverse populations, including students with special needs. I have rarely had complaints regarding grades. Students could predict grades because they had processed feedback and reflected on their work throughout the semester. Yet, last spring I received a complaint. Considering that her grades averaged within the “A” range, a student questioned why she received an “A-.” She asked whether her grade was related to participation, for which she received 35/75 points; she was correct. This experience required me to reflect on Dewey’s concepts of engagement in relation to class participation and on the connections between learning and engagement. Aligning Dewey’s concept of a “community of learners” with student engagement, I see how metacognition (the student’s and my own) underlies this pedagogical struggle. What was the student’s understanding of engagement and participation? How did her understanding equate with my own?

During this course, held synchronously online, this student was invisible except for two presentations for which visibility was required. However, when reviewing videos, I found times when she responded in chat and discussions: brief responses, one-word comments or a single sentence, emanating from a black box on the screen.

My expectations for students in online environments are comparable to those for in-person classes. Classes include discussions that require sharing ideas, asking questions, analyzing, and deliberating. The 75 points allocated place value on discussions. If a student is silent throughout a semester, they have not contributed to the community. Yet this student perceived herself to have been engaged, to have demonstrated engagement, and therefore to be eligible for full credit.

As I thought about this contradiction the following questions emerged:

What do we learn when we:

  • learn in isolation?
  • interact with individuals within a learning community?
  • delve into content while engaging within a learning community?

Most importantly, what is the relationship between learning, engagement, and learning within the community, in preparing students to be teachers for the democratic society Dewey describes?

For teacher educators and teacher candidates, answers to these questions provide more fodder for thought. We can learn in isolation. We can memorize information alone, anywhere, or engage with a text while reading thoughtfully, asking silent questions of the text and author. We can learn sitting silently in a classroom. We CAN learn in isolation, but WHAT do we learn? What opportunities are missed when we function in this manner?

A silent classroom has always been troublesome to me. Student engagement and class participation are synonymous to me. It is joyful to hear students discussing content, actively listening, asking questions, and sharing opinions based on their frame of reference.

  • Isn’t that the vision of democracy Dewey conveys?
  • Don’t these interactions provide practice communicating, and enable students to see another’s point of view, to understand perspectives and experiences of individuals from other cultures?
  • Aren’t these the interactions that enable us to understand and appreciate one another?

Students can learn content without opportunities and expectations for engagement with others. However, in vibrant learning communities, students ask questions and question themselves; they think and rethink concepts because of the various influences. They use metacognitive processes to evolve as learners.

I changed the student’s grade because she perceived that her actions constituted participation. Though the student received an “A,” I did her no favor. Informed by Dewey, I believe the “A” she received was not as valuable as the “B’s,” “A-’s,” or “A’s” earned by other students who demonstrated engagement. Those students were visible: the expressions on their faces illuminated their thoughts. Posing thoughtful questions and adding to discussions, those students enriched the experience for us all. They responded while they were processing. When their responses were askew, conversation ensued that brought thought and understanding to higher levels. They demonstrated that their commitment to learning and to being part of a community was more important than having the “right” answer. They earned my respect and my confidence; I know that their classrooms will be full of engaged learners, that they will encourage their students to think, and that they will engage them in metacognitive practices.

CONCLUSION

I agonized over this grade, and by doing so, I know I served myself and my future students well. I will redouble my efforts to define and explain the value of engagement in relation to “community,” and I will create more activities, pose more questions, and provide collaborative assignments that inspire engagement, in an effort to make engagement palpable. I will do a better job of calling attention to student engagement. I will continue to honor askew responses because of the thought processes they illuminate. Metacognition has brought me to these conclusions, and it will be the catalyst for changes that I’ll incorporate in my instruction and for the emphasis I will place on my students’ becoming self-directed.

In processing and metacognitively rethinking ideas for my classes and revisions for this paper, I reflected further on Dewey’s perception of engagement for a community of learners reflective of society, and I have come to this conclusion: we teach classes with diverse students; therefore, we have the opportunity and obligation to create environments in our classrooms where students can be productive members of the classroom society. We must create environments where students can function efficiently and develop the skills and behaviors desirable in a democratic society. In these environments, students are respected and are comfortable taking risks; they listen to each other, communicate effectively, accept each other’s opinions, settle disagreements, celebrate each other’s successes, and support each other through difficult times. By creating these environments in teacher education classrooms, perhaps we would reach consensus as to the quality and value of engagement.

Works Cited

Dewey, John. Democracy and Education. Simon and Brown, 2012.


Metacognitive Self-assessment in Privilege and Equity – Part 2: Majority Privilege in Scientific Thinking

Ed Nuhfer, California State University (Retired)
Rachel Watson, University of Wyoming
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Being in the majority carries the privilege of empowerment to set the norms for acceptable beliefs. Minority status for any group invites marginalization by the majority simply because the group appears different from the familiar majority. Here, we explore why this survival mechanism (bias) also operates when a majority perceives an idea as different and potentially threatening to established norms.

Young adult learners achieve comfort in ways of thinking and explaining the world from the experiences obtained during acculturation. Our Introduction stressed how these experiences differ in majority and minority cultures and produce measurable effects. Education disrupts established states of comfort by introducing ideas that contradict earlier beliefs established from experience and force their reexamination.

Even the kind of college training that promotes only growing cognitive expertise is disruptive; more critically, research verifies that the disruptions are felt. While discovering the stages of intellectual development, William Perry Jr. found that, for some learners, the feelings experienced during transitions toward certain higher stages of thinking were so discomforting that the students ceased trying to learn and withdrew. Currently, about a third of first-year college students drop out before their sophomore year.

Educating for self-assessment accuracy to gain control over bias

We believe that the same survival mechanisms that promote prejudice and suppress empathizing with and understanding of different demographic groups also cripple understanding in encounters with unfamiliar or contrarian ideas. In moments that introduce ideas disruptive to beliefs or norms, unfamiliar ideas become analogous to unfamiliar groups: easily marginalized and thoughtlessly devalued in snap judgments. Practice in doing self-assessment when new learning surprises us should be valuable for gaining control over the mechanism that triggers our own polarizing bias.

[Image: a maze on a black background, with each branch of the maze showing words such as “response,” “meaning,” “bias,” and “memory.” Credit: John Hain, Pixabay]

Earlier (Part 2 entry on bias), we recommended teaching students to frequently self-assess, “What am I feeling that I want to be true, and why do I have that feeling?” That assignment ensures that students encounter disruptive surprises mindfully by becoming aware of affective feelings involved in triggering their bias. Awareness gives the greater control over self needed to prevent being captured by a reflex to reject unfamiliar ideas out of hand or to marginalize those who are different.

Routinely employing self-assessment in teaching provides the prolonged, relevant practice with feedback that is required for understanding self. Educating for self-assessment accuracy constitutes a change from training students to “know stuff” to educating students to know how they can think to understand both “stuff” and self.

When the first encounter with something or someone produces apprehension, those who have gained a capacity for self-assessment accuracy from practice can exercise more control over their learning by recognizing the feeling that accompanies the incipient activation of bias in reaction to discomfort. Such self-awareness allows a pause for reflecting on whether enlisting this vestigial survival mechanism serves understanding, and it can prevent bias from terminating our learning and inducing us to speak or act in ways that do not serve understanding.

Affect, metacognition, and self-assessment: minority views of contrarian scholars

We address three areas of scholarship relevant to this guest-edited series to show how brain survival mechanisms act to marginalize ideas that contradict an established majority consensus.

Our first example area involves the marginalization of the importance of affect by the majority of behavioral scientists. Antonio Damasio (1999, p. 39) briefly described this collective marginalization:

There would have been good reason to expect that, as the new century started, the expanding brain sciences would make emotion part of their agenda…. But that…never came to pass. …Twentieth Century science…moved emotion back into the brain, but relegated it to the lower neural strata associated with ancestors whom no one worshipped. In the end, not only was emotion not rational, even studying it was probably not rational.

A past entry in Improve with Metacognition (IwM) also noted the chilling prejudice against valuing affect during the 20th Century. Benjamin Bloom’s Taxonomy of the Affective Domain (Krathwohl et al. 1964) received an underwhelming reception from educators who had given unprecedented accolades to the team’s earlier volume on Taxonomy of the Cognitive Domain (Bloom, 1956). Also noted in that entry was William G. Perry’s purposeful avoidance of referring to affect in his landmark book on intellectual and ethical development (Perry, 1999). The Taxonomy of the Affective Domain also describes a developmental model that maps onto the Perry model of development much better than Bloom’s Taxonomy of the Cognitive Domain.

Our second example involves resistance against valuing metacognition. Dunlosky and Metcalfe (2009) traced this resistance to the French philosopher Auguste Comte (1798-1857), who held that an observer trying to observe self was engaged in an impossible task, like an eye trying to see itself by looking inwardly. In the 20th Century, the behaviorist school of psychology gave new life to Comte’s views by professing that individuals’ ability to do metacognition, if such an ability existed, held little value. According to Dunlosky and Metcalfe (2009, p. 20), the behaviorists held “…a stranglehold on psychology for nearly 40 years….” until the mid-1970s, when the work of John Flavell (see Flavell, 1979) made the term and concept of metacognition acceptable in academic circles.

Our third example area involves people’s ability to self-assess. “The Dunning-Kruger effect” holds that most people habitually overestimate their competence, with those least competent holding the most overly inflated views of their abilities and those with real expertise revealing more humility by consistently underestimating their abilities by modest amounts. Belief in “the effect” permeated many disciplines and became popular among the general public. As of this writing, a Google search brought up 1.5 million hits for the “Dunning Kruger effect.” It still constitutes the majority view of American behavioral scientists about human self-assessment, even after recent work revealed that the original mathematical arguments for “the effect” were untenable. 

Living a scholar’s minority experience

Considering prejudice against people and bias against new ideas as manifestations of a common, innate survival mechanism obviates fragmenting them into separate problems addressed through unrelated educational approaches. Perceiving that all biases are related makes evident that the tendency to marginalize a new idea will certainly marginalize the proponents of that idea.

Seeing all bias as related through a common mechanism supports using metacognition, particularly self-assessment, for gaining personal awareness and control over the thoughts and feelings produced as the survival mechanism starts to trigger them. Thus, every learning experience that produces discomfort, in every subject, offers an opportunity for self-assessment practice to gain conscious control over the instinct to react with bias.

Some of the current blog series authors experienced firsthand the need for higher education professionals to acquire such control. When publishing early primary research in the early 1990s, we were naively unaware of majority consensus, had not yet considered bias as a survival reaction, and we had not anticipated marginalization. Suggesting frequent self-assessments as worthwhile teaching practices in the peer-reviewed literature brought reactions that jolted us from complacency into a new awareness.

Scholars around the nation, several of them other authors of this blog series, read the guest editor’s early work, introduced self-assessment in their classes, and launched self-assessment research of their own. Soon after, many of us discovered that disparagements followed for doing so, at the departmental, college, and university levels, and even at professional meetings. Some disparagements led to damaged careers and work environments.

The bias imparted by marginalization led to our doubting ourselves. Our feelings for a time were like those of the non-binary gender group presented in Figure 1 of the previous Part 1 entry on privilege: we “knew our stuff,” but our feelings of competence in our knowledge lagged. Thanks to the feedback from the journal peer-reviewers of Numeracy, we now live with less doubt in ourselves. Those of us who weathered the storm emerged with greater empathy for minority status and minority feelings and with a greater valuing of self-assessment.

Self-assessment, a type of metacognition employing affect, seems to be in a paradigm change that recapitulates the history of affect and metacognition. Our Numeracy articles have achieved over 10,000 downloads, and psychologists in Europe, Asia, and Australia now openly question “the effect” (Magnus and Peresetsky, 2021; Kramer et al., 2022; Hofer et al., 2022; Gignac, 2022) in psychology journals. The Office of Science and Society at McGill University in Canada reached out to the lay public (Jarry, 2020) to warn how new findings require reevaluating “the effect.” We recently discovered that paired measures could even unearth unanticipated stress indicators among students (view the section at time 21:38 to 24:58) during the turbulent times of COVID and civil disruption.

Takeaways

Accepting the teaching of self-assessment as good practice for educating, and self-assessment measures as valid assessments, opens avenues for research that are indeed rational to study. Once one perceives bias as having a common source, developing self-assessment accuracy seems a way to gain control over the personal bias that triggers hostility against people and ideas that are not threatening, just different.

“Accept the person you are speaking with as someone who has done amazing things” is an outstanding practice stressed in the University of Wyoming’s LAMP program. Consciously setting one’s cognition and affect to that practice erases all opportunities for marking anyone, or their ideas, as inferior.

References

Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives, handbook 1: Cognitive domain. New York, NY: Longman.

Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: a new area of cognitive-developmental inquiry. American Psychologist 34, 906-911.

Gignac, G. E. (2022). The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect. Personality and Individual Differences, 184, 111224. https://doi.org/10.1016/j.paid.2021.111224

Hofer, G., Mraulak, V., Grinschgl, S., & Neubauer, A.C. (2022). Less-Intelligent and Unaware? Accuracy and Dunning–Kruger Effects for Self-Estimates of Different Aspects of Intelligence. Journal of Intelligence, 10(1). https://doi.org/10.3390/jintelligence10010010

Kramer, R. S. S., Gous, G., Mireku, M. O., & Ward, R. (2022). Metacognition during unfamiliar face matching. British Journal of Psychology, 00, 1– 22. https://doi.org/10.1111/bjop.12553

Krathwohl, D.R., Bloom, B.S. and Masia, B.B. (1964) Taxonomy of Educational Objectives: The Affective Domain. New York: McKay.

Magnus, J. R., & Peresetsky, A. (2021, October 4). A statistical explanation of the Dunning-Kruger effect. Tinbergen Institute Discussion Paper 2021-092/III. http://dx.doi.org/10.2139/ssrn.3951845

Nicholas-Moon, K. (2018). “Examining Science Literacy Levels and Self-Assessment Ability of University of Wyoming Students in Surveyed Science Courses Using the Science Literacy Concept Inventory with Expanded Inclusive Demographics.” Master’s thesis, University of Wyoming.

Perry, W. G. Jr. (1999). Forms of Ethical and Intellectual Development in the College Years. San Francisco, CA: Jossey-Bass (a reprint of the original 1968 work with minor updating).

Tarricone, P. (2011). The Taxonomy of Metacognition (1st ed.). Psychology Press. 288p. https://doi.org/10.4324/9780203830529


Metacognitive Self-assessment in Privilege and Equity – Part 1: Conceptualizing Privilege and its Consequences

by Rachel Watson, University of Wyoming
Ed Nuhfer, California State University (Retired)
Cinzia Cervato, Iowa State University
Ami Wangeline, Laramie County Community College

Demographics of metacognition and privilege

The Introduction to this series asserted that lives of privilege in the K-12 years confer relevant experiences advantageous for acquiring the competence required for lifelong learning and for entry into professions that require college degrees. Healthy self-efficacy is necessary to succeed in college. Such self-efficacy comes only after acquiring self-assessment accuracy through practice in using those relevant experiences to attune feelings of competence with demonstrable competence. We concur with Tarricone (2011) in her recognition of affect as an essential component of the self-assessment (or awareness) component of metacognition: the “‘feeling of knowing’ that accompanies problem-solving, the ability to distinguish ideas about which we are confident….”

A surprising finding from our paired measures is how closely the mean self-assessments of performance of groups of people track with their actual mean performances. According to the prevailing consensus of psychologists, mean self-assessments of knowledge are supposed to confirm that people, on average, overestimate their demonstrable knowledge. According to a few educators, self-reported knowledge is supposed to be just random noise with no meaningful relationship to demonstrable knowledge. Data published in 2016 and 2017 in Numeracy from two reliable, well-aligned instruments revealed that such is not the case. Our reports in Numeracy shared earlier on this blog (see Figures 2 and 3 at this link) confirm that people, on average, self-assess reasonably well. 

In 2019, by employing the paired measures, we found that particular groups’ average competence varied measurably and that their average self-assessed competence closely tracked their demonstrable competence. In brief, different demographic groups, on average, not only performed differently but also felt differently about their performance, and their feelings were accurate.

Conceptualizing privilege and its consequences

Multiple systems (structural, attitudinal, institutional, economic, racial, cultural, etc.) produce privilege, and all individuals and groups experience privilege and disadvantage in some aspects of their lives. We visualize each system as a hierarchical continuum: at one end lie those systematically marginalized/minoritized, and at the other lie those afforded the most advantages. Because people live and work within multiple systems, each person likely operates at different positions along different continuums.

Those favored by privilege are often unaware of their part in maintaining a hierarchy that exerts its toll on those of lesser privilege. As part of our studies of the effects on those with different statuses of privilege, we discovered that instruments that pair measures of cognitive competence with self-assessments of that competence offer richer assessments than competency scores alone. They also inform us about how students feel and how accurately they self-assess their competence. Students’ histories of privilege seem to influence how effectively they can initially do the kinds of metacognition conducive to furthering intellectual development when they enter college.

Sometimes a group’s hierarchy results from a lopsided division into some criterion-based majority/minority split. There, advantages, benefits, status, and even acceptance, deference, and respect often become inequitably and systematically conferred by identity on the majority group but not on the underrepresented minority groups. 

Being a minority can invite being marked as “inferior,” with an unwarranted majority negative bias toward the minority, presuming the latter have inferior cognitive competence and even lower capacity for feeling than the majority. Perpetual exposure to such bias can influence the minority group to doubt themselves and unjustifiably underestimate their competence and capacity to perform. By employing paired measures, Wirth et al. (2021, p. 152 Figs.6.7 & 6.8) found recently that undergraduate women, who are the less represented binary gender in science, consistently underestimated their actual abilities relative to men (the majority) in science literacy.

We found that in the majority ethnic group (white Caucasians), both binary genders, on average, significantly outperformed their counterparts in the minority group (all other self-identified ethnicities combined) in both the competence scores of science literacy and the mean self-assessed competency ratings (Figure 1). 


Figure 1. Graph of gender performance on measures of self-assessed competence ratings and demonstrated competence scores across ethnic majority/minority categories. This graph represents ten years of data collection of paired measures, but we only recently began to collect non-binary gender data within the last year, so this group is sparsely represented. Horizontal colored lines coded to the colored circles’ legend mark the positions of the means of scores and ratings in percent at the 95% confidence level. 

Notably, in Figure 1, the non-binary gender groups, majority or minority, were the strongest academic group of the three gender categories based on SLCI scores. Still, relative to their performance, the non-binary groups felt that they performed less well than they actually did.  

On a different SLCI dataset with a survey item on sexual preference rather than gender, researcher Kali Nicholas Moon (2018) found the same degree of diminished self-assessed competence relative to demonstrated competence for the small LGBT group (see Fig. 7 p. 24 of this link). Simply being a minority may predispose a group to doubt their competence, even if they “know their stuff” better than most.

These mean differences in performance shown in Figure 1 are immense. For perspective, pre-post measures in a GE college course or two in science rarely produce mean differences of more than a couple of percentage points on the SLCI. In both majority and minority groups, females, on average, underestimated their performance, whereas males overestimated theirs.

If a group receives constant messages that their thinking may be inferior, it is hardly surprising that they internalize feelings of inferiority that are damaging. Our evidence above from several groups verifies such a tendency. We showed that lower feelings of competence parallel significant deficit performance on a test of understanding science, an area relevant to achieving intellectual growth and meeting academic aspirations. Whether this signifies a general tendency of underconfidence in minority groups for meeting their aspirations in other areas remains undetermined.

Perpetuating privilege in higher education

Academe nurtures many hierarchies. Across institutions, “Best Colleges” rating lists perpetuate a myth that institutions that make the list are, in all ways and for all students, “better than” those not on the list. Some state institutions actively promote a “flagship reputation,” implying that the state’s other schools are “inferior.” Being in a community of peers that reinforces such hierarchical valuing conveys damaging messages of inferiority to those attending the “inferior” institutions, much as an ethnic majority casts negative messages at the minority.

Within institutions, different disciplines are valued differently, and people experience differential privileges across the departments and programs that support different disciplines. The resulting degrees of stress, alienation, and physical endangerment are minor compared to those experienced by socially marginalized/minoritized groups. Nevertheless, advocating for any change in an established hierarchy in any community is perceived as disruptive by some and can bring consequences of diminished privilege. National communities of academic research often prove no exception.

Takeaways

Hierarchies usually define privilege, and the majority group often supports hierarchies detrimental to the well-being of minority groups. Although test scores are the prevalent measures used to assess learning mastery, paired measures of cognitive competence and self-assessed competence provide additional information about students’ affective feelings about content mastery and their developing capacity for accurate self-assessment. This information helps reveal inequity across groups and monitors how well students can employ the higher education environment for advancing their understanding of specialty content and of self. Paired measures confirm that groups of varied privilege fare differently in employing that environment to meet their aspirations.


Measuring Metacognitive Self-Assessment – Can it Help us Assess Higher-Order Thinking?

by Dr. Ed Nuhfer, California State Universities (retired)

Since 2002, I’ve built my “Developers’ Diary” columns for the National Teaching and Learning Forum (NTLF) around the theme of fractals and six essential components in the practice of college teaching: (1) affect, (2) levels of thinking (intellectual & ethical development), (3) metacognition, (4) content knowledge & skills, (5) pedagogy, and (6) assessment. The first three focus on the internal development of the learner, and the last three focus on the knowledge being learned. All six have interconnections through being part of the same complex neural networks employed in practice.

In past blogs, we noted that affect and metacognition, until recently, were deprecated and maligned by behavioral scientists, with the most deprecated aspect of metacognition being self-assessment. The highest levels of thinking discovered by Perry are heavily affective and metacognitive, so some later developmental models shunned these stages when only cognition seemed relevant to education. However, the fractal model advocates for practicing through drawing on all six components. Thus, metacognition is not merely important for its own merits; we instructors rely on metacognitive reflection to monitor whether we are facilitating students’ learning through attending to all six.

The most maligned components, affect and self-assessment, may offer a key to measuring the overall quality of education and assessing progress toward the highest levels of thinking. Such measurements have been something of a Grail quest for developers. To date, efforts to make such measures have proven labor intensive and expensive.

Measuring: What, Who, Why, and How?

The manifestation of affect in the highest Perry stages indicates that cognitive expertise and skills eventually connect to affective networks. At advanced levels of development, experts’ affective feelings are informed feelings that lead to rapid decisions for action that are usually effective. In contrast, novices’ feelings are not informed. Beginners are tentative and take a trial-and-error approach rather than an efficient path to a solution. By measuring how well students’ affective feelings of their self-assessed competence have integrated with their cognitive expertise, we should be able to assess their stage of progress toward high-level thinking.

To assess a group’s (a class, class rank, or demographic category) state of development, we can obtain the group’s mean self-assessments of competence on an item-by-item basis from a valid, reliable multiple-choice test that requires some conceptual thinking. We have such a test in the 25-item Science Literacy Concept Inventory (SLCI). We can construct a knowledge survey of this Inventory (KSSLCI) to give us 25 item-by-item self-assessed estimates of competence from each participant.

As demonstrated in 2016 and 2017, item-by-item averages of group responses attenuate the random noise present in individuals’ responses. Thus, assessments done by using aggregate information from groups can provide a clear self-assessment signal that allows us to see valid differences between groups.
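To make the attenuation effect concrete, here is a minimal simulation sketch (not the authors' code; the response model and parameter values are illustrative assumptions) showing that correlations computed from item-by-item group means are far less noisy than correlations computed participant by participant:

```python
# Minimal sketch: why item-by-item group means attenuate the random noise
# present in individual responses. All values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 400, 25                      # e.g., 400 participants, a 25-item inventory

item_signal = rng.uniform(0.3, 0.9, n_items)     # shared per-item "difficulty" signal
scores      = np.clip(item_signal + rng.normal(0, 0.25, (n_people, n_items)), 0, 1)
self_assess = np.clip(item_signal + rng.normal(0, 0.25, (n_people, n_items)), 0, 1)

# Correlation computed person-by-person is weakened by individual noise...
individual_r = np.mean([np.corrcoef(scores[i], self_assess[i])[0, 1]
                        for i in range(n_people)])
# ...but correlating the item-by-item group means lets that noise average out.
group_r = np.corrcoef(scores.mean(axis=0), self_assess.mean(axis=0))[0, 1]

print(f"mean individual-level r: {individual_r:.2f}")   # modest, noise-attenuated
print(f"item-mean (group) r:     {group_r:.2f}")        # much closer to the true signal
```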

If affective self-assessed estimates become increasingly informed as higher-level thinking capacity develops, then we should see the aggregate item-by-item paired measures correlate with increasing strength as groups gain participants who possess higher-order thinking skills. We can indeed see this trend.

Picture the Results

For clear understanding, it is useful first to see what graphs of paired measures of random noise (meaningless nonsense) look like (Figure 1A) and how paired measures look when they correlate perfectly (Figure 1B). We produce these graphs by inputting simulated data into our SLCI and KSSLCI instruments (Figure 1).

Random nonsense produces a nearly horizontal line along the mean (“regression to the mean”) of 400 random simulated responses to each of the 25 items on both instruments. The best-fit line has values of nearly zero for both correlation (r) and line slope (Figure 1A).

We use a simulated set of data twice to get the pattern of perfect correlation, in which the participants’ mean SLCI and KSSLCI scores for each item are identical. The best-fit line (Figure 1B) has a correlation (r) and a line slope both of about unity (1). The patterns from actual data (Figure 2) will show slopes and correlations between random noise and perfect order.


Figure 1. Modeling correlational patterns with simulated responses to a measure of competence (SLCI) and a measure of self-assessed competence (KSSLCI). A shows correlational pattern if responses are random noise. B shows the pattern if 400 simulated participants perfectly assessed their competence.
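The two reference patterns in Figure 1 can be reproduced with a short simulation. The sketch below is a hypothetical reconstruction, not the published analysis; the response models and parameter values are assumptions made only for illustration:

```python
# Hedged sketch of the two reference patterns: pure random noise (Figure 1A)
# vs. a perfect match between self-assessed and demonstrated competence (Figure 1B).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_people, n_items = 400, 25

# Figure 1A analogue: both instruments answered at random. Item means cluster
# near the overall mean ("regression to the mean"), so slope and r are near zero.
slci_rand = (rng.random((n_people, n_items)) < 0.2).mean(axis=0) * 100   # random guessing, 5-option items
kslci_rand = rng.uniform(0, 100, (n_people, n_items)).mean(axis=0)       # random self-ratings
fit_a = stats.linregress(kslci_rand, slci_rand)

# Figure 1B analogue: the same item means fed to both instruments, giving
# a slope and correlation of about unity (1).
item_means = rng.uniform(30, 90, n_items)
fit_b = stats.linregress(item_means, item_means)

print(f"random noise:    slope={fit_a.slope:.2f}, r={fit_a.rvalue:.2f}")  # both near 0
print(f"perfect overlap: slope={fit_b.slope:.2f}, r={fit_b.rvalue:.2f}")  # both about 1
```

Real datasets fall between these two extremes, which is how the panels of Figure 2 are read.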

Next, we look at the actual data obtained from 768 novices (freshmen and sophomores; Figure 2A). Novices’ self-assessed competence and actual competence have a significant positive correlation: the slope is 0.319, and r is .69. The self-assessment measures explain about half of the variance (r²) in SLCI scores. Even novices do not appear to be “unskilled and unaware of it.” Developing experts (juniors, seniors, and graduate students; N = 831 in Figure 2B) produce a fit line with a slightly steeper slope of 0.326 and a stronger r of .77. Here, the self-assessment measures explain about 60% of the variance in the Inventory scores.

When we examine experts (109 professors in Figure 2C), the fit line steepens to a slope of 0.472, and a correlation of r = .83 explains nearly 70% of the variance in Inventory scores. The trend from novice to expert is clear.
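As a quick check, the variance-explained figures quoted above follow directly from squaring each reported correlation:

```latex
r = .69 \Rightarrow r^2 \approx .48 \ (\text{about half}), \qquad
r = .77 \Rightarrow r^2 \approx .59 \ (\text{about 60\%}), \qquad
r = .83 \Rightarrow r^2 \approx .69 \ (\text{nearly 70\%}).
```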

Finally, Figure 2D shows the summative mean SLCI scores and KSSLCI ratings for the four undergraduate ranks plus graduate students and professors. The KSSLCI/SLCI values increase in order of academic rank. The correlation (r) between the paired measures is close to unity, and the slope of 0.87 produces a pattern very close to that of perfect self-assessment (Figure 1B).


Figure 2: Correlations from novice to expert of item-by-item group means of each of the 25 items addressed on the KSSLCI and the SLCI. Panel A contains the data from 768 novices (freshmen and sophomores). B consists of 831 developing experts (juniors, seniors and graduate students). C comes from 109 experts (professors). Panel D employs all participants and plots the means of paired data by academic rank. We filtered out random guessing by eliminating data from participants with SLCI scores of 32% and lower.

Figure 2 supports four conclusions: self-assessments are not random noise; knowledge surveys reflect actual competence; affective development occurs with cognitive development; and a group’s ability to accurately self-assess seems indicative of the group’s general state of intellectual development.

Where might your students fall on the continuum of measures illustrated above? By using the same instruments we employ, your students can get measures of their science literacy and self-assessment accuracy, and you can get an estimate of your class’s present state of intellectual development. The work that led to this blog is under IRB oversight, and getting these measures is free. Contact enuhfer@earthlink.net for further information.


Metacognition supports HIP undergraduate research

by Dr. John Draeger, SUNY Buffalo State

The Association of American Colleges and Universities (AAC&U) has identified a number of high-impact learning practices (e.g., undergraduate research, collaborative assignments, learning communities, service learning, study abroad, capstone seminars). Each of these learning practices involves a significant investment of student effort over time, with multiple interactions between faculty and students about substantive matters as well as frequent, constructive feedback from faculty and regular, structured processes for reflection and integration (Kuh 2008; Kilgo, Sheets & Pascarella 2015). This post offers some strategies for intentionally structuring undergraduate research experiences and building metacognition into the process. Subsequent posts will consider other high-impact practices (HIPs).

Undergraduate research is a HIP because students ask the questions and set the research agenda. Inquiry-based projects, such as undergraduate research, promote student autonomy and self-direction and teach students about the research process (Healey & Jenkins 2009; Kilgo & Pascarella 2016). Without guidance, however, students can find themselves in a hot mess. After years of mentoring undergraduate research projects in philosophy, I’ve developed the following model to help keep students on track. Elements of this model may seem obvious and common practice. I don’t claim that it is novel, but I offer it as a distillation of some lessons that I’ve learned the hard way.

First, philosophers like to ask the big questions (and they should), but unless topics are reined in, student research can easily turn to sprawl and sloppy thinking. Thus, I talk with students about topic refinement early and often. I begin student meetings by asking them to give a one-minute “elevator pitch” for their topic. As the topic gets refined, the pitch becomes easier. En route to refining the topic and developing the elevator pitch, I ask a series of critical questions about the underlying conceptual issues. For example, if a student wants to consider what parents owe their children, I will push her to consider the nature of obligation (e.g., human rights, fairness, well-being, character, social roles) and concrete cases that may or may not fall within the scope of that obligation (e.g., providing food, a new bike, college tuition). Prodding them to consider the nature and scope of the obligation prompts them to consider the underlying philosophical substructure, which is what I believe philosophical inquiry is all about (Draeger 2014). However, once students begin making deep conceptual connections, it is easy for a topic to sprawl as students believe that each connected idea will need its own separate discussion. Metacognition encourages students to be aware of their own learning process (e.g., research) and make intentional adjustments based on that awareness. Encouraging students to be aware of the possibility of topic sprawl can help them better evaluate whether their current thinking is moving away from the core issue or towards a better version of that core issue.

Second, all of us are standing on the shoulders of giants. It is good scholarship to acknowledge the original thinking efforts of others by using proper citation. However, the research experience should teach students more than to not plagiarize. Rather, undergraduate research allows students the opportunity to become co-inquirers within an existing scholarly conversation. Becoming familiar with the literature allows them to tap into long-standing debates and utilize conceptual distinctions developed by others. As students begin their research, each comes with their own background and dispositions. Some believe they need to read everything on a topic before they venture an opinion. Others are so eager to begin that they skip the literature review and soon find themselves lost without the resources found within the tradition. Metacognition can help students become aware of when they are reading too much or too little as well as point the way to adjustments in their process.

Third, many students struggle with how to find the relevant source material in philosophy. Even if they know how to use the library, they are often unfamiliar with idiosyncrasies of philosophy as a discipline. For this reason, I explicitly discuss how to go about doing library work (e.g., how to use library databases, how to conduct keyword searches, how to decide which articles seem promising), discuss reading strategies (e.g., how to read at different speeds to find articles most deserving attention, how to read identified articles more carefully, how to annotate a text with an eye towards research), and discuss note taking strategies (e.g., how to organize summaries, critical questions, conceptual applications, personal reflections). When undergraduate research is embedded in my course, we discuss these strategies in class. When undergraduate research takes the form of an independent project, I discuss these strategies one-on-one. In either case, I encourage students to practice becoming aware of what’s working, what’s not, and when they need to adjust their strategies.

Fourth, my undergraduate research students are required to keep a weekly journal. Students are asked to track pesky questions, troublesome counter-examples, and worrisome objections. Beyond their focus on content, however, students are also asked to focus on their own process, including a sketch of the library, reading, and writing strategies attempted as well as whether those strategies were successful. Journaling about these strategies is another way to encourage metacognitive awareness about the research process and locate opportunities for intentional self-regulation.

Undergraduate research can be a HIP (if implemented well) because it encourages students to learn about the research process on their own terms as well as to produce their own research product. Metacognition helps monitor whether students are engaged in the sort of deep learning that makes undergraduate research a HIP. Moreover, intentionally structuring metacognitive opportunities can encourage greater learner autonomy and help facilitate inquiry-based research long after undergraduate experiences have officially concluded. In this way, undergraduate research and metacognition can be highly impactful because they support the skills necessary for lifelong learning.

References

Draeger, J. (posted July 11, 2014). Using metacognition to uncover the substructure of moral issues. Retrieved from https://www.improvewithmetacognition.com.

Healey, M., & Jenkins, A. (2009). Developing undergraduate research and inquiry. York: HE Academy.

Kilgo, C. A., Sheets, J. K. E., & Pascarella, E. T. (2015). The link between high-impact practices and student learning: Some longitudinal evidence. Higher Education, 69(4), 509-525.

Kilgo, C. A., & Pascarella, E. T. (2016). Does independent research with a faculty member enhance four-year graduation and graduate/professional degree plans? Convergent results with different analytical methods. Higher Education, 71(4), 575-592.

Kuh, G. D. (2008). Excerpt from high-impact educational practices: What they are, who has access to them, and why they matter. Association of American Colleges and Universities.


Utilizing a Metacognition Chart for Exam Review and Metacognitive Skill Development

by Dana Melone, Cedar Rapids Kennedy High School


Motivation and Context:

I teach AP Psychology at Kennedy High School in Cedar Rapids, Iowa. My students range in age from 15 to 18 years old. They also come into my classroom with a variety of grade point averages, ranging from below a 2.0 to above a 4.0. While some students have excellent note-taking and study skills as well as an understanding of what they need to study, I find that most of my students (even the top ones) tend to try to study everything and get overwhelmed. They also do not utilize review time to their advantage.

At the same time, my students love review games and in-class review time. However, for years I was hesitant to play them or give students time to review in class because they would be so actively engaged in the game or review activity that they would not take the time to consider what they knew, what they did not know, and how this should affect their studying (i.e., practicing metacognition about their learning). I wanted to engage them in demonstrations and games, but I also wanted them to use those activities to guide their studying and develop effective learning strategies that could be used beyond my course. In response to this dilemma, I developed the metacognitive prompt chart below.

Nuts and Bolts:

In order to help students gauge how much they know, I have started requiring them to complete the metacognitive chart as they are reviewing in class or playing a review game. I have also pointed out that they can use the chart even when they are working on current content. The chart consists of 4 columns that help students categorize their understanding of the concepts.

Students use the chart by putting the concept names in the column that best describes their understanding of each concept as we move through review activities or games. There are also two questions at the bottom that ask them about the focus of their studying and the patterns they have seen over time. In the end, they have a chart that allows them, at a glance, to know what they need to study and what they already know.

  1. What concepts need to be the focus of your studying? How will you make sure you are studying them actively?
  2. Look at your past charts, what concepts seem to remain a problem? How will you address this?

My students have this chart out any time we are going over previously learned content or reviewing (multiple times a week). I encourage my students to re-examine their charts once a week to look for patterns over time and reflect on what they need to get help with or review. I also encourage them to combine charts as we near assessments that are cumulative. Multiple times a month I collect the sheets and can visibly see the areas in which all my students are struggling. I have also been able to use the charts for my own metacognition in planning review, re-teaching, and remediation times.

Outcomes:

This chart has proven its effectiveness on many levels. Students have been able to see the areas in which they need improvement and should focus. They are also able to examine these sheets over time to see patterns in the content they are struggling with and the content they are doing well with. An unintended outcome has been my own ability, as the teacher, to use the charts for ongoing formative assessment of my classes.


Promoting Metacognition with Retrieval Practice in Five Steps

by Blake Harvard, James Clemens High School


 

Motivation for Activity

I am very lucky to work at a high school with students who are quite focused and, from the standpoint of intelligence, very gifted. This does not make them great learners, though. I realize a lot of my students clearly benefit from being able to memorize information. This may work in high school, where assessments are sometimes given daily. In college, however, this will not work. Assessment of material may consist of a midterm and a final. As a teacher who wants to better prepare my students for a lifetime of learning, I am motivated to introduce and cultivate learning strategies that focus on this personal growth and a better understanding of their own learning through specific exercises promoting metacognition.

Context for Activity

I use this activity with my high school AP Psychology classes. These classes average about 30 students. Although my situation is quite specific, I believe this activity can easily be accommodated to fit most class sizes in almost all disciplines of study.

Description of Activity

Let me put all the cards out on the table: I am a big believer in using researched/proven learning strategies promoting metacognition to improve retention of classroom material. I have applied strategies in my high school Advanced Placement Psychology classes and seen notable improvements in three areas:

  • Test scores
  • Study habits
  • Students’ understanding of their learning

Improvement in test scores is important for many reasons and ultimately describes an overall level of understanding.  While I am thrilled to see my mean test score increase and standard deviation shrink a bit, that is not what I’m most excited about when lauding learning strategies.  I am far happier with the student growth with respect to their study habits and metacognition about their learning.  While I instruct highly intelligent adolescents, most of my students do not enter my room as great learners.  They are merely great memorizers.  There’s nothing inherently wrong with that, but it becomes much more difficult to just memorize your way through college and most of my students (80% to 90%) will attend university.

In particular, one learning strategy that I believe to be the most effective is retrieval practice. The Learning Scientists provide a great overview of the strategy. Basically, the idea is to attempt to retrieve information from your memory a bit after it’s been presented to you. This can be done minutes, hours, or days later, and can take many forms: multiple-choice or matching questions, essays, and so on. I have written before on the topic of retrieval practice and its impact on my classroom. Today, I want to focus on how I promote metacognition through the use of retrieval practice in my classroom.

Usually the day after a lesson, I use these steps to practice retrieval of the information:

  1. Provide questions or a prompt.  Since I am preparing my students for an AP exam in May, I usually provide AP-style questions (no more than 7). By ‘AP style’, I mean either multiple-choice questions with five possible answers or an essay prompt requiring students to use terms or concepts from the previous lesson to successfully relate their knowledge to a given scenario.
  2. Answer using only their brain.  This step starts to break their habit of asking those around them for help or looking at their notes or the internet for assistance.  In my opinion, this step is the most important aspect of retrieval practice.  They are forced to attempt to retrieve material as they practice answering test questions, which is the process in which they will have to engage during the actual test. A second benefit is that this practice can help to reduce test anxiety.  A lot of students shy away from this step because it can be difficult or because it highlights flaws in their learning, but I tell my students it’s definitely better to struggle with the material now than on the test.  If the test is the first time a student is presented with material in a way that requires retrieval practice, we’ve all probably failed.
  3. Evaluate their answers.  How many answers are they very confident with?  How many answers are simply guesses?  I want students to understand that if they just guessed and answered correctly, they still don’t know the answer, they just got lucky.  Sometimes I’ll have my students delineate, by using a different color pen on their paper, answers they are confident with and those they are not.  This helps them to visualize their pre-grade understanding.
  4. Compare/contrast answers with neighbors.  I instruct the students to have a conversation and debate any discrepancies.  At this point, if they can thoughtfully discuss answers, they probably have a decent grasp of the information and have taken time to reflect on their learning; specifically, on where there may be holes in their learning of the material or where they may have been misled about what they thought they knew.
  5. Grade their paper.  After students grade their paper, I want them thinking about the following questions, which really allow the students to practice their metacognition and regulate/reinforce their study habits for future practice.

a. Does my grade reflect my knowledge?

b. Am I happy with my grade?

c. If no to either of the above questions, what strategies can I utilize to successfully retain the material?  At this point, many students incorrectly believe that their understanding of the material is complete…for better or worse.  You can almost see them thinking either “Oh well, I just don’t know this” or “I scored well, I must know this”.  I attempt to impress upon my students that use of other strategies, like spaced practice and dual coding, will further help to improve and solidify retention of the material.

d. If yes to the above questions, I ask students to reflect on what work they put in to remember this material so they can plan to use that strategy again for future learning. This step also helps reinforce that they should focus on learning strategies, not just guessing / luck.

Reflection

After many semesters of working with students, I have come to believe that metacognition and reflection on study habits/strategies are of foundational importance.  One of the goals I have for the students in my class, over the course of a semester, is that these learning strategies become their norm for studying.  It’s not something extra; it is what they do to practice and learn.  Without the reflection piece of using retrieval practice and other learning strategies, it is hard for high school students to examine their study/practice growth. While walking the students through these five steps may seem a little laborious, the explicitness of the instructions seems to work well to increase their awareness of their own learning and shape their behaviors toward more effective practices.

It is often quite difficult to convince teenagers that their study/practice habits, which usually rely on simple memorization, will more than likely not be successful in college.  They need to see results from their added efforts.  Using these five steps, I have witnessed students’ grades improve and study/practice habits change for the better.  As a teacher, I’m not sure it gets any better…improving a student’s learning and making them more successful.  It’s why we get paid the big bucks.  🙂

References

Learn How to Study Using…Retrieval Practice, The Learning Scientists, www.learningscientists.org

Retrieval Practice in the High School Classroom, The Effortful Educator, www.effortfuleducator.com

Learn How to Study Using…Spaced Practice, The Learning Scientists, www.learningscientists.org

Learn How to Study Using…Dual Coding, The Learning Scientists, www.learningscientists.org


Addressing Metacognition Deficits in First Semester Calculus Students: Getting Students to Effectively Self-Evaluate their Understanding

by Derek Martinez, University of New Mexico


Motivations and context: 

The problem I chose to tackle as a UNM Teaching Fellow was to develop methods for teaching first-semester calculus students to effectively self-test and develop metacognitive skills. One of the biggest issues I have seen over the years is students thinking they understand the material, getting overconfident, and then performing horribly on an exam. The practices described below were carried out during the spring 2016 semester in two Math 162 (Calculus I) classes.

Method: The two main ways that metacognitive strategies were incorporated into the curriculum were (1) daily “Test Yourself” exercises and (2) exam skills check self-assessments (essentially practice tests) before each exam. The “Test Yourself ” exercises were designed to be a daily reminder to the students that they should not confuse the ability to follow a lecture with the ability to solve a problem on their own. The purpose of the self-assessments was to help students identify where they had gaps in their understanding before taking each exam.

The “Test Yourself” exercises (example attached) were e-mailed to students the night before the lecture and were designed to give students a way to assess whether or not they understood the fundamental concepts of the lecture the following day. For example, if the lecture that day was about rates of change applications, the exercise would focus on an easy-to-medium level example that would test whether or not the students got the fundamental concepts from the section before going on and trying the more challenging homework problems.

Sending the exercises out the night before was effective in getting many students to read ahead in the text, and try to solve parts of the exercises (this was especially true of students who were struggling, or had math anxiety). If a student had solved the exercise before class, they were instructed to bring in a blank exercise and make sure they could duplicate their results without notes. The format for each class was usually lecture for about 40 minutes and then students would work on these exercises (some in groups, some by themselves).

The self-assessments (example attached) were given about five days before each exam. Participation was voluntary. I reserved a room outside of class and students took this like an actual exam. I made it clear to the students that the material on these assessments covered the fundamental ideas and basic examples, but was at a lower level of difficulty than the actual exams. The reasoning behind this was to help students pinpoint what core skills they still needed to work on. I graded these assessments just like exams so students could get feedback on their work as well as on their use of proper notation. To help identify their level of metacognition, at the end of each assessment the students were asked to rank their performance on a scale of 1-5 (5 being best performance). In many cases this ranking, followed by actual exam scores, provided further evidence to the students that they tended to be overconfident in their preparedness and needed to study more. In the beginning, students tended to over-rank their performance, but by the final exam assessment their rankings were more in line with their performance.
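
To make that comparison concrete, here is a minimal sketch of how a student's 1-5 self-ranking could be checked against the graded score. The band cut-offs are my own illustrative assumptions, not part of the author's method:

```python
# A hypothetical mapping from an assessment score to a 1-5 rank, used to
# flag over- or under-confidence relative to a student's self-ranking.

def score_to_rank(score_pct, cutoffs=(90, 80, 70, 60)):
    # 5 for >= 90, 4 for >= 80, 3 for >= 70, 2 for >= 60, else 1 (assumed bands).
    for rank, cutoff in zip((5, 4, 3, 2), cutoffs):
        if score_pct >= cutoff:
            return rank
    return 1

def calibration_gap(self_rank, score_pct):
    # Positive = overconfident, negative = underconfident, 0 = well calibrated.
    return self_rank - score_to_rank(score_pct)

print(calibration_gap(5, 72))   # +2: ranked themselves a 5 but scored in the 3 band
print(calibration_gap(3, 85))   # -1: underestimated their performance
```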

Outcomes: Students in my spring 2016 sections had a final exam pass rate more than 11% higher than all other sections (the exams were group-graded without me, to avoid any possible bias). These students also had a final exam pass rate about 10% higher than my fall/spring 2015 students (when I had not yet incorporated these activities). The self-assessments seemed to have the biggest measurable impact on student success, as students who took them consistently outscored those who did not by 10–20% on the exams. Further, scores on the actual exams were 15–65% higher than on the self-assessments. I believe this was because the self-assessments guided and motivated students' learning, and also simply scared some students into studying harder.

Lessons learned: “Buy-in” from the beginning is essential. Sharing the data with the students after the first assessment significantly increased the number of students taking the remaining assessments. These were mainly STEM majors so the statistical evidence went a long way with them. It was also crucial to make time throughout the semester to talk about what metacognition is and remind the students why they were doing these exercises.


The Great, The Good, The Not-So-Good of Improve with Metacognition: An Exercise in Self-Reflection

By Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

Recently, Lauren, John, and I reflected on and discussed our experiences with Improve with Metacognition (IwM). We laughed (no crying) and found (at least I did) that our experiences were rich and rewarding. As such, we decided that each of us would write a blog on our experience and self-reflection with IwM. Therefore, I’m up. When thinking about IwM, the theme that kept surfacing in my mind is that we are Great, Good, and on a few things—Not-So-Good.

The Great

Oh, how can I count the ways in which IwM is Great? Well, by counting. In my reflection on what we have accomplished, it became apparent that here at IwM, we have been highly productive in our short existence. Specifically, we have published over 200 blogs, along with resources on metacognition measures, videos, instruction, curated research articles, and teaching metacognition (see our new call for Teaching with Metacognition). We have created a space for collaborators to gather and connect. We have engaged in our own research projects. We have had over 35 contributors from all over North America and a few from beyond, ranging from preeminent scholars in the field of metacognition and SoTL to graduate students writing their first blog. Speaking for Lauren and John, I can only hope that the explosion in productivity and high-quality research and writing continues at IwM.

The Good

Ok, it is not just Good—this is just another thing that is great. IwM has produced some amazing blogs. I can’t review them all because, this time, I will keep to my word count, but I would like to highlight a few insightful blogs that resonated with me. First, Ed Nuhfer recently wrote a blog titled Collateral Metacognitive Damage (2017, February). The title is amazing in itself, but Ed extols the use of self-assessments, explains why the approach and perspective of self-assessment matter most (be the little engine that could vs. the little engine that couldn’t), and provides a marvelous self-assessment tool (http://tinyurl.com/metacogselfassess). I have already shared this with my students and colleagues. Second, one of the topics I would never have thought of was Stephen Chew’s blog on Metacognition and Scaffolding Learning (2015, July). I have used scaffolding (and still do) throughout all of my courses; however, I never considered that, by over-scaffolding, I could reduce my students’ ability to calibrate (know when you know or don’t know something). That is, providing too much scaffolding may cause students to become highly overconfident and overestimate their knowledge and skill. Third, Chris Was wrote about A Mindfulness Perspective on Metacognition (2014, October). I have been begrudgingly and maybe curmudgeonly resistant to mindfulness. As such, I was skeptical even though I know how great Chris’ research is. Well, Chris convinced me of the value of mindfulness and its connection to metacognition. Chris said it best, “It seems to me that if metacognition is knowledge and control of one’s cognitive processes and training in mindfulness increases one’s ability to focus and control awareness in a moment-by-moment manner, then perhaps we should reconsider, and investigate the relationship between mindfulness and metacognition in education and learning.” There are literally dozens and dozens of other blogs that I have incorporated into both my teaching and research. The work done at IwM is not merely good, it is great!

The Not-So-Good

IwM has been a labor of love. Speaking for myself, the work that has been done is amazing, exhausting, invigorating, productive, and fulfilling. However, what I believe we have been “Not Great” at is getting the word out. That is, considering that there are over 200 blogs, resources, curated research articles, collaborations, etc., I believe that one of the things we are struggling with is spreading the gospel of metacognition. Also, despite the fact that Lauren, John, and I have travelled across the globe (literally) promoting IwM at various conferences, too few people know about the good work being done. Moreover, even though we have 258 email subscribers, I feel (passionately) that we can do better. I want other researchers and practitioners not only to benefit from the work we’ve done but also to contribute to new IwM blogs, resources, research, and collaboration.

As I do with all my blogs, I will leave you with an open-ended question: What can we do to spread the word of the Great and Good work here at IwM?

Please give me/us some strategies or go out and help spread the word for us.

References

Chew, S. (2015, July). Metacognition and scaffolding student learning. Retrieved from https://www.improvewithmetacognition.com/metacognition-and-scaffolding-student-learning/

Nuhfer, E. (2017, February). Collateral metacognitive damage. Retrieved from https://www.improvewithmetacognition.com/collateral-metacognitive-damage/

Was, C. (2015, October). A mindfulness perspective on metacognition. Retrieved from https://www.improvewithmetacognition.com/a-mindfulness-perspective-on-metacognition/


Joining Forces: The Potential Effects of Team-Based Learning and Immediate Feedback Assessment Technique on Metacognition

by Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

As a standalone assessment tool, the Immediate Feedback Assessment Technique (IF-AT) has been demonstrated to affect student learning and students’ perceptions of the teacher (e.g., Brosvic et al., 2006; Slepkov & Shiell, 2014) and possibly improve metacognition (see Richmond, 2017). However, can IF-AT be combined with a cooperative learning activity such as Team-Based Learning (TBL) to enhance metacognition as well?

To partially answer this question, several researchers suggest that the IF-AT may be used effectively with TBL (Carmichael, 2009; Hefley & Tyre, 2012; Ives, 2011). For example, you could first form teams, give them an exam, have the teams discuss and debate the answers, and then have each team decide on its answer. If students within a team cannot come to a consensus on the response to a question, you may allow them to write an “appeal” to turn in a separate answer. Click on Figure 1 for a video on how to use IF-AT combined with TBL.  IF-AT may also be used in dyads to allow students to discuss correct and incorrect answers. Students read a question, discuss the correct and incorrect answers, and then cooperatively make a decision, with the IF-AT providing immediate feedback. A third way, suggested by Ives (2011), is to do a two-stage group quiz. Ives suggests that you have individual students write weekly quiz questions (first stage), then have them get into teams and take quizzes consisting of the students’ written questions. However, the question then becomes: can the combination of TBL and IF-AT instructional strategies improve metacognition?

Figure 1. Team-Based Learning Using IF-AT. 

The Interplay Among IF-AT, TBL, and Metacognition
As I argued previously (Richmond, 2017), IF-AT may improve students’ metacognition; however, by adding TBL, what metacognitive processes and skills might improve? I see several metacognitive benefits that may occur when combining these two instructional strategies.

First, the combination of IF-AT and TBL may increase students’ metacognitive awareness. For instance, test anxiety may be reduced in a group setting when using IF-AT (Ives, 2011) because students have the opportunity to debate the answers, hear from others, gather consensus, and share responsibility. As the awareness of and conscious effort to reduce test anxiety is part of metacognitive awareness, the combination of TBL and IF-AT may make this process more salient.

Second, using TBL with IF-AT may also increase students’ calibration (e.g., the accuracy of knowing when you know or do not know something). That is, in a cooperative learning activity such as TBL, students are either reinforced in their correct knowledge through the process of debating and discussing answers OR confronted with their incorrect knowledge by interacting with team members. Consequently, their assessment (calibration) of their knowledge should become more accurate through this process. For example, if a team member accurately identifies a correct answer, and another team member (who started with an incorrect answer) observes this, they may reflect on their answer, determine why and how they came to the incorrect answer, and change future study strategies and subsequent estimations of their knowledge. Or, an individual team member could consistently arrive at the correct answer on their own but always underestimate his or her knowledge. This type of student may gain confidence in their knowledge and become more accurately calibrated.

Third, by combining TBL and IF-AT, there may also be an increase in metacognitive, cognitive, and learning-strategy skills. That is, as team members share how, where, what, and why they studied, other team members may incorporate these strategies into their quiver of learning strategies (especially if the team member who suggested a strategy was correct). For example, one team member may explain the elaborative strategy they used effectively to study, and other team members listen and incorporate elaboration into their repertoire of strategies. Or a team member may consistently get questions wrong and share what strategy he or she uses (e.g., cramming and rehearsal). Other team members observe this, realize that the strategy does not appear to work very well, and subsequently rarely use it themselves (we can only wish 🙂).

Based on the above examples, it does seem likely that the combined use of TBL and IF-AT may improve various metacognitive skills.

Concluding Thoughts and The Hallmark of Good Assessments—Evidence
As a SoTL scholar, I would be remiss not to investigate the evidence supporting or refuting the efficacy of IF-AT and TBL. There are a handful of studies that demonstrate the advantage of using TBL and IF-AT to increase academic performance and enjoyment of class (e.g., Carmichael, 2009; Haberyan, 2007). The combination of IF-AT and TBL has also been demonstrated to stimulate small-group discussion and to identify and correct content misconceptions (Cotner, Baepler, & Kellerman, 2008). However, there appears to be a gap in the research. Specifically, several research questions arise:

  1. Does the combination of IF-AT and TBL increase metacognitive awareness?
  2. Does the combination of IF-AT and TBL increase the accuracy of a student’s calibration?
  3. Does the combination of IF-AT and TBL increase a student’s repertoire of cognitive and learning strategies?
  4. What other metacognitive processes may be enhanced by using IF-AT in a TBL setting?

As I mentioned in my first blog on IF-AT (Richmond, 2017) and here, I think there are enormous SoTL research opportunities for investigating the effects of IF-AT and TBL to improve metacognition. This, invariably, leads to the proverbial phrase: A little knowledge is a dangerous thing—so get to work!

Please follow me on Twitter: @AaronSRichmond

References
Carmichael, J. (2009). Team-based learning enhances performance in introductory biology. Journal of College Science Teaching, 38(4), 54–61.

Clark, M. C., Nguyen, H. T., Bray, C., & Levine, R. E. (2008). Team-based learning in an undergraduate nursing course. Journal of Nursing Education, 47, 111–117.

Cotner, S., Baepler, P., & Kellerman, A. (2008). Scratch this! The IF-AT as a technique for stimulating group discussion and exposing misconceptions. Journal of College Science Teaching, 37(4), 48.

Haberyan, A. (2007). Team-based learning in an industrial/organizational psychology course. North American Journal of Psychology, 9, 143–152.

Hefley, T., & Tyre, A. J. (2012). Favorable team scores under the team-based learning paradigm: A statistical artifact? RURALS: Review of Undergraduate Research in Agricultural and Life Sciences, 6(1), 1. Retrieved from http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1043&context=rurals

Ives, J. (2011). Two-stage group quizzes part 1: What, how and why. Science Learnification: Teaching and learning in the sciences with a focus on physics education research (PER) from the trenches.  Retrieved from https://learnification.wordpress.com/2011/03/23/two-stage-group-quizzes-part-1-what-how-and-why/

Richmond, A. S. (2017, February 24th). Scratch and win or scratch and lose? Immediate Feedback Assessment Technique. Retrieved from https://www.improvewithmetacognition.com/scratch-win-scratch-lose-immediate-feedback-assessment-technique/

Slepkov, A. D., & Shiell, R. C. (2014). Comparison of integrated testlet and constructed-response question formats. Physical Review Special Topics - Physics Education Research, 10(2), 020120.


Scratch and Win or Scratch and Lose? Immediate Feedback Assessment Technique

By Aaron S. Richmond, Ph. D., Metropolitan State University of Denver

When prepping my courses for this spring semester, I was thinking about how I often struggle to provide quick and easy feedback on quiz and exam performance to my students. I expressed this to my colleague, Dr. Anna Ropp (@AnnaRopp), and she quickly suggested that I check out the Immediate Feedback Assessment Technique (IF-AT) by Epstein Educational Enterprises. When she showed me the IF-ATs, I was intrigued and thought I might as well give it a try—so I ordered some. IF-AT instantaneously provides performance feedback to learners by allowing students to scratch off what they believe to be the correct answer on a multiple-choice exam, quiz, or test. See Figures 1a and 1b for student examples of a completed IF-AT. Students can find out whether their chosen answer is correct or incorrect simply by scratching it (see question 1 in Figure 1a). Students can scratch more than one answer to find the correct answer (see question 2 in Figure 1a). You may also use it as a way of providing partial credit for sequenced attempts (e.g., scratch one choice for full credit if correct, then scratch a second choice, and maybe a third, to get decreasing amounts of partial credit); see question 6 in Figure 1b for an example of this. Epstein and colleagues suggest that IF-AT not only assesses student learning but can also teach at the same time. However, it occurred to me that this is not only an assessment and teaching tool; it is also a great opportunity to increase metacognition.

Figure 1. (a) Completed and unscored 10-question IF-AT. (b) Completed 10-question IF-AT, scored by student and teacher.
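
As a concrete illustration of the sequenced partial-credit idea described above, a scoring rule might look like the sketch below. The specific point values are my own hypothetical choices, not Epstein Educational Enterprises' official scheme:

```python
# A hypothetical sequenced partial-credit rule for a single IF-AT item:
# full credit if the first scratch reveals the correct answer, with
# decreasing credit for later scratches (point values are illustrative).

def ifat_item_score(num_scratches_to_correct, credit=(1.0, 0.5, 0.25)):
    """num_scratches_to_correct: 1 if the first scratch was correct,
    2 if the second was, and so on. Returns 0 once credit runs out."""
    if 1 <= num_scratches_to_correct <= len(credit):
        return credit[num_scratches_to_correct - 1]
    return 0.0

# e.g., a question answered correctly only on the second scratch:
print(ifat_item_score(2))   # 0.5
```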

How to Use IF-AT
Epstein and colleagues suggest that IF-AT is fair, fast, active, fun, and respectful, and that it builds knowledge. The IF-AT scratch assessments come in 10-, 25-, or 50-item test options with 5 different orders of correct answers. The Epsteins suggest that IF-AT can be used in many ways. For example, it can be used for chapter tests; individual study (at home or in class); quizzes; pyramidal-sequential-process quizzing; exams; team-based and cooperative learning; study-buddy learning; and, most importantly, as a feedback mechanism (see http://www.epsteineducation.com/home/about/uses.aspx for further explanation). There are several metacognitive functions of the IF-AT (although the Epsteins do not couch their claims in this term). First, the Epsteins argue that you can arrange your IF-AT so that the first question (and the immediate feedback of the correct answer) can be used in a pyramidal sequential process. That is, the correct answer to the first question is needed to answer subsequent questions, as it is foundational knowledge for the remaining questions. This sequential process allows the instructor and student to pinpoint where the student’s knowledge of the integrated content broke down. This is implicit modeling of a student’s metacognitive knowledge that should be made explicit. That is, by explaining to your students how the exam is set up, students can use cues and knowledge from previous questions and answers on the test to assist in their understanding of subsequent questions and answers. This is a key step in the learning process. Second, the IF-AT may also be used in a team-based way (i.e., distributed cognition) by forming groups, problem solving, and having the team discover the correct answer. IF-AT may also be used in dyads to allow students to discuss correct and incorrect answers. Students read a question, discuss the correct and incorrect answers, then cooperatively make a decision and receive immediate feedback. Third, IF-AT may be used to increase cognitive and metacognitive strategies. That is, by providing feedback immediately, students (if you explicitly instruct them to do so) may adjust their cognitive and metacognitive strategies for future study. For example, if a student used flashcards to study and did poorly, they may want to adjust how they construct and use flashcards (e.g., distributed practice). Finally, and most importantly, IF-AT may improve students’ metacognitive regulation via calibration (i.e., the accuracy of knowing when you do and don’t know the answer to a question). That is, by providing immediate feedback, students may become more accurate in their judgments of knowing, or even their feelings of knowing, based on the feedback.

Is it Scratch to Win or Scratch to Lose?
As described, by using the IF-AT, students get immediate feedback on whether they answered a question correctly or incorrectly and on what the appropriate answer is. From a metacognitive perspective, this is outstanding. Students can calibrate (i.e., adjust their estimations of and confidence in knowing an answer) in real time, engage in distributed cognition, receive feedback on their choice of cognitive and metacognitive strategies, and increase cognitive monitoring and regulatory control. These are all WIN, WIN, WIN byproducts. HOWEVER, is there a down-side to instantaneously knowing you are wrong? That is, is there an emotional-regulation and reactivity cost to IF-AT? As I have been implementing the IF-AT, I have noticed (anecdotally) that about 1 in 10 students react negatively, and it seems to increase their test anxiety. Presumably, the other 90% of the students love it and appreciate the feedback. Yet, what about the 10%? Does IF-AT stunt or hinder their performance? Again, my esteemed colleague Dr. Anna Ropp and I engaged in some scholarly discourse to answer this question, and Anna suggested that I make the first 3-5 questions on each IF-AT “soft-ball” questions, that is, questions that 75% of students will get correct, so that students’ fears and anxiety are remediated to some degree. Another alternative is to provide students with a copy of the test or exam and let them rank order or weight their answers (see Chris Was’ IwM blog, 2014, on how to do this). Despite these two sound suggestions, there still may be an affective reaction that could be detrimental to student learning. To date, there has been no research investigating this issue, and there are only a handful of well-designed studies investigating IF-AT (e.g., Brosvic et al., 2006; Dihoff et al., 2005; Epstein et al., 2002, 2003; Slepkov & Shiell, 2014). As such, more well-constructed and executed empirical research is needed to investigate this issue (Hint: all you scholars looking for a SoTL project…here’s your sign).

Concluding Thoughts and Questions for You
After investigating, reflecting on, and using IF-AT in my classroom, I think that it is a valuable assessment tool in your quiver of assessments to increase metacognition—but of course not an educational panacea. Furthermore, in my investigation of this assessment technique, (as usual), more questions popped up on the use of IF-AT. So, I will leave you with a charge and call to help me answer the questions below:

  1. Are there similar assessments that provide immediate feedback that you use? If so, are they less expensive or free?
  2. If you are using IF-AT, what is your favorite way to use it?
  3. Do you think IF-AT could cause substantial test anxiety? If so, to whom and to what level within your classes?
  4. How could IF-AT be used as a tool for calibration more efficiently? Or, what other ways do you think IF-AT can be used to increase metacognition?
  5. I think there are enormous opportunities for SoTL on IF-AT (e.g., the effects on calibration, distributed cognition, cognitive monitoring, conditional knowledge of strategy use, etc.), which means we all have some more work to do. 🙂

References
Brosvic, G. M., Epstein, M. L., Dihoff, R. E., & Cook, M. J. (2006). Acquisition and retention of Esperanto: The case for error correction and immediate feedback. The Psychological Record, 56(2), 205.

Dihoff, R. E., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2005). Adjunctive role for immediate feedback in the acquisition and retention of mathematical fact series by elementary school students classified with mild mental retardation. The Psychological Record, 55(1), 39.

Epstein, M. L., Brosvic, G. M., Costner, K. L., Dihoff, R. E., & Lazarus, A. D. (2003). Effectiveness of feedback during the testing of preschool children, elementary school children, and adolescents with developmental delays. The Psychological Record, 53(2), 177.

Epstein, M. L., Lazarus, A. D., Calvano, T. B., & Matthews, K. A. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52(2), 187.

Slepkov, A. D., & Shiell, R. C. (2014). Comparison of integrated testlet and constructed-response question formats. Physical Review Special Topics - Physics Education Research, 10(2), 020120.

Was, C. (2014, August). Testing improves knowledge monitoring. Improve with Metacognition. Retrieved from https://www.improvewithmetacognition.com/testing-improves-knowledge-monitoring/


Hypercorrection: Overcoming overconfidence with metacognition

by Jason Lodge, Melbourne Centre for the Study of Higher Education, University of Melbourne

Confidence is generally seen as a positive attribute to have in 21st-century Western society. Confidence contributes to higher self-esteem and self-reported happiness. It apparently makes someone more attractive and leads to better career outcomes. With such strong evidence suggesting the benefits of confidence, it is no wonder that building confidence has become a major focus within many sectors, particularly in professional development and education.

Despite the evidence for the benefits of confidence, it has a dark side: overconfidence. There are many occasions where it is problematic to overestimate our skills or abilities. Learning is one of the most obvious examples. According to the (in)famous Dunning-Kruger effect, unskilled learners are often unaware that they are in fact unskilled. The issue here is that those who are low in knowledge of an area are often ignorant of how much they don’t know about the area.

Overconfidence is particularly problematic for students considering how important it is for them to make relatively accurate estimates about how they are progressing. For example, if a student is overconfident about their progress, they may decide to stop reviewing or revising a topic prematurely. If students have difficulty accurately self-evaluating their learning, it can leave them underprepared to use the knowledge, for example in an exam or when they need it in practice.

Being wrong can be good

One of the main problems with overconfidence is that students can fail to correct misconceptions or to realise that they are wrong. Being wrong or failing has long been seen as a negative educational outcome.

Recent research on productive failure (e.g. Kapur, 2015) has shown, however, that being wrong and coming to realise it is a powerful learning experience. As opposed to more traditional notions of error-free learning, researchers are now starting to understand how important it is for learners to make mistakes. One of the necessary conditions for errors to be effective learning experiences though is that students need to realise they are making them. This is a problem when students are overconfident because they fail to see themselves failing.

There is a silver lining to overconfidence when it comes to making mistakes though. Research on a process called hypercorrection demonstrates that when learners are highly confident but wrong, if the misconception can be corrected, they have a much more effective learning experience (Butterfield & Metcalfe, 2001). In other words, overconfident students who realise that they are wrong about something tend to be surprised and that surprise means they are more likely to learn from the experience.

How metacognition helps with overconfidence

While hypercorrection has potential for helping students overcome misconceptions and achieve conceptual change, it doesn’t happen automatically. One of the main prerequisites is that students need to have enough awareness to realise that they are wrong. The balance between confidence and overconfidence is therefore precarious. It is helpful for students to feel confident that they can manage to learn new concepts, particularly complex and difficult ones. Confidence helps students to persist when learning becomes difficult and challenging. However, students can have this confidence without necessarily engaging in careful reflective processing. In other words, confidence is not necessarily related to students being able to accurately monitor their progress.

On the other hand though, it can be easy for students to feel confident in their knowledge of certain misconceptions. This is particularly so if the misconceptions are intuitive and based on real world experience. It is common to have misconceptions about physics and psychology for example because students have vast experience in the physical and social world. This experience gives them intuitive conceptions about the world that are reinforced over time. Some of these conceptions are wrong but their experience gives students high levels of confidence that they are right. Often careful observation or deliberate instructional design is required to shift students’ thinking about these conceptions.

Metacognition is critical in allowing students to monitor and detect when they are making errors or holding incorrect conceptions. With misconceptions in particular, students can continue to believe false information if they don’t address the process by which they arrived at a conclusion. Often, overcoming a misconception requires dealing with the cognitive disequilibrium that comes from attempting to overwrite an intuitive conception of the world with a more sophisticated scientific conception.

For example, intuitively a heavy object like a bowling ball and a light object like a feather will fall at different rates, but, when both are dropped simultaneously (with air resistance removed or negligible), they fall at the same rate. The observation causes disequilibrium between the intuitive notion and the more sophisticated understanding of force and gravity encapsulated by Newton’s second law. Generally, overcoming this kind of disequilibrium requires students to shift strategies or approaches to understanding the concept in order to redress the faulty logic they relied on to arrive at the initial misconception. So in this example, they need to develop a higher-level conception of gravity, which requires shifting away from intuitive notions. Recognising the need for this shift only comes through metacognitive monitoring and effective error detection.
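
For readers who want the reasoning behind the equal fall rates spelled out, a standard one-line derivation (my addition, not from the original post) combines Newton's second law with the weight of an object of mass $m$ near the Earth's surface:

$$a = \frac{F}{m} = \frac{mg}{m} = g \approx 9.8\ \text{m/s}^2,$$

so the acceleration is independent of the object's mass; the bowling ball and the feather only diverge once air resistance re-enters the picture.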

So metacognition is often necessary for correcting misconceptions and is particularly effective when students are confident about what they think they know and have the realisation that they are wrong. Overconfidence can therefore be managed through enhanced metacognition.

The research on confidence and hypercorrection suggests that it is good for students to be confident about what they think they know as long as they are prepared to recognise when they are wrong. This requires an ability to detect errors and, more broadly, to calibrate their perceived progress against their actual progress. While teachers can help with this to a degree through feedback and scaffolding, it is vital that students develop metacognition so that they can monitor when they are wrong or when they are not progressing as they should be. If they can, then there is every chance that the learning experience will be more powerful as a result.

References

Butterfield, B., & Metcalfe, J. (2001). Errors committed with high confidence are hypercorrected. Journal of Experimental Psychology. Learning, Memory, and Cognition, 27(6), 1491–1494. DOI: 10.1037/0278-7393.27.6.1491

Kapur, M. (2015). Learning from productive failure. Learning: Research and Practice, 1(1), 51–65. DOI: 10.1080/23735082.2015.1002195


Distributed Metacognition: Insights from Machine Learning and Human Distraction

by Philip Beaman, Ph.D., University of Reading, UK

Following the success of Google’s AlphaGo programme in competition with a human expert over five games, a result previously considered beyond the capabilities of mere machines (https://deepmind.com/alpha-go), there has been much interest in machine learning. Broadly speaking, machine learning comes in two forms: supervised learning (where the machine is trained by means of examples and the errors it makes are corrected) and unsupervised learning (where there is no error signal to indicate previous failures). AlphaGo, as it happens, used supervised learning based upon examples of human expert-level games, and it is this type of learning which looks very much like meta-cognition, even though the meta-cognitive monitoring and correction of the machine’s performance is external to the system itself (although not necessarily to the machine which is running the system). For example, an artificial neural network (perhaps of the kind which underpins AlphaGo) is trained to output Y when presented with X by means of a programme which stores training examples – and calculates the error signal from the neural network’s first attempts – outside the neural network software itself but on the same hardware. This is of interest because it illustrates the fluid boundary between a cognitive system (the neural network implemented on computer hardware) and its environment (other programmes running on the same hardware to support the neural network), and it demonstrates that metacognition, like first-order cognition, is often a form of situated activity. Here, the monitoring and the basis for correction of performance are (like all supervised learning) external to the learning system itself.
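
A toy sketch (my own illustration, vastly simpler than AlphaGo and not from the original post) shows the division of labour described here: the "network" is one function, while a separate training routine stores the examples and computes the error signal that corrects it.

```python
# A minimal supervised-learning sketch: the error signal is computed by a
# separate training loop, outside the "network" function itself, echoing the
# point that the monitoring/correction lives outside the learning system.

def network(weight, x):
    # The "cognitive system": maps an input to an output.
    return weight * x

def train(examples, weight=0.0, lr=0.01, epochs=100):
    # The "environment": stores the training examples and computes the
    # error signal, then feeds the correction back into the network.
    for _ in range(epochs):
        for x, y in examples:
            error = network(weight, x) - y     # external error signal
            weight -= lr * error * x           # correction applied to the system
    return weight

if __name__ == "__main__":
    examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target mapping: y = 2x
    print(train(examples))   # converges toward 2.0
```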

In contrast, when psychologists talk about metacognition, we tend to assume that all the processing is going on internally (in the head), whereas in fact it is usually only partly in the head and partly in the world. This is not news to educationalists or to technologists: learners are encouraged to make effective use of external aids which help manage work and thought, but external aids to cognition are often overlooked by psychological theories and investigations. This was not always the case. In the book “Plans and the Structure of Behaviour”, which introduced the term “working memory” to psychology, Miller, Galanter and Pribram (1960) spoke of working memory as being a “special state or place” used to track the execution of plans, where the place could be in the frontal lobes of the brain (a prescient suggestion for the time!) or “on a sheet of paper”. This concept, originally defined wholly functionally, has in subsequent years morphed into a cognitive structure with a specific locus, or loci, of neural activity (e.g., Baddeley, 2007; D’Esposito, 2007; Henson, 2001; Smith, 2000).

We have come across the issue of distributed metacognition in our own work on auditory distraction. For many years, our lab (along with several others) collected and reported data on the disruptive effects of noise on human cognition and performance. We carefully delineated the types of noise which cause distraction and the tasks which were most sensitive to distraction but – at least until recently – neither we nor (so far as we know) anyone else gave any thought to meta-cognitive strategies which might be employed to reduce distraction outside the laboratory setting. Our experiments all involved standardized presentation schedules of material for later recall and imposed environmental noise (usually over headphones) which participants were told to ignore but which they could not avoid. The results of recent studies which both asked participants for their judgments of learning (JoLs) concerning the material and gave them the opportunity to control their own learning or recall strategy (e.g., Beaman, Hanczakowski & Jones, 2014) are of considerable interest. Theoretically, one of three things might happen: meta-cognition might not influence the ability to resist distraction in any way, meta-cognitive control strategies might ameliorate the effects of distraction, or meta-cognition might itself be affected by distraction, potentially escalating the disruptive effects. For now, let’s focus on the meta-cognitive monitoring judgments, since these need to be reasonably accurate in order for people to have any idea that distraction is happening and that counter-measures might be necessary.

One thing we found was that people’s judgments of their own learning were fairly well calibrated, with judgements of recall in the quiet and noise conditions mirroring the actual memory data. This is not a surprise because earlier studies, including one by Ellermeier and Zimmer (1997), also showed that, when asked to judge their confidence in their memory, people are aware of when noise is likely to detract from their learning. What is of interest, though, is where this insight comes from. No feedback was given after the memory test (i.e., in neural network terms this was not supervised learning), so it isn’t that participants were able to compare their memory performance in the various conditions to the correct answers. Ellermeier and Zimmer (1997) included in their study a measure of participants’ confidence in their abilities before they ever took the test, and this measure was less well calibrated with actual performance, so this successful metacognitive monitoring does seem to be dependent upon recent experience with these particular distractors and the particular memory test used, rather than being drawn from general knowledge or past experience.

What then is the source of the information used to monitor memory accuracy (and hence the effects of auditory distraction on memory)? In our studies, the same participants experienced learning trials in noise and in quiet in the same sessions, and the lists of items they were required to try to recall were always of the same set length and were recalled by entering them into a physical record (either writing or typing responses). Meta-cognitive monitoring, in other words, could be achieved in many of our experiments by learning the approximate length of the list to be recalled and comparing the physical record of the number of items recalled with this learned number on a trial-by-trial basis. This kind of meta-cognitive monitoring is very much distributed because it relies upon the physical record of the number of items recalled on each trial to make the appropriate comparison. Is there any evidence that something like this is actually happening? An (as yet unpublished) experiment of ours provides a tantalising hint: if you ask people to write down the words they recall but give one group a standard pen to do so and another group a pen filled with invisible ink (so both groups are writing their recall, but only one is able to see the results), then it appears that monitoring is impaired in the latter case – suggesting (perhaps) that meta-cognition under distraction benefits from distributing some of the relevant knowledge away from the head and into the world.

References:

Baddeley, A. D. (2007). Working memory, thought and action. Oxford: Oxford University Press.

Beaman, C. P., Hanczakowski, M., & Jones, D. M. (2014). The effects of distraction on metacognition and metacognition on distraction: Evidence from recognition memory. Frontiers in Psychology, 5, 439.

D’Esposito, M. (2007) From cognitive to neural models of working memory. Philosophical Transactions of the Royal Society B: Biological Sciences, 362, 761-772.

Ellermeier, W. & Zimmer, K. (1997). Individual differences in susceptibility to the “irrelevant sound effect.” Journal of the Acoustical Society of America, 102, 2191-2199.

Henson, R. N. A. (2001). Neural working memory. In: J. Andrade (Ed.) Working memory in perspective. Hove: Psychology Press.

Miller, G. A., Galanter, E. & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt.

Smith, E. E. (2000). Neural bases of human working memory. Current Directions in Psychological Science, 9, 45-49.


Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations? (Part Two)

By Michael J. Serra, Texas Tech University

Part II: Fluency in the Classroom

In the first part of this post, I discussed laboratory-based research demonstrating that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. In this part, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

Are Fluency Manipulations Realistic?

Researchers who obtain effects of perceptual fluency on learners’ metacognitive self-evaluations in the laboratory suggest that similar effects might also obtain for students in real-life learning and study situations. In such cases, students might study inappropriately or inefficiently (e.g., under-studying when they experience a sense of fluency or over-studying when they experience a sense of disfluency). But to what extent should we be worried that any naturally-occurring differences in processing fluency might affect our students in actual learning situations?

Look at the accompanying figure. This figure presents examples of several ways in which researchers have manipulated visual processing fluency to demonstrate effects on participants’ judgments of their learning. When was the last time you saw a textbook printed in a blurry font, or featuring an upside-down passage, or involving a section where pink text was printed on a yellow background? When you present in-person lectures, do your PowerPoints feature any words typed in aLtErNaTiNg CaSe? (Or, in terms of auditory processing fluency, do you deliver half of the lesson in a low, garbled voice and half in a loud, booming voice?) You would probably – and purposefully – avoid such variations in processing fluency when presenting to or creating learning materials for your students. Yet, even in the laboratory with these exaggerated fluency manipulations, the effects of perceptual fluency on both learning and metacognitive monitoring are often small (i.e., small differences between conditions). Put differently, it takes a lot of effort and requires very specific, controlled conditions to obtain effects of fluency on learning or metacognitive monitoring in the laboratory.

Will Fluency Effects Occur in the Classroom?

Careful examination of methods and findings from laboratory-based research suggests that such effects are unlikely to occur in real-life situations because of how fragile these effects are in the laboratory. For example, processing fluency only seems to affect learners’ metacognitive self-evaluations of their learning when they experience both fluent and disfluent information; experiencing only one level of fluency usually won’t produce such effects. For instance, participants only judge information presented in a large, easy-to-read font as better learned than information presented in a small, difficult-to-read font when they experience some of the information in one format and some in the other; when they only experience one format, the formatting does not affect their learning judgments (e.g., Magreehan et al., 2015; Yue et al., 2013). The levels of fluency – and, perhaps more importantly, disfluency – must also be fairly distinguishable from each other to have an effect on learners’ judgments. For example, consider the example formatting in the accompanying figure: learners must notice a clear difference in formatting and in their experience of fluency across the formats for the formatting to affect their judgments. Learners likely must also have limited time to process the disfluent information; if they have enough time to process the disfluent information, the effects on both learning and on metacognitive judgments disappear (cf. Yue et al., 2013; but see Magreehan et al., 2015). Perhaps most important, the effects of fluency on learning judgments are easiest to obtain in the laboratory when the learning materials are low in authenticity or do not have much natural variation in intrinsic difficulty. For example, participants will base their learning judgments on perceptual fluency when all of the items they are asked to learn are of equal difficulty, such as pairs of unrelated words (e.g., “CAT – FORK”, “KETTLE – MOUNTAIN”), but they ignore perceptual fluency once there is a clear difference in difficulty, such as when related word pairs (e.g., “FLAME – FIRE”, “UMBRELLA – RAIN”) are also part of the learning materials (cf. Magreehan et al., 2015).

Consider a real-life example: perhaps you photocopied a magazine article for your students to read, and the image quality of that photocopy was not great (i.e., disfluent processing fluency). We might be concerned that the poor image quality would lead students to incorrectly judge that they have not understood the article, when in fact they had been able to comprehend it quite well (despite the image quality). Given the evidence above, however, this instance of processing fluency might not actually affect your students’ metacognitive judgments of their comprehension. Students in this situation are only being exposed to one level of fluency (i.e., just disfluent formatting), and the level of disfluency might not be that discordant from the norm (i.e., a blurry or dark photocopy might not be that abnormal). Further, students likely have ample time to overcome the disfluency while reading (i.e., assuming the assignment was to read the article as homework at their own pace), and the article likely contains a variety of information besides fluency that students can use for their learning judgments (e.g., students might use their level of background knowledge or familiarity with key terms in the article as more-predictive bases for judging their comprehension). So, despite the fact that the photocopied article might be visually disfluent – or at least might produce some experience of disfluency – it would not seem likely to affect your students’ judgments of their own comprehension.

In summary, at present it seems unlikely that the experience of perceptual processing fluency or disfluency will affect students’ metacognitive self-evaluations of their learning in actual learning or study situations. Teachers and designers of educational materials might of course strive by default to present all information to students clearly and in ways that are perceptually fluent, but it seems premature – and perhaps even unnecessary – for them to worry about rare instances where information is not perceptually fluent, especially if there are counteracting factors such as students having ample time to process the material, there being only one level of fluency, or students having other information upon which to base their judgments of learning.

Going Forward

The question of whether or not laboratory findings related to perceptual fluency will transfer to authentic learning situations certainly requires further empirical scrutiny. At present, however, the claim that highly-contrived effects of perceptual fluency on learners’ metacognitive judgments will also impair the efficacy of study behaviors in more naturalistic situations seems unfounded and unlikely.

Researchers might be wise to abandon the examination of highly-contrived fluency effects in the laboratory and instead examine more realistic variations in fluency in more natural learning situations to see if such conditions actually matter for students. For example, Carpenter and colleagues (Carpenter et al., in press; Carpenter et al., 2013) have been examining the effects of a factor they call instructor fluency – the ease or clarity with which information is presented – on learning and judgments of learning. Importantly, this factor is not perceptual fluency, as it does not involve purported variations in perceptual processing. Rather, instructor fluency invokes the sense of clarity that learners experience while processing a lesson. In experiments on this topic, students watched a short video-recorded lesson taught by either a confident and well-organized (“fluent”) instructor or a nervous and seemingly disorganized (“disfluent”) instructor, judged their learning from the video, and then completed a test over the information. Much as in research on perceptual fluency, participants judged that they learned more from the fluent instructor than from the disfluent one, even though test performance did not differ by condition.

These findings related to instructor fluency do not validate those on perceptual fluency. Rather, I would argue that they actually add further nails to the coffin of perceptual fluency. There are bigger problems out there besides perceptual fluency that we could be worrying about in order to help our students learn and help them make accurate metacognitive judgments. Perhaps instructor fluency is one of those problems, and perhaps it isn’t. But it seems that perceptual fluency is not a problem we should be greatly concerned about in realistic learning situations.

References

Carpenter, S. K., Mickes, L., Rahman, S., & Fernandez, C. (in press). The effect of instructor fluency on students’ perceptions of instructors, confidence in learning, and actual learning. Journal of Experimental Psychology: Applied.

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20, 1350-1356.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advanced online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.



Part One: Does Processing Fluency Really Matter for Metacognition in Actual Learning Situations?

By Michael J. Serra, Texas Tech University

Part I: Fluency in the Laboratory

Much recent research demonstrates that learners judge their knowledge (e.g., memory or comprehension) to be better when information seems easy to process and worse when information seems difficult to process, even when eventual test performance is not predicted by such experiences. Laboratory-based researchers often argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. In the present post, I review these effects obtained in the laboratory. In the second part of this post, I question whether these outcomes are worth worrying about in everyday, real-life learning situations.

What is Processing Fluency?

Have you ever struggled to hear a low-volume or garbled voicemail message, or struggled to read small or blurry printed text? Did you experience some relief after raising the volume on your phone or putting on your reading glasses and trying again? What if you didn’t have your reading glasses with you at the time? You might still be able to read the small printed text, but it would take more effort and might literally feel more effortful than if you had your glasses on. Would the feeling of effort you experienced while reading without your glasses affect your appraisal of how much you liked or how well you understood what you read?

When we process information, we often have a co-occurring experience of processing fluency: the ease or difficulty we experience while physically processing that information. Note that this experience is technically independent of the innate complexity of the information itself. For example, an intricate and conceptually-confusing physics textbook might be printed in a large and easy-to-read font (high difficulty, perceptually fluent), while a child might express a simple message to you in a voice that is too low to be easily understood over the noise of a birthday party (low difficulty, perceptually disfluent).

Fluency and Metacognition

Certainly, we know that the innate complexity of learning materials is going to relate to students’ acquisition of new information and eventual performance on tests. Put differently, easy materials will be easy for students to learn and difficult materials will be difficult for students to learn. And it turns out that perceptual disfluency – difficulty processing information – can actually improve memory under some limited conditions (for a detailed examination, see Yue et al., 2013). But how does processing fluency affect students’ metacognitive self-evaluations of their learning?

In the modal laboratory-based examination of metacognition (for a review, see Dunlosky & Metcalfe, 2009), participants study learning materials (these might be simple memory materials or complex reading materials), make explicit metacognitive judgments in which they rate their learning or comprehension for those materials, and then complete a test over what they’ve studied. Researchers can then compare learners’ judgments to their test performance in a variety of ways to determine the accuracy of their self-evaluations (for a review, see Dunlosky & Metcalfe, 2009). As you might know from reading other posts on this website, we usually want learners to accurately judge their learning so they can make efficient decisions on how to allocate their study time or what information to focus on when studying. Any factor that can reduce that accuracy is likely to be problematic for ultimate test performance.
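
The post does not specify a particular accuracy measure, but one common choice in this literature for relative accuracy is the Goodman-Kruskal gamma correlation between item-by-item judgments and test outcomes. A minimal sketch, with hypothetical data:

```python
# Goodman-Kruskal gamma over all item pairs: +1 means judgments perfectly
# track relative performance, 0 means no relationship, -1 means judgments
# are inversely related to performance (ties are ignored).

def gamma(judgments, performance):
    concordant = discordant = 0
    n = len(judgments)
    for i in range(n):
        for j in range(i + 1, n):
            dj = judgments[i] - judgments[j]
            dp = performance[i] - performance[j]
            if dj * dp > 0:
                concordant += 1
            elif dj * dp < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0          # all pairs tied on one variable or the other
    return (concordant - discordant) / (concordant + discordant)

# e.g., judgments of learning on a 0-100 scale and recall scored 1/0 per item
print(gamma([80, 60, 40, 20], [1, 1, 0, 0]))   # 1.0: accurate relative ordering
```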

Metacognition researchers have examined how fluency affects participants’ judgments of their learning in the laboratory. The figure in this post includes several examples of ways in which researchers have manipulated the visual perceptual fluency of learning materials (i.e., memory materials or reading materials) to be perceptually disfluent compared to a fluent condition. These manipulations involving visual processing fluency include presenting learning materials in an easy-to-read versus difficult-to-read typeface, either by literally blurring the font (Yue et al., 2013) or by adjusting the colors of the words and background to make them easy versus difficult to read (Werth & Strack, 2003), in an upside-down versus right-side-up typeface (Sungkhasettee et al., 2011), and using normal capitalization versus capitalizing every other letter (Mueller et al., 2013). (A conceptually similar manipulation for auditory perceptual fluency might include making the volume high versus low, or the auditory quality clear versus garbled.)

A wealth of empirical (mostly laboratory-based) research demonstrates that learners typically judge perceptually-fluent learning materials to be better-learned than perceptually-disfluent learning materials, even when learning (i.e., later test performance) is the same for the two sets of materials (e.g., Magreehan et al., 2015; Mueller et al., 2013; Rhodes & Castel, 2008; Susser et al., 2013; Yue et al., 2013). Although there is a current theoretical debate as to why processing fluency affects learners’ metacognitive judgments of their learning (i.e., Do the effects stem from the experience of fluency or from explicit beliefs about fluency?, see Magreehan et al., 2015; Mueller et al., 2013), it is nevertheless clear that manipulations such as those in the figure can affect how much students think they know. In terms of metacognitive accuracy, learners are often misled by feelings of fluency or disfluency that are neither related to their level of learning nor predictive of their future test performance.

As I previously noted, laboratory-based researchers argue that the misuse of such experiences as the basis for learners’ self-evaluations can produce metacognitive illusions and lead to inefficient study. But, this question has yet to receive much empirical scrutiny in more realistic learning situations. I explore the possibility that such effects will also obtain with realistic learning situations in the second part of this post.

References

Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA US: Sage Publications, Inc.

Magreehan, D. A., Serra, M. J., Schwartz, N. H., & Narciss, S. (2015, advanced online publication). Further boundary conditions for the effects of perceptual disfluency on judgments of learning. Metacognition and Learning.

Mueller, M. L., Tauber, S. K., & Dunlosky, J. (2013). Contributions of beliefs and processing fluency to the effect of relatedness on judgments of learning. Psychonomic Bulletin & Review, 20, 378-384.

Rhodes, M. G., & Castel, A. D. (2008). Memory predictions are influenced by perceptual information: evidence for metacognitive illusions. Journal of Experimental Psychology: General, 137, 615-625.

Sungkhasettee, V. W., Friedman, M. C., & Castel, A. D. (2011). Memory and metamemory for inverted words: Illusions of competency and desirable difficulties. Psychonomic Bulletin & Review, 18, 973-978.

Susser, J. A., Mulligan, N. W., & Besken, M. (2013). The effects of list composition and perceptual fluency on judgments of learning (JOLs). Memory & Cognition, 41, 1000-1011.

Werth, L., & Strack, F. (2003). An inferential approach to the knew-it-all-along phenomenon. Memory, 11, 411-419.

Yue, C. L., Castel, A. D., & Bjork, R. A. (2013). When disfluency is—and is not—a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. Memory & Cognition, 41, 229-241.


Unskilled and Unaware: A Metacognitive Bias

by John R. Schumacher, Eevin Akers, & Roman Taraban (all from Texas Tech University).

In 1995, McArthur Wheeler robbed two Pittsburgh banks in broad daylight, with no attempt to disguise himself. When he was arrested that night, he objected, “But I wore the juice.” Because lemon juice can be used as an invisible ink, Wheeler thought that rubbing his face with lemon juice would make it invisible to the surveillance cameras in the banks. Kruger and Dunning (1999) used Wheeler’s story to exemplify a metacognitive bias through which relatively unskilled individuals overestimate their skill, being both unaware of their ineptitude and holding an inflated sense of their knowledge or ability. This is called the Dunning-Kruger effect, and it also seems to apply to some academic settings. For example, Kruger and Dunning found that some students are able to accurately predict their performance prior to taking a test: they predict that they will do well on the test and actually perform well on it. Other students predict that they will do well on a test but do poorly; these students have an inflated sense of how well they will do, and thus fit the Dunning-Kruger effect. Because these students’ predictions do not match their performance, we describe them as poorly calibrated. Good calibration involves metacognitive awareness. This post explores how note taking relates to calibration and metacognitive awareness.

Some of the experiments in our lab concern the benefits of note taking. In these experiments, students were presented with a video of a college lecture. Note takers recalled more than non-notetakers, who simply watched the video (Jennings & Taraban, 2014). The question we explored was whether good note taking skills improved students’ calibration of how much they know and thereby reduced the unskilled and unaware effect reported by Kruger and Dunning (1999).

In one experiment, participants watched a 30-minute video lecture while either taking notes (notetakers) or simply viewing the video (non-notetakers). They returned 24 hours later. They predicted the percentage of information they believed they would recall, using a scale of 0 to 100, and then took a free-recall test, without being given an opportunity to study their notes or mentally review the prior day’s video lecture. They then studied their notes (notetakers) or mentally reviewed the lecture (non-notetakers) from the previous day for 12 minutes, and took a second free-recall test. In order to assess the Dunning-Kruger effect, we subtracted the actual percentage of lecture material recalled on each test (0 to 100) from participants’ predictions of how much they would recall on each test (0 to 100). For example, if a participant predicted he or she would correctly recall 75% of the material on a test and actually recalled 50%, the calibration score would be +25 (75 – 50 = 25). Values close to +100 indicated extreme overconfidence, values close to -100 indicated extreme underconfidence, and values close to 0 indicated good calibration. To answer our question about how note taking relates to calibration, we compared the calibration scores for the two groups (note takers and non-notetakers) for the two situations (before reviewing notes or reflecting, and after reviewing notes or reflecting). These analyses indicated that the two groups did not differ in calibration on the first free-recall test. However, to our surprise, note takers became significantly more overconfident, and thus less calibrated in their predictions, than non-notetakers on the second test. After studying, notetakers’ calibration became worse.
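For readers who want the arithmetic spelled out, here is a minimal Python sketch of the calibration score described above. The function name and the example values are illustrative only; they are not taken from the study’s data.

    def calibration_score(predicted_pct: float, recalled_pct: float) -> float:
        """Calibration = predicted recall (0-100) minus actual recall (0-100).

        Values near +100 indicate extreme overconfidence, values near -100
        indicate extreme underconfidence, and values near 0 indicate good calibration.
        """
        return predicted_pct - recalled_pct

    # Hypothetical participant: predicts 75% but actually recalls 50%.
    print(calibration_score(75, 50))  # 25 -> overconfident by 25 points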

Note taking increases test performance. So why doesn’t note taking improve calibration? Since note takers are more “skilled”, that is, have encoded and stored more information from the lecture, shouldn’t they be more “aware”, that is, better calibrated, as the Dunning-Kruger effect would imply? One possible explanation is that studying notes immediately increases the amount of information processed in working memory. The information that participants will be asked to recall shortly is highly active and available. This sense of availability produces the inflated (and false) prediction that much information will be remembered on the test. Is this overconfidence harmful to the learner? It could be harmful to the extent that individuals often self-generate predictions of how well they will do on a test in order to self-regulate their study behaviors. Poor calibration of these predictions could lead to the individual failing to recognize that he or she requires additional study time before all material is properly stored and able to be recalled.

If note taking itself is not the problem, then is there some way students can improve their calibration after studying in order to better regulate subsequent study efforts? The answer is “yes.” Research has shown that predictions of future performance improve if there is a short delay between studying information and predicting subsequent test performance (Thiede, Dunlosky, Griffin, & Wiley, 2005). To improve calibration, then, students should be encouraged to wait after studying their notes before judging whether they need additional study time. To improve metacognitive awareness with respect to calibration, students need to understand that immediate judgments of how much they know may be inflated, and that waiting a short time before judging whether they need more study will result in more effective self-regulation of study time.

References
Jennings, E., & Taraban, R. (2014, May). Note-taking in the modern college classroom: Computer, paper and pencil, or listening? Paper presented at the Midwestern Psychological Association (MPA), Chicago, IL.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Thiede, K. W., Dunlosky, J., Griffin, T. D., & Wiley, J. (2005). Understanding the delayed-keyword effect on metacomprehension accuracy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(6), 1-25.


When is Metacognitive Self-Assessment Skill “Good Enough”?

by Ed Nuhfer, Retired Professor of Geology and Director of Faculty Development and Director of Educational Assessment, enuhfer@earthlink.net, 208-241-5029 (with Steve Fleisher, CSU Channel Islands; Christopher Cogan, Independent Consultant; Karl Wirth, Macalester College; and Eric Gaze, Bowdoin College)

In Nuhfer, Cogan, Fleisher, Gaze, and Wirth (2016), we noted the statement by Zell and Krizan (2014, p. 111) that “…it remains unclear whether people generally perceive their skills accurately or inaccurately.” In that paper, we showed why innumeracy is a major barrier to the understanding of metacognitive self-assessment.

Another barrier to progress is that scholars who separately attempt quantitative measures of self-assessment have no common ground from which to communicate and compare results. This occurs because there is no consensus on what constitutes “good enough” versus “woefully inadequate” metacognitive self-assessment skill. Does overestimating one’s competence by 5% allow labeling a person as “overconfident”? We do not believe so. We think that a reasonable range must be exceeded before such labels should be considered to apply.

The five of us are now working on a sequel to our Numeracy paper cited above. In the sequel, we interpret the data, taken from 1154 paired measures, from a behavioral science perspective. This extends our first paper’s description of the data through graphs and numerical analyses. Because we had a database of over a thousand participants, we decided to use it to propose the first classification scheme for metacognitive self-assessment. It defines categories based on the magnitudes of self-assessment inaccuracy (Figure 1).

Figure 1. Draft of a proposed classification scheme for metacognitive self-assessment, based upon the magnitude of inaccuracy of self-assessed competence: the difference, in percentage points (ppts), between ratings of self-assessed competence and scores from tests of actual competence, both expressed as percentages.

If you wonder where the “good” definition comes from in Figure 1, we disclosed on page 19 of our Numeracy paper: “We designated self-assessment accuracies within ±10% of zero as good self-assessments. We derived this designation from 69 professors self-assessing their competence, and 74% of them achieving accuracy within ±10%.”
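As a rough illustration of how that cut-point works, here is a small Python sketch (added for illustration; the code is not from the authors’ paper). It computes self-assessment inaccuracy in percentage points and flags it as “good” when it falls within ±10 ppts of zero. The boundaries of the other categories in Figure 1 are not spelled out in this post, so they are not encoded here.

    def self_assessment_inaccuracy(self_rating_pct: float, test_score_pct: float) -> float:
        """Signed difference, in percentage points, between self-assessed and tested competence."""
        return self_rating_pct - test_score_pct

    def is_good_self_assessment(self_rating_pct: float, test_score_pct: float) -> bool:
        """'Good' per the authors' definition: within +/-10 ppts of zero."""
        return abs(self_assessment_inaccuracy(self_rating_pct, test_score_pct)) <= 10

    # Hypothetical student: rates their competence at 82% and scores 75% on the test.
    print(self_assessment_inaccuracy(82, 75))  # 7 ppts
    print(is_good_self_assessment(82, 75))     # True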

The other breaks that designate “Adequate,” “Marginal,” “Inadequate,” and “Egregious” admittedly derive from natural breaks in measures expressed as percentages. Distributing our 1154 participants across these categories, we found that over two-thirds had adequate self-assessment skills, a bit over 21% exhibited inadequate skills, and the remainder fell within the category of “marginal.” Less than 3% qualified, by our definition, as “unskilled and unaware of it.”

These results indicate that the popular perspective found in web searches, which portrays people in general as having grossly overinflated views of their own competence, may be incomplete and perhaps even erroneous. Other researchers are now discovering that the correlations between paired measures of self-assessed competence and actual competence are positive and significant. However, establishing the relationship between self-assessed competence and actual competence appears to require more care in taking the paired measures than many of us researchers earlier suspected.

Do the categories as defined in Figure 1 appear reasonable to other bloggers, or do these conflict with your observations? For instance, where would you place the boundary between “Adequate” and “Inadequate” self-assessment? How would you quantitatively define a person who is “unskilled and unaware of it?” How much should a person overestimate/underestimate before receiving the label of “overconfident” or “underconfident?”

If you have measurements and data, please compare your results with ours before you answer. Data or not, be sure to become familiar with the mathematical artifacts summarized in our January Numeracy paper (linked above) that were mistakenly taken for self-assessment measures in earlier peer-reviewed self-assessment literature.

Our fellow bloggers constitute some of the nation’s foremost thinkers on metacognition, and we value their feedback on how Figure 1 accords with their experiences as we work toward finalizing our sequel paper.


Are Academic Procrastinators Metacognitively Deprived?

By Aaron S. Richmond
Metropolitan State University of Denver

Academic Procrastination: A Brief Overview

One of my favorite articles is Academic Procrastination of Undergraduates: Low Self-Efficacy to Self-Regulate Predicts Higher Levels of Procrastination by Robert M. Klassen, Lindsey L. Krawchuk, and Sukaina Rajani (2007). Klassen and colleagues state that “…the rate for problematic academic procrastination among undergraduates is estimated to be at least 70-95% (Ellis & Knaus, 1977; Steel, 2007), with estimates of chronic or severe procrastination among undergraduates between 20% and 30%” (p. 916). Academic procrastination is “the intentional delay of an intended course of action, in spite of an awareness of negative outcomes” (Steel, 2007, as cited in Klassen et al., 2007, p. 916). Given these statistics, academic procrastination is clearly an issue in higher education, and understanding which factors influence it and relate to its frequency is of utmost importance.

In their 2007 article, Klassen and colleagues conducted two studies to understand the relationship among academic procrastination and self-efficacy, self-regulation, and self-esteem, and then to understand this relationship within “negative procrastinators” (p. 915). In Study 1, they surveyed 261 undergraduate students. They found that academic procrastination was inversely correlated with college/university GPA, self-regulation, academic self-efficacy, and self-esteem. That is, as students’ frequency of academic procrastination went down, their GPA and self-reported scores of self-efficacy, self-esteem, and self-regulation went up. They also found that self-regulation, self-esteem, and self-efficacy predicted academic procrastination.

In Study 2, Klassen and colleagues (2007) were interested in whether there was a difference between negative and neutral procrastinators, that is, whether procrastinating caused a negative effect (e.g., a grade penalty for assignment tardiness) or a neutral effect (e.g., no penalty for assignment tardiness). They surveyed 194 undergraduates and asked students to rate how academic procrastination affected, either positively or negatively, specific academic tasks (reading, research, etc.). They then divided the sample into students who self-reported that academic procrastination affected them negatively in some way and those who reported it affected them positively or neutrally. They found significant differences in GPA, daily procrastination, task procrastination, predicted class grade, actual class grade, and self-reported self-regulation between negative procrastinators and neutral procrastinators. They also found that students most often procrastinated on writing tasks.

So Where Does Metacognition Come into Play?

Because a main factor of their focus was self-regulation, I think Klassen and colleagues’ study gives us great insight into, and promise regarding, the potential role (either causal or predictive) that metacognition plays in academic procrastination. First, in Study 1, they used the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, 1993) to measure self-efficacy for self-regulation. This MSLQ subscale assesses students’ awareness of knowledge and control of cognition (Klassen et al., 2007). It asks questions like “If course materials are difficult to understand, I change the way I read the material.” or “I try to change the way I study in order to fit the course requirements and instructor’s teaching style.” (p. 920). As self-efficacy for self-regulation is a subset of metacognition, it is clear to me that these questions indirectly, if not directly, partially measure elements of metacognition.

This makes me wonder whether the results of Klassen et al.’s study would hold true for other forms of metacognition, such as metacognitive awareness. For example, how does academic procrastination relate to the metacognitive awareness factors that Schraw and Dennison (1994) suggest, such as knowledge of cognition (e.g., declarative knowledge, procedural knowledge, conditional knowledge) versus regulation of cognition (e.g., planning, information management, monitoring, evaluation)? Or, as Klassen et al. did not use the entire battery of measures in the MSLQ, how does academic procrastination relate to other aspects of the MSLQ, such as the Learning Strategies, Help Seeking, and Metacognitive Self-Regulation scales (Pintrich et al., 1993)? Or how might Klassen’s results relate to behavioral measures of metacognition such as calibration, or to the Need for Cognition (Cacioppo & Petty, 1982)? These questions suggest that metacognition could play a very prominent role in academic procrastination.

There Are Always More Questions Than Answers

To my knowledge, researchers have yet to replicate Klassen et al.’s (2007) study with an eye toward investigating whether metacognitive variables predict and mediate rates of academic procrastination. Therefore, I feel I must wrap up this blog (as I always do) with a few questions/challenges/inspirational ideas:

  1. What is the relationship among metacognitive awareness and academic procrastination?
  2. If there is a relationship between metacognition and academic procrastination, are there mediating and moderating variables that contribute to the relationship between metacognition and academic procrastination? For example, critical thinking? Intelligence? Past academic performance? The type of content and experience with this content (e.g., science knowledge)?
  3. Are there specific elements of metacognition (e.g., self-efficacy vs. metacognitive awareness vs. calibration, vs. monitoring, etc.) that predict the frequency of academic procrastination?
  4. Can metacognitive awareness training reduce the frequency of academic procrastination?
  5. If so, what type of training best reduces academic procrastination?

 References

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116.

Ellis, A., & Knaus, W. J. (1977). Overcoming procrastination. New York: New American Library.

Klassen, R. M., Krawchuk, L. L., & Rajani, S. (2008). Academic procrastination of undergraduates: Low self-efficacy to self-regulate predicts higher levels of procrastination. Contemporary Educational Psychology, 33, 915-931. doi:10.1016/j.cedpsych.2007.07.001

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53, 801–813.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460-475.

Steel, P. (2007). The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin, 133, 65–94.


Metacognitive Judgments of Knowing

by Roman Taraban, Ph.D., Dmitrii Paniukov, John Schumacher, and Michelle Kiser, Texas Tech University

“The more you know, the more you know you don’t know.” Aristotle

Students often make judgments of learning (JOLs) when studying. Essentially, they make a judgment about future performance (e.g., a test) based on a self-assessment of their knowledge of studied items. Therefore, JOLs are considered metacognitive judgments. They are judgments about what the person knows, often related to some future purpose. Students’ accuracy in making these metacognitive judgments is academically important. If students make accurate JOLs, they will apply just the right amount of time to mastering academic materials. If students do not devote enough time to study, they will underperform on course assessments. If students spend more time than necessary, they are being inefficient.

As instructors, we would find it helpful to know how accurate students are in making these decisions. There are several ways to measure the accuracy of JOLs. Here we will focus on one of these measures, termed calibration. Calibration is the difference between a student’s JOL related to some future assessment and their actual performance on that assessment. In the study we describe here, college students made JOLs (“On a scale of 0 to 100, what percent of the material do you think you can recall?”) after they read a brief expository text. Actual recall was measured in idea units (IUs) (Roediger & Karpicke, 2006). Idea units are the chunks of meaningful information in the text. Calibration is here defined as JOL – Recalled IUs, or simply, predicted recall minus actual recall. If the calibration calculation leads to a positive number, you are overconfident to some degree; if the result is negative, you are underconfident to some degree. If the result is zero, you are perfectly calibrated in your judgment.
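As a minimal sketch of that definition (the function name and the numbers below are illustrative, not taken from the study), recalled idea units can be converted to a percentage of the 30 possible and subtracted from the JOL:

    TOTAL_IDEA_UNITS = 30  # each passage in the study contained 30 idea units

    def jol_calibration(jol_pct: float, recalled_idea_units: int) -> float:
        """Calibration = JOL (0-100) minus actual recall expressed as a percentage.

        Positive = overconfident, negative = underconfident, zero = perfectly calibrated.
        """
        actual_pct = 100 * recalled_idea_units / TOTAL_IDEA_UNITS
        return jol_pct - actual_pct

    # Hypothetical participant: a JOL of 60 with 12 of 30 idea units recalled.
    print(jol_calibration(60, 12))  # 20.0 -> overconfident by 20 percentage points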

The suggestion from Aristotle (see the quote above) is that gains in how much we know lead us to underestimate how much we know; that is, we will be underconfident. Conversely, when we know little, we may overestimate how much we know; that is, we will be overconfident. Studies using JOLs have found that children are overconfident (predicted recall minus actual recall is positive) (Lipko, Dunlosky, & Merriman, 2009; Was, 2015). Children think they know more than they know, even after several learning trials with the material. Studies with adults have found an underconfidence-with-practice (UWP) effect (Koriat et al., 2002); that is, the more individuals learn, the more they underestimate their knowledge. The UWP effect is consistent with Aristotle’s suggestion. The question we ask here is which of these holds: if you lack knowledge, do your metacognitive judgments reflect overconfidence or underconfidence, and vice versa? Practically, as instructors, if students are poorly calibrated, what can we do to improve their calibration, that is, to recalibrate this metacognitive judgment?

We addressed this question with two groups of undergraduate students, as follows. Forty-three developmental-reading participants were recruited from developmental integrated reading and writing courses offered by the university, including Basic Literacy (n = 3), Developmental Literacy II (n = 29), and Developmental Literacy for Second Language Learners (n = 11). Fifty-two non-developmental participants were recruited from the Psychology Department subject pool. The non-developmental and developmental readers were comparable in mean age (18.3 and 19.8 years, respectively) and number of completed college credits (11.8 and 16.7, respectively), and each sample represented roughly fifteen academic majors. All participants received course credit. The students were asked to read one of two expository passages and to recall as much as they could immediately. The two texts used for the study were each about 250 words in length and had an average Flesch-Kincaid readability score of grade level 8.2. The passages contained 30 idea units each.

To answer our question, we first calculated calibration (predicted recall – actual recall) for each participant. Then we divided the total sample of 95 participants into quartiles based on the number of idea units each participant recalled. The mean proportion of correctly recalled idea units, out of 30 possible, and the standard deviation in each quartile for the total sample were as follows:

Q1: .13 (.07); Q2: .33 (.05); Q3: .51 (.06); Q4: .73 (.09). Using quartile as the independent variable and calibration as the dependent variable, we found that participants were overconfident (predicted recall > actual recall) in all four quartiles. However, there was also a significant decline in overconfidence from Quartile 1 to Quartile 4 as follows: Q1: .51; Q2: .39; Q3: .29; Q4: .08. Very clearly, the participants in the highest quartile were nearly perfectly calibrated, that is, they were over-predicting their actual performance by only about 8%, compared to the lowest quartile, who were over-predicting by about 51%. This monotonic trend of reducing overconfidence and improving calibration was also true when we analyzed the two samples separately:

NON-DEVELOPMENTAL: Q1: .46; Q2: .39; Q3: .16; Q4: .10;

DEVELOPMENTAL: Q1: .57; Q2: .43; Q3: .39; Q4: .13.
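To make the analysis concrete, here is a short Python sketch of the quartile comparison described above. The data are made up, and the simple rank-based split used here may differ in detail from the procedure we actually used.

    import statistics

    def calibration_by_recall_quartile(recall_props, calibrations):
        """Split participants into quartiles by recall, then average calibration per quartile.

        recall_props: proportion of idea units recalled (0-1), one value per participant
        calibrations: predicted recall minus actual recall, one value per participant
        """
        order = sorted(range(len(recall_props)), key=lambda i: recall_props[i])
        n = len(order)
        quartiles = [order[q * n // 4:(q + 1) * n // 4] for q in range(4)]
        return [statistics.mean(calibrations[i] for i in quartile) for quartile in quartiles]

    # Hypothetical data for eight participants.
    recall = [0.10, 0.20, 0.30, 0.35, 0.50, 0.55, 0.70, 0.80]
    calib  = [0.55, 0.45, 0.40, 0.35, 0.30, 0.25, 0.10, 0.05]
    print(calibration_by_recall_quartile(recall, calib))  # approximately [0.5, 0.375, 0.275, 0.075]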

The findings here suggest that Aristotle may have been wrong when he stated that “The more you know, the more you know you don’t know.” Our findings would suggest that the more you know, the more you know you know. That is, calibration gets better the more you know. What is striking here is the vulnerability of weaker learners to overconfidence. It is the learners who have not encoded a lot of information from reading that have an inflated notion of how much they can recall. This is not unlike the children in the Lipko et al. (2009) research mentioned earlier. It is also clear in our analyses that typical college students as well as developmental college students are susceptible to overestimating how much they know.

It is not clear from this study what variables underlie low recall performance. Low background knowledge, limited vocabulary, and difficulty with syntax could all contribute to poor encoding of the information in the text and low subsequent recall. Nonetheless, our data do indicate that care should be taken in assisting students who fall into the lower performance quartiles to make better-calibrated metacognitive judgments. One way to do this might be to ask students to explicitly make judgments about future performance and then encourage them to reflect on the accuracy of those judgments after they complete the target task (e.g., a class test). Koriat et al. (1980) asked participants to give reasons for and against choosing responses to questions before the participants predicted the probability that they had chosen the correct answer. Prompting students to consider the amount and strength of the evidence for their responses reduced overconfidence. Metacognitive exercises like these may lead to better calibration.

References

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 107-118.

Koriat, A., Sheffer, L., & Ma’ayan, H. (2002). Comparing objective and subjective learning curves: Judgments of learning exhibit increased underconfidence with practice. Journal of Experimental Psychology: General, 131, 147–162.

Lipko, A. R., Dunlosky, J., & Merriman, W. E. (2009). Persistent overconfidence despite practice: The role of task experience in preschoolers’ recall predictions. Journal of Experimental Child Psychology, 102(2), 152-166.

Roediger, H., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.

Was, C. (2015). Some developmental trends in metacognition. Retrieved from https://www.improvewithmetacognition.com/some-developmental-trends-in-metacognition/

 


Some Developmental Trends in Metacognition

By Chris Was, PhD; Kent State University

Recently, I have conducted some experiments with students in grades K-6 concerning children’s ability to predict how many simple items they will recall. Although a simple measure, this form of calibration reflects a child’s knowledge of their own memory abilities. This is, at its most basic level, metacognition.

The work in which my collaborators and I are currently engaged builds on the work of Amanda Lipko and colleagues (e.g., Lipko, Dunlosky, & Merriman, 2009). What was most striking about Lipko’s work was the robust overconfidence displayed by preschool children. Granted, there is a large body of literature demonstrating that young children are overconfident in both their physical abilities (e.g., Plumert, 1995) and their cognitive abilities (e.g., Cunningham & Weaver, 1989; Flavell, Friedrichs, & Hoyt, 1970), and much of this work indicates that in preschool children this overconfidence is quite persistent. But Lipko et al. (2009) found that this overconfidence remained even following repeated practice and feedback, specifically salient feedback in which children recalled their own previous performance.

There are several hypotheses, both tested and untested, as to why this overconfidence exists and why it is robust against correction. Perhaps it is wishful thinking (a hypothesis tested by Lipko et al.), perhaps it is a developmental issue, or perhaps it serves as a learning mechanism (children who give up the first time they fail may not learn to succeed at much). In any case, I became interested in the circumstances in which young children are capable of making accurate predictions of their cognitive abilities.

A review of the experimental methodology used by Lipko et al. is warranted. In their 2009 study, Lipko et al. presented young children (mean age of approximately 5 years, 0 months) with pictures of common items. As children were presented with the pictures, they were asked to name them. Each correctly named picture was placed on a board until 10 pictures were on the board. The experimenter then said to the children, “I am going to cover up the pictures,” and asked, “How many do you think you will remember after I cover them?” The children then made a prediction of how many pictures they would remember. Finally, the children attempted to recall the pictures. In a series of experiments, children were overconfident in their ability even after repeated trials and even after correctly recalling their poor performance on previous trials.

Are there circumstances when children are more accurate? The simple answer is “yes.” In a recent experiment (Was & Al-Harthy, 2015) we found that when children complete the Lipko task with unfamiliar items, their predictions of how many items they might remember are significantly lower than for familiar items. This familiarity overconfidence bias is likely due to something similar to the fluency effect. That is, when the pictures are familiar to children, they seem easy to remember, but when the pictures are unfamiliar, children understand that they might be hard to recall later.

We are also investigating developmental trends in the ability to predict recall. Our most interesting finding to date is that calibration (the accuracy of recall predictions) is strongly related to the increase in working memory capacity. Put differently, as the number of items children are able to recall increases, so does their ability to accurately predict the number of items they will recall. Some will argue that this is not an unexpected finding: as working memory capacity increases, the ability to think about one’s own memory should also increase. My response is that it is not clear whether metacognition is directly related to working memory or executive functions. Perhaps a mediating relationship exists. Recent investigations have suggested that performance on many measures of working memory is more dependent on strategy than on cognitive ability. Perhaps metacognition is just good strategy use, or perhaps it is a cognitive ability.

The relationship between recall performance and calibration (the difference between predicted performance and actual performance) supports the hypothesis that metacognition is not a single skill that children either have or lack, but rather a complex of many skills and processes that children acquire through experience and maturation. I suggest that developmental research in metacognition needs to focus on aptitude-by-treatment interactions. Questions such as “What variety of academic activities contribute to the development of metacognition at different stages or levels of cognitive development?” will not only advance our understanding of metacognition but perhaps also show how to help young students develop metacognitive strategies and, perhaps, metacognitive performance.

References

Cunningham, J. G., & Weaver, S. L. (1989). Young children’s knowledge of their memory span: Effects of task and experience. Journal of Experimental Child Psychology, 48, 32–44.

Flavell, J. H., Friedrichs, A. G., & Hoyt, J. D. (1970). Developmental changes in memorization processes. Cognitive Psychology, 1, 324–340.

Lipko, A. R., Dunlosky, J., & Merriman, W. E. (2009). Persistent overconfidence despite practice: The role of task experience in preschoolers’ recall predictions. Journal of Experimental Child Psychology, 103(2), 152-166.

Plumert, J. M. (1995). Relations between children’s overestimation of their physical abilities and accident proneness. Developmental Psychology, 31(5), 866-876. doi: http://dx.doi.org/10.1037/0012-1649.31.5.866

Was, C. A., & Al-Harthy, I. (2015). Developmental differences in overconfidence: When do children understand that attempting to recall predicts memory performance? The Researcher, 27(1), 1-5, Conference Proceedings of the 32nd Annual Conference of the Northern Rocky Mountain Education Research Association.