Collateral Metacognitive Damage



Why Seeing Others as “The Little Engines that Could” beats Seeing Them as “The Little Engines Who Were Unskilled and Unaware of It”

by Ed Nuhfer, Ph.D., Professor of Geology, Director of Faculty Development, and Director of Educational Assessment, California State Universities (retired)

What is Self-Assessment?

At its root, self-assessment registers as an affective feeling of confidence in one’s ability to perform in the present. We can become consciously mindful of that feeling and begin to distinguish the feeling of being informed by expertise from the feeling of being uninformed. The feeling of ability to rise in the present to a challenge is generally captured by the phrase “I think I can….” Studies indicate that we can improve our metacognitive self-assessment skill with practice.

Quantifying Self-Assessment Skill

Measuring self-assessment accuracy lies in quantifying the difference between a felt competence to perform and a measure of the actual competence demonstrated. However, what at first glance appears to be a simple subtraction has proven a nightmarish challenge for researchers trying to present data clearly and interpret it accurately. I speak of this “nightmare” with personal familiarity. Some colleagues and I recently summarized different kinds of self-assessments, self-assessment’s relationship to self-efficacy, the importance of self-assessment to achievement, and the complexity of interpreting self-assessment measurements (Nuhfer and others, 2016; 2017).
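As a minimal worked illustration (my own sketch, not code from the papers), the basic measure can be expressed as a signed difference, in percentage points, between a self-assessed score and a demonstrated score, both on a 0–100 scale:

```python
def self_assessment_error(felt: float, demonstrated: float) -> float:
    """Signed self-assessment error in percentage points (ppts).

    Positive values indicate overestimation (felt > demonstrated);
    negative values indicate underestimation. Both inputs are assumed
    to be percentage scores on a 0-100 scale.
    """
    return felt - demonstrated


# Example: a person who felt 75% confident but scored 62%
# overestimated by 13 ppts.
print(self_assessment_error(75, 62))  # 13
```

The subtraction itself is trivial; as the paragraph above notes, the difficulty lies in what happens when many such noisy differences are aggregated and graphed.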

Can we or can’t we do it?

The children’s story The Little Engine That Could offers a well-known illustration of the power of positive self-assessment. The throbbing “I think I can, I think I can…” and the success that follows offer an uplifting view of humanity’s ability to succeed. That view is close to the traits of the “Growth Mindset” of Stanford psychologist Carol Dweck (2016). It is certainly more uplifting than an alternative title, “The Little Engine That Was Unskilled and Unaware of It,” which predicts a disappointing ending to “I think I can, I think I can….” The dismal idea that our possible competence is capped by what nature conferred at birth is a close analog to the traits of Dweck’s “Fixed Mindset,” which her research revealed as toxic to intellectual development.

As writers of several Improve with Metacognition blog entries have noted, “Unskilled and Unaware of It” are key words from the title of a seminal research paper (Kruger & Dunning, 1999) that offered one of the earliest credible attempts to quantify the accuracy of metacognitive self-assessment. That paper noted that some people were extremely unskilled and unaware of it. Less than a decade later, psychologists were claiming: “People are typically overly optimistic when evaluating the quality of their performance on social and intellectual tasks” (Ehrlinger and others, 2008). Today, laypersons cite the “Dunning-Kruger Effect” and often use it to label any individual or group that they dislike as “unskilled and unaware of it.” We saw the label being applied wholesale in the 2016 presidential election, not just to the candidates but also to the candidates’ supporters.

Self-assessment and vulnerability

Because self-assessment is about taking stock of ourselves rather than judging others, using the Dunning-Kruger Effect to label others is already on shaky ground. But what are the odds that a label of “unskilled and unaware of it” applied to others is actually correct? While the consensus in the literature of psychology seems to indicate that the odds are good, our investigation of the numeracy underlying that consensus indicates otherwise (Nuhfer and others, 2017).

We think nearly two decades of studies replicated the conclusion that people are “…typically overly optimistic…” because they all relied on variants of a distinctive graphic introduced in the seminal 1999 paper. These graphs generate, from actual data and random numbers alike, artifact patterns that look like a Dunning-Kruger Effect, and the artifacts are easily mistaken for expressions of actual human self-assessment traits.
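The artifact is easy to reproduce. The sketch below is my own minimal reconstruction of the kind of random-number simulation described, not the authors’ code: it generates completely uncorrelated “felt” and “actual” scores, then applies the seminal graphic’s convention of grouping people into quartiles by their measured performance.

```python
import random

random.seed(42)

N = 10_000
# Purely random, uncorrelated scores on a 0-100 scale:
# no real self-assessment trait exists in these data.
felt = [random.uniform(0, 100) for _ in range(N)]
actual = [random.uniform(0, 100) for _ in range(N)]

# The graphic's convention: sort people by *measured* performance,
# split into quartiles, then compare mean felt vs. mean actual
# competence within each quartile.
pairs = sorted(zip(actual, felt))
quartiles = [pairs[i * N // 4:(i + 1) * N // 4] for i in range(4)]

for q, group in enumerate(quartiles, start=1):
    mean_actual = sum(a for a, f in group) / len(group)
    mean_felt = sum(f for a, f in group) / len(group)
    print(f"Quartile {q}: actual ~ {mean_actual:5.1f}, felt ~ {mean_felt:5.1f}")
```

Even though no self-assessment skill (or lack of it) exists in these data, the bottom quartile’s mean felt competence (about 50) far exceeds its mean actual competence (about 12), mimicking “unskilled and unaware of it,” while the top quartile appears correspondingly underconfident.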

After we gained some understanding of the hazards presented by the devilish nature of self-assessment measures, our quantitative results showed that people, in general, have a surprisingly good awareness of their capabilities (Nuhfer and others, 2016, 2017). About half of our studied populace of over a thousand students and faculty accurately self-assessed their performance within ±10 percentage points (ppts), and about two-thirds proved accurate within ±15 ppts. About 25% might qualify as having inadequate self-assessment skills (errors greater than ±20 ppts), but only about 5% of our academic populace might merit the label “unskilled and unaware of it” (overestimated their abilities by 30 ppts or more). The odds are high against a randomly selected person being seriously “unskilled and unaware of it,” and very high against this label being validly applicable to a group.

Others often rise to the expectations we have of them.

Consider the collective effects of accepting beliefs that we and others are “unskilled and unaware of it.” This negative perspective can predispose an organization to accept, as a given, that people are less capable than they really are. Further, those of us with power, such as instructors over students or tenured peers over untenured instructors, should become aware of the term “gaslighting.” In gaslighting, our negatively biased actions or comments can erode the self-confidence of others who accept us as credible, trustworthy, and important to their lives. That influence can lead to lower performance, thereby seeming to substantiate the initial negative perspective. When gaslighting is deliberate, it constitutes a form of emotional abuse.

Aren’t you CURIOUS yet?

Wondering about your self-assessment skills and how they compare with those of novices and experts? Give yourself about 45 minutes and try the self-assessment instrument used in our research at <>. You will receive a confidential report if you furnish your email address at the end of the self-assessment.

Several of us, including our blog founder Lauren Scharff, will be presenting the findings and implications of our recent numeracy studies in August, at the Annual Meeting of the American Psychological Association in Washington DC. We hope some of our fellow bloggers will be able to join us there.


References

Dweck, C. (2016). Mindset: The New Psychology of Success. New York: Ballantine.

Ehrlinger, J., Johnson, K., Banner, M., Dunning, D., and Kruger, J. (2008). Why the unskilled are unaware: Further explorations of absent self-insight among the incompetent. Organizational Behavior and Human Decision Processes 105: 98–121.

Kruger, J. and Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77: 1121–1134.

Nuhfer, E. B., Cogan, C., Fleisher, S., Gaze, E., and Wirth, K. (2016). Random number simulations reveal how random noise affects the measurements and graphical portrayals of self-assessed competency. Numeracy 9 (1): Article 4.

Nuhfer, E. B., Cogan, C., Fleisher, S., Wirth, K., and Gaze, E. (2017). How random noise and a graphical convention subverted behavioral scientists’ explanations of self-assessment data: Numeracy underlies better alternatives. Numeracy 10 (1): Article 4.



About Ed Nuhfer

Ed Nuhfer received his PhD in geology from the University of New Mexico and served as a geologist and researcher in industry and government before starting an academic career. He held tenure as a full professor at four different universities; authored publications on environmental geology, sedimentary geology, geochemistry, petrology, and geoscience education; and mentored hundreds of geology and reclamation students. He served as a regional/national officer for the American Society for Surface Mining and Reclamation and the American Institute of Mining Engineers, and as national editor for The American Institute of Professional Geologists, from which he received three presidential certificates of merit and the John Galey Sr. Public Service Award. His book, The Citizens' Guide to Geologic Hazards, won a Choice award for "outstanding academic books" from the Association of College and Research Libraries. While on sabbatical in 1988-1989 in Colorado, he discovered faculty development and returned to found one of the first faculty development centers in Wisconsin. He subsequently served as Director of Faculty Development for the University of Wisconsin at Platteville, the University of Colorado at Denver, and Idaho State University, and as Director of Faculty Development and Assessment of Student Learning at California State University Channel Islands. He founded the one-week faculty development program "Boot Camp for Profs," which he directed for nearly twenty years, and received recognition as a national Innovation Award Finalist and the Faculty Development Innovation Award from POD. In his last full-time job, he served as Director of Educational Effectiveness at Humboldt State University "years beyond when I thought I would want to retire" before finally retiring in 2014.
He has authored over a hundred publications in faculty development and served as an invited presenter and featured workshop speaker for The Geological Society of America, POD, AAC&U, WASC, and Lilly Conferences, and as an invited presenter of workshops and keynotes on faculty development and assessment for many universities and conferences. He continues to work as a writer and researcher, and as a columnist for the National Teaching and Learning Forum, for which he has written Developers' Diary for over twelve years, a column based on the unique theme of using fractals and chaos as a key to understanding teaching and learning. Ed remains a member of the editorial review boards for several journals and publishers and is winding up a seven-year project with colleagues as principal investigator in developing and testing the Science Literacy Concept Inventory.