Metacognition for Guiding Students to Awareness of Higher-level Thinking (Part 1)


by Ed Nuhfer (Contact: 208-241-5029)

When those unfamiliar with “metacognition” first learn the term, they usually hear: “Metacognition is thinking about thinking.” This is a condensation of John Flavell’s (1976) definition: “Metacognition refers to one’s knowledge concerning one’s own cognitive processes or anything related to them…” Flavell’s definition reveals that students cannot engage in metacognition until they first possess a particular kind of knowledge. This reminds us that students do not innately understand what they need to be “thinking about” in the process of “thinking about thinking.” They need explicit guidance.

When students learn in most courses, they engage in a three-component effort toward achieving an education: (1) gaining content knowledge, (2) developing skills (which are usually specific to a discipline), and (3) gaining deeper understanding of the kinds of thinking or reasoning required for mastery of the challenges at hand. The American higher educational system generally does best at helping students achieve the first two. Many students have yet to even realize how these components differ, and few ever receive any instruction on mastering Component 3. Recently, Arum and Roksa (2011) summarized the effectiveness of American undergraduate education in developing students’ capacity for thinking. The record proved dismal and revealed that allowing the first two components to push aside the third produces serious consequences.

This imbalance has persisted for decades. Students often believe that education is primarily about gaining content knowledge—that the major distinction between freshmen and seniors is “Seniors know more facts.” Those who never get past this view will likely acquire a degree without acquiring any significantly increased ability to reason.

We faculty are also products of this imbalanced system, so it is not too surprising to hear so many of us embracing “covering the material” as a primary concern when planning our courses. Truth be told, many of us have so long taught to content and to skills necessary for working within the disciplines that we are less practiced in guiding our students to be reflective on how to improve their thinking. Adding metacognitive components to our assignments and lessons can provide the explicit guidance that students need. However, authoring these components will take many of us into new territory, and we should expect our first efforts to be awkward compared to what we will be authoring after a year of practice. Yet, doing such work and seeing students grow because of our efforts is exciting and very worthwhile. Now is the time to start.

Opportunities for developing metacognitive reflection exist at scales ranging from single-lesson assignments to large-scale considerations. In my first blog for this site, I chose to start with the large-scale considerations of what constitutes development of higher-level thinking skills.


What Research Reveals about Adult Thinking

More than five decades have passed since William Perry distinguished nine stages of thinking produced by successful adult intellectual development (Table 1). The validity of his developmental model seems, in general, firmly established (Journal of Adult Development, 2004). Contained within this model is the story of how effective higher education improves students’ abilities to think and respond to challenges. Knowing this story enables us to be explicit in making students aware of what ought to be happening to them if higher education is actually increasing their capacity for thinking. This research enables us to guide students in what to look for as they engage in the metacognition of understanding their own intellectual development.

Enhanced capacity to think develops over spans of several years. Small but important changes produced at the scale of single quarter or semester-long courses are normally imperceptible to students and instructors alike. Even the researchers who discovered the developmental stages passed through them as students, without realizing the nature of the changes that they were undergoing. For learning that occurs in the shorter period of a college course, it is easier to document measurable changes in learning of disciplinary content and the acquisition of specific skills than it is to assess changes in thinking. Research based on longitudinal studies of interviews with students as they changed over several years finally revealed the nature of these subtle changes and the sequence in which they occur (Table 1).


Table 1: A Summary of Perry’s Stages of Adult Intellectual Development

Stage 1 & 2 thinkers believe that all problems have right and wrong answers, that all answers can be furnished by authority (usually the teacher), and that ambiguity is a needless nuisance that obstructs getting at right answers.
Stage 3 thinkers realize that authority is fallible and does not have good answers for all questions. Thinkers at this stage respond by concluding that all opinions are equally valid and that arguments are just about proponents’ thinking differently. Evidence to the contrary does not change this response.
Stage 4 thinkers recognize that not all challenges have right or wrong answers, but they do not yet recognize frameworks through which to resolve how evidence best supports one among several competing arguments.
Stage 5 thinkers can use evidence. They also accept that evaluations that lead to best solutions can be relative to the context of the situation within which a problem occurs.
Stage 6 thinkers appreciate ambiguity as a legitimate quality of many issues. They can use evidence to explore alternatives. They recognize that the most reasonable answers often depend upon both context and value systems.
Stages 7, 8 and 9 thinkers incorporate metacognitive reflection in their reasoning, and they increasingly perceive how their personal values act alongside context and evidence to influence chosen decisions and actions.

In part 2 of this blog, we will provide metacognitive class exercises that help students to understand what occurs during intellectual development and why they must strive for more than learning content when gaining an education.

About Ed Nuhfer

Ed Nuhfer received his PhD in geology from the University of New Mexico and served as a geologist and researcher in industry and government before starting an academic career. He held tenure as a full professor at four different universities; authored publications on environmental geology, sedimentary geology, geochemistry, petrology, and geoscience education; and served as a mentor for hundreds of geology and reclamation students. He served as a regional/national officer for the American Society for Surface Mining and Reclamation and the American Institute of Mining Engineers, and as national editor for The American Institute of Professional Geologists, from which he received three presidential certificates of merit and the John Galey Sr. Public Service Award. His book, The Citizens' Guide to Geologic Hazards, won a Choice award for "outstanding academic books" from the Association of College and Research Libraries. While on sabbatical in 1988-1989 in Colorado, he discovered faculty development and returned to found one of the first faculty development centers in Wisconsin. He subsequently served as Director of Faculty Development for the University of Wisconsin at Platteville, the University of Colorado at Denver, and Idaho State University, and as Director of Faculty Development and Assessment of Student Learning at California State University Channel Islands. He founded the one-week faculty development program "Boot Camp for Profs," which he directed for nearly twenty years, was a finalist for the national Innovation Award, and received the Faculty Development Innovation Award from POD. He served in his last full-time job as Director of Educational Effectiveness at Humboldt State University "years beyond when I thought I would want to retire" before finally retiring in 2014.
He has authored over a hundred publications in faculty development; served as an invited presenter and featured speaker of workshops for The Geological Society of America, POD, AAC&U, WASC, and Lilly Conferences; and served as an invited presenter of workshops and keynotes on faculty development and assessment for many universities and conferences. He continues to work as a writer and researcher and as a columnist for the National Teaching and Learning Forum, for which he has written Developers' Diary for over twelve years, a column based on the unique theme of using fractals and chaos as a key to understanding teaching and learning. Ed remains a member of the editorial review boards for several journals and publishers and is winding up a seven-year project with colleagues as principal investigator in developing and testing the Science Literacy Concept Inventory.

6 thoughts on “Metacognition for Guiding Students to Awareness of Higher-level Thinking (Part 1)”

  1. Lauren Scharff

    Thanks, Ed, for a thought-provoking post! As someone who regularly works with faculty on SoTL projects, I especially appreciated the following: “Adding metacognitive components to our assignments and lessons can provide the explicit guidance that students need. However, authoring these components will take many of us into new territory, and we should expect our first efforts to be awkward compared to what we will be authoring after a year of practice. Yet, doing such work and seeing students grow because of our efforts is exciting and very worthwhile. Now is the time to start.”

    I do also have a question: Later in the post you state that “Enhanced capacity to think develops over spans of several years. Small but important changes produced at the scale of single quarter or semester-long courses are normally imperceptible to students and instructors alike.” Is this within a “typical” series of courses? What about when an instructor intentionally and pervasively builds in opportunities for students to practice thinking? At my institution we have some evidence that such an approach does make a very measurable difference in aspects of critical thinking as measured by the CAT (Critical Thinking Assessment, a nationally normed, standardized test developed at Tennessee Technological University with NSF funding). I wasn’t sure how efforts such as this fit with the research you cited.

  2. Ed Nuhfer Post author

    Great post, Lauren; thank you for that question: “Is this within a ‘typical’ series of courses?” The answer is “yes.”

    As I noted in the blog post, teaching thinking is not what usually occurs. My own growing Science Literacy Concept Inventory (SLCI) database of about 17,000 undergraduates, graduate students, and faculty includes both “typical” courses and courses in which faculty tried to stress the thinking aspects more. It also includes open-admission and highly selective institutions. Several of the faculty who use the SLCI are also well versed in assessment, and they are able to document immense gains in content, but their pre-post SLCI scores show surprisingly small gains, maybe a one-point difference (a four-point average class gain is huge). These are measurable gains, but statistically they are only marginally significant across single courses.

    Further, correlating the pre-post scores of over a thousand students who had pre-post measures in single courses gave an r of 0.7, confirming that those who score low at the start generally continue to score low at the end, and the same is generally true for those who score high.

    The CLA (Collegiate Learning Assessment) is designed as a measure of ability to think and reason using evidence, and it may be the closest a paper test comes to addressing the intellectual development documented by the Perry model. Institutionally, the CLA correlates highly with SAT and ACT institutional averages at r of about 0.9. Institutionally, the SLCI correlates with ACT and SAT at only slightly less than r = 0.9. These tests thus seem to be capturing an overlapping trait of educational achievement. As Steven Brookfield noted in his 2012 book, science’s way of knowing constitutes one of the traditions of “critical thinking.” We thus have evidence that the SLCI captures ability to think well enough to be useful. If you can find how the CAT correlates with these common assessments (SAT, ACT), that might tell us the degree to which we are looking at tests that measure a common concept of achievement in thinking.

    With our larger database, we can see much beyond single courses. We can see highly significant gains occurring between freshmen and seniors, seniors and graduate students, and graduate students and professors. These are gains reflecting development of abilities that seem not to “wire up” in a 16-week semester. This agrees with the research on adult development, no matter whose model we use (most are mentioned in Module 12). On the Perry scale, the typical high school graduate is at about Stage 3 2/3 and the typical college graduate is at about Stage 4. That is only one-third of a Perry stage gained across 4-5 years of college. Now, think of a degree program as about 120 semester hours, or about 30 or 40 courses. How easy would it be to see the contribution to that 1/3 stage gain from within a single course? Our results are also in accord with the work of Pavelich and Moore (see the reference in that same Module 12), which shows that it takes a curriculum to produce measurable gains across the Perry stages; single courses can contribute only a little. The difference an institution produces between the freshman and senior years thus reveals generally how much that institution is advancing students’ ability to think.
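As a back-of-the-envelope sketch of that arithmetic (the stage figures come from the paragraph above; the course count of 35 is simply a midpoint of "about 30 or 40 courses," not a measurement):

```python
# Rough arithmetic on the Perry-stage figures above (illustrative only).
hs_stage = 3 + 2 / 3          # typical high school graduate, Perry scale
grad_stage = 4.0              # typical college graduate
total_gain = grad_stage - hs_stage   # about one-third of a Perry stage

courses = 35                  # assumed midpoint of "about 30 or 40 courses"
gain_per_course = total_gain / courses

print(f"Total gain across a degree: {total_gain:.3f} Perry stages")
print(f"Average share per course:  {gain_per_course:.4f} Perry stages")
```

A per-course share on the order of a hundredth of a Perry stage sits far below the resolution of anything we can measure within a single course, which is the point of the question above.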

    My point in this blog is that only some courses strive to produce thinking, and that “typical” courses (nationally) do not. If typical courses did strive to produce thinking, we might accelerate the process by fielding institutional curricula that actually teach students to think. We should be able to move students to higher levels over a typical 4-5 year baccalaureate program, maybe to Perry Stages 5 and 6. That would be huge. Giving faculty the capability to expend some effort on teaching thinking is thus something developers really need to emphasize.

  3. Lauren Scharff

    Hello again Ed

    The CAT site at Tenn Tech has several publications and presentations available. One of the 2010 presentations (SACS/COC 2010) shares correlation data on page 15: CAT with ACT (.50), SAT (.52), Academic Profile (.56), GPA (.30), California critical thinking skills test (.65) and the CAAP critical thinking module (.70). So, these correlations are lower than what you report for the CLA and the SCLI, which suggests there is some, but less, overlap in what is being measured. This slide show also gives other information about the CAT: how it was developed, what it aims to assess, and with what it does not correlate (e.g. negative correlation with NSSE question on memorizing information).

    Beyond the CAT facts, your point that typically we see little improvement over 4 years with respect to Perry’s stages is sobering. And one I think we should all ponder – for too many years / decades, we (college faculty) have enjoyed (too?) much autonomy. We rarely discuss with colleagues even in our own departments what we’re doing in our classes and why, much less carry out such conversations across disciplines. However, without some pervasive and connected efforts across courses and disciplines, the college experience will continue to be largely a disconnected series of courses (even within the major), and many opportunities for meaningful development (and later reinforcement of that development) are lost.

    At USAFA we have been working for 4 years on an effort to explicitly connect the development of critical thinking across up to eight core courses in the freshman year. We aim to provide a solid foundation of basic critical thinking skills and awareness for the subsequent courses. It is an ongoing challenge of communication as instructors rotate through the course and as we explore how the very different disciplinary courses can make connections with each others’ efforts to develop critical thinking. No earth-shattering results yet, but good signs we are making progress…

  4. Ed Nuhfer

    Hi Lauren!

    Lauren, good post again–as usual!

    The CAT site you reported might report the correlations at the level of individual scores. In assessment, reported correlations may instead arise from aggregate data such as institutional averages. One example is by Trudy Banta, writing in Assessment Update, March–April 2008, Volume 20, Number 2, p. 3, in an article titled Clothing the Emperor: “The tests being recommended—standardized tests of writing, critical thinking, and analytic reasoning—are first and foremost tests of prior learning, as evidenced by the near-perfect .9 correlation between CLA scores and SAT/ACT scores at the institutional level.”

    If the CAT site in Tennessee were reported instead at the institutional level, the correlations you noted might rise to about .9 too. In my blog contribution I tried to be overtly explicit that we had to use the institutional aggregate data: “Institutionally, the CLA correlates highly with SAT and ACT institutional averages at r of about 0.9. Institutionally, the SLCI correlates with ACT and SAT at only slightly less than r = 0.9. ”

    At the individual level, we had limited data, but if we did a correlation at the individual level with what we do have, this would yield lower r values at about the levels of .4 to .5.

    For social science research like this, these are quite strong correlations. The CLA correlates at about 0.5 with Perry stages diagnosed through interviews scored by trained raters, and the CLA was designed expressly to try to address such reasoning. While that level of correlation does not allow good prediction of any individual’s score on one measure from the other, an r of 0.5 derived from individuals is, given ample data, easily strong enough for seeing differences in aggregate data and for predicting an institution’s average SLCI score from its average ACT score.

    That difference in the convention of reporting correlations derived from the individuals’ scores or from the institutional averages may account for the discrepancies that you noted. In general, all of the tests that you mentioned probably are capturing similar information that reflects the general reasoning/thinking capacities of students.
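The effect of that reporting convention is easy to demonstrate with a small simulation (made-up numbers, not the SLCI, CLA, or CAT data): give each student two noisy test scores driven by the same underlying ability, let institutions differ in average ability, and compare the correlation computed over individuals with the correlation computed over institutional averages.

```python
# Illustrative simulation: correlations computed on institutional averages
# run far higher than correlations computed on individual scores, because
# averaging washes out student-level noise. All parameters are invented.
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n_institutions, n_students = 40, 200
test_a, test_b = [], []          # individual scores, pooled across institutions
means_a, means_b = [], []        # institutional averages

for _ in range(n_institutions):
    inst_ability = random.gauss(0, 1)        # institutions differ in average ability
    a_scores, b_scores = [], []
    for _ in range(n_students):
        ability = inst_ability + random.gauss(0, 1)    # student-level variation
        a_scores.append(ability + random.gauss(0, 1.4))  # test noise, test A
        b_scores.append(ability + random.gauss(0, 1.4))  # test noise, test B
    test_a += a_scores
    test_b += b_scores
    means_a.append(statistics.mean(a_scores))
    means_b.append(statistics.mean(b_scores))

r_individual = pearson_r(test_a, test_b)        # moderate, roughly 0.5
r_institutional = pearson_r(means_a, means_b)   # near-perfect
print(f"individual-level r:    {r_individual:.2f}")
print(f"institution-level r:   {r_institutional:.2f}")
```

With these assumed noise levels the individual-level r lands near 0.5 while the institution-level r approaches 1, which mirrors the gap between the CAT figures Lauren quoted and the institutional .9 correlations discussed above.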

    The Quantitative Literacy/Reasoning Assessment coordinated by Dr. Eric Gaze at Bowdoin College likely captures similar information too. It is proving to be one of the most useful tests for predicting students’ success in college.

    Your observation: “… the college experience will continue to largely be a disconnected series of courses….” accurately describes most of higher education.

    The consequences of focusing our planning and our reward systems at the scale of the course rather than at the scale of the curriculum are greater than most realize.

  5. Pingback: The Stakes: “You’ve Been Mucking With My Mind” | Improve with Metacognition

  6. Pingback: Fostering Metacognition: Right-Answer Focused versus Epistemologically Transgressive - Improve with Metacognition
