by Stephen Chew, Ph.D., Samford University, firstname.lastname@example.org
I like to take my wife out for dinner, but sometimes she insists on going to a place that doesn’t feature a drive-through lane. That’s fine with me because it gives us a chance to see what is trendy in the food world. A few years ago, my wife ordered a salad made with quinoa. We’d vaguely heard of quinoa before, but had never tried it. We really liked it for its nutty taste. If you don’t know, quinoa (typically pronounced KEEN-wah in English and kee-NO-ah in Spanish) is a grain that was first cultivated in the Andes several thousand years ago and has become quite popular for its nutritional value. After we tried it, I decided to buy some and cook it myself. I found it in the store, and next to it was another grain I had only vaguely heard of: farro. Farro (pronounced either FAY-roh or FAR-oh) is also an ancient grain, but it originated in the Mediterranean region around 10,000 years ago. I figured if I was going to try one ancient grain, I might as well try another, so I bought them both. Little did I know that cooking them would be an adventure in good and bad metacognition.
First I cooked the quinoa, and that turned out fine. Next, I tried the farro, and that is where I ran into problems. I followed the directions on the package, but then I realized I had no idea how to tell if the farro was properly cooked. Unlike quinoa, I’d never eaten it before, and I had no concept of what the desired end result was supposed to look or taste like. Was it supposed to have a mushy, al dente, or crunchy texture? I had no idea. Looking at photos and videos of cooked farro didn’t help much. There was nothing in the instructions about how to tell if it was done. For quinoa, I had already eaten some that was, presumably, expertly prepared. Furthermore, the cooking instructions included the helpful note that cooked quinoa becomes translucent and the germ bursts out in a spiral pattern. I had been able to check for that when I cooked it. No such luck with farro. As a result, my wife and I had to decide if we liked farro based on whatever version of it I had cooked.
Now how does this story relate to metacognition? For effective metacognition, students must accurately judge how close or far their understanding is from the desired end goal. How can they do that if they have no concept (or an inaccurate concept) of the desired end goal? Consider self-regulated learning, which incorporates metacognition (Zimmerman, 1990). Pintrich (2004) makes explicit the necessity of students understanding the desired outcome for successful learning when he states that all models of self-regulated learning “assume that there is some type of goal, criterion, or standard against which comparisons are made in order to assess whether the learning process should continue as is or if some type of change is necessary” (p. 387). I’ve certainly made the mistake of believing students understood what the desired outcome of an assignment or activity was, only to find out later (usually on the exam or final paper) that they did not understand the goal at all. I know what I mean when I tell them to use critical thinking, employ sound research methods, or develop sound arguments, but I can’t assume that they know it unless I teach them what I mean and how to recognize when they have achieved it.
Failure to teach students the desired level of understanding is a consequence of the curse of expertise. Because of our expertise, we tend to overestimate our ability to explain concepts thoroughly (Fisher & Keil, 2016), and we underestimate how difficult those concepts are for students to learn (Hinds, 1999). Fortunately, demonstrating to students what the desired understanding or end goal is for a concept is something we can accomplish through formative assessments such as think-pair-shares, “clicker” questions, and worked examples. We can assess their understanding of a concept with a low-stakes activity before the high-stakes assessment, and demonstrate both the end result we are looking for and the strategies we use to achieve it. Not only are such formative assessments useful for students to monitor their understanding, they also help us calibrate our teaching to their understanding.
Recently I read the autobiography of Eric Ripert, a renowned chef. He makes the same point about the importance of understanding the desired outcome in recounting his development as a master chef.
Through repetition and determination to be great (or at least better than good), I began to understand the sauces I was preparing. I started to allow myself to feel my way through them, not just assemble them by rote. I knew when a sauce I had made was delicious—perfectly balanced and deeply flavored. (Ripert & Chambers, 2016, p. 215)
To enable effective metacognition, we must make sure students know what the desired goal is and how to recognize when they have achieved it. It is a lesson I learned from cooking farro that I now apply to my teaching.
Fisher, M., & Keil, F. C. (2016). The curse of expertise: When more knowledge leads to miscalibrated explanatory insight. Cognitive Science, 40, 1251-1269. doi: 10.1111/cogs.12280
Hinds, P. J. (1999). The curse of expertise: The effects of expertise and debiasing methods on predictions of novice performance. Journal of Experimental Psychology: Applied, 5, 205-221. doi: 10.1037/1076-898X.5.2.205
Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16, 385-407. doi: 10.1007/s10648-004-0006-x
Ripert, E., & Chambers, V. (2016). 32 yolks: From my mother’s table to working the line. New York: Random House.
Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25, 3-17. doi: 10.1207/s15326985ep2501_2