Metacognition supports HIP undergraduate research

by Dr. John Draeger, SUNY Buffalo State

The Association of American Colleges and Universities (AAC&U) has identified a number of high-impact learning practices (e.g., undergraduate research, collaborative assignments, learning communities, service learning, study abroad, capstone seminars). Each of these learning practices involves a significant investment of student effort over time, multiple interactions between faculty and students about substantive matters, frequent and constructive feedback from faculty, and regular, structured processes for reflection and integration (Kuh 2008; Kilgo, Sheets & Pascarella 2015). This post offers some strategies for intentionally structuring undergraduate research experiences and building metacognition into the process. Subsequent posts will consider other high-impact practices (HIPs).

Undergraduate research is a HIP because students ask the questions and set the research agenda. Inquiry-based projects, such as undergraduate research, promote student autonomy and self-direction and teach students about the process of inquiry (Healey & Jenkins 2009; Kilgo & Pascarella 2016). Without guidance, however, students can find themselves in a hot mess. After years of mentoring undergraduate research projects in philosophy, I’ve developed the following model to help keep students on track. Elements of this model may seem obvious and common practice. I don’t claim that it is novel, but I offer it as a distillation of some lessons that I’ve learned the hard way.

First, philosophers like to ask the big questions (and they should), but unless topics are reined in, student research can easily devolve into sprawl and sloppy thinking. Thus, I talk with students about topic refinement early and often. I begin student meetings by asking them to give a one-minute “elevator pitch” for their topic. As the topic gets refined, the pitch becomes easier. En route to refining the topic and developing the elevator pitch, I ask a series of critical questions about the underlying conceptual issues. For example, if a student wants to consider what parents owe their children, I will push her to consider the nature of obligation (e.g., human rights, fairness, well-being, character, social roles) and concrete cases that may or may not fall within the scope of that obligation (e.g., providing food, a new bike, college tuition). Prodding students to consider the nature and scope of the obligation prompts them to consider the underlying philosophical substructure, which is what I believe philosophical inquiry is all about (Draeger 2014). However, once students begin making deep conceptual connections, it is easy for a topic to sprawl as students come to believe that each connected idea needs its own separate discussion. Metacognition encourages students to be aware of their own learning process (e.g., research) and to make intentional adjustments based on that awareness. Encouraging students to be aware of the possibility of topic sprawl can help them better evaluate whether their current thinking is moving away from the core issue or towards a better version of it.

Second, all of us are standing on the shoulders of giants. It is good scholarship to acknowledge the original thinking of others through proper citation. However, the research experience should teach students more than how to avoid plagiarism. Undergraduate research offers students the opportunity to become co-inquirers within an existing scholarly conversation. Becoming familiar with the literature allows them to tap into long-standing debates and utilize conceptual distinctions developed by others. As students begin their research, each comes with their own background and dispositions. Some believe they need to read everything on a topic before they venture an opinion. Others are so eager to begin that they skip the literature review and soon find themselves lost without the resources found within the tradition. Metacognition can help students become aware of when they are reading too much or too little, as well as point the way to adjustments in their process.

Third, many students struggle with how to find the relevant source material in philosophy. Even if they know how to use the library, they are often unfamiliar with the idiosyncrasies of philosophy as a discipline. For this reason, I explicitly discuss how to go about doing library work (e.g., how to use library databases, how to conduct keyword searches, how to decide which articles seem promising), reading strategies (e.g., how to read at different speeds to find the articles most deserving of attention, how to read identified articles more carefully, how to annotate a text with an eye towards research), and note-taking strategies (e.g., how to organize summaries, critical questions, conceptual applications, personal reflections). When undergraduate research is embedded in my course, we discuss these strategies in class. When undergraduate research takes the form of an independent project, I discuss these strategies one-on-one. In either case, I encourage students to practice becoming aware of what’s working, what’s not, and when they need to adjust their strategies.

Fourth, my undergraduate research students are required to keep a weekly journal. Students are asked to track pesky questions, troublesome counter-examples, and worrisome objections. Beyond their focus on content, however, students are also asked to focus on their own process, including a sketch of the library, reading, and writing strategies attempted as well as whether those strategies were successful. Journaling about these strategies is another way to encourage metacognitive awareness about the research process and locate opportunities for intentional self-regulation.

Undergraduate research can be a HIP (if implemented well) because it encourages students to learn about the research process on their own terms as well as to produce their own research product. Metacognition helps students monitor whether they are engaged in the sort of deep learning that makes undergraduate research a HIP. Moreover, intentionally structuring metacognitive opportunities can encourage greater learner autonomy and help facilitate inquiry-based research long after undergraduate experiences have officially concluded. In this way, undergraduate research and metacognition can be highly impactful because they support the skills necessary for lifelong learning.

References

Draeger, J. (2014, July 11). Using metacognition to uncover the substructure of moral issues. Retrieved from https://www.improvewithmetacognition.com.

Healey, M., & Jenkins, A. (2009). Developing undergraduate research and inquiry. York: HE Academy.

Kilgo, C. A., Sheets, J. K. E., & Pascarella, E. T. (2015). The link between high-impact practices and student learning: Some longitudinal evidence. Higher Education, 69(4), 509-525.

Kilgo, C. A., & Pascarella, E. T. (2016). Does independent research with a faculty member enhance four-year graduation and graduate/professional degree plans? Convergent results with different analytical methods. Higher Education, 71(4), 575-592.

Kuh, G. D. (2008). Excerpt from high-impact educational practices: What they are, who has access to them, and why they matter. Association of American Colleges and Universities.


Developing Metacognition with Student Learning Portfolios

In IDEA Paper #44, The Learning Portfolio: A Powerful Idea for Significant Learning, Dr. John Zubizarreta shares models and guidance for incorporating learning portfolios. He also makes powerful arguments regarding the ability of portfolios to engage students in meaningful reflection about their learning, which in turn supports metacognitive development and lifelong learning.



Can Reciprocal Peer Tutoring Increase Metacognition in Your Students?

Aaron S. Richmond, Ph. D.

How many of you use collaborative learning in your classroom? If you do, do you specifically use it to increase metacognition in your students? If the answer is yes, you are likely building on the work of Hadwin, Järvelä, and Miller (2011) and Schraw, Crippen, and Hartley (2006). For those of you unfamiliar with collaborative learning, I tend to agree with Slavich and Zimbardo’s (2012) definition: in collaborative learning, students “…tackle problems and questions with peers—especially more knowledgeable peers—insofar as such experiences provide students with opportunities to learn new problem-solving strategies and to debate ideas in a way that challenges their understanding of concepts” (p. 572). There are many ways to use collaborative learning in the classroom: the jigsaw classroom, paired annotations, send-a-problem, think-pair-share, the three-step interview, peer tutoring, numbered heads together, etc. Of particular interest, recent research on collaborative learning suggests that reciprocal peer tutoring may be particularly useful when your goal is not only to teach course material but also to increase your students’ metacognition (De Backer, Van Keer, Moerkerke, & Valcke, 2016).

In their innovative study, De Backer and colleagues (2016) investigated the effects of using reciprocal peer tutoring (RPT) to support and increase metacognitive regulation in higher education. De Backer and colleagues defined RPT as “the structured exchange of the tutor role among peers in groups/pairs…and enables each student to experience the specific benefits derived from providing and receiving academic guidance” (p. 191). De Backer et al. had students complete eight peer tutoring sessions over the course of the semester. All students were trained to be tutors, and each experienced the tutor role, tutoring their peers at least twice. Tutoring sessions were 120 minutes in length and occurred outside of class. The tutor’s role was to manage the tutees and promote collaborative learning. During each tutoring session, the tutees were asked to solve a problem related to the class content. Each problem had three specific components:

(1) An outline of learning objectives to guide peers’ discussion to central course-related topics; (2) a subtask aimed at getting familiar with the theme-specific terminology; and (3) a subtask in which students were instructed to apply theoretical notions to realistic instructional cases. (De Backer et al., 2016, p. 193)

The problems presented often did not have clear-cut answers and required considerable cognitive effort. De Backer et al. video recorded all the tutoring sessions and then scored each session on the amount and type of metacognitive regulation exhibited by both tutors and tutees. For example, they looked at the students’ ability to orient, plan, monitor, and evaluate. They also measured the level of processing (whether the metacognitive strategies were processed shallowly or deeply). Appendix D of De Backer et al.’s article provides examples of how to code metacognitive data; see Table 1 for an example of the scoring (De Backer et al., 2016, p. 41). They then scored the frequency of metacognitive regulations that occurred per session (a small tallying sketch follows Table 1 below).

Table 1. Examples of Lower and Deep Level Metacognitive Regulation in Reciprocal Peer Tutoring by De Backer et al. (2016, pp. 41-42)

Metacognition – Monitoring: Comprehension monitoring

Noting lack of comprehension
T: “Does everyone understand the outlines of instructional behaviorism?”
t1: “I still don’t understand the concept of aptitude.”

Checking comprehension by repeating (LL)
T: “Does everyone agree now that instructional behaviorism and instructional constructivism are opposites?”
t1: “I think (…) because in behaviorism the instructor decides on everything but constructivism is about learners being free to construct their own knowledge.”
t2: “Yes, constructivist learners are much more independent and active, not so?”

Checking comprehension by elaborating (DL)
T: “The behavioristic instructor permanently provides feedback. Who knows why?”
t1: “Is it not to make sure that learners don’t make mistakes?”
t2: “Could that also be the reason why they structure the learning materials extensively? And why they don’t like collaborative learning? Because collaborative learning requires spontaneous discussions between students. You cannot really structure it in advance, not so?”

Note. DL = deep learning, LL = low or shallow learning, T = tutor, t1 and t2 = tutees.
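
To make that per-session frequency scoring concrete, here is a minimal tallying sketch in Python. The event labels and data layout are hypothetical illustrations of the kind of coded data described above, not De Backer et al.’s actual coding scheme or software.

```python
from collections import Counter

# Hypothetical coded events from one tutoring session: each event is a
# (strategy, level) pair recorded by a coder watching the session video.
session_events = [
    ("monitoring", "LL"),   # e.g., checking comprehension by repeating
    ("monitoring", "DL"),   # e.g., checking comprehension by elaborating
    ("orientation", "DL"),
    ("planning", "LL"),
    ("evaluation", "DL"),
]

# Frequency of each regulation strategy in the session.
strategy_counts = Counter(strategy for strategy, _ in session_events)

# Frequency of low-level (LL) vs. deep-level (DL) processing.
level_counts = Counter(level for _, level in session_events)

print(strategy_counts)  # Counter({'monitoring': 2, 'orientation': 1, ...})
print(level_counts)     # Counter({'DL': 3, 'LL': 2})
```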

De Backer and colleagues (2016) found that as the semester progressed, students engaged in more and more metacognitive regulatory processes. Specifically, their orientation, monitoring, and evaluation all increased (in general, frequencies were three times greater at the end of the semester than at the beginning). Planning, however, stagnated, remaining infrequent throughout the semester. Far more interesting was that, over the course of the semester, students decreased their use of shallow or low-level metacognitive strategies and increased their use of deep-level metacognitive strategies. Increases in metacognitive regulation occurred across most types of metacognitive strategies (e.g., regulation, orientation, activating prior knowledge, task analysis, monitoring, and evaluation).

As demonstrated by De Backer and colleagues’ study and the work of other researchers (e.g., King, 1997; De Backer, Van Keer, & Valcke, 2012), RPT and other collaborative learning instructional methods may be useful in increasing students’ metacognitive processes.

Concluding Thoughts and Questions for You

After reading De Backer et al. (2016), I was fascinated by the possible use of RPT in my own classroom. So, I started to think about how to implement it myself. Some questions arose that I thought you might help me with:

  1. How do I specifically scaffold the use of RPT in my classroom? More specifically, what does a successful RPT session look like? Fortunately, De Backer and colleagues did provide an appendix to their study (Appendix C) that gives an example of what a tutoring session may look like.
  2. How many tutoring sessions are enough to increase metacognition in my students? De Backer et al. had eight sessions. That would be difficult for me to squeeze into my course planning. Would 3-4 be enough? What do you think? But then not all students could be a tutor. Do they get more (metacognitively) out of being a tutor vs. a tutee? This is something that De Backer and colleagues did not analyze. (Hint, hint, all you folks—SoTL project in the making;)
  3. De Backer et al. briefly described that the tutors had a 10-page manual on how to be a tutor. Hmm…I don’t know if my students would be able to effectively learn from this. What other simple ways might we use to teach students how to be effective tutors in the context of RPT?
  4. Finally, are you doing anything like De Backer et al.? And if so, do you think it is improving your students’ metacognitive regulation?

References

De Backer, L., Van Keer, H., Moerkerke, B., & Valcke, M. (2016). Examining evolutions in the adoption of metacognitive regulation in reciprocal peer tutoring groups. Metacognition and Learning, 11, 187-213. doi:10.1007/s11409-015-9141-7

De Backer, L., Van Keer, H., & Valcke, M. (2012). Exploring the potential impact of reciprocal peer tutoring on higher education students’ metacognitive knowledge and metacognitive regulation. Instructional Science, 40, 559–588.

Hadwin, A. F., Järvelä, S., & Miller, M. (2011). Self-regulated, co-regulated, and socially shared regulation of learning. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 65–84). New York: Routledge.

King, A. (1997). ASK to THINK—TEL WHY: A model of transactive peer tutoring for scaffolding higher level complex learning. Educational Psychologist, 32, 221–235.

Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education: metacognition as part of a broader perspective on learning. Research in Science Education, 36, 111–139.

Slavich, G. M., & Zimbardo, P. G. (2012). Transformational teaching: Theoretical underpinnings, basic principles, and core methods. Educational Psychology Review, 24, 569-608. doi:10.1007/s10648-012-9199-6


The Challenge of Deep Learning in the Age of LearnSmart Course Systems

by Lauren Scharff, Ph.D. (U. S. Air Force Academy)

One of my close friends and colleagues can reliably be counted on to point out that students are rational decision makers. There is only so much time in their days, and they have full schedules. If there are ways for students to spend less time per course and still “be successful,” they will find them. Unfortunately, their efficient choices may short-change their long-term, deep learning.

This tension between efficiency and deep learning was again brought to my attention when I learned about the “LearnSmart” (LS) text application that automatically comes with the e-text chosen by my department for the core course I’m teaching this semester. On the plus side, the publisher has incorporated learning science (metacognitive prompts and spacing of review material) into the design of LearnSmart. Less positively, some aspects of the LearnSmart design seem to lead many students to choose efficiency over deep learning.

In a nutshell, the current LS design prompts learning shortcuts in several ways. Pre-highlighted text discourages reading of non-highlighted material, and the fact that the LS quiz questions primarily come from highlighted material reinforces those selective reading tendencies. A less conspicuous learning trap results from the design of the LS quiz credit algorithm that incorporates the metacognitive prompts. The metacognitive prompts not only take a bit of extra time to answer; students also only get credit for completing questions for which they indicate good understanding of the question material. If they indicate questionable understanding, even if they ultimately answer correctly, that question does not count toward the required number of pre-class reading check questions. [If you’d like more details about the LS quiz process design, please see the text at the bottom of this post.]

Last semester, the fact that many of our students were choosing efficiency over deep learning became apparent when the first exam was graded. Despite very high completion of the LS pre-class reading quizzes and lively class discussions, exam grades were on average more than a letter grade lower than in previous semesters.

The bottom line is that, just like teaching tools, learning tools are only effective if they are used in ways that align with objectives. As instructors, our objectives typically center on student learning (hopefully deep learning in most cases). Students’ objectives might seem to be correlated with learning (e.g., grades) or not (e.g., what is the fastest way to complete this assignment?). If we instructors design our courses or choose activities that allow students to efficiently (quickly) complete them while also obtaining good grades, then we are inadvertently supporting shortcuts around real learning.

So, how do we tackle our efficiency-shortcut challenge as we go into this new semester? There is a tool that the publisher offers to help us track student responses by levels of self-reported understanding and correctness. We can see if any students are showing the majority of their responses in the “I know it” category. If many of those are also incorrect, it’s likely that they are prioritizing short-term efficiency over long-term learning, and we can talk to them one-on-one about their choices. That’s helpful, but it’s reactive.
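As a rough illustration of how that tracking could be used, the sketch below flags students whose responses are mostly “I know it” yet frequently incorrect. The data layout and thresholds are hypothetical; the publisher’s actual report format is not shown here.

```python
# Hypothetical per-student records of (confidence, answered_correctly).
responses = {
    "student_a": [("I know it", False), ("I know it", False), ("I know it", True)],
    "student_b": [("Unsure", True), ("Think so", True), ("I know it", True)],
}

def shows_shortcut_pattern(records, min_know_share=0.5, max_accuracy=0.5):
    """Flag a student who mostly claims 'I know it' but is often wrong --
    a hint of prioritizing completion credit over learning."""
    know = [correct for conf, correct in records if conf == "I know it"]
    if not know:
        return False
    know_share = len(know) / len(records)   # share of "I know it" responses
    accuracy = sum(know) / len(know)        # accuracy on those responses
    return know_share >= min_know_share and accuracy <= max_accuracy

flagged = [name for name, recs in responses.items() if shows_shortcut_pattern(recs)]
print(flagged)  # ['student_a']
```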

The real question is, how do we get students to consciously prioritize their long-term learning over short-term efficiency? For that, I suggest additional explicit discussion and another layer of metacognition. I plan to regularly check in with the students, hold class discussions aimed at bringing their choices about their learning behaviors into conscious awareness, and positively reinforce their self-regulation of deep-learning behaviors.

I’ll let you know how it goes.

——————————————–

Here is some additional background on the e-text and the complimentary LearnSmart (LS) application.

There are two ways to access the text. One is an electronic version of the printed text, including nice annotation capabilities for students who want to underline, highlight, or take notes. The second way to access the text is through the LS chapters. As mentioned above, when students open these chapters, they find that some of the text has already been highlighted for them!

As they read through the LS chapters, students are periodically prompted with some LS quiz questions (primarily from highlighted material). These questions are where some of the learning science comes in. Students are given a question about the material. But, rather than being given the multiple choice response options right away, they are first given a metacognitive prompt. They are asked how confident they are that they know the answer to the question without seeing the response options. They can choose “I know it,” “Think so,” “Unsure,” or “No idea.” Once they answer about their “awareness” of their understanding, then they are given the response options and they try to correctly answer the question.

This next point is key: it turns out that in order to get credit for question completion in LS, students must do BOTH of the following: 1) choose “I know it” when indicating understanding, and 2) answer the question correctly. If students indicate any other level of understanding, or if they answer incorrectly, LS will give them more questions on that topic, and the effort for that question won’t count towards completion of the required number of questions for the pre-class activity.
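Written as code, the credit rule I’ve just described reduces to a single conjunction. This is a minimal sketch of my reading of the behavior, not the publisher’s actual implementation; the names are invented.

```python
def counts_toward_completion(confidence: str, answered_correctly: bool) -> bool:
    """Sketch of the LS credit rule as described above: a question counts
    toward the required pre-class total only when the student claims full
    understanding AND answers correctly. Any other combination triggers
    more questions on the topic and earns no completion credit."""
    return confidence == "I know it" and answered_correctly
```

Seen this way, the shortcut described next is easy to anticipate: claiming “I know it” on every probe costs nothing and sometimes pays off immediately.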

And there’s the rub. Efficient students quickly learn that they can complete the pre-class reading quiz activity much more quickly if they choose “I know it” for every metacognitive understanding probe. If they guess at the subsequent question and get it correct, it counts toward their completion of the activity and they move on. If they answer incorrectly, LS gives them another question from that topic, but they are no worse off with respect to time and effort than if they had indicated that they weren’t sure of the answer.

If students actually take the time to use the LS quiz features as intended rather than shortcut them (there are additional features I haven’t mentioned here), their deep learning should be enhanced. However, unless they come to value deep learning over efficiency and short-term grades (e.g., quiz completion), there is no benefit to the technology. In fact, it might further undermine their learning through a false sense of understanding.


Metacognition in STEM courses: A Developmental Path

by Roman Taraban, Ph.D., Texas Tech University

There is a strong focus in science, technology, engineering, and math (STEM) courses on solving problems (Case & Marshall, 2004). Does problem solving in STEM involve metacognition? I argue that the answer must surely be ‘yes.’ That’s because metacognition involves monitoring the effectiveness of learning and problem-solving strategies and using metacognitive knowledge to regulate behavior (Draeger, 2015). But when does metacognition become part of problem solving, and how does it come about? Can we discern development in metacognitive monitoring and regulation? In this post, I will present some qualitative data from a study on problem solving in order to reflect on these questions. The study I draw from was not about metacognition per se; however, it may provide some insights into the development of metacognition.

The study I conducted involved freshman engineering majors. These students were asked to solve typical problems from the mechanics course in which they were currently enrolled (Taraban, 2015). Not surprisingly, students varied in how they began each problem and how they proceeded towards a solution. In order to gain some insight into their problem-solving strategies, after they had solved the problems I simply asked students to state why they started with the equation they chose and not some other equation.

Students’ responses fell into at least three types, using labels from Case and Marshall (2004): surface, algorithmic, and deep conceptual. When asked why they started with their first equation, some students responded:

  • “I don’t know, it’s just my instinct.”
  • “No special reason. I’m just taking it randomly.”
  • “It’s just habit.”
  • “The first thing that came to my mind.”

Of interest here, these students did not appear to reflect on the specific problem or show evidence of modulating their behavior to the specific problem. Their responses fit a surface learning approach: “no relationships sought out or established, learn by repetition and memorization of formulae” (Case & Marshall, 2004, p. 609).

Other students’ responses reflected an algorithmic approach to learning — “identifying and memorizing calculation methods for solving problems” (Case & Marshall, 2004, p. 609):

  • “I am getting three variables in three unknowns so I can solve it.”

Here the student verbally expresses a more structured approach to the problem. The student believes that he needs three equations involving three unknowns and uses that as a goal. Students who take an algorithmic approach appear to be more reflective and strategic about their solutions to problems, compared to surface problem solvers.

Case and Marshall (2004) regarded both the surface and algorithmic pathways as part of development towards a deeper understanding of domain concepts and principles, the endpoint of which they labeled the conceptual deep approach to learning: “relating of learning tasks to their underlying concepts or theory” with the intention “to gain understanding while doing this” (p. 609). Basically, their suggestion is that at some point students recognize that a goal of learning is to understand the material more deeply, and that this recognition guides how they learn. Case and Marshall’s description of conceptual deep learning fits Draeger’s (2015) suggestion that monitoring the effectiveness of learning and regulating one’s behavior is characteristic of metacognitive thinking. Once students reach this level, we should be able to more readily observe students’ intentions to understand the material and observe their overt attempts to grasp the material through their explicit reflection and reasoning. Examples of this type of reflection from my study could be gleaned from those students who did not jump directly to writing equations without first thinking about the problem:

  • “If I choose the moment equation first, then directly I am getting the value of F. So in the other equations I can directly put the value of F.”

As students progress from surface to algorithmic to deep conceptual processing, there is certainly development. However, in the present examples that track that development, it is difficult to partial out students’ thinking about the problem content from their thinking-about-thinking, that is, their metacognitions. Draeger (2015) helps here by distinguishing between metacognition and critical thinking. The latter often requires domain-specific knowledge. Draeger suggests that “many students are able to solve complex problems, craft meaningful prose, and create beautiful works of art without understanding precisely how they did it” (p. 2). Basically, critical thinking is about methodology within a domain – e.g., the person knows how to format a narrative or select an appropriate statistical procedure, without necessarily reflecting on the effectiveness of those choices, that is, without metacognition. In the examples I provided above from my work with undergraduates on problem solving, there is invariably a mix of critical thinking and metacognition. Draeger’s distinction signals a need to better decouple these two distinct kinds of cognitive processes in order to better clarify the developmental trajectory of metacognitive processing in problem solving.

Finally, why do we observe such wide variance in students’ approaches to problem-solving, and, relatedly, to metacognition? One reason is that instructors may emphasize assessment and grades (Case & Marshall, 2004). As a consequence, students may focus more on gaining points for the correct answer rather than on the process. Welsh (2015) has suggested that course structure can act as a barrier to deeper learning: “high stakes assessments may overshadow resources designed for metacognitive development” (p. 2). Welsh found that students were more concerned with test performance than with reflecting upon their study strategies and implementing learning strategies recommended by the instructor.

How are we to understand this discord between concern with test performance and metacognition? At some level, when students set goals to do well on tests they are regulating their behavior. Metacognitive resources from the instructor may be in competition with students’ perceived resources (e.g., access to old tests, study buddies, cramming the night before). The instructor can facilitate change, but the leap from surface and algorithmic learner to deep conceptual learner must be undertaken by the student.

Passion and commitment to a topic are strong motivators to find the means to access and acquire deeper conceptual understanding. One measure of teacher success is class test performance, but another can be found in student comments. Here is one that I recently received and found encouraging: “Despite the fact that I was a bit uninterested in the subject matter, this was one of my favorite classes. By the end of the semester, not only was I interested in the subject matter, I was fascinated by it.” Perhaps as instructors we need to facilitate good metacognitive practices but also nurture interest in what we teach in order to motivate students to pursue it more deeply through more effective metacognitive practices.

References

Case, J., & Marshall, D. (2004). Between deep and surface: procedural approaches to learning in engineering education contexts. Studies in Higher Education, 29(5), 605-615.

Draeger, J. (2015). Two forms of ‘thinking about thinking’: metacognition and critical thinking. Retrieved from https://www.improvewithmetacognition.com/two-forms-of-thinking-about-thinking-metacognition-and-critical-thinking/.

Taraban, R. (2015, November). Transition from means-ends to working-forward problem solving. 56th Annual Conference of the Psychonomic Society. Chicago, IL.

Welsh, A. (2015). Supports and barriers to students’ metacognitive development in a large intro chemistry course. Retrieved from https://www.improvewithmetacognition.com/supports-and-barriers-to-students-metacognitive-development-in-a-large-intro-chemistry-course/.


Pausing Mid-Stride: Mining Metacognitive Interruptions In the Classroom

By Amy Ratto Parks, Ph.D., University of Montana

Metacognitive interventions are often the subject of research in educational psychology because researchers are curious about how these planned, curricular changes might impact the development of metacognitive skills over time. However, as a researcher in the fields of metacognition and rhetoric and composition, I am sometimes struck by the fact that the planned nature of empirical research makes it difficult for us to take advantage of important kairic moments in learning.

The rhetorical term kairic, taken from the Greek concept of kairos, generally represents a fortuitous window in time in which to take action toward a purpose. In terms of learning, kairic moments are those perfect little slivers in which we might suddenly gain insight into our own or our students’ learning. In the classroom, I like to think of these kairic moments as metacognitive interruptions rather than interventions because they aren’t planned ahead of time. Instead, the “interruptions” arise out of the authentic context of learning. Metacognitive interruptions are kairic moments in which we, as teachers, might be able to briefly access a point in which the student’s metacognitive strategies have either served or not served them well.

A few days ago I experienced a very typical teaching moment that turned out to be an excellent example of a fruitful metacognitive interruption: I asked the students to take out their homework and the moment I began asking discussion questions rooted in the assignment, I sensed that something was off. I saw them looking at each other’s papers and whispering across the tables, so I asked what was going on. One brave student said, “I think a bunch of us did the homework wrong.”

They were supposed to have completed a short analysis of a peer-reviewed article titled, “The Daily Show Effect: Candidate Evaluations, Efficacy, and American Youth” (Baumgartner & Morris, 2014). I got out the assignment sheet and asked the brave student, Rasa*, to read it aloud. She said, “For Tuesday, September 15. Read The Daily Show Effect: Candidate Evaluations…. oh wait. I see what happened. I read the other Jon Stewart piece in the book.” Another student jumped in and said, “I just analyzed the whole show” and a third said, “I analyzed Jon Stewart.”

In that moment, I experienced two conflicting internal reactions. The teacher in me was annoyed. How could this simple set of directions have caused confusion? And how far was this confusion going to set us back? If only half of the class had done the work, the rest of my class plan was unlikely to go well. However, the researcher in me was fascinated. How, indeed, had this simple set of instructions caused confusion? All of these students had completed a homework assignment, so they weren’t just trying to “get out of work.” Plus, they also seemed earnestly unsure about what had gone wrong.

The researcher in me won out. I decided to let the class plan go and began to dig into the situation. By a show of hands I saw that 12 of the 22 students had done the correct assignment and 10 had completed some customized, new version of the homework. I asked them all to pause for a moment and engage in a metacognitive activity: they were to think back to the moment they read the assignment and ask themselves, where did I get mixed up?

Rasa said that she just remembered me saying something about The Daily Show in class, and when she looked in the table of contents, she saw a different article, “Political Satire and Postmodern Irony in the Age of Stephen Colbert and Jon Stewart” (Colletta, 2014), and read it instead. Other students said that they must not have read closely enough, but then another student said something interesting. She said, “I did read the correct essay, but it sounded like it was going to be too hard to analyze and I figured that you hadn’t meant for this to be so hard, so I just analyzed the show.” Other students nodded in agreement. I asked the group to raise their hands if they had read the correct essay. Many hands went up. Then I asked if they thought that the analysis they chose to do was easier than the one I assigned. All of them raised their hands.

Again, I was fascinated. In this very short conversation I had just watched rich, theoretical research play out before me. First, here was an example of the direct effect of power browsing (Kandra, Harden, & Babbra, 2012) mistakenly employed in the academic classroom. Power browsing is a relatively recently coined term that describes “skimming and scanning through text, looking for key words, and jumping from source to source” (Kandra et al., 2012). Power browsing can be a powerful overviewing strategy (Afflerbach & Cho, 2010) in an online reading environment where a wide variety of stimuli compete for the reader’s attention. Research shows that strong readers of non-electronic texts also employ pre-reading or skimming strategies (Dunlosky & Metcalfe, 2009); however, when readers mistakenly power browse in academic settings, it may result in “missed opportunities or incomplete knowledge” (Kandra et al., 2012, par. 18). About metacognition and reading strategies, Afflerbach and Cho (2010) write, “the good strategy user is always aware of the context of reading” (p. 206); clearly, some of my students had forgotten their reading context. Some of the students knew immediately that they hadn’t thoroughly read the assignment. As soon as I described the term “power browse,” their faces lit up. “Yes!” said Rasa, “that’s exactly what I did!” Here was metacognition in action.

Second, as students described the reasoning behind choosing to read the assigned essay but analyze something unassigned, I heard them offering a practical example of Flower and Hayes’ (1981/2011) discussion of goal-setting in the writing process. Flower and Hayes (1981/2011) said that writing includes “not only the rhetorical situation and audience which prompts one to write, it also includes the writer’s own goals in writing” (p. 259). They went on to say that although some writers are able to “juggle all of these demands,” others “frequently reduce this large set of restraints to a radically simplified problem” (p. 259). Flower and Hayes allow that this can sometimes cause problems, but they emphasize that “people only solve the problems they set for themselves” (p. 259).

Although I had previously seen many instances of students “simplifying” larger writing assignments in my classroom, I had never before had a chance to talk with students about what had happened in the moment when they realized something hadn’t worked. But here, they had just openly explained to me that the assignment had seemed too difficult, so they had recalibrated, or “simplified” it into something they thought they could do well and/or accomplish during their given timeframe.

This metacognitive interruption provided an opportunity to “catch” students in the moment when their learning strategies had gone awry, but my alertness to the kairic moment only came as a result of my own metacognitive skills: when it became clear that the students had not completed the work correctly, I paused before reacting and that pause allowed me to be alert to a possible metacognitive learning opportunity. When I began to reflect on this class period, I realized that my own alertness came as a result of my belief in the importance of teachers being metacognitive professionals so that we can interject learning into the moment of processing.

There is yet one more reason to mine these metacognitive interruptions: they provide authentic opportunities to teach students about metacognition and learning. The scene I described here could have had a very different outcome. It can be easy to see student behavior in a negative light. When students misunderstand something we thought we’d made clear, we sometimes make judgments about them being “lazy” or “careless” or “belligerent.” In this scenario it seems like it would have been justifiable to have gotten frustrated and lectured the students about slowing down, paying attention to details, and doing their homework correctly.

Instead, I was able to model the kind of cognitive work I would actually want to teach them: we slowed down and studied the mistake in a way that led the class to a conversation about how our minds work when we learn. Rather than including a seemingly-unrelated lecture on “metacognition in learning” I had a chance to teach them in response to a real moment of misplaced metacognitive strategy. Our 15-minute metacognitive interruption did not turn out to be a “delay” in the class plan, but an opening into a kind of learning that might sometimes just have to happen when the moment presents itself.

References

Baumgartner, J., & Morris, J. (2014). The Daily Show effect: Candidate evaluations, efficacy, and American youth. In C. Cucinella (Ed.), Funny. Southlake, TX: Fountainhead Press. (Reprinted from American Politics Journal, 34(3), (2006), pp. 341-367.)

Colletta, L. (2014). Political satire and postmodern irony in the age of Stephen Colbert and Jon Stewart. In C. Cucinella (Ed.), Funny. Southlake, TX: Fountainhead Press. (Reprinted from The Journal of Popular Culture, 42(5), (2009), pp. 856-874.)

Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Thousand Oaks, CA: Sage.

Flower, L., & Hayes, J. (2011). A cognitive process theory of writing. In V. Villanueva & K. Arola (Eds.), Cross-talk in comp theory: A reader, (3rd ed.), (pp. 253-277). Urbana, IL: NCTE. (Reprinted from College Composition and Communication, 32(4), (Dec., 1981), pp. 365-387).

Kandra, K. L., Harden, M., & Babbra, A. (2012). Power browsing: Empirical evidence at the college level. National Social Science Journal, 2, article 4. Retrieved from http://www.nssa.us/tech_journal/volume_2-2/vol2-2_article4.htm

Waters, H. S., & Schneider, W., (Eds.). (2010). Metacognition, strategy use, and instruction. New York, NY: The Guilford Press.

* Names have been changed to protect the students’ privacy.


A Metacognitive Learning Cycle: A Better Warranty for Student Understanding?

Blank’s study “proposes a revised learning cycle model, termed the Metacognitive Learning Cycle, which emphasizes formal opportunities for teachers and students to talk about their science ideas. Working collaboratively, the researcher and a seventh-grade science teacher developed a 3-month ecology unit based on the revised model.” Results showed that even though students who were in the metacognitive classroom didn’t gain more content knowledge of ecology, they did, however, show more “permanent restructuring” of their ecology ideas.

Blank, L. M. (2000). A metacognitive learning cycle: A better warranty for student understanding? Science Education, 84(4), 486-506.
