The Challenge of Deep Learning in the Age of LearnSmart Course Systems (Part 2)


A few months ago, I shared Part 1 of this post. In it I presented the claim that, “If there are ways for students to spend less time per course and still ‘be successful,’ they will find ways to do so. Unfortunately, their efficient choices may short-change their long-term, deep learning.” I linked this claim to challenges I foresaw with two aspects of the online text chosen for the core course I was teaching: 1) the pre-highlighted LearnSmart text, and 2) the metacognition-focused LearnSmart quizzing feature, which required students not only to answer each quiz question but also to report their confidence in the correctness of that response. (See Part 1 for details on my concerns. Several other posts on this site also discuss confidence ratings as a metacognition tool; see the references below.) My stated plan was to “regularly check in with the students, have class discussions aimed at bringing their choices about their learning behaviors into their conscious awareness, and positively reinforce their positive self-regulation of deep-learning behaviors.”

This post, Part 2, will share my reflections on how things turned out, along with a summary of some feedback from my students.

To increase student awareness of their learning choices and the impact of those choices, I did the following. Twice early in the semester I took class time to explicitly discuss the learning shortcuts students might be tempted to take when reading the chapters (e.g., only reading the highlighted text) and when completing the LearnSmart pre-class quizzes (see Part 1 for details). I shared alternate completion options that would likely enhance their learning and long-term retention of the material (e.g., reading the full text without highlights and using the online annotation features). I also took time to share other general learning and studying strategies that research has shown to support better learning. These ways of learning were repeatedly reinforced throughout the semester (and linked to course content when applicable, such as when we discussed human learning and memory).

Did these efforts impact student behaviors and choices of learning strategies? Although I cannot answer that question directly, I can share some insights based on LearnSmart data, course performance, and reflections shared by the students.

With respect to the LearnSmart application that quizzed students at the end of each chapter, one type of data I was able to retrieve was the overall percentage of quiz responses that fell into each of the following correctness-and-confidence categories (a metacognition-related evaluation):

  1. Students answered correctly and indicated confidence that they would answer correctly
  2. Students answered correctly but indicated that they were not confident of the correctness of their response
  3. Students answered incorrectly and knew they didn’t know the answer
  4. Students answered incorrectly but reported confidence in giving the correct answer

I examined how the percentage of responses in each category correlated with two course performance measures (final exam grade and overall course grade). Category 1 (correct and confident) and Category 3 (incorrect and knew it) both showed essentially zero relationship with performance. There was a small positive relationship between performance and being correct but not certain (Category 2); such responses might prompt more attention to the topic and ultimately lead to better learning. The strongest correlations (in the negative direction) occurred for Category 4, the category about which I was most concerned with respect to student learning and metacognition. There are two reasons students might have responses in that category. They could be prioritizing time efficiency over learning by intentionally always indicating that they were confident: if they got lucky and answered correctly, the question would count toward the required number they had to answer both correctly and with confidence, whereas if they indicated low confidence, the question would not count toward the required number for the chapter. Alternately, Category 4 responses could reflect students being mistaken about their own state of understanding, suggesting poor metacognitive awareness and a likelihood of performing poorly on exams despite “studying hard.” Although I had no way to determine which of these two causes was underlying the responses in this category, the negative relationship clearly indicated that students with more such responses performed worse on the comprehensive final exam and in the course at large.
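The analysis above boils down to correlating each student's per-category response percentage with a grade. A rough sketch of that computation, using a simple Pearson correlation and invented numbers (the data below are purely illustrative, not my students' actual results):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-student values, for illustration only:
cat4_pct  = [5, 12, 20, 8, 30, 15]    # % of responses in Category 4
final_pct = [92, 85, 70, 88, 60, 78]  # final exam grade (%)

r = pearson(cat4_pct, final_pct)
print(round(r, 2))  # negative, matching the pattern described above
```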

I also asked my students to share some verbal and written reflections regarding their choices of learning behaviors. These reflections didn’t explicitly address their reasons for indicating high or low confidence on the pre-class quizzes. However, they did address their choices with respect to reading only the highlighted text versus the full chapter text. Despite the conversations at the beginning of the semester stressing that exam material included the full text and that their learning would be more complete if they read the full text, almost half the class reported only reading the highlighted text (or shifting from full to highlighted). These students indicated that their choice was driven primarily by perceived time constraints and by the fact that the pre-class LearnSmart quizzes focused on the highlighted material, so students could be successful on the pre-class assignment without reading the full text. More positively, a couple of students did shift to reading the full text because they saw the negative impact of reading only the highlighted text on their exam grades. Beyond the LearnSmart behaviors, several students reported increasing use (even in other courses) of the general learning and study strategies we discussed in class (e.g., working with a partner to discuss and quiz each other on the material), and some of them even shared these strategies with friends!

So, what are my take-aways?

Although this should surprise no one who has studied operant conditioning, the biggest take-away for me is that for almost half my students, the immediate reinforcement of being able to complete the pre-class LearnSmart quiz more quickly was the most powerful driver of their behavior, despite explicit in-class discussion and their own acknowledgement that it hurt their performance on the later exams. When asked what they might do differently if they could redo the semester, several of these students indicated that they would tell themselves to read the full text. But I have to wonder whether this level of awareness would actually drive their self-regulatory behaviors, given the unavoidable perceptions of time constraints and the immediate reinforcement of “good” performance on the pre-class LearnSmart quizzes. Unfortunately, at this point, instructors do not have control over the questions asked in the LearnSmart quizzes, so that particular (unwanted) reinforcement factor is unavoidable if you use those quizzes.

A second take-away is that explicit discussion of high-efficacy learning strategies can lead to their adoption. These strategies were relatively independent of the LearnSmart quiz requirement for the course, so there was no conflict with those behaviors. Although the reinforcement was less immediate, students reported positive results from using the strategies, which motivated them to keep using them and to share them with friends. Personally, I believe that discussing these general learning strategies multiple times also helped, because the repetition increased student awareness of the strategies and their efficacy (awareness being an important first step in metacognition).

————

Some prior blog posts related to Confidence Ratings and Metacognition

Effects of Strategy Training and Incentives on Students’ Performance, Confidence, and Calibration, by Aaron Richmond October 2014

Quantifying Metacognition — Some Numeracy behind Self-Assessment Measures, by Ed Nuhfer, January 2016

The Importance of Teaching Effective Self-Assessment, by Stephen Chew, February 2016

Unskilled and Unaware: A Metacognitive Bias, by John R. Schumacher, Eevin Akers, and Roman Taraban, April 2016