Knowledge Surveys Part 1 — Benefits of Knowledge Surveys to Student Learning and Development

by Karl Wirth, Macalester College,
Ed Nuhfer, California State Universities (retired)
Christopher Cogan, Memorial University
McKensie Kay Phillips, University of Wyoming

Introduction

Knowledge surveys (KSs) present challenges such as exam questions or assignments, but respondents do not answer them. Instead, they rate their felt ability to address each challenge with their present knowledge. Knowledge surveys thus focus on self-assessment, which is a special kind of metacognition.

Overall, metacognition is a self-imposed internal dialogue that is a distinguishing feature of “expert learners,” regardless of discipline (e.g., Ertmer & Newby, 1996). Because not all students begin college equally aware of and able to think about their own learning, instructors must direct students in ways that keep them in constant contact with their metacognition. Paul Pintrich, a pioneer in metacognition research, stressed that “instruction about metacognition must be explicit.” Knowledge surveys enable what Ertmer & Newby and Pintrich advocate, in any class in any subject.

[Image: a road sign pointing from “data” to “information” to “knowledge,” with “learning” above. Image by Gerd Altmann from Pixabay]

Knowledge surveys began in 1992 during a conversation about annual reviews between the guest editor and a faculty member who stated: “They never ask about what I teach.” Upon hearing this, the guest editor decided to create a 200-item form to survey students’ ratings of their mastery of detailed content for his geology course at the start and end of the class. The items were simply an array of test and quiz questions, ordered in the sequence the students would encounter them during the course. The students responded to each item on a three-point scale at the start and end of the course.

The information from this first knowledge survey proved so valuable that the guest editor described it in 1996 in a geology journal as a formative assessment. As a result, geoscience faculty elsewhere took the lead in researching knowledge surveys and describing more of their benefits.

In 2003, U.S. Air Force Academy physics professor Delores Knipp and the guest editor published the first peer-reviewed paper on knowledge surveys addressed to multiple disciplines (Nuhfer and Knipp, 2003). If you are new to knowledge surveys, click the hotlink to that paper now and read at least the first page to gain a conceptual understanding of the instrument.

Self-assessment, Metacognition, and Knowledge Surveys

Becoming educated is a process of understanding self and the phenomena that one experiences. Knowledge surveys structure practices in understanding both. 

Our series’ earlier entries revealed the measurable influence of self-assessment on dispositions such as self-efficacy, mindset, and intellectual and ethical development that prove indispensable to the lifelong process of becoming educated. The entries on bias and privilege revealed that the privilege of having the kind of education that renders the unconscious conscious may determine the collective quality of a society and how well we treat one another within it.

Knowledge surveys prompt self-assessment reflections during the learning of every aspect of the content. Over a baccalaureate education, cumulative, repetitive practice can significantly improve understanding of one’s present knowledge and one’s self-assessment accuracy.

Improving Learning

Knowledge surveys’ original purpose was to improve student learning (e.g., Nuhfer & Knipp, 2003; Wirth et al., 2016, 2021). Providing students with a knowledge survey at the beginning of a course or unit of instruction offered an interactive roadmap for an entire course that overtly disclosed the instructor’s intentions for learning to students.

Early on, users recognized that knowledge surveys might offer a measure of changes in learning produced by a unit of instruction. Demonstrating the validity of such self-assessed competence measures was crucial; that demonstration was finally achieved in 2016 and 2017.

Deeper Reading

Students quickly learned the value of prioritizing knowledge by engaging with the knowledge survey before and during their reading. The structure of the KSs enabled reading with the purpose of illuminating known learning objectives. It also primed students to understand concepts by using the reading to clarify the connections between knowledge survey items.

Rather than just sitting down to “complete a reading,” students began reading assignments with appropriate goals and strategies, a characteristic of “expert readers” (Paris et al., 1996). When they encountered difficult concepts, they put increasing effort into understanding the topics identified as essential to those concepts. Further, knowledge surveys facilitated mentoring: when students did not understand the material, they proved more likely to follow up with a colleague or instructor to complete their understanding.

Facilitating the Acquisition of Self-Regulation

Well-constructed knowledge surveys are detailed products of instructor planning and thinking. They communicate instructor priorities and coordinate the entire class to focus on specific material in unison. The value of seeing student comments like “I didn’t know that would be on the exam” nearly disappear from classroom conversations is hard to overstate.

Replacing scattered class-wide guessing of what to study allowed a collective focus on “How will we learn this material?” That reframing led to adopting learning strategies that expert learners employ when they have achieved self-regulation. Students increasingly consulted with each other or the instructor when they sensed or realized their current response to a knowledge survey item was probably inadequate. 

Levels and Degrees of Understanding

In preparing a knowledge survey for a course, the instructor carefully writes each survey item and learning objective so that learning addresses the desired mastery at the intended Bloom level (Krathwohl, 2002). Making students aware of Bloom levels, and reinforcing that awareness throughout a course, clarifies the depth of understanding required to teach the content to another person at the intended Bloom level. Whereas it may be sufficient to remember or comprehend some content, demonstrating higher cognitive processes by having to explain to another how to apply, synthesize, or evaluate the central concepts and content of a course feels different because it is different.

Knowledge surveys can address all Bloom levels and provide the practices needed to enable the paired understanding of knowing and “feeling of knowing” like no other instrument. Including the higher Bloom levels, combined with the explicitly stated advanced degree of understanding as the level of “teaching” or “explaining” to others, builds self-assessment skills and fosters the development of well-justified self-confidence. A student with such awareness can better focus efforts on strengthening the knowledge in which they recognize their weakness.

Building Skills with Feedback

The blog entries by Fleisher et al. in this series stressed the value of feedback in developing healthy self-assessments. Knowledge survey items that address the same learning outcomes as quizzes, exams, assignments, and projects promote instructional alignment. Such alignment allows explicit feedback from the demonstrated competence measures to calibrate the accuracy of self-assessments of understanding. Over time, knowledge surveys confer awareness that appropriate feedback builds both content mastery and better self-assessment skills.

A robust implementation directs students to complete the relevant portions of a knowledge survey after studying for an exam but before taking it. After the instructor grades the exams, students receive their self-assessed competence (knowledge survey score) and demonstrated competence (graded exam score) in a single package. From this information, the instructor can direct students to compare their two scores and can offer mentoring when a large discrepancy (>10 points) separates them.

Generally, a significant discrepancy from a single knowledge survey-exam pair comparison is not as meaningful as longer-term trends illuminated by cumulative data. Instructors who use KSs skillfully mentor students to become familiar with their trends and tendencies. When student knowledge survey responses consistently over- or under-estimate their mastery of the content, the paired data reveal this tendency to the student and instructor and open the opportunity for conversations about the student’s habitually favored learning strategies.
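To make the comparison concrete, the following is a minimal sketch (in Python, not taken from the authors’ materials) of how an instructor might flag single-event discrepancies and summarize a student’s longer-term calibration tendency. The 0–100 score scale, the 10-point threshold drawn from the paragraph above, and all names are illustrative assumptions.

```python
# Illustrative sketch only: flag knowledge-survey/exam discrepancies and
# summarize a student's longer-term calibration tendency.
# Assumes both scores are expressed on the same 0-100 percentage scale.

def review_calibration(paired_scores, threshold=10):
    """paired_scores: list of (knowledge_survey_pct, exam_pct) for one student."""
    discrepancies = [ks - exam for ks, exam in paired_scores]
    needs_mentoring = [abs(d) > threshold for d in discrepancies]  # single-event flags
    mean_bias = sum(discrepancies) / len(discrepancies)            # cumulative tendency
    tendency = ("tends to overestimate" if mean_bias > 0
                else "tends to underestimate" if mean_bias < 0
                else "well calibrated")
    return {"discrepancies": discrepancies,
            "needs_mentoring": needs_mentoring,
            "mean_bias": mean_bias,
            "tendency": tendency}

# Example: three exam cycles for one hypothetical student
print(review_calibration([(85, 72), (78, 70), (90, 74)]))
```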

A variant implementation adds an easy opportunity for self-assessment feedback. Here, instructors assign students to estimate their score on an exam or assignment both when they begin working on it and again after completing it, just before submission. These paired pre-post self-assessments help students focus on their feelings of knowing and adjust further toward greater self-assessment accuracy.

Takeaways

Knowledge surveys are unique in their utility for supporting student mastery of disciplinary knowledge, developing their affect toward accurate feelings of knowing, and improving their skills as expert learners. Extensive data show that instructors’ skillful construction of knowledge surveys as part of class design elicits deeper thinking and produces higher quality classes. After construction, class use facilitates mutual monitoring of progress and success by students and instructors. In addition to supporting student learning of disciplinary content, knowledge surveys keep students in constant contact with their metacognition and develop their capacity for lifelong learning. 

In Part 2, we draw on our more recent investigations of (1) more robust knowledge survey design, (2) learning about the temporal qualities of becoming educated, (3) student authoring of knowledge surveys, and (4) mentoring students with large miscalibrations in self-assessed competence toward greater self-assessment accuracy.


Metacognition and Mindset for Growth and Success: Part 2 – Documenting Self-Assessment and Mindset as Connected

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

Self-assessment measures and categorizations of mindset preference both employ self-reported metacognitive responses that produce noisy data. Interpreting noisy data poses difficulties and generates peer-reviewed papers with conflicting results. Some published peer-reviewed works question the legitimacy and value of self-assessment and mindset.

Yeager and Dweck (2020) communicate frustration when other scholars deprecate mindset and claim that it makes no difference under what mindset students pursue education. Indeed, that seems similar to arguing that students’ enjoyment of education and attitudes toward it make no difference in the quality of their education.

We empathize with that frustration when we recall our own from seeing in class after class that our students were not “unskilled and unaware of it” and reporting those observations while a dominant consensus that “Students can’t self-assess” proliferated. The fallout that followed from our advocacy in our workplaces (mentioned in Part 2 of the entries on privilege) came with opinions that since “the empiricists have spoken,” there was no reason we should study self-assessment further. Nevertheless, we found good reason to do so. Some of our findings might serve as an analogy to demonstrating the value of mindsets despite the criticisms being leveled against them.

How self-assessment research became a study of mindset

In the summer of 2019, the guest editor and the first author of this entry taught two summer workshops on metacognition and learning at CSU Channel Islands to nearly 60 Bridge students about to begin their college experience. We employed a knowledge survey for the weeklong program, and the students also took the paired-measures Science Literacy Concept Inventory (SLCI). Students had the option of furnishing an email address if they wanted a feedback letter. About 20% declined feedback, and their mean score was 14 points lower (significant at the 99.9% confidence level) than those who requested feedback.

In revisiting our national database, we found that every campus revealed a similar significant split in performance. It mattered not whether the institution was open admissions or highly selective; the mean score of the majority who requested feedback (about 75%) was always significantly higher than those who declined feedback. We wondered if the responses served as an unconventional diagnosis of Dweck’s mindset preference.

Conventional mindset diagnosis employs a battery of agree-disagree queries to determine mindset inclination. Co-author Michael Roberts suggested we add a few mindset items to the SLCI, and Steven Fleisher selected three items from Dweck’s survey battery. After a few hundred student participants revealed only a marginal relationship between mindset diagnosed by these items and SLCI scores, Steve increased the number of items to five.

Who operates in fixed, and who operates in growth mindsets?

The personal act of choosing to receive or avoid feedback to a concept inventory offers a delineator to classify mindset preference that differs from the usual method of doing so through a survey of agree-disagree queries. We compare here the mindset preferences of 1734 undergraduates from ten institutions using (a) feedback choice and (b) the five agree-disagree mindset survey items that are now part of Version 7.1a of the SLCI. That version has been in use for about two years.

We start by comparing the two groups’ demonstrable competence measured by the SLCI. Both methods of sorting participants into fixed or growth mindset preferences confirmed a highly significant (99.9% confidence) greater cognitive competence in the growth mindset disposition (Figure 1A). As shown in the Figure, feedback choice created two groups of fixed and growth mindsets whose mean SLCI competency scores differed by 12 percentage points (ppts). In contrast, the agree-disagree survey items defined the two groups’ means as separated by only 4 ppts. However, the two methods split the student populace differently, with the feedback choice determining that about 20% of the students operated in the fixed mindset. In contrast, the agree-disagree items approach determined that nearly 50% were operating in that mindset.

We next compare the mean self-assessment accuracy of the two mindsets. In a graph, it is easy to compare mean skills between groups by comparing the scatter shown by one standard deviation (1 Sigma) above and below the means of each group (Figure 1B). The group members’ scatter in overestimating or underestimating their actual scores reveals a group’s developed capacity for self-assessment accuracy. Groups of novices show a larger scatter in their group’s miscalibrations than do groups of those with better self-assessment skills (see Figure 3 of resource at this link).


Figure 1. A: Comparisons of competence (SLCI scores) of 1734 undergraduates between growth mindset participants (color-coded blue) and fixed mindset participants (color-coded red) as deduced by two methods: (left) agree-disagree survey items and (right) acceptance of or opting out of receiving feedback. B: Spreads of one standard deviation (1 Sigma) in self-assessment miscalibration for the growth (blue) and fixed mindset (red) groups as deduced by the two methods. The thin black line at 0 marks perfect self-assessment; overconfident estimates lie above it and underconfident estimates below it. The smaller the standard deviation revealed by the height of the rectangles in B, the better the group’s ability to self-assess accurately. The differences shown in A (4 and 12 ppts) and B (2.3 and 3.5 ppts) are differences between means.

On average, students classified as operating in a growth mindset have better-calibrated self-assessment skills (less spread of over- and underconfidence) than those operating in a fixed mindset by either classification method (Figure 1B). However, the difference between fixed and growth was greater and more statistically significant when mindset was classified by feedback choice (99% confidence) rather than by the agree-disagree questions (95% confidence).
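For readers who want to see the arithmetic, the sketch below shows one plausible way to reproduce the kind of group comparison just described: the mean SLCI score and the 1-sigma spread of self-assessment miscalibration for each mindset group. It is not the authors’ code; the column names (slci_pct, self_assessed_pct, mindset) and the toy data are assumptions for illustration.

```python
# Illustrative sketch (not the authors' code): compare mindset groups on mean
# SLCI competence and on the 1-sigma spread of self-assessment miscalibration.
import pandas as pd

def group_summary(df):
    out = df.copy()
    # miscalibration: positive = overconfident, negative = underconfident
    out["miscalibration"] = out["self_assessed_pct"] - out["slci_pct"]
    return (out.groupby("mindset")
               .agg(mean_slci=("slci_pct", "mean"),
                    mean_miscal=("miscalibration", "mean"),
                    sigma_miscal=("miscalibration", "std"),  # smaller = better-calibrated group
                    n=("slci_pct", "size")))

# Toy example (hypothetical values)
toy = pd.DataFrame({"slci_pct":          [80, 62, 75, 55, 68, 90],
                    "self_assessed_pct": [82, 70, 74, 75, 60, 88],
                    "mindset": ["growth", "fixed", "growth", "fixed", "fixed", "growth"]})
print(group_summary(toy))
```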

Overall, Figure 1 supports Dweck and others advocating for the value of a growth mindset as an asset to learning. We urge contextual awareness by referring readers to Figure 1 of Part 1 of this two-part thematic blog on self-assessment and mindset. We have demonstrated that choosing to receive or decline feedback is a powerful indicator of cognitive competence and at least a moderate indicator of metacognitive self-assessment skills. Still, classifying people into mindset categories by feedback choice addresses only one of the four tendencies of mindset shown in that Figure. Nevertheless, employing a more focused delineator of mindset preference (e.g., choice of feedback) may help to resolve the contradictory findings reported between mindset type and learning achievement.

At this point, we have developed the connections between self-assessment, mindset, and feedback we believe are most valuable to the readers of the IwM blog. Going deeper is primarily of value to those researching mindset. For them, we include an online link to an Appendix to this Part 2 after the References, and the guest editor offers access to SLCI Version 7.1a to researchers who would like to use it in parallel with their investigations.

Takeaways and future direction

Studies of self-assessment and mindset inform one another. The discovery of one’s mindset and gaining self-assessment accuracy require knowing self, and knowing self requires metacognitive reflection. Content learning provides the opportunity for developing the understanding of self by practicing for self-assessment accuracy and acquiring the feeling of knowing while struggling to master the content. Learning content without using it to know self squanders immense opportunities.

The authors of this entry have nearly completed a separate stand-alone article for a follow-up in IwM that focuses on using metacognitive reflection by instructors and students to develop a growth mindset.

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794


Metacognition and Mindset for Growth and Success: APPENDIX to Part 2 – Documenting Self-Assessment and Mindset as Connected

by Ed Nuhfer, Guest Editor, California State University (retired)
Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy

This Appendix stresses numeracy and employs a dataset of 1734 participants from ten institutions to produce measures of cognitive competence, self-assessed competence, self-assessment accuracy, and mindset categorization. The database is sufficient to address essential issues introduced in our blogs.

Finding replicable relationships in noisy data requires working with groups drawn from a database collected with instruments proven to produce high-reliability measures (see Figure 10 at this link). If we assemble groups, say groups of 50 as shown in Figure 1B, we can attenuate the random noise in individuals’ responses (Fig. 1A) and produce a clearer picture of the signal hidden within the noise (Fig. 1B). A minimal code sketch of this grouping step follows Figure 1.


Figure 1. Raw data, person by person, on over 9800 participants (Fig. 1A) show a highly significant correlation between measures of actual competence from SLCI scores and postdicted self-assessed competence ratings. Aggregating the data into over 180 groups of 50 (Fig. 1B) reduces random noise and clarifies the relationship.
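The grouping step is straightforward to reproduce. The sketch below is written for illustration rather than taken from the study: it aggregates consecutive blocks of 50 records into group means under assumed column names, and the sort key shown (class_rank) is an assumption consistent with the regrouping procedure described later in this Appendix for Table 2.

```python
# Minimal sketch of the noise-attenuation step (assumed column names):
# aggregate individuals into consecutive groups of 50 and analyze group means.
import pandas as pd

def aggregate_into_groups(df, group_size=50, sort_key="class_rank"):
    ordered = df.sort_values(sort_key).reset_index(drop=True)
    ordered["group"] = ordered.index // group_size   # consecutive blocks of group_size
    return ordered.groupby("group")[["slci_pct", "self_assessed_pct"]].mean()

# Correlations of group means are typically far cleaner than person-by-person data:
# groups = aggregate_into_groups(raw_df)            # raw_df is a hypothetical raw table
# print(groups["slci_pct"].corr(groups["self_assessed_pct"]))
```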

Random noise is not simply an inconvenience. In certain graphic types, random noise generates patterns that do not intuitively appear random. Researchers easily interpret these noise patterns as products of a human behavior signal. The “Dunning-Kruger effect” appears built on many researchers doing that for over twenty years. 

Preventing the confusion of noise with signal requires knowing what randomness looks like. Researchers can achieve this by ensuring that the surveys and test instruments used in any behavioral science study have high reliability and then constructing a simulated dataset by completing these instruments with random-number responses. The simulated population should equal that of the participants in the research study, and the simulated data should be graphed with the same graphic types that the researchers intend to use to present the participants’ data in a publication.
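A minimal sketch of such a random-response check appears below. It is an illustration under assumed instrument lengths and scales, not the simulation the authors ran: fill the instruments with random numbers for a simulated population matched in size to the real one, then compute the same statistics (and draw the same graphics) planned for the participants’ data.

```python
# Sketch of a random-response check (assumed instrument lengths and scales).
import numpy as np

rng = np.random.default_rng(seed=1)
n_participants = 1734                      # match the real study's population size
n_test_items, n_self_items = 25, 25        # assumed instrument lengths

# Random responses: 0/1 per test item; 1-5 per self-assessment item
random_test = rng.integers(0, 2, size=(n_participants, n_test_items)).mean(axis=1) * 100
random_self = rng.integers(1, 6, size=(n_participants, n_self_items)).mean(axis=1) * 20

# Any apparent "pattern" in plots or statistics computed from these scores is a
# product of random noise, not of human behavior.
print(np.corrcoef(random_test, random_self)[0, 1])   # hovers near zero
```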

The 1734 participants addressed in Parts 1 and 2 of this blog’s theme pair on mindset are part of the larger dataset represented in Figure 1. The number is smaller than 9800 because we only recently added mindset questions. 

The blog entry that links to this Appendix showed the two methods of classifying mindset to be consistent in designating a growth mindset as associated with higher scores on cognitive measures and with more accurate self-assessments. However, this finding does not directly test how the two classification methods relate to one another. The fact, noted in the blog, that the two methods classified people differently gave reason to anticipate that the two might not prove directly statistically related.

We need to employ groups to attenuate noise, and ideally, we want large groups with good prospects of a spread of values. We first picked the groups associated with furnishing information about privilege (Table 1) because these are groups large enough to attenuate random noise. Further, the groups displayed highly significant statistical spreads when we looked at self-assessed and demonstrable competence within these categories. Note well: we are not trying to study privilege aspects here. Our objective, for now, is to understand the relationship between mindset defined by agree-disagree items and mindset defined by requests for feedback.

We have aggregated our data in Table 1 from four parameters to yield eight paired measures and are ready to test for relationships. Because we already know the relationship between self-assessed competence and demonstrated competence, we can verify whether our existing dataset of 1734 participants, presented as eight paired-measures groups, is sufficient to deduce the relationship we already know. Looking at self-assessment thus serves as a calibration that helps answer, “How good is our dataset likely to be for distinguishing the unknown relationships we seek about mindset?”


Table 1. Mindset and self-assessment indicators by large groups. The table reveals each group’s mindset composition derived from both survey items and feedback and the populace size of each group.

Figure 2 shows that our dataset in Table 1 proved adequate in capturing the known significant relationship between self-assessed competence and demonstrated competence (Fig. 2A). The fit-line slope and intercept in Figure 2A reproduce the relationship established from much larger amounts of data (Fig. 1 B). However, the dataset did not confirm a significant relationship between the results generated by the two methods of categorizing people into mindsets (Fig. 2B).

In Figure 2B, there is little spread: the plotted points cluster tightly, and the correlation falls just short of significance. The points cluster so tightly that we are apprehensive the linear relationship would replicate in a future study of a different populace. Because we chose categories with a large populace and large spreads, adding more data to these categories probably would not change the relationships in Figure 2A or 2B. More data might bump the correlation in Figure 2B into significance, but that could be more a consequence of the spread of the categories chosen for Table 1 than the product of a tight direct relationship between the two methods employed to categorize mindset. We can resolve this by doing something analogous to producing the graph in Figure 1B above.


Figure 2. Relationships between self-assessed competence and demonstrated competence (A) and growth mindset diagnosed by survey items and requests for feedback (B). The data graphed is from Table 1.

We next place the same participants from Table 1 into different groups and thereby remove the spread advantages conferred by the groups in Table 1. We randomize the participants to get a good mix of the populace from the ten schools, sort the randomized data by class rank to be consistent with the process used to produce Figure 1B, and aggregate them into groups of 100 (Table 2). A minimal code sketch of this regrouping step follows Table 2.


Table 2. 1700 students are randomized into groups of 100, and the means are shown for four categories for each group.
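The regrouping described above can be sketched as follows. This is an illustration, not the authors’ code; the column names (class_rank, slci_pct, self_assessed_pct, growth_by_survey_pct, growth_by_feedback_pct) are assumptions that mirror the four measures summarized in Table 2.

```python
# Illustrative regrouping sketch (assumed column names): shuffle participants,
# sort by class rank, aggregate into groups of 100, and report group means.
import pandas as pd

def regroup(df, group_size=100, seed=0):
    shuffled = df.sample(frac=1, random_state=seed)          # randomize the populace mix
    ranked = shuffled.sort_values("class_rank").reset_index(drop=True)
    ranked["group"] = ranked.index // group_size
    measures = ["slci_pct", "self_assessed_pct",
                "growth_by_survey_pct", "growth_by_feedback_pct"]  # assumed measure names
    return ranked.groupby("group")[measures].mean()

# table2_like = regroup(raw_df)   # e.g., 1700 students -> 17 groups of 100
```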

The results employing different participant groupings appear in Figure 3. Figure 3A confirms that the different groupings in Table 2 attenuate the spread introduced by the groups in Table 1.


Figure 3. The data graphed are from Table 2. Relationships between self-assessed competence and demonstrated competence appear in (A). In (B), mindset classified by agree-disagree survey items plotted against mindset classified by requesting or opting out of feedback fails to replicate the pattern shown in Figure 2B.

In Figure 3A, the matched pairs of self-assessed competence and demonstrable competence continue to reproduce a consistent line fit that, despite a diminished correlation, still attains significance, as in Figures 1B and 2A.

In contrast, the ability to show replication between the two methods for categorizing mindsets has completely broken down. Figure 3B shows a very different relationship from that displayed in Figure 2B. The direct relationship between the two methods of categorizing mindset proves not replicable across different groupings.

For readers who may wish to try different groupings, we have provided the raw dataset used for this Appendix; it can be downloaded from https://profcamp.tripod.com/iwmmindsetblogdata.xls.

Takeaways

In general, the two methods of categorizing mindset designate a growth mindset as associated with higher scores on tests of cognitive competence and, to a lesser extent, with better self-assessment accuracy. However, the two methods do not show a direct relationship with each other, indicating that they address different dimensions of the multidimensional character of “mindsets.”


Metacognition and Mindset for Growth and Success: Part 1 – Understanding the Metacognitive Connections between Self-Assessment and Mindset

by Steven Fleisher, California State University
Michael Roberts, DePauw University
Michelle Mason, University of Wyoming
Lauren Scharff, U. S. Air Force Academy
Ed Nuhfer, Guest Editor, California State University (retired)

When I first entered graduate school, I was flourishing. I was a flower in full bloom. My roots were strong with confidence, the supportive light from my advisor gave me motivation, and my funding situation made me finally understand the meaning of “make it rain.” But somewhere along the way, my advisor’s support became only criticism; where there was once warmth, there was now a chill, and the only light I received came from bolts of vindictive denigration. I felt myself slowly beginning to wilt. So, finally, when he told me I did not have what it takes to thrive in academia, that I wasn’t cut out for graduate school, I believed him… and I withered away.
(actual co-author experience)

[Image: schematic of a person with a band-aid and growing flowers, facing other people. Image by Moondance from Pixabay]

After reading the entirety of this two-part blog entry, return and read the shared experience above once more. You should find that you have an increased ability to see the connections there between seven elements: (1) affect, (2) cognitive development, (3) metacognition, (4) self-assessment, (5) feedback, (6) privilege, and (7) mindset. 

The study of self-assessment as a valid component of learning, educating, and understanding opens up fascinating areas of scholarship for new exploration. This entry draws on the same paired-measures research described in the previous blog entries of this series. Here we explain how measuring self-assessment informs understanding of mindset and feedback. Few studies connect self-assessment with mindset, and almost none rest on a sizeable validated data set. 

Mindset, self-assessment, and privilege

Mindset theory proposes that individuals lean toward one of two mindsets (Dweck, 2006) that differ based on internalized beliefs about intelligence, learning, and academics. According to Dweck and others, people fall along a continuum that ranges from a fixed mindset, defined by a core belief that one’s intelligence and thinking abilities are fixed and that effort cannot change them, to a growth mindset, which comes with the belief that, through effort, people can expand and improve their abilities to think and perform (Figure 1).

Indeed, a growth mindset has support in the stages of intellectual, ethical, and affective development discovered by Bloom & Krathwohl and William Perry and mentioned earlier in this series. However, mindset theory has evolved into making broader claims, advocating that operating in a growth mindset also enhances performance in high-stakes functions such as leadership, teaching, and athletics.


Figure 1. Fixed and growth mindset tendencies with respect to how people view effort, challenge, failure, and feedback. (From https://trainugly.com/portfolio/growth-mindset/)

Do people choose their mindset, or do their experiences place them at their positions on the mindset continuum? Our Introduction to this series disclosed that people’s experiences arising from degrees of privilege influence their positioning along the self-assessment accuracy continuum, and self-assessment has some commonalities with mindset. However, a focused, evidence-based study of privilege’s role in determining mindset inclination seems lacking.

Our Introduction to this series indicated that people do not choose their positions along the self-assessment continuum. People’s cumulative experiences place them there. Their positions result from their individual developmental histories, where degrees of privilege influence the placement through how many experiences an individual has that are relevant and helpful to building self-assessment accuracy. The same seems likely for determining positions along the mindset continuum.

Acting to improve equity in educational success

Because development during the pre-college years occurs primarily by chance rather than by design, people are rarely conscious of how everyday experiences form their dispositions. College students are unlikely even to know their positions on either continuum unless they receive a diagnostic measure of their self-assessment accuracy or of their tendency toward a growth or a fixed mindset. Few get either diagnosis anywhere in their education.

Adopting a more robust growth mindset and acquiring better self-assessment accuracy first require recognizing that these dispositions exist. After that, devoting systematic effort to consciously enlisting metacognition while learning disciplinary content seems essential. Changing these dispositions takes longer than learning some factual content. However, the time required to see measurable progress can be significantly reduced by a mentor/coach who directs metacognitive reflection and provides feedback.

Teaching self-assessment to lower-division undergraduates by providing numerous relevant experiences and prompt feedback is a way to alleviate some of the inequity produced by differential privilege in pre-college years. The reason to do this early is to allow students time in upper-level courses to ultimately achieve healthy self-efficacy and graduate with the capacity for lifelong learning. A similar reason exists for teaching students the value of affect and growth mindset by providing awareness, coaching, and feedback. Dweck describes how achieving a growth mindset can mitigate the adverse effects of inequity in privilege.

Recognizing good feedback

Dweck places high value on feedback for achieving the growth mindset. Figure 1 in our guest series’ Introduction also emphasizes the importance of feedback in developing self-assessment accuracy and self-efficacy during college.

Depending on their beliefs about their skill at addressing a particular challenge, people respond in predictable ways when a skill requires effort, when a task seems challenging, when effort affects performance, and when feedback informs performance. Those with a fixed mindset expect that feedback will indicate imperfections, which they take as indicative of their fixed ability rather than as guidance for growing their ability. To them, feedback shames them for their imperfections, and it hurts. They see learning environments as places where stressful competitions occur between their own and others’ fixed abilities. Affirmations of success rest in grades rather than in growing intellectual ability.

Those with a growth mindset value feedback as illuminating the opportunities for advancing quickly in mastery during learning. Sharing feedback with peers in their learning community is a way to gain pleasurable support from a network that encourages additional effort. There is little doubt which mindset promotes the most enjoyment, happiness, and lasting friendships and generates the least stress during the extended learning process of higher education.

Dweck further stresses the importance of distinguishing feedback that is helpful from feedback that is damaging. Our lead paragraph above revealed a devastating experience that would influence any person to fear feedback and seek to avoid it. A formative influence that disposes us to accept or reject feedback likely lies in the nature of feedback that we received in the past. A tour through traits of Dweck’s mindsets suggests many areas where self-perceptions can form through just a single meaningful feedback event. 

Australia’s John Hattie has devoted his career to improving education, and feedback is his specialty area. Hattie concluded that feedback is “…the most powerful single moderator that enhances achievement” and noted in this University of Auckland newsletter “…arguably the most critical and powerful aspect of teaching and learning.” 

Hattie and Timperley (2007) synthesized many years of studies to determine what constitutes feedback helpful to achievement. In summary, valuable feedback focuses on the work process, but feedback that is not useful focuses on the student as a person or their abilities and communicates evaluative statements about the learner rather than the work. Hattie and Dweck independently arrived at the same surprising conclusion: even praise directed at the person, rather than focusing on the effort and process that led to the specific performance, reinforces a fixed mindset and is detrimental to achievement.

Professors seldom receive mentoring on how to provide feedback that would promote growth mindsets. Likewise, few students receive mentoring on how to use peer feedback in constructive ways to enhance one another’s learning. 

Takeaways

Scholars visualize both mindset and self-assessment as linear continuums with two respective dispositions at each of the ends: growth and fixed mindsets and perfectly accurate and wildly inaccurate self-assessments. In this Part 1, we suggest that self-assessment and mindset have surprisingly close connections that scholars have scarcely explored.

Increasing metacognitive awareness seems key to tapping the benefits of skillful self-assessment, mindset, and feedback and allowing effective use of the opportunities they offer. Feedback seems critical in developing self-assessment accuracy and learning through the benefits of a growth mindset. We further suggest that gaining benefit from feedback is a learnable skill that can influence the success of individuals and communities. (See Using Metacognition to Scaffold the Development of a Growth Mindset, Nov 2022.)

In Part 2, we share findings from our paired measures data that partially explain the inconsistent results that researchers have obtained between mindset and learning achievement. Our work supports the validity of mindset and its relationship to cognitive competence. It allows us to make recommendations for faculty and students to apply this understanding to their advantage.

 

References

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Heft, I., & Scharff, L. (2017). Aligning best practices to develop targeted critical thinking skills and habits. Journal of the Scholarship of Teaching and Learning, 17(3), 48–67. http://josotl.indiana.edu/article/view/22600

Isaacson, R. M., & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning: Academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning, 6(1), 39–55. Retrieved from https://eric.ed.gov/?id=EJ854910

Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269–1284. https://doi.org/10.1037/amp0000794