Predictors of college retention/success.



In a recent investigation conducted with Randy Isaacson and Tara Beziat, we found that high school GPA and SAT scores did not predict retention as well as first-semester college GPA did. First-semester GPA was a good predictor of both retention and student progression. This is not surprising. What is important is that individual differences in students’ knowledge monitoring accuracy were correlated with student GPA, and that knowledge monitoring accuracy increased following a semester of simple training.

This article is accessible from the following links:

http://nrmera.org/researcher.html 

http://nrmera.org/PDF/Researcher/Researcherv26n1Beziat_et%20al.pdf

5 thoughts on “Predictors of college retention/success.”

  1. Lauren Scharff

    Great study! But, as a fellow researcher interested in metacognition, I was wondering about the Knowledge Monitoring Assessment (KMA) that you used, as I am interested in using something like that in some of my projects. I didn’t notice a reference and there were no details about the measure given. How many questions does it have? What type of questions? How was it scored?

  2. Chris Was

    The knowledge monitoring assessment (KMA) used in this study was adapted from a simple KMA described by Tobias and Everson (2009). Details of our methodology can be found in Isaacson and Was (2010) and Hartwig, Was, Dunlosky, and Isaacson (2011). The basic task is to present students/participants with a list of vocabulary items and have them report whether or not each item is known (“do you know the meaning of this word?” yes/no). After knowledge judgments have been made on all items, the participants are given a multiple-choice test on the same vocabulary. This produces a 2 (judgment: known/unknown) x 2 (MC: correct vs. incorrect) contingency table. The gamma coefficient is the typical measure of accuracy, but there are many different ways to score the contingency table data, each with a different interpretation (cf. Schraw, 2009).

    In our research we have been using a 50-item vocabulary test: 33 of the items were taken from educational psychology textbooks (as they are relevant to the participant sample we have used) and 17 were general vocabulary items.
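    For readers who want to score their own KMA data, the gamma coefficient for a 2x2 table reduces to a simple ratio of concordant to discordant pairs. The sketch below is only illustrative; the function name and cell counts are not from the article, and the article itself notes that other scoring schemes exist (cf. Schraw, 2009).

    ```python
    def gamma_2x2(known_correct, known_incorrect, unknown_correct, unknown_incorrect):
        """Goodman-Kruskal gamma for a 2x2 knowledge-monitoring table.

        Rows: the student's judgment (known vs. unknown).
        Columns: the multiple-choice outcome (correct vs. incorrect).
        Gamma ranges from -1 to +1; +1 means judgments perfectly
        track performance, 0 means judgments carry no information.
        """
        # For a 2x2 table, concordant pairs come from the "known & correct"
        # and "unknown & incorrect" cells; discordant pairs from the other two.
        concordant = known_correct * unknown_incorrect
        discordant = known_incorrect * unknown_correct
        if concordant + discordant == 0:
            return float("nan")  # undefined when one row or column is empty
        return (concordant - discordant) / (concordant + discordant)
    ```

    For example, a student who judged 25 words known and answered all 25 correctly, and judged 25 unknown and missed all 25, would score gamma = 1.0 (perfect monitoring accuracy), while a student whose judgments were unrelated to performance would score near 0.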

    Hartwig, M., Was, C. A., Dunlosky, J., & Isaacson, R. M. (2011). General knowledge monitoring as a predictor of in-class exam performance. British Journal of Educational Psychology. doi: 10.1111/j.2044-8279.2011.02038.x

    Isaacson, R. M., & Was, C. A. (2010). Believing you’re correct vs. knowing you’re correct: A significant difference? The Researcher, 23(1), 1-12.

    Tobias, S., & Everson, H. (2009). The importance of knowing what you know: A knowledge monitoring framework for studying metacognition in education. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of Metacognition in Education. (pp. 107-128). New York, NY: Routledge.

    Schraw, G. (2009). Measuring metacognitive judgments. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of Metacognition in Education. (pp. 415-429). New York, NY: Routledge.

  4. John Draeger

    I am intrigued by the suggestion that benefits can be achieved through “limited” and “simple” training. Could you point me in the direction of best practices for KMA training?

  5. Chris Was

    John,

    Randy Isaacson and I published an article in the National Teaching and Learning Forum (Isaacson, R. M., & Was, C. A. (2010). An educational psychology curriculum to teach metacognition. The National Teaching and Learning Forum) that describes practices we have both used in our classes to build a metacognitive “habit of mind” in our students. Specifically, our goal has always been to help students better monitor their knowledge.

    Other researchers examining whether knowledge monitoring can be improved include Doug Hacker and his colleagues, and Tyler Miller and Lisa Geraci.
