What’s context got to do with it? Comparative difficulty of test questions influences metacognition and corrected scores for formula-scored exams

Michelle Arnold, Kristin Graham, Sinead Hollingworth-Hughes

    Research output: Contribution to journal › Article

    Abstract

    Summary: On formula-scored exams, students receive points and penalties for correct and incorrect answers, respectively, but they can avoid the penalty by withholding incorrect answers. However, test-takers have difficulty strategically regulating their accuracy and often set an overly conservative metacognitive response bias (e.g., Higham, 2007). The current experiments extended these findings by exploring whether the comparative difficulty of surrounding test questions (i.e., easy vs. hard), a factor unrelated to the knowledge being tested, impacts metacognitive response bias for medium-difficulty test questions. Comparative difficulty had no significant influence on participants' ability to choose correct answers for medium questions, but it did affect willingness to report answers and confidence ratings. This difference carried over to corrected scores (scores after penalties are applied) when comparative difficulty was manipulated within-subjects: Scores were higher in the hard condition. Results are discussed in terms of implications for interpreting formula-scored tests and underlying mechanisms of performance.
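
    For readers unfamiliar with the scoring rule, a standard number-right correction (the abstract does not specify the exact penalty scheme used in these experiments) is S = R - W/(k - 1), where R and W are the numbers of correct and incorrect responses and k is the number of answer options per question; omitted items incur no penalty. For example, on a four-option test, 10 correct, 3 incorrect, and 2 withheld answers yield S = 10 - 3/(4 - 1) = 9, so withholding an answer that would have been wrong protects the score.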

    Original language: English
    Pages (from-to): 146-155
    Number of pages: 10
    Journal: Applied Cognitive Psychology
    Volume: 31
    Issue number: 2
    Early online date: 2017
    DOIs
    Publication status: Published - Mar 2017
