The Impact of Spelling Errors on Trained Raters' Scoring Decisions

Keywords: error correction, human rater scoring, L2 writing assessment, spelling errors


Second language (L2) writing assessments seldom allow a spellchecker and often impose a time limit. Naturally, test takers often submit responses with spelling errors. However, little is known about whether and how spelling errors in test taker responses affect trained raters' scoring decisions. In this study, we investigated the impact of spelling errors on trained raters' holistic evaluation of response quality. We selected 148 responses to four L2 writing tasks and created error-free versions of the responses by correcting the spelling errors in the originals. Both the original and corrected responses were randomly assigned to trained raters, who scored them according to the same holistic scoring rubrics. We compared the resulting scores of the original and corrected responses to gauge the impact of spelling errors. We also examined whether this impact varied across task types and spelling error characteristics. The results showed that correcting spelling errors led to an average score increase of more than half a point. The score gains from error correction varied across task types and with the quantity of corrected errors. These findings have multiple implications, including suggestions for rater training and assessment development.

Author Biographies

Ikkyu Choi, Educational Testing Service

Ikkyu Choi is a research scientist at Educational Testing Service. His research interests include second language development profiles, test taking processes, and scoring of constructed responses.

Yeonsuk Cho, Educational Testing Service

Yeonsuk Cho is a research scientist at Educational Testing Service, Princeton, NJ, USA. Her current work focuses on the development and validation of language tests for adults and young learners.