Examining Automated Corrective Feedback in EFL Writing Classrooms: A Case Study of Criterion
Author: Hoang, Thi Linh Giang
Affiliation: School of Languages and Linguistics
Document Type: PhD thesis
Access Status: Open Access
© 2019 Thi Linh Giang Hoang
Automated writing evaluation (AWE) systems are increasingly used in classroom settings to provide formative feedback to learners. Yet there is little research evidence on the impact of automated feedback on accuracy development or on writing and revision practices, and a lack of longitudinal studies of learners’ engagement with automated feedback. This research examines the value of the automated corrective feedback (ACF) generated by ETS Criterion as a learning and assessment tool in the EFL writing classroom. Specifically, it seeks to answer questions about (1) the nature and accuracy of Criterion ACF; (2) students’ engagement and perceptions; and (3) changes in their accuracy following the use of such feedback. The interaction between individual learner factors and learners’ responses to Criterion ACF is also investigated, in search of explanatory factors for selected cases’ engagement and observed accuracy development over a five-month period. The study adopted a pre-post quasi-experimental design with a sample of 104 English majors divided into two groups: experimental and comparison. During three practice sessions, the comparison group wrote their essays on paper and submitted them to the instructor for feedback. The experimental group, however, composed their writing on Criterion and revised their drafts in response to its feedback before submitting the revised drafts to the teacher for feedback. Besides the test essays, data included first and revised drafts from the Criterion practice sessions, think-aloud protocols recorded with 14 students as they revised essays using Criterion corrective feedback, stimulated recall interviews, and end-of-term focus group interviews. Students’ changes in writing accuracy were calculated through error analysis of the test scripts of the two groups. These results were triangulated with qualitative data on how the students engaged with the feedback from Criterion and on their revision practices.
Further triangulation came from students’ perceptions of automated feedback expressed in the stimulated recall and focus group interviews. The validation of Criterion ACF as a learning and assessment tool in the EFL writing classroom reveals a mixture of supporting and rebutting evidence. Criterion was able to address EFL learners’ needs regarding surface-level errors, but it lacked coverage of some major issues in the students’ L2 writing. It can be praised for facilitating revision and self-regulatory writing strategies and for triggering noticing among the students, but Criterion’s approach to feedback generation was not pedagogically grounded, resulting in a lack of meaningful engagement with the feedback. Overall, despite students’ positive feelings about Criterion ACF, reservations about its value remain, given learners’ middling revision success rates and the absence of significant intervention or retention effects of Criterion ACF on their accuracy gains over the studied period. The findings extend our understanding of students’ engagement with and use of automated corrective feedback. The study’s main implications relate to formative feedback practices in the classroom, including the need to supplement Criterion automated feedback with teacher feedback to support L2 writing instruction and classroom-based assessment. In addition, Criterion corrective feedback should be made more adaptive, focusing learners’ attention on the issues relevant to their developmental stage.
Keywords: Criterion; Automated corrective feedback; Automated writing evaluation; Engagement with feedback; L2 writing