School of Languages and Linguistics - Research Publications
Search Results (showing 1 - 10 of 13)
- Measuring the Speaking Proficiency of Advanced EFL Learners in China: The CET-SET Solution. Zhang, Y.; Elder, C. (Routledge Journals, Taylor & Francis Ltd, 2009)
- Implicit and Explicit Knowledge of an L2 and Language Proficiency. Elder, C.; Ellis, R. (Multilingual Matters, 2009)
- Evaluating the Effectiveness of Heritage Language Education: What Role for Testing? Elder, C. (Informa UK Limited, 2005-03-15)
- Validating a test of metalinguistic knowledge. Elder, C.; Ellis, R.; Loewen, S.; Erlam, R.; Philp, J.; Reinders, H. (Multilingual Matters, 2009-01-01)
- Diagnosing the Support Needs of Second Language Writers: Does the Time Allowance Matter? Elder, C.; Zhang, R.; Knoch, U. (2009)
- Exploring the Utility of a Web-Based English Language Screening Tool. Elder, C.; von Randow, J. (Routledge Journals, Taylor & Francis Ltd, 2008)
- Explicit language knowledge and focus on form: options and obstacles for TESOL teacher trainees. Elder, C.; Erlam, R.; Philp, J. (Oxford University Press, 2007)
- Planning for test performance: Does it make a difference? Elder, C.; Iwashita, N. (John Benjamins Publishing Company, 2005-01-01)
- Evaluating rater responses to an online training program for L2 writing assessment. Elder, C.; Barkhuizen, G.; Knoch, U.; von Randow, J. (SAGE Publications, 2007-01-01)
  Abstract: The use of online rater self-training is growing in popularity and has obvious practical benefits: it facilitates access to training materials and rating samples, and it allows raters to reorient themselves to the rating scale and self-monitor their behaviour at their own convenience. However, there has so far been little research into rater attitudes towards training via this modality, or into its effectiveness in enhancing levels of inter- and intra-rater agreement. The current study explores these issues in relation to an analytically scored academic writing task designed to diagnose undergraduates' English learning needs. Eight ESL raters scored a set of pre-rated benchmark writing samples online and received immediate feedback in the form of a discrepancy score indicating the gap between their own rating on each category of the rating scale and the official ratings assigned to the benchmark samples. A batch of writing samples was rated twice by each rater (before and after participating in the online training), and multi-faceted Rasch analyses were used to compare levels of rater agreement and rater bias on each analytic rating category. Raters' views on the effectiveness of the training were also canvassed. While the findings revealed limited overall gains in reliability, there was considerable individual variation in receptiveness to the training input. The paper concludes with suggestions for refining the online training program and for further research into the factors influencing rater responsiveness.