School of Languages and Linguistics - Research Publications
13 results

A comparative discourse study of simulated clinical roleplays in two assessment contexts: Validating a specific-purpose language test
Woodward-Kron, R; Elder, C (SAGE Publications Ltd, 2016-04)
The aim of this paper is to investigate, from a discourse analytic perspective, task authenticity in the speaking component of the Occupational English Test (OET), an English language screening test for clinicians designed to reflect the language demands of health professional–patient communication. The study compares the OET speaking sub-test roleplay performances of 12 doctors who were successful OET candidates with practice Objective Structured Clinical Examination (OSCE) roleplay performances of 12 international medical graduates (IMGs) preparing for the Australian Medical Council clinical examination. The premise for the comparison is that the OSCE roleplays can represent communication practices that are valued within the medical profession; a finding of similarity in the discourse structure across the OET and the OSCE roleplays could therefore be taken as supporting the validity of the OET as a tool for eliciting relevant communication skills in the medical profession. The study draws on genre theory as developed in Systemic Functional Linguistics (SFL) in order to compare the roleplay discourse structure and the linguistic realizations of the two tasks. In particular, it examines the role relationships of the participants (i.e. the tenor of the discourse) and the ways in which content is represented (i.e. the field of the discourse) by roleplay participants. The findings reveal some key similarities but also important differences. Although both tests inevitably fall short in terms of authentic representation of real-world interactions, the findings suggest that the OET task, for a range of reasons including time allowances, training of test interlocutors, and the limits of contextual information provided to candidates, constrains candidate topic exploration and treatment negotiation compared to the OSCE format. The paper concludes with proposals for mitigating these limitations in the interests of enhancing the OET's capacity to elicit more professionally relevant language and communication skills.

Perspectives from physiotherapy supervisors on student-patient communication
Woodward-Kron, R; van Die, D; Webb, G; Pill, J; Elder, C; McNamara, T; Manias, E; McColl, G (International Journal of Medical Education, 2012)

Health Professionals' Views of Communication: Implications for Assessing Performance on a Health-Specific English Language Test
Elder, C; Pill, J; Woodward-Kron, R; McNamara, T; Manias, E; Webb, G; McColl, G (Wiley, 2012-06)

Developing and validating language proficiency standards for non-native English speaking health professionals
Elder, C; McNamara, T; Woodward-Kron, R; Manias, E; McColl, G; Webb, G; Pill, J; O'Hagan, S (ALTAANZ, Association for Language Testing and Assessment of Australia and New Zealand, 2013)

Measuring the Speaking Proficiency of Advanced EFL Learners in China: The CET-SET Solution
Zhang, Y; Elder, C (Routledge Journals, Taylor & Francis Ltd, 2009)

Evaluating the Effectiveness of Heritage Language Education: What Role for Testing?
Elder, C (Informa UK Limited, 2005-03-15)

Diagnosing the Support Needs of Second Language Writers: Does the Time Allowance Matter?
Elder, C; Zhang, R; Knoch, U (2009)

Exploring the Utility of a Web-Based English Language Screening Tool
Elder, C; von Randow, J (Routledge Journals, Taylor & Francis Ltd, 2008)

Evaluating rater responses to an online training program for L2 writing assessment
Elder, C; Barkhuizen, G; Knoch, U; von Randow, J (SAGE Publications, 2007-01-01)
The use of online rater self-training is growing in popularity and has obvious practical benefits, facilitating access to training materials and rating samples and allowing raters to reorient themselves to the rating scale and to self-monitor their behaviour at their own convenience. However, there has so far been little research into rater attitudes to training via this modality or into its effectiveness in enhancing levels of inter- and intra-rater agreement. The current study explores these issues in relation to an analytically scored academic writing task designed to diagnose undergraduates' English learning needs. Eight ESL raters scored a number of pre-rated benchmark writing samples online and received immediate feedback in the form of a discrepancy score indicating the gap between their own rating of the various categories of the rating scale and the official ratings assigned to the benchmark writing samples. A batch of writing samples was rated twice (before and after participating in the online training) by each rater, and multifaceted Rasch analyses were used to compare levels of rater agreement and rater bias (on each analytic rating category). Raters' views regarding the effectiveness of the training were also canvassed. While findings revealed limited overall gains in reliability, there was considerable individual variation in receptiveness to the training input. The paper concludes with suggestions for refining the online training program and for further research into factors influencing rater responsiveness.