School of Languages and Linguistics - Research Publications

Search Results

Now showing 1 - 10 of 13
  • Item
    A comparative discourse study of simulated clinical roleplays in two assessment contexts: Validating a specific-purpose language test
    Woodward-Kron, R ; Elder, C (SAGE PUBLICATIONS LTD, 2016-04)
    The aim of this paper is to investigate from a discourse analytic perspective task authenticity in the speaking component of the Occupational English Test (OET), an English language screening test for clinicians designed to reflect the language demands of health professional–patient communication. The study compares the OET speaking sub-test roleplay performances of 12 doctors who were successful OET candidates with practice Objective Structured Clinical Examination (OSCE) roleplay performances of 12 international medical graduates (IMGs) preparing for the Australian Medical Council clinical examination. The premise for the comparison is that the OSCE roleplays can represent communication practices that are valued within the medical profession; therefore a finding of similarity in the discourse structure across the OET and the OSCE roleplays could be taken as supporting the validity of the OET as a tool for eliciting relevant communication skills in the medical profession. The study draws on genre theory as developed in Systemic Functional Linguistics (SFL) in order to compare the roleplay discourse structure and the linguistic realizations of the two tasks. In particular, it examines the role relationships of the participants (i.e. the tenor of the discourse), and the ways in which content is represented (i.e. the field of the discourse) by roleplay participants. The findings reveal some key similarities but also important differences. Although both tests inevitably fall short in terms of authentic representation of real world interactions, the findings suggest that the OET task, for a range of reasons including time allowances, training of test interlocutors, and the limits of contextual information provided to candidates, constrains candidate topic exploration and treatment negotiation, compared to the OSCE format. The paper concludes with proposals for mitigating these limitations in the interests of enhancing the OET's capacity to elicit more professionally relevant language and communication skills.
  • Item
    Perspectives from physiotherapy supervisors on student-patient communication
    Woodward-Kron, R ; van Die, D ; Webb, G ; Pill, J ; Elder, C ; McNamara, T ; Manias, E ; McColl, G (INT JOURNAL MEDICAL EDUCATION-IJML, 2012)
  • Item
    Health Professionals' Views of Communication: Implications for Assessing Performance on a Health-Specific English Language Test
    Elder, C ; Pill, J ; Woodward-Kron, R ; McNamara, T ; Manias, E ; Webb, G ; McColl, G (WILEY, 2012-06)
  • Item
    Developing and validating language proficiency standards for non-native English speaking health professionals
    Elder, C ; McNamara, T ; Woodward-Kron, R ; Manias, E ; McColl, G ; Webb, G ; Pill, J ; O'Hagan, S (ALTAANZ-ASSOC LANGUAGE TESTING & ASSESSMENT AUSTRALIA, 2013)
  • Item
    Measuring the Speaking Proficiency of Advanced EFL Learners in China: The CET-SET Solution
    Zhang, Y ; Elder, C (ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD, 2009)
  • Item
    Exploring the Utility of a Web-Based English Language Screening Tool
    Elder, C ; von Randow, J (ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD, 2008)
  • Item
    Evaluating rater responses to an online training program for L2 writing assessment
    Elder, C ; Barkhuizen, G ; Knoch, U ; von Randow, J (SAGE Publications, 2007-01-01)
    The use of online rater self-training is growing in popularity and has obvious practical benefits, facilitating access to training materials and rating samples and allowing raters to reorient themselves to the rating scale and self-monitor their behaviour at their own convenience. However, there has thus far been little research into rater attitudes to training via this modality and its effectiveness in enhancing levels of inter- and intra-rater agreement. The current study explores these issues in relation to an analytically scored academic writing task designed to diagnose undergraduates' English learning needs. Eight ESL raters scored a number of pre-rated benchmark writing samples online and received immediate feedback in the form of a discrepancy score indicating the gap between their own rating of the various categories of the rating scale and the official ratings assigned to the benchmark writing samples. A batch of writing samples was rated twice (before and after participating in the online training) by each rater, and multifaceted Rasch analyses were used to compare levels of rater agreement and rater bias (on each analytic rating category). Raters' views regarding the effectiveness of the training were also canvassed. While findings revealed limited overall gains in reliability, there was considerable individual variation in receptiveness to the training input. The paper concludes with suggestions for refining the online training program and for further research into factors influencing rater responsiveness.