School of Languages and Linguistics - Theses

  • The role of test-taker feedback in the validation of a language proficiency test
    Brown, Anne (1991)
    The development of a language test consists of a series of stages, one of which involves the trialling of the test on a suitable selection of candidates. While the performance of these candidates provides the test developer with information on the adequacy both of the test as a whole and of individual test items, feedback from the candidates in the form of comments on the test, a potentially valuable source of information, is generally little utilised. This thesis investigates the role of test-taker feedback in test development. It focuses specifically on feedback collected during the trialling of a test of Japanese for the tourism and hospitality industry, a tape-based test of oral/aural proficiency. Trial candidates completed a post-test questionnaire, providing reactions both to the test as a whole and to task types and individual test items. In the first part of the study the reactions of test-takers were examined to determine whether they varied according to characteristics of the test-taker, including gender, amount of study, type of course undertaken (specific purpose or general), relevant occupational experience and proficiency. Following this, the value of the test-taker feedback in the test revision process was examined. This feedback was found to be of value in test item revision, in the refinement of the student handbook and the test rubric, and in providing evidence of test validity. The incorporation of test-taker feedback into the test development process was felt to contribute not only to a better product but also to one which candidates themselves felt to be a fair and accurate measure of their proficiency.
  • The importance and effectiveness of moderation training on the reliability of teacher assessments of ESL writing samples
    McIntyre, Philip N. (1993)
    This thesis reports the findings of a study of the inter-rater reliability of assessments of ESL writing by teachers in the Australian Adult Migrant Education Program, using the ASLPR, a language proficiency scale used throughout the program. The study investigates the individual ratings assigned to 15 writing samples by 83 teachers, both before and after training aimed at moderating raters' perceptions of the descriptors in the scale by reference to features of other 'anchor' writing samples. The thesis argues the necessity for on-going training of assessors of ESL writing at a time when the program is changing from the assessment of language proficiency to the assessment of language competencies, since both forms of assessment increasingly have consequences which affect the lives of candidates. The importance of and necessity for moderation training are established by reference to problems of validity in the scale itself and in its use in the program, and by reference to the literature on assessor training and on features of writing which influence rater judgements. The findings indicate that training substantially increases the inter-rater reliability of the subjects, reducing the range of levels assigned to the samples and increasing the percentage of ratings at the mode (most accurate) level and at the mode +/- 1 level (an allowance for 'error' due to the subjective nature of the assessment). The thesis concludes that on-going training is effective in achieving greater consensus, i.e. inter-rater reliability, amongst the assessors, but suggests that variability needs to be reduced further and offers suggestions for further research involving other assessors and variables.
  • The predictive validity of the IELTS and TOEFL: a comparison
    Broadstock, Harvey James (1994)
    This study compared two groups of overseas students who entered Melbourne and Monash universities, both in Melbourne, in semester 1, 1993: one group entered on the basis of an IELTS score, the other on the basis of a TOEFL score. Their academic performance at the end of semester 1, 1993 was compared, as were the predictive validity coefficients of the two tests. Differences were minimal, with a slight tendency for the TOEFL to correlate more strongly than the IELTS with undergraduate academic performance. The assumption, made by admissions officers who use the two tests in admissions decisions, that the two tests are equivalent in their predictive validity was not refuted.
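
The McIntyre abstract above reports rater agreement as the spread of levels assigned to each writing sample and the percentage of ratings falling at, or within one level of, the modal rating. The abstract does not give the computation, so the sketch below is only an illustration of that kind of tally: the ratings are invented and ASLPR levels are replaced with plain integer band indices.

```python
from collections import Counter

def mode_agreement(ratings):
    """Return (% of ratings at the modal level, % within one level of the mode)."""
    counts = Counter(ratings)
    mode = counts.most_common(1)[0][0]               # most frequently assigned level
    at_mode = sum(1 for r in ratings if r == mode)
    within_one = sum(1 for r in ratings if abs(r - mode) <= 1)
    n = len(ratings)
    return 100.0 * at_mode / n, 100.0 * within_one / n

# Invented ratings of a single writing sample by the same raters
# before and after a moderation-training session (integer band indices).
before = [2, 3, 3, 4, 4, 4, 5, 5, 6, 2, 3, 4, 4, 5, 3]
after  = [3, 4, 4, 4, 4, 4, 4, 5, 4, 3, 4, 4, 4, 5, 4]

for label, ratings in (("before training", before), ("after training", after)):
    at_mode, near_mode = mode_agreement(ratings)
    spread = max(ratings) - min(ratings)
    print(f"{label}: range = {spread} levels, "
          f"{at_mode:.0f}% at mode, {near_mode:.0f}% at mode +/- 1")
```

On figures of this kind, effective training would show up as a narrower range of assigned levels and higher mode and mode +/- 1 percentages, which is the pattern the thesis reports.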
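
In the Broadstock abstract, a predictive validity coefficient is, in the usual sense of the term, a correlation between the admission test score and a later measure of academic performance, computed separately for each test's entry group. The exact analysis is not given in the abstract; the sketch below, with invented scores and grade averages, shows only the general shape of such a comparison.

```python
from statistics import correlation   # Pearson r; requires Python 3.10+

# Invented (admission test score, end-of-semester grade average) pairs.
ielts_group = [(6.0, 58), (6.5, 63), (6.5, 70), (7.0, 66), (7.5, 74), (8.0, 71)]
toefl_group = [(550, 60), (570, 62), (580, 69), (600, 68), (620, 75), (640, 73)]

for name, group in (("IELTS", ielts_group), ("TOEFL", toefl_group)):
    scores = [s for s, _ in group]
    grades = [g for _, g in group]
    r = correlation(scores, grades)   # predictive validity coefficient for this group
    print(f"{name}: r = {r:.2f} (n = {len(group)})")
```

The comparison described in the study is between the two groups' coefficients and academic results, not between raw IELTS and TOEFL scores, which sit on different scales.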