Faculty of Education - Theses

A multi-source measurement approach to the assessment of higher order competencies
Connally, Justin Andrew (2004)
This study investigated the application of a multi-source measurement approach to the assessment of higher order competencies within the public services industry. While it is well documented that assessing management and other higher order competencies is inherently difficult and requires the gathering of evidence from a range of sources, assessment practitioners have had limited guidance on techniques for synthesising that evidence and making competence decisions. The primary aim of this investigation was therefore to develop and validate strategies for synthesising multiple sources of evidence to inform judgements of workplace competence. The methodology integrates developments in two fields of study: performance appraisal and psychometrics. Seventy-five candidates were assessed against the unit of competency Facilitate People Management using a combination of assessment methods. Candidates first completed a self-assessment; then, based on the 360-degree feedback model widely used in performance appraisals, observer reports were developed and distributed to supervisors, peers and subordinates for completion. These techniques were used in conjunction with more traditional competency-based assessment (CBA) methods (e.g., interview, portfolio). This approach allowed multiple sources of evidence to be gathered across a range of contexts and over an extended period of time.

A multi-faceted Rasch model was used successfully to combine ratings obtained from observer reports and candidates' self-assessments. This analysis revealed no real differences in rating severity between observer groups. Further, the modal (most frequently occurring) rating was found to be representative of the complete set of observer ratings. A Rasch partial credit model was then used to synthesise ratings obtained from candidates' self-assessments, observer reports and interviews. This analysis also allowed variations in candidate competence and assessment-method difficulty to be investigated; only marginal differences in assessment-method difficulty were found. Rasch analysis techniques proved useful for synthesising evidence gathered from a range of sources and can be used to inform the competence decision-making process. Significantly, the present study found that an empirically derived variable, based on Rasch analysis, corresponded to a hypothesised construct based on expert judgement. The implications of the research findings for theory, policy and assessment practice are discussed.
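The abstract's finding that the modal rating is representative of the complete set of observer ratings can be illustrated with a minimal sketch. The observer groups, rating scale and values below are hypothetical examples, not data from the thesis:

```python
from statistics import multimode

# Hypothetical observer ratings for one candidate on a single
# competency element, on an illustrative 1-4 scale.
ratings = {
    "supervisor": [3, 4, 3],
    "peer": [3, 3, 4, 3],
    "subordinate": [2, 3, 3],
}

# Pool all observer ratings and take the modal (most frequently
# occurring) value as the summary judgement for this element.
pooled = [r for group in ratings.values() for r in group]
modes = multimode(pooled)  # returns a list, in case of ties
modal_rating = modes[0]
print(modal_rating)  # → 3
```

In practice the thesis combined such ratings with a multi-faceted Rasch model rather than a simple mode, but the sketch shows the aggregation the modal-rating finding refers to.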