Faculty of Education - Theses

The evolution of the OSCA-OSCE-Clinical Examination of the Royal Australasian College of Surgeons
Serpell, Jonathan William (2010)
The overall question this research aimed to address was how and why the Objective Structured Clinical Examination (OSCE) of the Royal Australasian College of Surgeons (RACS) evolved. A literature review and introduction are presented as background to the evolution of the Objective Structured Clinical Assessment (OSCA)-OSCE-Clinical Examination of RACS. A brief history of surgery and training, an outline of the functions of RACS, and a description of the evolution from the apprenticeship model to formal surgical training programs are given. A background to the purpose of assessment within RACS, and to formative and summative assessments, precedes a description of the Part 1 Examination of RACS. By 1985 it was realised that not all objectives of basic surgical training of RACS could be assessed in the Part 1 Examination using Multiple Choice Questions (MCQs); hence an OSCE clinical examination was introduced to assess clinical skills such as history taking, examination of patients, procedural skills and communication skills. A description of the Part 2 exit examination and of the relationship of RACS to universities and government is also given.

To undertake clinical examinations, clear definitions of clinical competence are required, and the differences between knowledge, the application of knowledge, competence and performance are considered and elucidated. These form the background to the clinical examination as a competency assessment, as opposed to a performance assessment in actual clinical practice. There follows a detailed analysis of some important components of any examination process, including: clear definition of the purpose of the assessment; blueprinting for the type and content of assessment; reliability; validity; educational impact or consequential validity; cost; and feasibility and acceptability. Reliability of different clinical examination types is considered in detail, and an outline of definitions and of the method of determining reliability is provided. Factors affecting reliability include: length of testing time; number of testing samples; number of examiners; standardised patient performance; and variation of examinees' performance across testing stations (inter-case variability or content specificity). Validity is examined to ensure an examination is actually testing what it is intended to test. Face and content validity, alignment between the curriculum, the references and the examination, and consequential validity (the effect of the examination on learning) are highlighted as important validity components. There follows an evaluation of rating scales for OSCE examinations, using checklists or global assessments, assessor training, and methods to determine standards and pass marks for the examination. This includes relative and absolute standards, Angoff's judgemental method (a worked sketch follows this summary), and the importance of examiner selection, standard-setting meetings and the determination of the standard.

To answer the overall question of how and why the OSCE clinical examination of RACS evolved, the mechanics of the RACS OSCE examination process were assessed. Twenty-one problem areas were identified, analysed and evaluated, and the OSCE clinical examination was assessed against the known background literature on reliability, validity, educational impact, acceptability, cost, blueprinting, alignment of curriculum, resources and examinations, utility of a database, standard setting, rating scales, and global competency versus checklist scores.
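To make the standard-setting discussion concrete, a hedged sketch of Angoff's judgemental method follows; the judge estimates below are invented for illustration and are not figures from the thesis. Each judge estimates, for each station, the probability that a borderline (just-competent) candidate would succeed, and the cut score is the average of all these estimates:

\[
\text{cut score} \;=\; \frac{1}{J S} \sum_{j=1}^{J} \sum_{s=1}^{S} p_{js},
\]

where $J$ is the number of judges, $S$ the number of stations, and $p_{js}$ is judge $j$'s estimate for station $s$. If, for example, three judges' mean estimates across a 16-station examination were 0.62, 0.58 and 0.66, the pass mark would be $(0.62 + 0.58 + 0.66)/3 = 0.62$, i.e. about 62% of the available marks. A modified Angoff procedure, as recommended later in the abstract, typically adds discussion among judges and review of performance data between rating rounds.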
Seven RACS-OSCE examinations were analysed in detail to elucidate the extent to which the RACS-OSCE matches the benchmark expectations in the areas outlined above. Some of the major problems identified with the original RACS-OSCE examination included: inappropriate inclusion of written questions; inability to rate overall or global performance as opposed to checklist rating; lack of an electronic question database, with reliance on hardcopy examinations; lack of statistical analysis of the examination; lack of consistent nomenclature; and lack of alignment between the curriculum, resources and references, and the examination questions. It was also determined that: examiner recruitment and examination logistics required review; the role of the Clinical Committee, which administers the OSCE, needed refining; reading lists needed updating; and the clinical examination needed to reflect the recently introduced nine RACS competencies.

These problems were addressed, leading to changes in the practice and evaluation of the examination process: competency scores for global assessment were introduced in the areas of counselling, procedure, examination and history taking; consistent clinical nomenclature was adopted; the 20-station examination (12 observed, 8 written) was replaced with a 16-station, fully observed examination and the written questions were discontinued; the role of the administering Clinical Committee was defined in detail; the process of creating new questions and stations was clarified, including the essential documentation for each station; recruitment and recognition of clinical examiners were instituted; the logistics of running the examination were refined; an electronic Clinical Committee database was established; and statistical analysis of the examination's performance was undertaken.

The overall reliability of the OSCE clinical examination of RACS across multiple examinations is in the order of 0.60-0.73, which is only a modest level. Removal of the written questions and the increase in observed clinical stations from 12 to 16 have not altered this reliability level. The most important factor affecting reliability is sample size, because broad sampling is needed to deal with the major problem of content specificity or inter-case variability. This suggests that an increased number of observed stations (perhaps up to 20) will be required to increase the reliability of the RACS Clinical OSCE (see the worked estimate below). Differences in reliability between geographic centres have been demonstrated, suggesting that reliability is related to the examiners, which raises the issues of examiner performance and training. The content validity of the OSCE is good, as evidenced by: the creation, review and revision of the content of the OSCE examination by surgical experts; the use of blueprinting and quality control by the Clinical Committee; and the statistical analysis of examination stations for correlation and reliability. In the consequential validity analysis of the examination, evidence was found that assessment drives learning. The examination was found to have good face validity and authenticity, and the OSCE was found to be feasible and acceptable. Standard setting still requires further development for the RACS-OSCE Clinical Examination, and it is recommended that a modified Angoff method be utilised. Overall, this thesis details the modification and evolution of the RACS-OSCE clinical examination over a sixteen-year period, demonstrating that it is robust, reliable and valid.
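The sample-size argument above can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric relationship for predicting how reliability changes when a test is lengthened; the abstract does not state which formula the thesis used, so the figures here are an illustrative assumption only. Taking a 16-station reliability of $\rho = 0.65$ (mid-range of the reported 0.60-0.73) and a lengthening factor $k = 20/16 = 1.25$:

\[
\rho_{k} \;=\; \frac{k\rho}{1 + (k-1)\rho}
\;=\; \frac{1.25 \times 0.65}{1 + 0.25 \times 0.65}
\;\approx\; 0.70 .
\]

Lengthening from 16 to 20 observed stations would therefore be expected to lift reliability only from about 0.65 to about 0.70, which is consistent with the abstract's point that content specificity demands substantially broader sampling.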