Faculty of Education - Theses

  • To what extent is the Script Concordance Test a valid measure of clinical reasoning for Advanced Paediatric Life Support Training?
    Stanford, Jane Susan (2018)
    Although Advanced Paediatric Life Support (APLS) and other Structured Resuscitation Training (SRT) programs receive widespread professional endorsement, studies have shown only limited, short-term change in clinicians’ knowledge, skills and behaviour. This could be because SRT outcomes (knowledge, skills and an approach to care) are measured in isolation, which is not how the content of these programs is applied in the clinical context. Script Concordance Tests (SCTs) have been validated as a measure of knowledge and clinical reasoning following clinical placement training programs; however, they have not been validated as an assessment tool for SRT programs. This project is a validation study of an SCT for the APLS program. Guided by the frameworks of Messick and Kane, the study created and piloted an APLS SCT to collect qualitative and quantitative data for a validation argument. Despite small participant numbers, psychometric analysis indicated that the APLS SCT, as designed, performed in a similar manner to SCTs created for other contexts. Larger studies with APLS learners will be required to further validate the SCT for the APLS context, but this preliminary work indicated positive results.
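The abstract does not spell out how SCT responses are scored, so the sketch below illustrates the aggregate scoring method commonly used for Script Concordance Tests (credit proportional to expert-panel concordance); the function name and panel data are hypothetical, not taken from the thesis.

```python
# Aggregate SCT scoring: a minimal sketch of the commonly used
# concordance-based method. Panel data below are invented for illustration.
from collections import Counter

def sct_item_score(examinee_answer: int, panel_answers: list[int]) -> float:
    """Score one SCT item against an expert panel.

    The modal panel answer earns full credit (1.0); any other answer earns
    the fraction of panellists who chose it relative to the modal count.
    """
    counts = Counter(panel_answers)
    modal_count = max(counts.values())
    return counts.get(examinee_answer, 0) / modal_count

# Hypothetical panel of 10 answering on a 5-point scale (-2 .. +2):
panel = [-1, -1, -1, -1, 0, 0, 0, 1, 1, 2]
print(sct_item_score(-1, panel))  # 1.0  (modal answer)
print(sct_item_score(0, panel))   # 0.75 (3 of the modal 4)
print(sct_item_score(2, panel))   # 0.25
```

Under this scheme, partial credit reflects the legitimate variability in expert judgement that SCTs are designed to capture.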
  • Exploring the qualities of Electronic Health Record medical student documentation
    Cheshire, Lisa (2016)
    Written communication within the health professions has changed rapidly over the last decade. Implementation of Electronic Health Records (EHRs) in health services is now widespread, but medical student teaching and learning of the skills specifically required for EHRs has lagged behind the implementation. Very few original studies have focused on EHR skills, and there are no validated measures by which to assess any of the EHR skills students are expected to develop. This study explored the attributes of quality EHR documentation recorded by medical students, where the purpose of that documentation is communication between health care professionals to share or transfer the clinical care of a patient. Validated instruments for measuring quality in physician EHR documentation have recently been published, one being the Physician Documentation Quality Instrument (PDQI-9). Building on this literature, the study used the PDQI-9 as a basis for defining the attributes of quality in EHR documentation written by first-year clinical medical students, as a foundation for assessing and providing feedback on their documentation. With the focus on assessment, and on providing a content-validated test domain for assessing quality EHR documentation, we used Kane’s framework for validity to structure the study, and a mixed-methods design to examine the performance of quality documentation in sufficient depth. The study was conducted in two stages. In the first stage, an expert panel of assessors applied the PDQI-9 to existing EHR data recorded by first clinical year medical students in a graduate-entry program; the assessors both scored the records and justified their grading. Descriptive statistics and thematic analysis were undertaken on the data collected, and the findings were triangulated with the literature review. The second stage employed explanatory semi-structured interviews with the expert assessors to better understand the findings of the first stage and to reach consensus on a test domain for assessing the quality of documentation recorded by medical students. Outcomes indicated that the PDQI-9 in its current format was not valid in a medical student setting; however, most of the attributes assessed by the PDQI-9 were deemed relevant and meaningful to assess if their interpretations were clarified. In addition, professionalism of documentation was regarded as a quality attribute. Consensus was reached on modifications that have the potential to improve the validity of the assessment of quality documentation recorded by medical students. Further studies are needed to complete Kane’s framework of validity for an assessment instrument, collecting evidence to support the validity of the scoring, the generalisation of the assessment items, the extrapolation to the real world and the implications of this assessment for students and health services.
  • The evolution of the OSCA-OSCE-Clinical Examination of the Royal Australasian College of Surgeons
    Serpell, Jonathan William (2010)
    The overall question this research aimed to address was how and why the Objective Structured Clinical Examination (OSCE) of the Royal Australasian College of Surgeons (RACS) evolved. A literature review and introduction are presented as background to the evolution of the Objective Structured Clinical Assessment (OSCA)-OSCE-Clinical Examination of RACS. A brief history of surgery and surgical training, an outline of the functions of RACS, and a description of the evolution from the apprenticeship model to formal surgical training programs are given. A background to the purpose of assessment within RACS, and to formative and summative assessment, precedes a description of the Part 1 Examination of RACS. By 1985 it was realised that not all objectives of basic surgical training could be assessed in the Part 1 Examination using Multiple Choice Questions (MCQs); hence the introduction of an OSCE Clinical Examination to assess clinical skills such as history taking, examination of patients, procedural skills and communication skills. A description of the Part 2 exit examination and of the relation of RACS to universities and government is given. To undertake clinical examinations, clear definitions of clinical competence are required, so the differences between knowledge, the application of knowledge, competence and performance are considered and elucidated. These form the background to the clinical examination as a competency assessment, as opposed to a performance assessment in actual clinical practice. A detailed analysis follows of the important components of any examination process: clear definition of the purpose of the assessment; blueprinting for type and content of assessment; reliability; validity; educational impact or consequential validity; cost; and feasibility and acceptability. Reliability of different clinical examination types is considered in detail, with an outline of definitions and of the method of determining reliability. Factors affecting reliability include: length of testing time; number of testing samples; number of examiners; standardised patient performance; and variation of examinees across testing stations (inter-case variability or content specificity). Validity is examined to ensure an examination is actually testing what it is intended to test. Face and content validity, alignment between the curriculum, the references and the examination, and consequential validity (the effect of the examination on learning) are highlighted as important validity components. Rating scales for OSCE examinations, using checklists or global assessments, assessor training, and methods to determine standards and pass marks are then evaluated, including relative and absolute standards, Angoff’s judgemental method, and the importance of examiner selection, standard-setting meetings, and the determination of the standard. To answer the overall question of how and why the RACS OSCE evolved, the mechanics of the RACS OSCE examination process were assessed. Twenty-one problem areas were identified, analysed and evaluated, and the OSCE clinical examination was assessed against the known background literature on reliability, validity, educational impact, acceptability, cost, blueprinting, alignment of curriculum, resources and examinations, utility of a database, standard setting, rating scales and global competency versus checklist scores.
Seven RACS-OSCE examinations were analysed in detail to elucidate the extent to which the RACS-OSCE matches the benchmark expectations in the areas outlined above. Major problems identified with the original RACS-OSCE examination included: inappropriate inclusion of written questions; inability to rate overall or global performance as opposed to checklist rating; lack of an electronic question database and reliance on hardcopy examinations; lack of statistical analysis of the examination; lack of consistent nomenclature; and lack of alignment between curriculum, resources and references, and examination questions. It was also determined that: examiner recruitment and examination logistics required review; the role of the Clinical Committee, which administers the OSCE, needed refining; reading lists needed updating; and the clinical examination needed to reflect the recently introduced nine RACS competencies. These problems were addressed, leading to changes in the practice and evaluation of the examination process: competency scores for global assessment were introduced in the areas of counselling, procedure, examination and history taking; consistent clinical nomenclature was introduced; the 20-station examination (12 assessed, 8 written) was replaced with a 16-station, all-assessed examination and the written questions were discontinued; the role of the administering Clinical Committee was defined in detail; the process of new question and station creation was clarified, including essential documentation for each station; recruitment and recognition of clinical examiners was instituted; the logistics of running the examination were refined; an electronic Clinical Committee database was established; and statistical analysis of the performance of the examination was undertaken. The overall reliability of the RACS OSCE clinical examination across multiple examinations is of the order of 0.60-0.73, which is only a modest level. Removal of the written questions and increasing the observed clinical stations from 12 to 16 has not altered this reliability level. The most important factor affecting reliability is sample size, which addresses the major problem of content specificity or inter-case variability. This suggests an increased number of observed stations (perhaps up to 20) will be required to increase the reliability of the RACS Clinical OSCE. Differences in reliability across geographic centres have been demonstrated, suggesting an effect of the examiners, which raises the issues of examiner performance and training. The content validity of the OSCE is good, as evidenced by: surgical experts creating, reviewing and revising the content of the examination; the use of blueprinting and quality control by the Clinical Committee; and the statistical analysis of examination stations for correlation and reliability. Evidence that assessment drives learning was found in the consequential validity analysis, and the examination was found to have good face validity and authenticity. The OSCE was found to be feasible and acceptable. Standard setting still requires further development for the RACS-OSCE Clinical Examination, and it is recommended that a modified Angoff method be utilised. Overall, this thesis details the modification and evolution of the RACS-OSCE clinical examination over a sixteen-year period, demonstrating that it is robust, reliable and valid.
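The suggestion that more observed stations would lift reliability can be illustrated with the Spearman-Brown prophecy formula. The sketch below is illustrative only, assuming a Cronbach-type coefficient and a 16-station baseline of 0.65 (within the reported 0.60-0.73 range); these values are assumptions, not figures taken from the thesis.

```python
# Spearman-Brown prophecy formula: a minimal sketch, assuming a baseline
# reliability of 0.65 for a 16-station examination (illustrative only).
def spearman_brown(reliability: float, k: float) -> float:
    """Projected reliability when test length is multiplied by factor k."""
    return k * reliability / (1 + (k - 1) * reliability)

# Extending 16 observed stations to 20 (k = 20/16 = 1.25):
print(round(spearman_brown(0.65, 20 / 16), 2))  # ~0.70
# Reaching 0.80 from 0.65 would need k of about 2.2, i.e. roughly 35 stations:
print(round(spearman_brown(0.65, 35 / 16), 2))  # ~0.80
```

This makes concrete why modest increases in station numbers yield only modest reliability gains when inter-case variability dominates.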
  • Descriptive feedback: informing and improving practice in an assessment as learning context
    Dinneen, Christopher Damian (2010)
    This practitioner inquiry study explores the use of descriptive feedback as a means of ‘constructing the way forward’ (Tunstall & Gipps, 1996) for the learning of six students in a Year 2 classroom. The study was undertaken in a large independent school in a Victorian country town. A qualitative methodology was adopted for the study’s purpose of gaining insights into the interplay of factors that determined the students’ uptake of descriptive feedback. This included their responsiveness to ‘Thinkit-tickets’, a self-assessment strategy developed by the researcher to promote reflective thinking and provide evidence of the students’ affective and cognitive responses to their learning. The data collection methods involved: audio-taped semi-structured interviews; audio-taped in situ descriptive feedback conversations with students; non-participant observations conducted by an educational consultant to the school; a teacher journal; and students’ written self-reflections (Thinkit-tickets). The study revealed that unambiguous immediate feedback, clear learning criteria and understanding each student’s individual perceptions of a given task (Muis, 2007) were key factors in determining the effectiveness of descriptive feedback. The data highlighted the connection between descriptive feedback and a classroom culture based on dialogic teaching (Alexander, 2003) and reflective thinking (Ritchhart, 2002), which together promote a self-regulated, metacognitive approach to learning. The study also identified the challenge of breaking down the unequal power relationship between teacher and students in order to facilitate learning through the co-construction of next-step actions on the way to achieving specified goals. In conclusion, the study reiterates the call for further empirical research on the use of descriptive feedback within formative assessment practices (Black & Wiliam, 1998; Brinko, 1993; Hattie & Timperley, 2007; Rodgers, 2006). Recommendations are made for closer investigation of how engaging with dialogic teaching can generate more effective descriptive feedback practices that build learner agency.
  • Examining the alignment of the intended curriculum and performed curriculum in primary school mathematics and integrated curriculum
    Ziebell, Natasha (2010)
    Curriculum alignment can be defined as the degree to which the intended curriculum (standards and teaching plans) and the performed curriculum (instruction and assessment) are in agreement with one another. Curriculum alignment research indicates that a coherent, or a well-aligned system has a positive effect on student achievement (English, 2000; Squires, 2009; Webb, 1997). This research focuses on Webb’s (1997) criteria for alignment of expectations and assessment, which provides a comprehensive framework that could be adapted for use with the current Victorian Essential Learning Standards curriculum. Webb’s (1997) criteria focus on alignment between curriculum and assessment. This research builds on Webb’s (1997) model by highlighting the importance of the inclusion of ‘instruction’ or the ‘performed curriculum’ in studies determining curricular alignment. The data for this comparative case study was collected from two Grade 3 classrooms. Data collection methods that were used were direct observation, interviews (pre and post observation), audio recording of lessons and document analysis. The data was analysed using an adaptation of six criteria for alignment focusing on ‘Content’ and ‘Pedagogical Implications’. The criteria focusing on ‘Content’ are Categorical Concurrence, Depth of Knowledge Consistency, Range of Knowledge Correspondence and Dispositional Consonance. The criteria focusing on ‘Pedagogical Implications’ are Effective Classroom Practices and Use of Technology, Materials and Tools. The findings of the study indicate that qualitative methods can be applied successfully in a study of curriculum alignment. This study found that data is readily available in the primary school setting which can be used with an adaptation of Webb’s (1997) criteria to determine the level of curriculum alignment. A key finding demonstrated that the process of planning occurs through a series of interpretations of the curriculum performed at various levels within the school, year level team or by the individual teacher. Throughout the process of planning and implementation of curriculum, the results showed that the teachers themselves customised the prescribed curriculum in response to their own priorities and the content that they felt reflected the needs of their students. It is a recommendation of this study that further research needs to be concerned with planning, pedagogical and assessment practices that effectively strengthen curriculum alignment. The benefits of further research would enable the identification of practices that improve alignment and could inform targeted, appropriate and effective professional development for practicing teachers.