Melbourne Medical School Collected Works - Theses

Search Results

Now showing 1 - 3 of 3
  • Item
    Assessor grades and comments: private thoughts and public judgements
    Scarff, Catherine Elizabeth (2020)
    Assessment of medical trainees’ performance in the workplace aims to provide them with accurate and meaningful information and guidance on their learning and developing competence. In practice, however, these goals are not always achieved. Assessors can find it difficult to deliver clear and consistent assessment messages to a trainee, especially when the information or judgement they have to give is negative. While this can occur for many reasons, including disagreement with or uncertainty about assessment processes, the MUM effect (the widespread human tendency to keep mum about undesirable messages) may also be relevant. Using this framework, this thesis explores how and why reluctance to deliver negative assessment messages manifests in a medical specialty training setting in Australia. Literature reviews on the MUM effect and on trainee perspectives of assessment messages informed the design of a mixed methods study exploring the MUM effect in this context. The study involved four parts:
    - a questionnaire study of assessor self-reports of discomfort and MUM behaviours in assessment;
    - a questionnaire study of trainee perspectives of MUM behaviours by their assessors and their views of the clinical performance assessments;
    - a review of a sample of previously submitted assessment forms, comparing the messages sent by numerical ratings with those sent by written comments; and
    - an interview study of assessors to further understanding of their experiences with, and perspectives of, these assessment formats.
    The findings show that reluctance to deliver negative assessment messages, which can result in failure to give feedback, failure to fail and grade inflation, is a real and continuing issue in medical education. The MUM effect offers one explanation for its persistence despite the many methods that have been employed to address it. The study shows how the MUM effect permits an expanded view of the problem: assessor reluctance can lead to behaviours beyond the commonly reported failure to fail and grade inflation, including delay, avoidance and distortion of assessment information. Further, the results show that reluctance can affect the comments section of an assessment as well as the ratings, which have been the main focus to date. The study reveals the many pressures and dilemmas that assessors face in their role and, in particular, that the amount of discomfort they experience can affect their assessment behaviours and result in MUMing. It also shows that trainees are aware that their assessors sometimes keep mum, meaning the judgement delivered may differ from the assessor’s private thoughts on their performance. Potential solutions are multifactorial and include addressing perceptions of “failure” in clinical performance assessments and the responsibility that assessors feel for assessment decisions.
  • Item
    Exploring the qualities of Electronic Health Record medical student documentation
    Cheshire, Lisa (2016)
    Written communication within the health professions has changed rapidly over the last decade, and implementation of Electronic Health Records (EHRs) in health services is now widespread. Medical student teaching and learning of the skills specifically required for EHRs has lagged behind this implementation: very few original studies have focused on EHR skills, and there are no validated measures by which to assess any of the EHR skills students are expected to develop. This study explored the attributes of quality in EHR documentation recorded by medical students, where the purpose of the documentation is communication between health care professionals to share or transfer the clinical care of a patient. Validated instruments for measuring quality in physician EHR documentation have recently been published, one being the Physician Documentation Quality Instrument (PDQI-9). The purpose of this study was to explore the attributes of quality in EHR documentation written by first-year clinical medical students, building upon the existing literature. The PDQI-9 was used as a basis for defining the attributes of quality in EHR documentation, as a foundation for assessing and providing feedback on students’ documentation. With a focus on assessment, and on providing a content-validated test domain for assessing quality EHR documentation, Kane’s framework of validity was used to structure the study, and a mixed methods design was used to achieve the depth of exploration required to examine the performance of quality documentation fully. The study was conducted in two stages. In the first stage, an expert panel of assessors applied the PDQI-9 to existing EHR data recorded by first clinical year medical students in a graduate entry program; the assessors both scored the records and justified their grading. Descriptive statistics and thematic analysis were undertaken on the data collected, and the findings were triangulated with the literature review. The second stage employed explanatory semi-structured interviews with the expert assessors to better understand the findings of the first stage and to reach consensus on a test domain for assessing quality documentation recorded by medical students. Outcomes indicated that the PDQI-9 in its current format was not valid in a medical student setting; however, most of the attributes it assesses were deemed relevant and meaningful to assess if their interpretations were clarified. In addition, professionalism of documentation was regarded as a quality attribute. Consensus was reached on modifications that have the potential to improve the validity of the assessment of quality documentation recorded by medical students. Further studies are needed to complete Kane’s framework of validity for an assessment instrument and to collect evidence to broaden the validity of the scoring, the generalisation of the assessment items, the extrapolation to the real world, and the implications of this assessment for students and health services.
  • Item
    Post-test feedback: knowledge acquisition & learning behaviours
    Ryan, Anna Therese (2015)
    This study, situated within the conceptual framework of assessment for learning, was motivated by the desire to find a practical way of providing informative and useful post-assessment feedback to medical students. The work was informed by theories of test-enhanced learning and the principles of good feedback, and it employed mixed methods to explore the impact of the study interventions on learning behaviour and knowledge acquisition. Set within an authentic medical education setting, the study modelled an innovative method for producing and distributing individualised feedback reports following written multiple choice assessments. Year two students in a graduate entry medical program received four modified progress tests during their academic year and were randomised into three feedback groups. Feedback formats were selected to provide information about performance and guidance for learning without requiring release of test questions and answers. All feedback groups received test scores and some form of instruction-based elaboration: two groups were provided with variations of item-level verification and instruction-based elaboration, while the third group received normative data with general (rather than item-level) instruction-based elaboration. Data sources included study diaries, progress test scores, summative examination results, questionnaires and semi-structured interviews, and triangulation of these data was used to interpret the results. The outcomes suggest there was a learning benefit from the test and feedback interventions. This benefit appears to have been achieved through direct interaction with the tests, and through the ability to self-monitor levels of knowledge and evaluate the effectiveness of study activities. Behaviour changes identified as a result of the study interventions included general study prior to tests, increased study following tests and feedback, and altered study behaviours involving different content, techniques and study aids. Of the three feedback types provided, feedback consisting of grades, general instruction-based elaboration and normative comparison appeared to be the most easily interpreted and provided motivation for study, but resulted in inferior performance for students in the lower quartile of the cohort. The experiment demonstrates that it is feasible to produce and distribute individualised post-test feedback reports following paper-based clinical vignette MCQ tests within a clinical learning environment. It highlights the potential of regular formative assessment to play an important role in directing the focus of study and in clarifying expectations of study depth and breadth. Medical students are often considered a relatively homogeneous and high-achieving cohort, yet the results of this study suggest their responses to feedback are influenced both by the type of feedback information provided and by the students’ relative ability within their learning cohort.