Minerva Elements Records

Search Results

Now showing 1 - 10 of 12
  • Item
    From Traditional to Programmatic Assessment in Three (Not So) Easy Steps
    Ryan, A ; Judd, T (MDPI, 2022-07)
    Programmatic assessment (PA) has strong theoretical and pedagogical underpinnings, but its practical implementation brings a number of challenges, particularly in traditional university settings involving large cohort sizes. This paper presents a detailed case report of an in-progress programmatic assessment implementation involving a decade of assessment innovation occurring in three significant and transformative steps. The starting position and subsequent changes represented in each step are reflected against the framework of established principles and implementation themes of PA. This case report emphasises the importance of ongoing innovation and evaluative research, the advantage of a dedicated team with a cohesive plan, and the fundamental necessity of electronic data collection. It also highlights the challenge of traditional university cultures, the potential advantage of a major pandemic disruption, and the necessity of curriculum renewal to support significant assessment change. Our PA implementation began with a plan to improve the learning potential of individual assessments and, over the subsequent decade, expanded to encompass a cohesive and course-wide assessment program involving meaningful aggregation of assessment data. In our context (large cohort sizes and university-wide assessment policy), regular progress review meetings and progress decisions based on aggregated qualitative and quantitative data (rather than assessment format) remain local challenges.
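
A minimal sketch of the kind of aggregation this abstract describes, where progress decisions draw on combined qualitative and quantitative data rather than on any single assessment format. The data model, thresholds, and decision rule here are illustrative assumptions, not the programme's actual policy:

```python
# Speculative sketch: aggregating mixed assessment data for a progress
# review meeting. Field names, thresholds, and the decision rule are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class AssessmentPoint:
    task: str
    score: float          # quantitative result (0-100)
    narrative_flag: bool  # qualitative concern raised by an assessor

def progress_recommendation(points: list[AssessmentPoint]) -> str:
    """Aggregate across all assessment points rather than judging any
    single format in isolation, as programmatic assessment advocates."""
    mean_score = sum(p.score for p in points) / len(points)
    concerns = sum(p.narrative_flag for p in points)
    if mean_score < 50 or concerns >= 3:
        return "refer to progress committee"
    if concerns >= 1:
        return "monitor and offer targeted feedback"
    return "progress"

points = [AssessmentPoint("OSCE station 1", 72, False),
          AssessmentPoint("progress test 1", 48, True),
          AssessmentPoint("workplace assessment", 65, False)]
print(progress_recommendation(points))
```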
  • Item
    Tensions in post-examination feedback: information for learning versus potential for harm
    Ryan, A ; McColl, GJ ; O'Brien, R ; Chiavaroli, N ; Judd, T ; Finch, S ; Swanson, D (WILEY, 2017-09)
    OBJECTIVE: Self-regulation is recognised as a requisite skill for professional practice. This study is part of a programme of research designed to explore efficient methods of feedback that improve medical students' ability to self-regulate their learning. Our aim was to clarify how students respond to different forms and content of written feedback and to explore the impact on study behaviour and knowledge acquisition. METHODS: Year 2 students in a 4-year graduate-entry medical programme completing four formative progress tests during the academic year were randomised into three groups receiving different feedback reports. All reports included the proportion correct overall and by clinical rotation. One group received feedback reports including lists of clinical presentations relating to questions answered correctly and incorrectly; another group received reports containing this same information in combination with response certitude. The final group received reports involving normative comparisons. Baseline progress test performance quartile groupings (a proxy for academic ability) were determined by results on the first progress test. A mixed-methods approach with triangulation of research findings was used to interpret results. Outcomes of interest included progress test scores, summative examination results and measures derived from study diaries, questionnaires and semi-structured interviews. RESULTS: Of the three types of feedback provided in this experiment, feedback containing normative comparisons resulted in inferior test performance for students in the lowest performance quartile group. This type of feedback appeared to stimulate general rather than examination-focused study. CONCLUSIONS: Medical students are often considered relatively homogeneous and high achieving, yet the results of this study suggest caution when providing them with normative feedback indicating poorer performance relative to their peers. There is much need for further work to explore efficient methods of providing written feedback that improves medical students' ability to self-regulate their learning, particularly when giving feedback to those students who have the most room for improvement.
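
A minimal sketch of the baseline grouping and random allocation this study describes; the cohort size, condition names, and column names are illustrative placeholders, not the study's own:

```python
# Hedged sketch: quartile groupings from a first progress test as a proxy
# for academic ability, plus random allocation to three feedback conditions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Simulated first progress test scores for an illustrative cohort of 200.
students = pd.DataFrame({
    "student_id": range(1, 201),
    "test1_score": rng.normal(loc=65, scale=10, size=200),
})

# Baseline quartile groupings determined from the first progress test.
students["quartile"] = pd.qcut(students["test1_score"], q=4,
                               labels=["Q1", "Q2", "Q3", "Q4"])

# Randomise each student into one of the three feedback report conditions.
conditions = ["content_lists", "content_plus_certitude", "normative"]
students["feedback_group"] = rng.choice(conditions, size=len(students))

# Check the allocation is roughly balanced across quartiles.
print(students.groupby(["quartile", "feedback_group"], observed=True).size())
```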
  • Item
    Methods and frequency of sharing of learning resources by medical students
    Judd, T ; Elliott, K (WILEY, 2017-11)
    University students have ready access to quality learning resources through learning management systems (LMS), online library collections and generic search tools. However, anecdotal evidence suggests they sometimes turn to peer-based sharing rather than sourcing resources directly. We know little about this practice: how common it is, what sort of resources are involved and what impact it is likely to have on students' learning. This paper reports on an exploratory investigation of students' resource sharing habits, involving 338 respondents from the first 3 years of a 4-year postgraduate medical curriculum. On average, students reported sharing learning resources with other students two or more times per week. They were most likely to share non-curriculum resources (not available through their LMS), although curriculum and physical resources (e.g., printed or handwritten notes and textbooks) were also often shared. Students employed a range of sharing technologies including email (most frequent), social media tools and cloud-based file services. A cluster analysis revealed four distinct groups of students based on the frequency with which they share, the range of technologies they employ and whether they share both online and physical resources.
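
A minimal sketch of the kind of cluster analysis this abstract reports, assuming k-means on three features that mirror its grouping variables (sharing frequency, range of technologies, physical sharing). The data are simulated and the specific clustering method is an assumption; only the respondent count and the four-group outcome come from the abstract:

```python
# Hedged sketch: k-means clustering of simulated resource-sharing survey data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)
n = 338  # respondent count reported in the abstract

# Simulated survey responses, one row per respondent.
X = np.column_stack([
    rng.poisson(lam=2.0, size=n),  # shares per week
    rng.integers(1, 6, size=n),    # distinct sharing technologies used
    rng.integers(0, 2, size=n),    # shares physical resources (0/1)
])

# Standardise features so frequency counts don't dominate the distances.
X_scaled = StandardScaler().fit_transform(X)

# Four clusters, matching the four groups the paper reports.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)
print(np.bincount(km.labels_))  # cluster sizes
```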
  • Item
    'Video selfies' for feedback and reflection
    Sellitto, T ; Ryan, A ; Judd, T (WILEY-BLACKWELL, 2016-05)
  • Item
    Supporting student academic integrity in remote examination settings
    Ryan, A ; Hokin, K ; Judd, T ; Elliott, S (WILEY, 2020-11)
  • Item
    Fully online OSCEs: A large cohort case study
    Ryan, A ; Carson, A ; Reid, K ; Smallwood, D ; Judd, T (Association for Medical Education in Europe (AMEE), 2020)
    Objective Structured Clinical Examinations (OSCEs) are extensively used for clinical assessment in the health professions. However, current social distancing requirements (including on-campus bans) at many universities have made the co-location of participants for large cohort OSCEs impossible. While there is a developing literature on remote OSCEs, particularly in response to the COVID-19 pandemic, this is dominated by approaches dealing with small participant numbers. This paper describes our recent large-scale (n = 361 candidates) implementation of a remotely delivered two-station OSCE. The planning for this OSCE was extensive and involved comprehensive candidate, examiner and simulated patient orientation and training. Our processes were explicitly designed to develop platform familiarity for all participants and included building on remote tutorial experiences and device testing. Our remote OSCE design and logistics made use of existing enterprise solutions, including videoconferencing, survey and collaboration platforms, and allowed extra time between candidates in case of technical issues. We describe our process in detail, including examiner, simulated patient and candidate perspectives, to provide precise detail and hopefully assist other institutions to understand and adopt our approach. Although logistically complex, we have demonstrated that it is possible to deliver a remote OSCE assessment involving a large student cohort with a limited number of stations using commonly available enterprise solutions. We recognise it would be ideal to sample more broadly across stations and examiners, yet given the constraints of our current COVID-19 impacted environment, we believe this to be an appropriate compromise for a non-graduating cohort at this time.
  • Item
    Beyond right or wrong: More effective feedback for formative multiple-choice tests
    Ryan, A ; Judd, T ; Swanson, D ; Larsen, DP ; Elliott, S ; Tzanetos, K ; Kulasegaram, K (SPRINGERNATURE, 2020-10)
    INTRODUCTION: The role of feedback in test-enhanced learning is an understudied area that has the potential to improve student learning. This study investigates the influence of different forms of post-test feedback on retention and transfer of biomedical knowledge within a test-enhanced learning framework. METHODS: 64 participants from a Canadian and an Australian medical school sat two single-best-answer formative multiple-choice tests one week apart. We compared the effects of conceptually focused, response-oriented, and simple right/wrong feedback on a learner's ability to correctly answer new (transfer) questions. On the first test occasion, participants received parent items with feedback, and then attempted items closely related (near transfer) to and more distant (far transfer) from parent items. In a repeat test at 1 week, participants were given different near and far transfer versions of parent items. Feedback type, and near and far transfer items, were randomized within and across participants. RESULTS: Analysis demonstrated that response-oriented and conceptually focused feedback were superior to traditional right/wrong feedback for both types of transfer tasks and in both immediate and final retention test performance. However, there was no statistically significant difference between the response-oriented and conceptually focused groups on near or far transfer problems, nor any difference in performance between our initial test occasion and the retention test 1 week later. As with most studies of transfer, participants' far transfer scores were lower than their near transfer scores. DISCUSSION: Right/wrong feedback appears to have limited potential to augment test-enhanced learning. Our work suggests that item-level feedback and feedback that identifies and elaborates on key conceptual knowledge are two important areas for future research on learning, retention and transfer.
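
A minimal sketch of the within-participant randomisation this abstract describes, where each participant's parent items are balanced across the three feedback types. The item count, participant labels, and helper function are illustrative assumptions, not the study's materials:

```python
# Hedged sketch: balanced random assignment of feedback types to parent
# items, randomised within and across participants.
import random

FEEDBACK_TYPES = ["right_wrong", "response_oriented", "conceptual"]

def assign_feedback(participants, n_parent_items, seed=1):
    """Return {participant: [feedback type per parent item]} with each
    type appearing (near-)equally often within every participant's set."""
    rng = random.Random(seed)
    assignments = {}
    for p in participants:
        reps = n_parent_items // len(FEEDBACK_TYPES) + 1
        types = (FEEDBACK_TYPES * reps)[:n_parent_items]
        rng.shuffle(types)  # independent shuffle per participant
        assignments[p] = types
    return assignments

# 64 participants, as in the abstract; 30 parent items is an assumption.
schedule = assign_feedback([f"P{i:02d}" for i in range(1, 65)],
                           n_parent_items=30)
print(schedule["P01"][:6])
```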
  • Item
    Not just digital natives: Integrating technologies in professional education contexts
    Smith, EE ; Kahlke, R ; Judd, T (AUSTRALASIAN SOC COMPUTERS LEARNING TERTIARY EDUCATION-ASCILITE, 2020-01-01)
    In 2001, Prensky characterised a new generation of learners entering higher education as digital natives – naturally digitally literate and inherently proficient users of technology. While many educational technology researchers have long argued for the need to move beyond the digital native assumptions proposed by Prensky and other futurists, a critical review of the literature reveals that this concept remains influential in academia broadly and within professional education specifically. In light of this, we propose an alternative approach to technology integration in professional education settings that aims to avoid unhelpful digital native stereotypes by instead developing digital literacies in ways that leverage technological affordances. By building digital literacies across the procedural and technical, cognitive, and sociocultural domains connected to professional competencies, learners can effectively adopt and utilise emerging technologies through professional digital practices.
  • Item
    If at first you don't succeed ... adoption of iPad marking for high-stakes assessments
    Judd, T ; Ryan, A ; Flynn, E ; McColl, G (SPRINGERNATURE, 2017-10)
    Large-scale interview and simulation-based assessments such as objective structured clinical examinations (OSCEs) and multiple mini interviews (MMIs) are logistically complex to administer, generate large volumes of assessment data, and are strong candidates for the adoption of computer-based marking systems. Adoption of new technologies can be challenging, and technical failures, which are relatively commonplace, can delay and/or create resistance to ongoing implementation. This paper reports on the adoption process of an electronic marking system for OSCEs and MMIs following an unsuccessful initial trial. It describes how, after the initial setback, a staged implementation, progressing from small to larger-scale assessments, single to multiple assessment types, and lower to higher stakes assessments, was used to successfully adopt and embed iPad-based marking within our medical school. Critical factors in the success of this approach included thorough appraisal and selection of technologies, rigorous assurance of system reliability and security, constant review and refinement, and careful attention to implementation and end-user training. Engagement of stakeholders is also crucial, especially in the case of previous failures or setbacks. The early identification and recruitment of staff to provide specific expertise and support for adoption of an innovation helps to facilitate this process, with four key roles proposed: innovation advocate, champion, expert and sponsor.
  • Item
    A five-year study of on-campus Internet use by undergraduate biomedical students
    Judd, T ; Kennedy, G (PERGAMON-ELSEVIER SCIENCE LTD, 2010-12)