Show simple item record

dc.contributor.author: Zoanetti, N
dc.contributor.author: Griffin, P
dc.contributor.author: Beaves, M
dc.contributor.author: Wallace, EM
dc.date.available: 2014-05-22T01:15:27Z
dc.date.available: 2009-04-29
dc.date.issued: 2009-04-29
dc.identifier: pii: 1472-6920-9-20
dc.identifier.citation: Zoanetti, N., Griffin, P., Beaves, M. & Wallace, E. M. (2009). Rasch scaling procedures for informing development of a valid Fetal Surveillance Education Program multiple-choice assessment. BMC MEDICAL EDUCATION, 9(1). https://doi.org/10.1186/1472-6920-9-20
dc.identifier.issn: 1472-6920
dc.identifier.uri: http://hdl.handle.net/11343/30547
dc.description.abstract: BACKGROUND: It is widely recognised that deficiencies in fetal surveillance practice continue to contribute significantly to the burden of adverse outcomes. This has prompted the development of evidence-based clinical practice guidelines by the Royal Australian and New Zealand College of Obstetricians and Gynaecologists and an associated Fetal Surveillance Education Program to deliver the associated learning. This article describes initial steps in the validation of a corresponding multiple-choice assessment of the relevant educational outcomes through a combination of item response modelling and expert judgement. METHODS: The Rasch item response model was employed for item and test analysis and to empirically derive the substantive interpretation of the assessment variable. This interpretation was then compared to the hierarchy of competencies specified a priori by a team of eight subject-matter experts. Classical Test Theory analyses were also conducted. RESULTS: A high level of agreement between the hypothesised and derived variable provided evidence of construct validity. Item and test indices from Rasch analysis and Classical Test Theory analysis suggested that the current test form was of moderate quality. However, the analyses made clear the required steps for establishing a valid assessment of sufficient psychometric quality. These steps included: increasing the number of items from 40 to 50 in the first instance, reviewing ineffective items, targeting new items to specific content and difficulty gaps, and formalising the assessment blueprint in light of empirical information relating item structure to item difficulty. CONCLUSION: The application of the Rasch model for criterion-referenced assessment validation with an expert stakeholder group is herein described. Recommendations for subsequent item and test construction are also outlined in this article.
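The Rasch analysis described in the abstract models each candidate's response to each item as a logistic function of the gap between candidate ability and item difficulty (both on a common logit scale). A minimal sketch of that response function, for illustration only (the function name and example values are not from the paper):

```python
import math

def rasch_prob(theta, b):
    """P(correct) under the Rasch (1PL) model: a logistic function
    of ability (theta) minus item difficulty (b), both in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the success probability is 0.5;
# easier items (lower b) give the same candidate a higher chance of success.
p_matched = rasch_prob(0.0, 0.0)   # 0.5
p_easier = rasch_prob(0.0, -1.0)   # > 0.5
```

This single-parameter structure is what lets the analysis place items and candidates on one variable, so the empirically derived item ordering can be compared against the experts' hypothesised hierarchy of competencies.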
dc.language: English
dc.publisher: BMC
dc.subject: Curriculum and Pedagogy
dc.title: Rasch scaling procedures for informing development of a valid Fetal Surveillance Education Program multiple-choice assessment
dc.type: Journal Article
dc.identifier.doi: 10.1186/1472-6920-9-20
melbourne.peerreview: Peer Reviewed
melbourne.affiliation: The University of Melbourne
melbourne.affiliation.department: Melbourne Graduate School of Education
melbourne.source.title: BMC MEDICAL EDUCATION
melbourne.source.volume: 9
melbourne.source.issue: 1
dc.research.code: for1302
dc.rights.license: CC BY
melbourne.publicationid: 130632
melbourne.elementsid: 313469
melbourne.openaccess.pmc: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2685791
melbourne.contributor.author: Zoanetti, Nathan
melbourne.contributor.author: Griffin, Patrick
dc.identifier.eissn: 1472-6920
pubs.acceptance.date: 2009-04-29
melbourne.accessrights: Access this item via the Open Access location


Files in this item

There are no files associated with this item.
