Show simple item record

dc.contributor.author	Dharmarathne, Hetti Arachchige Sameera Gayan
dc.date.accessioned	2020-04-30T00:15:13Z
dc.date.available	2020-04-30T00:15:13Z
dc.date.issued	2020
dc.identifier.uri	http://hdl.handle.net/11343/238547
dc.description	© 2020 Hetti Arachchige Sameera Gayan Dharmarathne
dc.description.abstract	In this study we explore the statistical aspects of several established methods for analysing experts’ elicited data, with the aim of improving the accuracy of their outcomes. Correlation structures induced in probability predictions by the characteristics of the experimental design are ignored when experts’ Brier scores are computed. In the second chapter of this thesis we show that the standard errors of experts’ Brier scores can be estimated more accurately by incorporating the within-question correlations of the probability predictions. Missing probability predictions of events can hinder the assessment of prediction accuracy when experts answer different sets of events (Merkle et al., 2016; Hanea et al., 2018). The third chapter shows that multiple imputation using a mixed-effects model, with question effects treated as random effects, can effectively estimate missing predictions and thereby enhance the comparability of experts’ Brier scores. Testing experts’ calibration in eliciting credible intervals of unknown quantities via hit rates, the observed proportions of elicited intervals that contain the realised values of the given quantities (McBride, Fidler, and Burgman, 2012), has low power to correctly identify well-calibrated experts; more importantly, the power tends to decrease as the number of elicited intervals increases. The fourth chapter shows that an equivalence test of a single binomial proportion can be used to overcome these problems. The way experts’ calibration is assessed in Cooke’s classical model (Cooke, 1991) to derive experts’ weights can allocate high weights to some poorly calibrated experts. In the fifth chapter we show that a multinomial equivalence test can be used to overcome this problem.
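The equivalence-test idea for hit rates can be illustrated in outline: with `hits` out of `n` elicited intervals and nominal coverage `p0`, two one-sided exact binomial tests (TOST) assess whether the true hit rate lies within a margin `delta` of `p0`. This is a generic sketch of an equivalence test for a single binomial proportion, not the thesis’s exact procedure; the function name and the illustrative values of `p0` and `delta` are assumptions.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def tost_binomial(hits, n, p0=0.8, delta=0.1):
    """Two one-sided exact binomial tests (TOST) for equivalence of a
    hit rate to the nominal coverage p0 within margin delta.
    Returns the larger of the two one-sided p-values; a small value
    supports calibration, i.e. |p - p0| < delta."""
    # H0a: p <= p0 - delta; evidence against it is a large hit count.
    p_lower = 1 - binom_cdf(hits - 1, n, p0 - delta)
    # H0b: p >= p0 + delta; evidence against it is a small hit count.
    p_upper = binom_cdf(hits, n, p0 + delta)
    return max(p_lower, p_upper)
```

Note the behaviour the abstract highlights: at the same observed hit rate of 0.8, `tost_binomial(8, 10)` cannot reject non-equivalence, while `tost_binomial(80, 100)` can, so the power of the equivalence test grows with the number of elicited intervals rather than shrinking.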
Experts’ weights derived from experiments, used to combine experts’ elicited subjective probability distributions into aggregated probability distributions of unknown quantities (O’Hagan, 2019), are random variables subject to uncertainty. In the sixth chapter we derive shrinkage experts’ weights with reduced mean squared errors, enhancing the precision of the resulting aggregated distributions.
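The shrinkage idea can be sketched with a generic linear shrinkage of estimated expert weights toward the equal-weight vector; the thesis derives its own shrinkage estimator, and the function below (including the name `shrink_weights` and the shrinkage intensity `lam`) is only an illustrative assumption, not the thesis’s method.

```python
def shrink_weights(weights, lam):
    """Linearly shrink estimated expert weights toward equal weights.

    lam = 0 returns the original weights; lam = 1 returns equal weights.
    Shrinkage trades a little bias for reduced variance, which can lower
    the mean squared error of noisy weight estimates.
    """
    n = len(weights)
    equal = 1.0 / n
    shrunk = [(1 - lam) * w + lam * equal for w in weights]
    total = sum(shrunk)          # renormalise defensively
    return [w / total for w in shrunk]
```

For example, `shrink_weights([0.7, 0.2, 0.1], 0.5)` pulls the extreme weights toward 1/3 while keeping the weights summing to one.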
dc.rights	Terms and Conditions: Copyright in works deposited in Minerva Access is retained by the copyright owner. The work may not be altered without permission from the copyright owner. Readers may only download, print and save electronic copies of whole works for their own personal non-commercial use. Any use that exceeds these limits requires permission from the copyright owner. Attribution is essential when quoting or paraphrasing from these works.
dc.subject	Expert elicitation
dc.subject	Experts’ Brier scores
dc.subject	Mixed-effects models
dc.subject	Multiple imputation
dc.subject	Experts’ hit rates
dc.subject	Equivalence test of a single binomial proportion
dc.subject	Cooke’s classical model
dc.subject	Multinomial equivalence test
dc.subject	Cooke’s weights
dc.subject	Shrinkage weights
dc.title	Exploring the statistical aspects of expert elicited experiments
dc.type	PhD thesis
melbourne.affiliation.department	School of Mathematics and Statistics
melbourne.affiliation.faculty	Science
melbourne.thesis.supervisorname	Andrew Robinson
melbourne.contributor.author	Dharmarathne, Hetti Arachchige Sameera Gayan
melbourne.thesis.supervisorothername	Anca Hanea
melbourne.tes.fieldofresearch1	010401 Applied Statistics
melbourne.tes.fieldofresearch2	179999 Psychology and Cognitive Sciences not elsewhere classified
melbourne.tes.confirmed	true
melbourne.accessrights	Open Access

