## Exploring the statistical aspects of expert elicited experiments

##### Date

2020

##### Affiliation

School of Mathematics and Statistics

##### Document Type

PhD thesis

##### Access Status

**Open Access**

##### Description

© 2020 Hetti Arachchige Sameera Gayan Dharmarathne

##### Abstract

In this study we explore the statistical aspects of several known methods of analysing experts’ elicited data, with the aim of improving the accuracy of their outcomes. Experts’ Brier scores are commonly computed while ignoring the correlation structures that the characteristics of experimental designs induce in probability predictions. In the second chapter of this thesis we show that the accuracy of the standard error estimates of experts’ Brier scores can be improved by incorporating the within-question correlations of probability predictions. Missing probability predictions can hinder the assessment of prediction accuracy when experts respond to different sets of events (Merkle et al., 2016; Hanea et al., 2018). In the third chapter we show that multiple imputation using a mixed-effects model, with question effects treated as random effects, can effectively estimate missing predictions and thereby enhance the comparability of experts’ Brier scores.
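As a rough illustration of the second chapter’s idea (a minimal sketch, not the thesis’s actual estimator; the function name and the simple cluster-sum formula are illustrative assumptions), the following compares a naive standard error for an expert’s Brier score with a cluster-robust one that allows predictions from the same question to be correlated:

```python
import numpy as np

def brier_with_clustered_se(probs, outcomes, question_ids):
    """Brier score for one expert, with a naive SE and a
    cluster-robust SE that allows within-question correlation.

    probs: predicted probabilities; outcomes: 0/1 realisations;
    question_ids: identifier of the question each prediction belongs to.
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    qids = np.asarray(question_ids)

    scores = (probs - outcomes) ** 2          # per-event Brier components
    n = scores.size
    brier = scores.mean()

    # Naive SE: treats all components as independent observations.
    se_naive = scores.std(ddof=1) / np.sqrt(n)

    # Cluster-robust SE: sum deviations within each question first, so
    # correlated components from one question are not counted as
    # independent pieces of information.
    dev = scores - brier
    cluster_sums = np.array([dev[qids == q].sum() for q in np.unique(qids)])
    se_cluster = np.sqrt((cluster_sums ** 2).sum()) / n

    return brier, se_naive, se_cluster
```

When components within a question move together, the cluster-robust standard error comes out larger than the naive one, which is the kind of correction the chapter argues improves the accuracy of the standard error estimates.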
A common way to test experts’ calibration when eliciting credible intervals for unknown quantities uses hit rates, the observed proportions of elicited intervals that contain the realised values of those quantities (McBride, Fidler, and Burgman, 2012). This test has low power to correctly identify well-calibrated experts, and, more importantly, its power tends to decrease as the number of elicited intervals increases. In the fourth chapter we show that an equivalence test of a single binomial proportion can be used to overcome these problems. The way experts’ calibration is assessed in Cooke’s classical model (Cooke, 1991) to derive experts’ weights can allocate higher weights to some experts who are not well calibrated. In the fifth chapter we show that a multinomial equivalence test can be used to overcome this problem.
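The equivalence-test idea from the fourth chapter can be sketched generically as two one-sided exact binomial tests (TOST). This is an illustrative implementation under assumed parameters, not the thesis’s procedure: the `target` of 0.9 corresponds to 90% credible intervals, and the equivalence `margin` is an arbitrary choice the analyst must make.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed from the exact pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

def equivalence_test_hit_rate(hits, n, target=0.9, margin=0.1, alpha=0.05):
    """Exact TOST for a single binomial proportion.

    Conclude the expert is well calibrated only if BOTH one-sided tests
    reject, i.e. there is positive evidence the true hit rate lies inside
    (target - margin, target + margin). Unlike the usual hit-rate test,
    collecting more intervals makes this test more, not less, powerful.
    """
    p_lo = max(target - margin, 0.0)
    p_hi = min(target + margin, 1.0)
    p_lower = 1.0 - binom_cdf(hits - 1, n, p_lo)  # H0: true rate <= p_lo
    p_upper = binom_cdf(hits, n, p_hi)            # H0: true rate >= p_hi
    p_value = max(p_lower, p_upper)
    return p_value, p_value < alpha
```

The burden of proof is reversed relative to the hit-rate test: calibration must be demonstrated rather than merely not rejected, which is what removes the perverse power behaviour.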
Experts’ weights derived from experiments, which are used to combine experts’ elicited subjective probability distributions into aggregated probability distributions of unknown quantities (O’Hagan, 2019), are random variables subject to uncertainty. In the sixth chapter we derive shrinkage experts’ weights with reduced mean squared errors, enhancing the precision of the resulting aggregated distributions of quantities.
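The shrinkage idea can be illustrated generically as pulling noisy estimated weights toward the equal-weight vector before pooling. This sketch uses a fixed shrinkage factor for simplicity; the thesis derives the weights to reduce mean squared error, which this toy version does not reproduce.

```python
import numpy as np

def shrink_weights(w_hat, lam):
    """Shrink estimated expert weights toward the equal-weight vector.

    w_hat: estimated weights (summing to 1); lam in [0, 1] controls how
    far the estimates are pulled toward 1/K. lam = 0 returns the raw
    estimates, lam = 1 returns uniform weights.
    """
    w_hat = np.asarray(w_hat, dtype=float)
    uniform = np.full_like(w_hat, 1.0 / w_hat.size)
    w = (1.0 - lam) * w_hat + lam * uniform
    return w / w.sum()   # renormalise (a no-op if inputs already sum to 1)

def aggregate(distributions, weights):
    """Linear opinion pool: weighted average of experts' probability
    distributions over a common discrete support."""
    return np.average(np.asarray(distributions, dtype=float),
                      axis=0, weights=weights)
```

Because estimated weights are themselves noisy, extreme weights tend to be overstated; shrinking toward uniformity trades a little bias for reduced variance, which is the motivation for the chapter’s reduced-MSE weights.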

##### Keywords

Expert elicitation; Experts’ Brier scores; Mixed-effects models; Multiple imputation; Experts’ hit rates; Equivalence test of a single binomial proportion; Cooke’s classical model; Multinomial equivalence test; Cooke’s weights; Shrinkage weights
