School of Agriculture, Food and Ecosystem Sciences - Research Publications

Search Results

Now showing 1 - 3 of 3
  • Item
    Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science)
    Fraser, H ; Bush, M ; Wintle, B ; Mody, F ; Smith, ET ; Hanea, A ; Gould, E ; Hemming, V ; Hamilton, DG ; Rumpff, L ; Wilkinson, DP ; Pearson, R ; Singleton Thorn, F ; Ashton, R ; Willcox, A ; Gray, CT ; Head, A ; Ross, M ; Groenewegen, R ; Marcoci, A ; Vercammen, A ; Parker, TH ; Hoekstra, R ; Nakagawa, S ; Mandel, DR ; van Ravenzwaaij, D ; McBride, M ; Sinnott, RO ; Vesk, PA ; Burgman, M ; Fidler, F (Early Release, 2021-02-22)

    Replication is a hallmark of scientific research. As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce a new technique for evaluating replicability, the repliCATS (Collaborative Assessments for Trustworthy Science) process, a structured expert elicitation approach based on the IDEA protocol. The repliCATS process is delivered through an underpinning online platform and applied to the evaluation of research claims in the social and behavioural sciences. The process can be deployed both for rapid assessment of small numbers of claims and for assessment of high volumes of claims over an extended period. Pilot data suggest that the accuracy of the repliCATS process meets or exceeds that of other techniques used to predict replicability. An important advantage of the repliCATS process is that it collects qualitative data with the potential to assist with problems such as understanding the limits of generalizability of scientific claims. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.

  • Item
    Mathematically aggregating experts' predictions of possible futures
    Hanea, AM ; Wilkinson, DP ; McBride, M ; Lyon, A ; van Ravenzwaaij, D ; Thorn, FS ; Gray, C ; Mandel, DR ; Willcox, A ; Gould, E ; Smith, ET ; Mody, F ; Bush, M ; Fidler, F ; Fraser, H ; Wintle, BC ; Citi, L (PUBLIC LIBRARY SCIENCE, 2021-09-02)
    Structured protocols offer a transparent and systematic way to elicit and combine (aggregate) probabilistic predictions from multiple experts. These judgements can be aggregated behaviourally or mathematically to derive a final group prediction. Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. The quality of this aggregation can be defined in terms of accuracy, calibration and informativeness. These measures can be used to compare different aggregation approaches and to help decide which aggregation produces the "best" final prediction. When experts' performance can be scored on similar questions ahead of time, these scores can be translated into performance-based weights, and a performance-based weighted aggregation can then be used. When this is not possible, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. Here, we develop a suite of aggregation methods, informed by previous experience and the available literature. We differentially weight our experts' estimates by measures of reasoning, engagement, openness to changing their mind, informativeness, prior knowledge, and extremity, asymmetry or granularity of estimates. Next, we investigate the relative performance of these aggregation methods using three datasets. The main goal of this research is to explore how measures of the knowledge and behaviour of individuals can be leveraged to produce a better performing combined group judgement. Although the accuracy, calibration, and informativeness of the majority of methods are very similar, a couple of the aggregation methods consistently distinguish themselves as among the best or the worst. Moreover, the majority of methods outperform the usual benchmarks provided by the simple average or the median of estimates.
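
    As an illustration of the kind of mathematical aggregation compared in this paper, the sketch below combines expert probability estimates with a weighted linear pool, where the weights come from a hypothetical proxy score, and scores both the weighted aggregate and the simple-average benchmark with the Brier score. The weighting scheme, variable names and numbers are illustrative assumptions, not the specific methods or data used in the study.

    ```python
    import numpy as np

    def weighted_linear_pool(estimates, weights):
        """Weighted linear combination of expert probability estimates."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                      # normalise proxy scores to weights summing to 1
        return float(np.dot(w, estimates))

    def brier_score(prediction, outcome):
        """Squared error between a probabilistic prediction and a 0/1 outcome."""
        return (prediction - outcome) ** 2

    # Hypothetical example: four experts judge whether one claim will replicate.
    estimates = np.array([0.70, 0.55, 0.80, 0.40])
    proxy_scores = np.array([2.0, 1.0, 3.0, 0.5])   # e.g. engagement or informativeness
    outcome = 1                                      # the claim did replicate

    weighted = weighted_linear_pool(estimates, proxy_scores)
    simple = estimates.mean()                        # the usual simple-average benchmark

    print(f"weighted aggregate: {weighted:.3f}  Brier: {brier_score(weighted, outcome):.3f}")
    print(f"simple average:     {simple:.3f}  Brier: {brier_score(simple, outcome):.3f}")
    ```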
  • Item
    Eliciting group judgements about replicability: A technical implementation of the IDEA Protocol
    Pearson, ER ; Fraser, H ; Bush, M ; Mody, F ; Widjaja, I ; Head, A ; Wilkinson, DP ; Wintle, B ; Sinnott, R ; Vesk, P ; Burgman, M ; Fidler, F (Hawaii International Conference on System Sciences, 2021-01-01)
    In recent years there has been increased interest in replicating prior research. One of the biggest challenges to assessing replicability is the cost in resources and time that it takes to repeat studies. Thus there is an impetus to develop rapid elicitation protocols that can, in a practical manner, estimate the likelihood that research findings will successfully replicate. We employ a novel implementation of the IDEA ('Investigate', 'Discuss', 'Estimate' and 'Aggregate') protocol, realised through the repliCATS platform. The repliCATS platform is designed to elicit expert opinion, at scale, about the replicability of social and behavioural science research. The IDEA protocol provides a structured methodology for eliciting judgements and reasoning from groups. This paper describes the repliCATS platform as a multi-user cloud-based software platform featuring (1) a technical implementation of the IDEA protocol for eliciting expert opinion on research replicability, (2) capture of consent and demographic data, (3) online training on replication concepts, and (4) exporting of completed judgements. The platform has, to date, evaluated 3432 social and behavioural science research claims from 637 participants.
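
    The four IDEA phases described above lend themselves to a simple two-round workflow. The sketch below is a minimal, hypothetical illustration of that sequence (private first estimates, group discussion, revised estimates, aggregation); the class and method names are assumptions for illustration and do not reflect the repliCATS platform's actual implementation.

    ```python
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Judgement:
        expert: str
        probability: float          # estimated probability the claim will replicate
        reasoning: str = ""

    @dataclass
    class ClaimAssessment:
        claim: str
        round_one: list = field(default_factory=list)   # Investigate: private first estimates
        round_two: list = field(default_factory=list)   # Estimate: revised estimates after discussion

        def investigate(self, judgement):
            self.round_one.append(judgement)

        def discuss(self):
            # Discuss: in practice a facilitated group discussion; here we simply
            # return the anonymised first-round estimates and reasoning.
            return [(j.probability, j.reasoning) for j in self.round_one]

        def estimate(self, judgement):
            self.round_two.append(judgement)

        def aggregate(self):
            # Aggregate: combine second-round estimates into a group judgement
            # (an unweighted mean here; other pooling rules could be substituted).
            return mean(j.probability for j in self.round_two)

    assessment = ClaimAssessment("Effect X replicates in a direct replication")
    assessment.investigate(Judgement("expert_a", 0.60, "small original sample"))
    assessment.investigate(Judgement("expert_b", 0.80, "effect seen in related work"))
    assessment.discuss()
    assessment.estimate(Judgement("expert_a", 0.70))
    assessment.estimate(Judgement("expert_b", 0.75))
    print(f"group prediction: {assessment.aggregate():.2f}")
    ```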