School of Agriculture, Food and Ecosystem Sciences - Research Publications

Search Results

Now showing 1 - 10 of 13
  • Item
    Editorial: Multivariate Probabilistic Modelling for Risk and Decision Analysis
    Hanea, AM ; Nane, GF (FRONTIERS MEDIA SA, 2022-02-22)
  • Item
    AI-powered narrative building for facilitating public participation and engagement
    Marmolejo-Ramos, F ; Workman, T ; Walker, C ; Lenihan, D ; Moulds, S ; Correa, JC ; Hanea, AM ; Sonna, B (Springer Science and Business Media LLC, 2022-12-01)
    Algorithms, data, and AI (ADA) technologies permeate most societies worldwide because of their proven benefits in different areas of life. Governments are the entities in charge of harnessing the benefits of ADA technologies above and beyond providing government services digitally. ADA technologies have the potential to transform the way governments develop and deliver services to citizens, and the way citizens engage with their governments. Conventional public engagement strategies employed by governments have limited both the quality and diversity of deliberation between the citizen and their governments, and the potential for ADA technologies to be employed to improve the experience for both governments and the citizens they serve. In this article we argue that ADA technologies can improve the quality, scope, and reach of public engagement by governments, particularly when coupled with other strategies to ensure legitimacy and accessibility among a broad range of communities and other stakeholders. In particular, we explore the role “narrative building” (NB) can play in facilitating public engagement through the use of ADA technologies. We describe a theoretical implementation of NB enhanced by adding natural language processing, expert knowledge elicitation, and semantic differential rating scales capabilities to increase gains in scale and reach. The theoretical implementation focuses on the public’s opinion on ADA-related technologies, and it derives implications for ethical governance.
  • Item
    Reimagining peer review as an expert elicitation process
    Marcoci, A ; Vercammen, A ; Bush, M ; Hamilton, DG ; Hanea, A ; Hemming, V ; Wintle, BC ; Burgman, M ; Fidler, F (SPRINGERNATURE, 2022-04-05)
    Journal peer review regulates the flow of ideas through an academic discipline and thus has the power to shape what a research community knows, actively investigates, and recommends to policymakers and the wider public. We might assume that editors can identify the 'best' experts and rely on them for peer review. But decades of research on both expert decision-making and peer review suggest they cannot. In the absence of a clear criterion for demarcating reliable, insightful, and accurate expert assessors of research quality, the best safeguard against unwanted biases and uneven power distributions is to introduce greater transparency and structure into the process. This paper argues that peer review would therefore benefit from applying a series of evidence-based recommendations from the empirical literature on structured expert elicitation. We highlight individual and group characteristics that contribute to higher quality judgements, and elements of elicitation protocols that reduce bias, promote constructive discussion, and enable opinions to be objectively and transparently aggregated.
  • Item
    Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science)
    Fraser, H ; Bush, M ; Wintle, B ; Mody, F ; Smith, ET ; Hanea, A ; Gould, E ; Hemming, V ; Hamilton, DG ; Rumpff, L ; Wilkinson, DP ; Pearson, R ; Singleton Thorn, F ; Ashton, R ; Willcox, A ; Gray, CT ; Head, A ; Ross, M ; Groenewegen, R ; Marcoci, A ; Vercammen, A ; Parker, TH ; Hoekstra, R ; Nakagawa, S ; Mandel, DR ; van Ravenzwaaij, D ; McBride, M ; Sinnott, RO ; Vesk, PA ; Burgman, M ; Fidler, F (Early Release, 2021-02-22)

    Replication is a hallmark of scientific research. As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce a new technique for evaluating replicability, the repliCATS (Collaborative Assessments for Trustworthy Science) process, a structured expert elicitation approach based on the IDEA protocol. The repliCATS process is delivered through an underpinning online platform and applied to the evaluation of research claims in social and behavioural sciences. This process can be deployed both for rapid assessment of small numbers of claims and for assessment of high volumes of claims over an extended period. Pilot data suggest that the accuracy of the repliCATS process meets or exceeds that of other techniques used to predict replicability. An important advantage of the repliCATS process is that it collects qualitative data that have the potential to assist with problems like understanding the limits of generalizability of scientific claims. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.

  • Item
    Mathematically aggregating experts' predictions of possible futures
    Hanea, AM ; Wilkinson, DP ; McBride, M ; Lyon, A ; van Ravenzwaaij, D ; Thorn, FS ; Gray, C ; Mandel, DR ; Willcox, A ; Gould, E ; Smith, ET ; Mody, F ; Bush, M ; Fidler, F ; Fraser, H ; Wintle, BC ; Citi, L (PUBLIC LIBRARY SCIENCE, 2021-09-02)
    Structured protocols offer a transparent and systematic way to elicit and aggregate probabilistic predictions from multiple experts. These judgements can be aggregated behaviourally or mathematically to derive a final group prediction. Mathematical rules (e.g., weighted linear combinations of judgments) provide an objective approach to aggregation. The quality of this aggregation can be defined in terms of accuracy, calibration, and informativeness. These measures can be used to compare different aggregation approaches and help decide which aggregation produces the "best" final prediction. When experts' performance can be scored on similar questions ahead of time, these scores can be translated into performance-based weights, and a performance-based weighted aggregation can then be used. When this is not possible, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. Here, we develop a suite of aggregation methods, informed by previous experience and the available literature. We differentially weight our experts' estimates by measures of reasoning, engagement, openness to changing their mind, informativeness, prior knowledge, and extremity, asymmetry or granularity of estimates. Next, we investigate the relative performance of these aggregation methods using three datasets. The main goal of this research is to explore how measures of knowledge and behaviour of individuals can be leveraged to produce a better performing combined group judgment. Although the accuracy, calibration, and informativeness of the majority of methods are very similar, a couple of the aggregation methods consistently distinguish themselves as among the best or worst. Moreover, the majority of methods outperform the usual benchmarks provided by the simple average or the median of estimates.
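The weighted linear combination mentioned in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code; the expert probabilities and performance weights below are hypothetical numbers chosen for the example, with the equal-weighted pool as the usual benchmark.

```python
def linear_pool(estimates, weights=None):
    """Weighted linear combination of expert probability estimates."""
    if weights is None:
        # Equal-weighted benchmark: the simple average of estimates.
        weights = [1.0 / len(estimates)] * len(estimates)
    total = sum(weights)
    # Normalise weights so they sum to one, then combine linearly.
    return sum(w * p for w, p in zip(estimates, weights)) / total

# Three experts' probabilities for a binary event (hypothetical values).
experts = [0.60, 0.75, 0.40]
# Hypothetical performance-based weights (e.g., derived from calibration scores).
perf_weights = [0.5, 0.3, 0.2]

equal = linear_pool(experts)                  # equal-weighted group prediction
weighted = linear_pool(experts, perf_weights) # performance-weighted prediction
```

Comparing `equal` and `weighted` against realised outcomes over many questions is what lets the accuracy, calibration, and informativeness of different aggregation rules be scored against each other.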
  • Item
    Improving expert forecasts in reliability: Application and evidence for structured elicitation protocols
    Hemming, V ; Armstrong, N ; Burgman, MA ; Hanea, AM (WILEY, 2020-03)
    Quantitative expert judgements are used in reliability assessments to inform critically important decisions. Structured elicitation protocols have been advocated to improve expert judgements, yet their application in reliability is challenged by a lack of examples or evidence that they improve judgements. This paper aims to overcome these barriers. We present a case study where two world‐leading protocols, the IDEA protocol and the Classical Model, were combined and applied by the Australian Department of Defence for a reliability assessment. We assess the practicality of the methods and the extent to which they improve judgements. The average expert was extremely overconfident, with 90% credible intervals containing the true realisation 36% of the time. However, steps contained in the protocols substantially improved judgements. In particular, an equal weighted aggregation of individual judgements and the inclusion of a discussion phase and revised estimate helped to improve calibration, statistical accuracy, and the Classical Model score. Further improvements in precision and information were made via performance weighted aggregation. This paper provides useful insights into the application of structured elicitation protocols for reliability and the extent to which judgements are improved. The findings raise concerns about existing practices for utilising experts in reliability assessments and suggest greater adoption of structured protocols is warranted. We encourage the reliability community to further develop examples and insights.
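The overconfidence finding above (90% credible intervals capturing the truth only 36% of the time) is an empirical coverage calculation. A minimal sketch, with entirely hypothetical intervals and realisations, of how such a hit rate is computed:

```python
def coverage(intervals, realisations):
    """Fraction of (low, high) credible intervals containing the realised value."""
    hits = sum(1 for (lo, hi), x in zip(intervals, realisations) if lo <= x <= hi)
    return hits / len(intervals)

# Hypothetical 90% credible intervals from one expert, and the realised values.
intervals = [(2, 8), (10, 12), (0, 1), (5, 9), (3, 4)]
realised = [5, 15, 0.5, 11, 4]

rate = coverage(intervals, realised)
# A well-calibrated expert's 90% intervals should contain the truth ~90% of
# the time; a rate well below that indicates overconfidence (intervals too narrow).
```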
  • Item
    Improving the Computation of Brier Scores for Evaluating Expert-Elicited Judgements
    Dharmarathne, G ; Hanea, A ; Robinson, AP (FRONTIERS MEDIA SA, 2021-06-10)
    Structured expert judgement (SEJ) is a suite of techniques used to elicit expert predictions, e.g. probability predictions of the occurrence of events, for situations in which data are too expensive or impossible to obtain. The quality of expert predictions can be assessed using Brier scores and calibration questions. In practice, these scores are computed from data that may have a correlation structure because observations share the effects of the same levels of grouping factors in the experimental design. For example, asking experts common questions may result in correlated probability predictions due to shared question effects. Furthermore, experts commonly fail to answer all the needed questions. Here, we focus on (i) improving the computation of standard error estimates of expert Brier scores by using mixed-effects models that support design-based correlation structures of observations, and (ii) imputation of missing probability predictions in computing expert Brier scores to enhance the comparability of the prediction accuracy of experts. We show that the accuracy of estimating standard errors of expert Brier scores can be improved by incorporating the within-question correlations due to asking common questions. We recommend the use of multiple imputation to correct for missing data in expert elicitation exercises. We also discuss the implications of adopting a formal experimental design approach for SEJ exercises.
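For readers unfamiliar with the Brier score this abstract builds on, a minimal sketch of the basic computation for one expert, using hypothetical numbers. The paper's actual contributions (mixed-effects standard errors and multiple imputation of missing predictions) are not reproduced here.

```python
def brier_score(predictions, outcomes):
    """Mean squared difference between predicted probabilities and 0/1 outcomes.

    Lower is better; 0 is a perfect score.
    """
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Hypothetical: one expert's probabilities on four calibration questions,
# and the realised binary outcomes of those questions.
preds = [0.9, 0.2, 0.7, 0.5]
truth = [1, 0, 1, 0]

score = brier_score(preds, truth)
```

Each question contributes one squared error, so questions answered by many experts induce exactly the within-question correlation the paper models with mixed effects.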
  • Item
    Balancing the Elicitation Burden and the Richness of Expert Input When Quantifying Discrete Bayesian Networks
    Barons, MJ ; Mascaro, S ; Hanea, AM (WILEY, 2022-06)
    Structured expert judgment (SEJ) is a method for obtaining estimates of uncertain quantities from groups of experts in a structured way designed to minimize the pervasive cognitive frailties of unstructured approaches. When the number of quantities required is large, the burden on the groups of experts is heavy, and resource constraints may mean that eliciting all the quantities of interest is impossible. Partial elicitations can be complemented with imputation methods for the remaining, unelicited quantities. In the case where the quantities of interest are conditional probability distributions, the natural relationship between the quantities can be exploited to impute missing probabilities. Here we test "InterBeta," the Bayesian Intelligence interpolation method for Bayesian network conditional probability tables, and its variations. We compare the various outputs of InterBeta on two cases where conditional probability tables were elicited from groups of experts. We show that interpolated values are in good agreement with experts' values and give guidance on how InterBeta could be used to good effect to reduce expert burden in SEJ exercises.
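The general idea of imputing unelicited conditional probability table (CPT) rows can be illustrated with a simple sketch. This is a plain linear interpolation between two elicited rows, not the InterBeta method itself, and all the distributions below are hypothetical.

```python
def interpolate_row(row_low, row_high, t):
    """Impute a missing CPT row by blending two elicited parent-state rows.

    t in [0, 1] positions the missing parent state between the two
    elicited states; the result is renormalised to a valid distribution.
    """
    mixed = [(1 - t) * a + t * b for a, b in zip(row_low, row_high)]
    total = sum(mixed)
    return [x / total for x in mixed]

# Hypothetical elicited rows: P(child | parent = low) and P(child | parent = high).
elicited_low = [0.7, 0.2, 0.1]
elicited_high = [0.1, 0.3, 0.6]

# Impute the unelicited middle parent state, assumed halfway between the two.
imputed_mid = interpolate_row(elicited_low, elicited_high, 0.5)
```

Comparing such imputed rows against rows that experts did elicit is the kind of agreement check the paper performs for InterBeta's variants.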
  • Item
    Predicting species and community responses to global change using structured expert judgement: An Australian mountain ecosystems case study
    Camac, JS ; Umbers, KDL ; Morgan, JW ; Geange, SR ; Hanea, A ; Slatyer, RA ; McDougall, KL ; Venn, SE ; Vesk, PA ; Hoffmann, AA ; Nicotra, AB (WILEY, 2021-09)
    Conservation managers are under increasing pressure to make decisions about the allocation of finite resources to protect biodiversity under a changing climate. However, the impacts of climate and global change drivers on species are outpacing our capacity to collect the empirical data necessary to inform these decisions. This is particularly the case in the Australian Alps which have already undergone recent changes in climate and experienced more frequent large-scale bushfires. In lieu of empirical data, we use a structured expert elicitation method (the IDEA protocol) to estimate the change in abundance and distribution of nine vegetation groups and 89 Australian alpine and subalpine species by the year 2050. Experts predicted that most alpine vegetation communities would decline in extent by 2050; only woodlands and heathlands are predicted to increase in extent. Predicted species-level responses for alpine plants and animals were highly variable and uncertain. In general, alpine plants spanned the range of possible responses, with some expected to increase, decrease or not change in cover. By contrast, almost all animal species are predicted to decline or not change in abundance or elevation range; more species with water-centric life-cycles are expected to decline in abundance than other species. While long-term ecological data will always be the gold standard for informing the future of biodiversity, the method and outcomes outlined here provide a pragmatic and coherent basis upon which to start informing conservation policy and management in the face of rapid change and a paucity of data.
  • Item
    Weighting and aggregating expert ecological judgments
    Hemming, V ; Hanea, AM ; Walshe, T ; Burgman, MA (Ecological Society of America, 2020-06-01)
    Performance weighted aggregation of expert judgments, using calibration questions, has been advocated to improve pooled quantitative judgments for ecological questions. However, there is little discussion or practical advice in the ecological literature regarding the application, advantages or challenges of performance weighting. In this paper we (1) illustrate how the IDEA protocol with four‐step question format can be extended to include performance weighted aggregation from the Classical Model, and (2) explore the extent to which this extension improves pooled judgments for a range of performance measures. Our case study demonstrates that performance weights can improve judgments derived from the IDEA protocol with four‐step question format. However, there is no a priori guarantee of improvement. We conclude that the merits of the method lie in demonstrating that the final aggregation of judgments provides the best representation of uncertainty (i.e., validation), whether that be via equally weighted or performance weighted aggregation. Whether the time and effort entailed in performance weights can be justified is a matter for decision‐makers. Our case study outlines the rationale, challenges, and benefits of performance weighted aggregations. It will help to inform decisions about the deployment of performance weighting and avoid common pitfalls in its application.