School of BioSciences - Research Publications

Now showing 1 - 10 of 43
  • Item
    Can Groups Improve Expert Economic and Financial Forecasts?
    Smith, W ; Hanea, AM ; Burgman, MA (MDPI, 2022-09)
    Economic and financial forecasts are important for business planning and government policy but are notoriously challenging. We take advantage of recent advances in individual and group judgement, and a data set of economic and financial forecasts compiled over 25 years, consisting of multiple individual and institutional estimates, to test the claim that nominal groups will make more accurate economic and financial forecasts than individuals. We validate the forecasts using the subsequently published (real) outcomes, explore the performance of nominal groups against institutions, identify potential superforecasters, and discuss the benefits of implementing structured judgement techniques to improve economic and financial forecasts.
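For intuition, here is a minimal Python sketch of the comparison this abstract describes: pooling individual forecasts into an equal-weighted nominal-group estimate and validating both against the published outcome. All forecasts and the outcome are hypothetical, and the median pool is just one simple aggregation rule, not necessarily the one the authors used.

```python
# Minimal sketch: does pooling individual forecasts (a "nominal group")
# beat the typical individual? All data below are hypothetical.
import statistics

# Hypothetical GDP-growth forecasts (%) from five individuals, and the
# subsequently published outcome used for validation.
forecasts = [2.1, 2.6, 1.8, 3.0, 2.4]
outcome = 2.3

individual_errors = [abs(f - outcome) for f in forecasts]
group_estimate = statistics.median(forecasts)  # equal-weighted nominal group
group_error = abs(group_estimate - outcome)

print(f"mean individual error: {statistics.mean(individual_errors):.2f}")
print(f"nominal-group (median) error: {group_error:.2f}")
```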
  • Item
    A toolkit for open and pluralistic conservation science
    Burgman, M ; Chiaravalloti, R ; Fidler, F ; Huan, Y ; McBride, M ; Marcoci, A ; Norman, J ; Vercammen, A ; Wintle, B ; Yu, Y (WILEY, 2023-01-01)
    Conservation science practitioners seek to preempt irreversible impacts on species, ecosystems, and social–ecological systems, requiring efficient and timely action even when data and understanding are unavailable, incomplete, dated, or biased. These challenges are exacerbated by limits on the scientific community's capacity to consistently distinguish reliable from unreliable evidence, and to recognize questionable research practices (QRPs, or “questionable practices”), which can threaten the credibility of research and erode trust in well-designed, reliable scientific work. In this paper, we propose a “toolkit” for open and pluralistic conservation science, highlighting common questionable practices and sources of bias and indicating where remedies for these problems may be found. The toolkit provides an accessible resource for anyone conducting, reviewing, or using conservation research to identify sources of false claims or misleading evidence that arise unintentionally, through misunderstandings, or through carelessness in the application of scientific methods and analyses. We aim to influence editorial and review practices and, ideally, to remedy problems before research is published or deployed in policy or conservation practice.
  • Item
    Challenges in estimation, uncertainty quantification and elicitation for pandemic modelling.
    Swallow, B ; Birrell, P ; Blake, J ; Burgman, M ; Challenor, P ; Coffeng, LE ; Dawid, P ; De Angelis, D ; Goldstein, M ; Hemming, V ; Marion, G ; McKinley, TJ ; Overton, CE ; Panovska-Griffiths, J ; Pellis, L ; Probert, W ; Shea, K ; Villela, D ; Vernon, I (Elsevier BV, 2022-03)
    Estimating parameters and model structure to inform infectious disease responses became a focal point of the recent pandemic. The pandemic also highlighted, however, a plethora of remaining challenges in extracting information from data and models quickly and robustly enough to inform policy. In this paper, we identify and discuss four broad challenges in the estimation paradigm for infectious disease modelling: the uncertainty quantification framework, data challenges in estimation, model-based inference and prediction, and expert judgement. We also propose priorities in estimation methodology to facilitate preparation for future pandemics.
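As a toy illustration of the uncertainty quantification challenge the abstract names, the sketch below propagates parameter uncertainty through a discrete-time SIR model by Monte Carlo. The model, the prior on the transmission rate, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of one UQ task the paper discusses: propagating parameter
# uncertainty through an epidemic model. Model and prior are hypothetical.
import random

def sir(beta, gamma, s0=0.99, i0=0.01, days=60):
    """Discrete-time SIR with daily steps; returns peak infectious fraction."""
    s, i, peak = s0, i0, i0
    for _ in range(days):
        new_inf = beta * s * i   # new infections this day
        new_rec = gamma * i      # new recoveries this day
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

random.seed(1)
# Hypothetical prior: beta ~ Uniform(0.25, 0.45) per day, recovery 0.1 per day.
peaks = sorted(sir(random.uniform(0.25, 0.45), 0.1) for _ in range(1000))
lo, mid, hi = peaks[25], peaks[500], peaks[975]
print(f"peak infectious fraction: median {mid:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```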
  • Item
    Reimagining peer review as an expert elicitation process
    Marcoci, A ; Vercammen, A ; Bush, M ; Hamilton, DG ; Hanea, A ; Hemming, V ; Wintle, BC ; Burgman, M ; Fidler, F (SPRINGERNATURE, 2022-04-05)
    Journal peer review regulates the flow of ideas through an academic discipline and thus has the power to shape what a research community knows, actively investigates, and recommends to policymakers and the wider public. We might assume that editors can identify the 'best' experts and rely on them for peer review. But decades of research on both expert decision-making and peer review suggest they cannot. In the absence of a clear criterion for demarcating reliable, insightful, and accurate expert assessors of research quality, the best safeguard against unwanted biases and uneven power distributions is to introduce greater transparency and structure into the process. This paper argues that peer review would therefore benefit from applying a series of evidence-based recommendations from the empirical literature on structured expert elicitation. We highlight individual and group characteristics that contribute to higher-quality judgements, and elements of elicitation protocols that reduce bias, promote constructive discussion, and enable opinions to be aggregated objectively and transparently.
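A minimal sketch of one structured-elicitation idea this argument points to: aggregating reviewer judgements by an explicit, transparent rule rather than an editor's holistic read. Here each reviewer's scores are standardised to remove leniency or severity bias before pooling. Reviewer names, manuscripts, and scores are hypothetical.

```python
# Minimal sketch: transparent, rule-based pooling of reviewer scores.
import statistics

# Each reviewer scores the same set of submissions on a 1-10 scale.
scores = {
    "reviewer_a": {"ms1": 7, "ms2": 4, "ms3": 9},
    "reviewer_b": {"ms1": 5, "ms2": 3, "ms3": 6},
    "reviewer_c": {"ms1": 8, "ms2": 6, "ms3": 8},
}

def zscores(ratings):
    """Standardise one reviewer's ratings to remove leniency/severity bias."""
    mu = statistics.mean(ratings.values())
    sd = statistics.stdev(ratings.values())
    return {ms: (r - mu) / sd for ms, r in ratings.items()}

standardised = [zscores(r) for r in scores.values()]
for ms in ["ms1", "ms2", "ms3"]:
    pooled = statistics.mean(z[ms] for z in standardised)
    print(f"{ms}: pooled standardised score {pooled:+.2f}")
```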
  • Item
    An introduction to decision science for conservation
    Hemming, V ; Camaclang, AE ; Adams, MS ; Burgman, M ; Carbeck, K ; Carwardine, J ; Chades, I ; Chalifour, L ; Converse, SJ ; Davidson, LNK ; Garrard, GE ; Finn, R ; Fleri, JR ; Huard, J ; Mayfield, HJ ; Madden, EM ; Naujokaitis-Lewis, I ; Possingham, HP ; Rumpff, L ; Runge, MC ; Stewart, D ; Tulloch, VJD ; Walshe, T ; Martin, TG (WILEY, 2022-02)
    Biodiversity conservation decisions are difficult, especially when they involve differing values, complex multidimensional objectives, scarce resources, urgency, and considerable uncertainty. Decision science embodies a theory about how to make difficult decisions and an extensive array of frameworks and tools that make that theory practical. We sought to improve conceptual clarity and practical application of decision science to help decision makers apply decision science to conservation problems. We addressed barriers to the uptake of decision science, including a lack of training and awareness of decision science; confusion over common terminology and which tools and frameworks to apply; and the mistaken impression that applying decision science must be time consuming, expensive, and complex. To aid in navigating the extensive and disparate decision science literature, we clarify the meaning of common terms: decision science, decision theory, decision analysis, structured decision-making, and decision-support tools. Applying decision science does not have to be complex or time consuming; rather, it begins with knowing how to think through the components of a decision using decision analysis (i.e., define the problem, elicit objectives, develop alternatives, estimate consequences, and perform trade-offs). This is best achieved by applying a rapid-prototyping approach. At each step, decision-support tools can provide additional insight and clarity, whereas decision-support frameworks (e.g., priority threat management and systematic conservation planning) can aid navigation of multiple steps of a decision analysis for particular contexts. We summarize key decision-support frameworks and tools and describe the steps of a decision analysis, and the contexts, to which each is best suited. Our introduction to decision science will aid in contextualizing current approaches and new developments, and help decision makers begin to apply decision science to conservation problems.
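The trade-off step at the end of that decision-analysis sequence can be illustrated with a simple weighted sum over a consequence table, as sketched below. The alternatives, objectives, scores, and weights are all hypothetical; real applications elicit them from stakeholders and experts.

```python
# Minimal sketch of the "perform trade-offs" step: a weighted sum over a
# consequence table. Everything below is hypothetical.

# Predicted performance of each alternative on each objective, already
# scaled to 0-1 (1 = best).
consequences = {
    "do_nothing":      {"persistence": 0.2, "cost": 1.0, "social": 0.6},
    "habitat_restore": {"persistence": 0.7, "cost": 0.4, "social": 0.8},
    "translocation":   {"persistence": 0.9, "cost": 0.2, "social": 0.5},
}
weights = {"persistence": 0.5, "cost": 0.3, "social": 0.2}  # elicited weights

def weighted_score(outcomes):
    return sum(weights[c] * v for c, v in outcomes.items())

ranked = sorted(consequences.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for alt, outcomes in ranked:
    print(f"{alt}: {weighted_score(outcomes):.2f}")
```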
  • Item
    Conservation Biology celebrates success
    Jarrad, F ; Main, E ; Burgman, M (WILEY-BLACKWELL, 2016-10)
  • Item
    Promoting transparency in conservation science
    Parker, TH ; Main, E ; Nakagawa, S ; Gurevitch, J ; Jarrad, F ; Burgman, M (WILEY-BLACKWELL, 2016-12)
  • Item
    Improving expert forecasts in reliability: Application and evidence for structured elicitation protocols
    Hemming, V ; Armstrong, N ; Burgman, MA ; Hanea, AM (WILEY, 2020-03)
    Quantitative expert judgements are used in reliability assessments to inform critically important decisions. Structured elicitation protocols have been advocated to improve expert judgements, yet their application in reliability is hampered by a lack of examples and of evidence that they improve judgements. This paper aims to overcome these barriers. We present a case study in which two world-leading protocols, the IDEA protocol and the Classical Model, were combined and applied by the Australian Department of Defence for a reliability assessment. We assess the practicality of the methods and the extent to which they improve judgements. The average expert was extremely overconfident, with 90% credible intervals containing the true realisation only 36% of the time. However, steps contained in the protocols substantially improved judgements. In particular, an equally weighted aggregation of individual judgements, together with a discussion phase and revised estimates, improved calibration, statistical accuracy, and the Classical Model score. Further improvements in precision and information were made via performance-weighted aggregation. This paper provides useful insights into the application of structured elicitation protocols for reliability and the extent to which judgements are improved. The findings raise concerns about existing practices for utilising experts in reliability assessments and suggest that greater adoption of structured protocols is warranted. We encourage the reliability community to develop further examples and insights.
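Two of the steps described here are easy to sketch: scoring each expert's 90% credible intervals against known realisations, and comparing an equal-weighted pool with a performance-weighted one. The experts, questions, and estimates below are hypothetical, and the simple hit-rate weight is a crude stand-in for the Classical Model's chi-square-based calibration score.

```python
# Minimal sketch of interval scoring and pooling in an IDEA/Classical
# Model-style workflow. All data are hypothetical.
import statistics

# Each expert gives (lower, best, upper) 90% intervals for seed questions
# with known realisations.
experts = {
    "e1": [(1, 3, 5), (10, 15, 18), (0.2, 0.5, 0.6)],
    "e2": [(2, 3, 9), (8, 14, 25), (0.1, 0.4, 0.9)],
}
realised = [4, 20, 0.45]

def hit_rate(intervals):
    """Fraction of 90% intervals containing the realised value."""
    hits = sum(lo <= x <= hi for (lo, _, hi), x in zip(intervals, realised))
    return hits / len(intervals)

rates = {e: hit_rate(iv) for e, iv in experts.items()}
print("90% interval hit rates:", rates)  # well-calibrated experts -> ~0.9

# Equal-weighted vs crude performance-weighted pool of best estimates for a
# new target question (hypothetical estimates below).
best = {"e1": 40, "e2": 55}
equal = statistics.mean(best.values())
perf = sum(rates[e] * best[e] for e in best) / sum(rates.values())
print(f"equal-weighted pool: {equal:.1f}, performance-weighted pool: {perf:.1f}")
```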
  • Item
    Plan S and publishing: reply to Lehtomaki et al. 2019
    McCarthy, MA ; Burgman, MA ; Wei, F ; Jarrad, FC ; Rondinini, C ; Murcia, C ; Marsh, HD ; Akcakaya, HR ; Esler, KJ ; Game, ET ; Schwartz, MW (WILEY, 2019-10)
  • Item
    Untapped potential of collective intelligence in conservation and environmental decision making.
    Vercammen, A ; Burgman, M (Wiley, 2019-12)
    Environmental decisions are often deferred to groups of experts, committees, or panels to develop climate policy, plan protected areas, or negotiate trade-offs for biodiversity conservation. There is, however, surprisingly little empirical research on the performance of group decision making related to the environment. We examined examples from a range of disciplines, demonstrating the emergence of collective intelligence (CI) in the elicitation of quantitative estimates, crowdsourcing applications, and small-group problem solving. We explored the extent to which similar tools are used in environmental decision making. This revealed important gaps (e.g., a lack of integration of fundamental research in decision-making practice, and the absence of systematic evaluation frameworks) that obstruct the mainstreaming of CI. By making judicious use of interdisciplinary learning opportunities, CI can be harnessed effectively to improve decision making in conservation and environmental management. Eliciting reliable quantitative estimates may require incorporating insights from cognitive psychology, and optimizing crowdsourcing may require artificial intelligence tools. The business literature offers insights into the importance of soft skills and diversity in team effectiveness. Environmental problems provide a challenging and rich testing ground for collective-intelligence tools and frameworks. We argue that this creates an opportunity for significant advances in decision-making research and practice.
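One of the collective-intelligence mechanisms reviewed here, crowdsourced classification, reduces to majority voting over independent judgements, as in this minimal sketch. The task and labels are hypothetical.

```python
# Minimal sketch: majority voting over independent crowd classifications
# (e.g., volunteers labelling a camera-trap image). Data hypothetical.
from collections import Counter

# Each volunteer labels the same image; individual accuracy is imperfect.
labels = ["wallaby", "wallaby", "kangaroo", "wallaby", "possum", "wallaby"]

vote, count = Counter(labels).most_common(1)[0]
print(f"crowd label: {vote} ({count}/{len(labels)} votes)")
```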