Infrastructure Engineering - Research Publications

    Characterizing dominant hydrological processes under uncertainty: evaluating the interplay between model structure, parameter sampling, error metrics, and data information content
Khatami, S.; Peel, M.; Peterson, T.; Western, A. (2020-03-23)
<p>Hydrological models are conventionally evaluated in terms of their response surface or likelihood surface constructed over the model parameter space. To evaluate models as hypotheses, we developed the method of <em>Flux Mapping</em> to construct a hypothesis space based on model process representation. Here we define the hypothesis space in terms of dominant runoff-generating mechanisms, and define acceptable model runs as those in which similar (and minimal) error in total simulated flow is produced by distinct combinations of runoff components. We demonstrate that the hypothesis space in each modeling case results from the interplay between model structure, parameter sampling, choice of error metric, and data information content. The aim of this study is to disentangle the role of each factor in this interplay. We used two model structures (SACRAMENTO and SIMHYD), two parameter sampling approaches (small samples based on guided search and large samples based on Latin Hypercube Sampling), three widely used error metrics (NSE, KGE, and WIA, i.e. Willmott's Index of Agreement), and hydrological data from a range of Australian catchments. First, we characterized how the three error metrics behave under different error regimes, independently of any modeling. We then conducted a series of controlled experiments, i.e. a one-factor-at-a-time sensitivity analysis, to unpack the role of each factor in runoff simulation. We show that KGE is a more reliable error metric than NSE and WIA for model evaluation. We also argue that robust error metrics and sufficient parameter sampling are necessary conditions for evaluating models as hypotheses under uncertainty. In particular, sampling sufficiency, regardless of the sampling strategy, should be further evaluated in light of its interaction with the other modeling factors that determine the model response.
We conclude that the interplay of these modeling factors is complex and unique to each modeling case; model-based inferences should therefore be generalized with caution, particularly when characterizing hydrological processes in large-sample hydrology.</p>
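The three error metrics compared in the abstract can be sketched as follows. This is a minimal illustration in Python/NumPy using the standard formulations (NSE per Nash and Sutcliffe, KGE in its 2009 form, and Willmott's original index of agreement); the paper itself may use variants, so treat these definitions as assumptions rather than the authors' exact implementation.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency (2009 form): combines correlation, variability, bias."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]      # linear correlation component
    alpha = sim.std() / obs.std()        # variability ratio
    beta = sim.mean() / obs.mean()       # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def wia(obs, sim):
    """Willmott's Index of Agreement (original form), bounded in [0, 1]."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - np.sum((sim - obs) ** 2) / denom
```

All three reach 1 for a perfect simulation, which is what makes their differing behavior under imperfect, biased, or noisy simulations (the "error regimes" examined in the study) worth characterizing separately.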