Economics - Research Publications

Search Results

Now showing 1 - 10 of 26
  • Item
    On the impact of outliers in loss reserving
    Avanzi, B ; Lavender, M ; Taylor, G ; Wong, B (SPRINGER HEIDELBERG, 2024-04)
    Abstract: The sensitivity of loss reserving techniques to outliers in the data or deviations from model assumptions is a well-known challenge. It has been shown that the popular chain-ladder reserving approach is at significant risk from such aberrant observations, in that reserve estimates can be significantly shifted in the presence of even one outlier. As a consequence, the chain-ladder reserving technique is non-robust. In this paper we investigate the sensitivity of reserves and mean squared errors of prediction under Mack’s Model (ASTIN Bull 23(2):213–225, 1993). This is done through the derivation of impact functions, which are calculated by taking the first derivative of the relevant statistic of interest with respect to an observation. We also provide and discuss the impact functions for quantiles when total reserves are assumed to be lognormally distributed. Additionally, comparisons are made between the impact functions for individual accident year reserves under Mack’s Model and the Bornhuetter–Ferguson methodology. It is shown that the impact of incremental claims on these statistics of interest varies widely throughout a loss triangle and is heavily dependent on other cells in the triangle. Results are illustrated using data from a Belgian non-life insurer.
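    As a rough, hedged illustration of the impact-function idea (the paper derives these analytically under Mack's Model), the Python sketch below approximates the derivative of the plain chain-ladder total reserve with respect to a single incremental observation by a finite difference on a toy triangle. The triangle figures and the chosen cell are purely illustrative.

```python
import numpy as np

def chain_ladder_reserve(cum):
    """Total chain-ladder reserve for a cumulative run-off triangle (NaN below the diagonal)."""
    n = cum.shape[0]
    f = np.array([
        np.nansum(cum[: n - k - 1, k + 1]) / np.nansum(cum[: n - k - 1, k])
        for k in range(n - 1)
    ])
    reserve = 0.0
    for i in range(1, n):
        latest = cum[i, n - i - 1]
        reserve += latest * np.prod(f[n - i - 1:]) - latest
    return reserve

def impact(cum, i, j, h=1.0):
    """Finite-difference impact of the incremental claim in cell (i, j) on the total reserve."""
    bumped = cum.copy()
    bumped[i, j:] += h  # an incremental bump raises that cumulative cell and all later ones
    return (chain_ladder_reserve(bumped) - chain_ladder_reserve(cum)) / h

# Toy 4x4 cumulative triangle (illustrative figures only)
tri = np.array([
    [100.0, 150.0, 170.0, 180.0],
    [110.0, 168.0, 190.0, np.nan],
    [120.0, 175.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
print(impact(tri, 2, 1))  # sensitivity of the total reserve to incremental claims in cell (2, 1)
```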
  • Item
    Detection and treatment of outliers for multivariate robust loss reserving
    Avanzi, B ; Lavender, M ; Taylor, G ; Wong, B (CAMBRIDGE UNIV PRESS, 2024-03-24)
    Abstract: Traditional techniques for calculating outstanding claim liabilities, such as the chain-ladder, are notoriously at risk of being distorted by outliers in past claims data. Unfortunately, the literature on robust reserving methods is scant, with notable exceptions such as Verdonck & Debruyne (2011, Insurance: Mathematics and Economics, 48, 85–98) and Verdonck & Van Wouwe (2011, Insurance: Mathematics and Economics, 49, 188–193). In this paper, we put forward two alternative robust bivariate chain-ladder techniques to extend the approach of Verdonck & Van Wouwe (2011, Insurance: Mathematics and Economics, 49, 188–193). The first technique is based on Adjusted Outlyingness (Hubert & Van der Veeken, 2008, Journal of Chemometrics, 22, 235–246) and explicitly incorporates skewness into the analysis while providing a unique measure of outlyingness for each observation. The second technique is based on bagdistance (Hubert et al., 2016, Statistics: Methodology, 1–23), which is derived from the bagplot; however, it is able to provide a unique measure of outlyingness and a means to adjust outlying observations based on this measure. Furthermore, we extend our robust bivariate chain-ladder approach to an N-dimensional framework. The implementation of the methods, especially beyond the bivariate case, is not trivial. This is illustrated on a trivariate data set from Australian general insurers, and results under the different outlier detection and treatment mechanisms are compared.
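    As a hedged sketch of the general workflow (robust detection of outlying bivariate observations, followed by an adjustment before reserving), the snippet below uses robust Mahalanobis distances from a Minimum Covariance Determinant fit in place of the paper's adjusted outlyingness and bagdistance measures; the data and the 97.5% chi-square cutoff are illustrative.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# Paired incremental claims from two related lines of business (hypothetical data)
X = rng.multivariate_normal([100.0, 80.0], [[400.0, 240.0], [240.0, 300.0]], size=60)
X[0] = [260.0, 40.0]                     # plant one aberrant bivariate observation

mcd = MinCovDet(random_state=0).fit(X)   # robust location/scatter estimate
d2 = mcd.mahalanobis(X)                  # squared robust distances
cutoff = chi2.ppf(0.975, df=2)           # usual chi-square threshold in 2 dimensions

outliers = np.where(d2 > cutoff)[0]
print("flagged observations:", outliers)

# One possible treatment: shrink flagged points towards the robust centre
X_adj = X.copy()
scale = np.sqrt(cutoff / d2[outliers])
X_adj[outliers] = mcd.location_ + (X[outliers] - mcd.location_) * scale[:, None]
```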
  • Item
    How to proxy the unmodellable: Analysing granular insurance claims in the presence of unobservable or complex drivers
    Avanzi, B ; Taylor, G ; Wong, B ; Xian, A (Institute of Actuaries, Australia, 2018)
    The estimation of claim and premium liabilities is a key component of an actuary's role and plays a vital part in any insurance company’s operations. In practice, such calculations are complicated by the stochastic nature of the claims process as well as the impracticality of capturing all relevant and material drivers of the observed claims data. In the past, computational limitations have promoted the prevalence of simplified (but possibly sub-optimal) aggregate methodologies. However, in light of modern advances in processing power, it is viable to increase the granularity at which we analyse insurance data sets so that potentially useful information is not discarded. By utilising more granular and detailed data (that is usually readily available to insurers), model predictions may become more accurate and precise. Unfortunately, detailed analysis of large insurance data sets in this manner poses some unique challenges. Firstly, there is no standard framework to which practitioners can refer, and it can be challenging to tractably integrate all modelled components into one comprehensive model. Secondly, analysis at greater granularity or level of detail requires more intense levels of scrutiny, as complex trends and drivers that were previously masked by aggregation and discretisation assumptions may emerge. This is a particular issue with claim drivers that are either unobservable to the modeller or very difficult/expensive to model. Finally, computation times are a material concern when processing such large volumes of data, as model outputs need to be obtained in reasonable time-frames. Our proposed methodology overcomes the above problems by using a Markov-modulated non-homogeneous Poisson process framework. This extends the standard Poisson model by allowing for over-dispersion to be captured in an interpretable, structural manner. The approach implements a flexible exposure measure to explicitly allow for known/modelled claim drivers, while the hidden component of the hidden Markov model captures the impact of unobservable or practicably non-modellable information. Computational developments are made to drastically reduce calibration times. Theoretical findings are illustrated and validated in an empirical case study using Australian general insurance data in order to highlight the benefits of the proposed approach.
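    The Python sketch below gives a hedged, toy version of the idea: a two-state hidden Markov chain modulates a Poisson claim intensity, while a known exposure term stands in for the modelled drivers. The states, rates and exposure curve are invented for illustration and are unrelated to the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_days = 365
P = np.array([[0.95, 0.05],        # hidden-state transition matrix (calm -> volatile, etc.)
              [0.20, 0.80]])
state_mult = np.array([1.0, 2.5])  # intensity multiplier in each hidden state

# Known exposure / seasonality component (the "modellable" drivers)
t = np.arange(n_days)
exposure = 100.0 * (1.0 + 0.2 * np.sin(2.0 * np.pi * t / 365.0))
base_rate = 0.03                   # claims per unit exposure per day

states = np.zeros(n_days, dtype=int)
counts = np.zeros(n_days, dtype=int)
for d in range(1, n_days):
    states[d] = rng.choice(2, p=P[states[d - 1]])
for d in range(n_days):
    lam = base_rate * exposure[d] * state_mult[states[d]]
    counts[d] = rng.poisson(lam)

print("mean daily count:", counts.mean(), "variance:", counts.var())  # over-dispersion vs plain Poisson
```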
  • Item
    On the Impact, Detection and Treatment of Outliers in Robust Loss Reserving
    Avanzi, B ; Taylor, G ; Wong, B ; Lavender, M (Actuaries Institute, 2016)
    The sensitivity of loss reserving techniques to outliers in the data or deviations from model assumptions is a well-known challenge. For instance, it has been shown that the popular chain-ladder reserving approach is at significant risk from such aberrant observations, in that reserve estimates can be significantly shifted in the presence of even one outlier. In this paper we firstly investigate the sensitivity of reserves and mean squared errors of prediction under Mack's Model. This is done through the derivation of impact functions, which are calculated by taking the first derivative of the relevant statistic of interest with respect to an observation. We also provide and discuss the impact functions for quantiles when total reserves are assumed to be lognormally distributed. Additionally, comparisons are made between the impact functions for individual accident year reserves under Mack's Model and the Bornhuetter-Ferguson methodology. It is shown that the impact of incremental claims on these statistics of interest varies widely throughout a loss triangle and is heavily dependent on other cells in the triangle. We then put forward two alternative robust bivariate chain-ladder techniques (Verdonck and Van Wouwe, 2011) based on Adjusted Outlyingness (Hubert and Van der Veeken, 2008) and bagdistance (Hubert et al., 2016). These techniques provide a measure of outlyingness that is unique to each individual observation rather than relying largely on graphical representations as is done under the existing bagplot methodology. Furthermore, the Adjusted Outlyingness approach explicitly incorporates a robust measure of skewness into the analysis, whereas the bagplot captures the shape of the data only through a measure of rank. Results are illustrated on two sets of real bivariate data from general insurers.
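    For the lognormal quantile step mentioned above, a minimal hedged sketch in Python: match lognormal parameters to a central reserve estimate and a root-MSEP by moments, then track how a small shift in the central estimate (as caused by an outlying observation) moves a chosen quantile. The figures are illustrative.

```python
import numpy as np
from scipy.stats import norm

def lognormal_quantile(reserve, rmsep, p=0.75):
    """Quantile of a lognormal matched by moments to (reserve, rmsep)."""
    cv2 = (rmsep / reserve) ** 2          # squared coefficient of variation
    sigma2 = np.log(1.0 + cv2)
    mu = np.log(reserve) - 0.5 * sigma2
    return np.exp(mu + np.sqrt(sigma2) * norm.ppf(p))

base = lognormal_quantile(10_000.0, 2_000.0)
bumped = lognormal_quantile(10_050.0, 2_000.0)   # reserve shifted by a bumped observation
print(base, bumped, (bumped - base) / 50.0)      # crude impact of the shift on the 75th percentile
```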
  • Item
    On the optimality of joint periodic and extraordinary dividend strategies
    Avanzi, B ; Lau, H ; Wong, B (ELSEVIER, 2021-08-13)
    In this paper, we model the cash surplus (or equity) of a risky business with a Brownian motion (with a drift). Owners can take cash out of the surplus in the form of “dividends”, subject to transaction costs. However, if the surplus hits 0 then ruin occurs and the business cannot operate any more. We consider two types of dividend distributions: (i) periodic, regular ones (that is, dividends can be paid only at countably many points in time, according to a specific arrival process); and (ii) extraordinary dividend payments that can be made immediately at any time (that is, the dividend decision time space is continuous and matches that of the surplus process). Both types of dividends attract proportional transaction costs, but extraordinary distributions also attract fixed transaction costs, which is a realistic feature. A dividend strategy that involves both types of distributions (periodic and extraordinary) is qualified as “hybrid”. We determine which strategies (either periodic, immediate, or hybrid) are optimal, that is, we show which are the strategies that maximise the expected present value of dividends paid until ruin, net of transaction costs. Sometimes, a liquidation strategy (which pays out all monies and stops the process) is optimal. Which strategy is optimal depends on the profitability of the business, and the level of (proportional and fixed) transaction costs. Results are illustrated.
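    The paper solves this control problem analytically; purely as a hedged numerical illustration of a hybrid strategy of the type described, the Python sketch below simulates a drifted Brownian surplus, pays periodic dividends above a barrier at Poisson-arriving decision times (proportional cost only), and pays an immediate extraordinary dividend, with an extra fixed cost, whenever the surplus exceeds a higher trigger. All barriers, costs and rates are invented and are not claimed to be optimal.

```python
import numpy as np

rng = np.random.default_rng(2)

mu, sigma = 0.3, 1.0            # drift and volatility of the surplus
delta = 0.05                    # force of discounting
gamma = 4.0                     # Poisson rate of periodic dividend decision times
b_per, b_ext = 2.0, 6.0         # periodic barrier and extraordinary trigger (illustrative)
k_prop, k_fix = 0.02, 0.5       # proportional and fixed transaction costs
dt, horizon, n_paths = 0.01, 20.0, 500

def discounted_dividends(x0=3.0):
    """Present value of net dividends on one simulated path, until ruin or the horizon."""
    x, t, pv = x0, 0.0, 0.0
    while t < horizon and x > 0.0:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x > b_ext:                                    # extraordinary (immediate) distribution
            paid = x - b_per
            pv += np.exp(-delta * t) * ((1 - k_prop) * paid - k_fix)
            x = b_per
        elif x > b_per and rng.random() < gamma * dt:    # a periodic decision time arrives
            paid = x - b_per
            pv += np.exp(-delta * t) * (1 - k_prop) * paid
            x = b_per
    return pv

print("estimated EPV of net dividends:", np.mean([discounted_dividends() for _ in range(n_paths)]))
```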
  • Item
    Stochastic loss reserving with mixture density neural networks
    Al-Mudafer, MT ; Avanzi, B ; Taylor, G ; Wong, B (ELSEVIER, 2022-07-01)
    In recent years, new techniques based on artificial intelligence, and machine learning in particular, have been revolutionising the work of actuaries, including in loss reserving. A particularly promising technique is that of neural networks, which have been shown to offer a versatile, flexible and accurate approach to loss reserving. However, applications of neural networks in loss reserving to date have been primarily focused on the (important) problem of fitting accurate central estimates of the outstanding claims. In practice, properties regarding the variability of outstanding claims are equally important (e.g., quantiles for regulatory purposes). In this paper we fill this gap by applying a Mixture Density Network (“MDN”) to loss reserving. The approach combines a neural network architecture with a mixture Gaussian distribution to achieve simultaneously an accurate central estimate along with flexible distributional choice. Model fitting is done using a rolling-origin approach. Our approach consistently outperforms the classical over-dispersed Poisson model, both for central estimates and quantiles of interest, when applied to a wide range of simulated environments of various complexity and specifications. We further propose two extensions of the MDN approach. Firstly, we present a hybrid GLM-MDN approach called “ResMDN”. This hybrid approach balances the tractability and ease of understanding of a traditional GLM model on one hand, with the additional accuracy and distributional flexibility provided by the MDN on the other. We show that it can successfully reduce the errors of the baseline ccODP, although there is generally a loss of performance when compared to the MDN in the examples we considered. Secondly, we allow for explicit projection constraints, so that actuarial judgement can be directly incorporated into the modelling process. Throughout, we focus on aggregate loss triangles, and show that our methodologies are tractable, and that they outperform traditional approaches even with relatively limited amounts of data. We use both simulated data (to validate properties) and real data (to illustrate and ascertain the practicality of the approaches).
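    As a hedged, minimal sketch of a Gaussian mixture density network in Python/PyTorch (not a reproduction of the paper's architecture, rolling-origin fitting scheme, or the ResMDN hybrid): a small network maps a cell's covariates to mixture weights, means and scales, is trained by negative log-likelihood, and yields a central estimate as the mixture mean; quantiles can then be read off the fitted mixture. The toy data and network sizes are invented.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Maps covariates to a K-component Gaussian mixture."""
    def __init__(self, n_in=2, n_hidden=32, k=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.pi = nn.Linear(n_hidden, k)         # mixture weight logits
        self.mu = nn.Linear(n_hidden, k)         # component means
        self.log_sigma = nn.Linear(n_hidden, k)  # component log-scales

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def nll(logits, mu, log_sigma, y):
    """Negative log-likelihood of y under the predicted Gaussian mixture."""
    log_w = torch.log_softmax(logits, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    return -torch.logsumexp(log_w + comp.log_prob(y.unsqueeze(-1)), dim=-1).mean()

# Toy data: (accident period, development period) -> scaled incremental loss
torch.manual_seed(0)
x = torch.rand(512, 2)
y = torch.exp(-3.0 * x[:, 1]) * (1.0 + 0.3 * torch.randn(512))

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nll(*model(x), y)
    loss.backward()
    opt.step()

# Central estimate for one cell: mixture mean = sum_k w_k * mu_k
logits, mu, log_sigma = model(torch.tensor([[0.5, 0.2]]))
w = torch.softmax(logits, dim=-1)
print("central estimate:", float((w * mu).sum()))
```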
  • Item
    SynthETIC: An individual insurance claim simulator with feature control
    Avanzi, B ; Taylor, G ; Wang, M ; Wong, B (ELSEVIER, 2021-07-07)
    Recent years have seen a rapid increase in the application of machine learning to insurance loss reserving. These methods yield most value when applied to large data sets, such as individual claims or large claim triangles. In short, they are likely to be useful in the analysis of any data set whose volume is sufficient to obscure a naked-eye view of its features. Unfortunately, such large data sets are in short supply in the actuarial literature. Accordingly, one needs to turn to synthetic data. Although the ultimate objective of these methods is application to real data, the use of synthetic data containing features commonly observed in real data is also to be encouraged. While there are a number of claims simulators in existence, each valuable within its own context, the inclusion of a number of desirable (but complicated) data features requires further development. Accordingly, in this paper we review those desirable features and propose a new simulator of individual claim experience called SynthETIC. Our simulator is publicly available, open source, and fills a gap in the non-life actuarial toolkit. The simulator specifically allows for desirable (but optionally complicated) data features typically occurring in practice, such as variations in rates of settlement and development patterns, superimposed inflation, and various discontinuities, and it also enables various dependencies between variables. The user has full control of the mechanics of the evolution of an individual claim. As a result, the complexity of the data set generated (meaning the level of difficulty of analysis) may be dialed anywhere from extremely simple to extremely complex. The default version is parameterized so as to bear a broad (though not numerically precise) resemblance to the major features of experience of a specific (but anonymous) Auto Bodily Injury portfolio, but the general structure is suitable for most lines of business, with some amendment of modules.
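    SynthETIC itself is an R package, so its API is not reproduced here; the following Python sketch is only a hedged, heavily simplified illustration of the kind of individual claim record such a simulator produces (occurrence, notification and settlement delays, partial payments, superimposed inflation). All distributions and parameters are invented and are not the package defaults.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_claim(occurrence_time):
    """One simplified individual claim record (illustrative, not SynthETIC's model)."""
    notif_delay = rng.gamma(shape=2.0, scale=0.1)    # years from occurrence to notification
    settle_delay = rng.gamma(shape=2.0, scale=1.0)   # years from notification to settlement
    size = rng.lognormal(mean=8.0, sigma=1.2)        # ultimate claim size before inflation
    n_partial = 1 + rng.poisson(2)                   # number of partial payments
    weights = rng.dirichlet(np.ones(n_partial))      # split of the ultimate across payments
    pay_times = occurrence_time + notif_delay + np.sort(rng.uniform(0, settle_delay, n_partial))
    sii = 1.03 ** (pay_times - occurrence_time)      # superimposed inflation since occurrence
    payments = size * weights * sii
    return {"occurrence": occurrence_time, "notification": occurrence_time + notif_delay,
            "payment_times": pay_times, "payments": payments}

claims = [simulate_claim(t) for t in np.sort(rng.uniform(0, 10, 50))]  # 50 claims over 10 years
print(claims[0])
```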
  • Item
    On the surplus management of funds with assets and liabilities in presence of solvency requirements
    Avanzi, B ; Chen, P ; Henriksen, LFB ; Wong, B (Taylor and Francis Group, 2023)
    In this paper, we consider a company whose assets and liabilities evolve according to a correlated bivariate geometric Brownian motion, such as in Gerber and Shiu [(2003). Geometric Brownian motion models for assets and liabilities: From pension funding to optimal dividends. North American Actuarial Journal 7(3), 37–56]. We determine what dividend strategy maximises the expected present value of dividends until ruin in two cases: (i) when shareholders will not cover surplus shortfalls and a solvency constraint [as in Paulsen (2003). Optimal dividend payouts for diffusions with solvency constraints. Finance and Stochastics 7(4), 457–473] is consequently imposed and (ii) when shareholders are always to fund any capital deficiency with capital (asset) injections. In the latter case, ruin will never occur and the objective is to maximise the difference between dividends and capital injections. Developing and using appropriate verification lemmas, we show that the optimal dividend strategy is, in both cases, of barrier type. Both value functions are derived in closed form. Furthermore, the barrier is defined on the ratio of assets to liabilities, which mimics some of the dividend strategies that can be observed in practice by insurance companies. The existence and uniqueness of the optimal strategies are shown. Results are illustrated.
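    Purely as a hedged Monte Carlo illustration of the dividend mechanism described (the paper derives the value functions in closed form), the Python sketch below simulates correlated geometric Brownian assets and liabilities and pays out the excess of assets above a barrier multiple of liabilities, discounting dividends until the asset-to-liability ratio first hits 1. All parameters and the barrier level are invented and not claimed to be optimal.

```python
import numpy as np

rng = np.random.default_rng(4)

mu_A, mu_L = 0.06, 0.03          # drifts of assets and liabilities
s_A, s_L, rho = 0.15, 0.08, 0.4  # volatilities and correlation
delta = 0.04                     # force of discounting
barrier = 1.4                    # dividend barrier on the asset/liability ratio (illustrative)
dt, horizon = 1 / 252, 30.0

def one_path(A=1.3, L=1.0):
    pv, t = 0.0, 0.0
    while t < horizon and A > L:                       # ruin when assets fall to liabilities
        zA = rng.standard_normal()
        zL = rho * zA + np.sqrt(1 - rho**2) * rng.standard_normal()
        A *= np.exp((mu_A - 0.5 * s_A**2) * dt + s_A * np.sqrt(dt) * zA)
        L *= np.exp((mu_L - 0.5 * s_L**2) * dt + s_L * np.sqrt(dt) * zL)
        t += dt
        if A > barrier * L:                            # pay excess assets above the barrier ratio
            pv += np.exp(-delta * t) * (A - barrier * L)
            A = barrier * L
    return pv

print("EPV of dividends:", np.mean([one_path() for _ in range(500)]))
```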
  • Item
    SPLICE: a synthetic paid loss and incurred cost experience simulator
    Avanzi, B ; Taylor, G ; Wang, M (CAMBRIDGE UNIV PRESS, 2022-05-23)
    In this paper, we first introduce a simulator of case estimates of incurred losses, called SPLICE (Synthetic Paid Loss and Incurred Cost Experience). In three modules, case estimates are simulated in continuous time, and a record is output for each individual claim. Revisions of the case estimates are also simulated as a sequence over the lifetime of the claim, in a number of different situations. Furthermore, some dependencies in relation to case estimates of incurred losses are incorporated, particularly recognizing certain properties of case estimates that are found in practice. For example, the magnitude of revisions depends on ultimate claim size, as does the distribution of the revisions over time. Some of these revisions occur in response to the occurrence of claim payments, and so SPLICE requires input of simulated per-claim payment histories. The claim data can be summarized by accident and payment “periods” whose duration is an arbitrary choice (e.g., month, quarter, etc.) available to the user. SPLICE is built on an existing simulator of individual claim experience called SynthETIC (introduced in Avanzi, Taylor, Wang, and Wong, 2021b, c), which offers flexible modelling of occurrence, notification, as well as the timing and magnitude of individual partial payments. This is in contrast with the incurred losses, which constitute the additional contribution of SPLICE. The inclusion of incurred loss estimates provides a facility that almost no other simulator does. SPLICE is a fully documented R package that is publicly available and open source (on CRAN). SPLICE, combined with SynthETIC, provides eleven modules (occurrence, notification, etc.), any one or more of which may be re-designed according to the user’s requirements. It comes with a default version that is loosely calibrated to resemble a specific (but anonymous) Auto Bodily Injury portfolio, as well as data generation functionality that outputs alternative data sets under a range of hypothetical scenarios differing in complexity. The general structure is suitable for most lines of business, with some re-parameterization.
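    SPLICE is an R package on CRAN, so the snippet below does not use its API; it is only a hedged toy illustration of the case-estimate layer described: at each payment date the outstanding case estimate is revised with noise whose size grows with the ultimate claim size, and incurred equals cumulative paid plus the current case estimate. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def incurred_history(payment_times, payments):
    """Simplified case-estimate revisions at payment dates (illustrative, not SPLICE's model)."""
    ultimate = payments.sum()
    paid = np.cumsum(payments)
    outstanding = ultimate - paid                 # zero once the claim is fully paid
    noise_sd = 0.05 * np.log1p(ultimate)          # larger claims get noisier revisions
    case = outstanding * np.exp(noise_sd * rng.standard_normal(len(payments)))
    incurred = paid + case                        # incurred = cumulative paid + case estimate
    return list(zip(payment_times, paid, case, incurred))

times = np.array([0.3, 1.1, 2.4])                 # payment dates (years) for one claim
pays = np.array([2_000.0, 5_000.0, 3_000.0])      # partial payments
for row in incurred_history(times, pays):
    print(row)
```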
  • Item
    On the modelling of multivariate counts with Cox processes and dependent shot noise intensities
    Avanzi, B ; Taylor, G ; Wong, B ; Yang, X (ELSEVIER, 2021-04-03)
    In this paper, we develop a method to model and estimate several dependent count processes using granular data. Specifically, we develop a multivariate Cox process with shot noise intensities to jointly model the arrival process of counts (e.g. insurance claims). The dependency structure is introduced via multivariate shot noise intensity processes which are connected with the help of Lévy copulas. In aggregate, our approach allows for (i) over-dispersion and auto-correlation within each line of business; (ii) realistic features involving time-varying, known covariates; and (iii) parsimonious dependence between processes without requiring simultaneous primary events (e.g. accidents).
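    The Python sketch below is a hedged, simplified simulation of a bivariate shot-noise Cox process: each line's intensity decays exponentially between shocks, and dependence is introduced here through common shock arrivals rather than the Lévy copula construction used in the paper. Rates, shock sizes and the baseline intensity are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

T, dt = 50.0, 0.1
n = int(T / dt)
kappa = 1.0                          # exponential decay rate of each shot
rate_common, rate_idio = 0.3, 0.5    # arrival rates of common and line-specific shocks

lam = np.full((2, n), 0.2)           # intensity paths for the two lines, starting at the baseline
counts = np.zeros((2, n), dtype=int)

for k in range(1, n):
    lam[:, k] = 0.2 + (lam[:, k - 1] - 0.2) * np.exp(-kappa * dt)   # decay toward the baseline
    if rng.random() < rate_common * dt:                             # common shock hits both lines
        lam[:, k] += rng.exponential(1.0, size=2)
    for j in range(2):
        if rng.random() < rate_idio * dt:                           # line-specific shock
            lam[j, k] += rng.exponential(0.5)
    counts[:, k] = rng.poisson(lam[:, k] * dt)

periods = counts.reshape(2, -1, 10).sum(axis=2)       # aggregate to coarser periods
print("correlation of period counts:", np.corrcoef(periods)[0, 1])
```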