Economics Research Publications
Showing 1–4 of 4 results

How to proxy the unmodellable: Analysing granular insurance claims in the presence of unobservable or complex drivers
Avanzi, B; Taylor, G; Wong, B; Xian, A (Institute of Actuaries, Australia, 2018)
The estimation of claim and premium liabilities is a key component of an actuary's role and plays a vital part in any insurance company's operations. In practice, such calculations are complicated by the stochastic nature of the claims process as well as the impracticality of capturing all relevant and material drivers of the observed claims data. In the past, computational limitations have promoted the prevalence of simplified (but possibly suboptimal) aggregate methodologies. However, in light of modern advances in processing power, it is viable to increase the granularity at which we analyse insurance data sets so that potentially useful information is not discarded. By utilising more granular and detailed data (which is usually readily available to insurers), model predictions may become more accurate and precise. Unfortunately, detailed analysis of large insurance data sets in this manner poses some unique challenges. Firstly, there is no standard framework to which practitioners can refer, and it can be challenging to tractably integrate all modelled components into one comprehensive model. Secondly, analysis at greater granularity or level of detail requires more intense scrutiny, as complex trends and drivers that were previously masked by aggregation and discretisation assumptions may emerge. This is particularly an issue with claim drivers that are either unobservable to the modeller or very difficult or expensive to model. Finally, computation times are a material concern when processing such large volumes of data, as model outputs need to be obtained in reasonable timeframes. Our proposed methodology overcomes the above problems by using a Markov-modulated non-homogeneous Poisson process framework. This extends the standard Poisson model by allowing overdispersion to be captured in an interpretable, structural manner. The approach implements a flexible exposure measure to explicitly allow for known or modelled claim drivers, while the hidden component of the hidden Markov model captures the impact of unobservable or practicably non-modellable information. Computational developments are made to drastically reduce calibration times. Theoretical findings are illustrated and validated in an empirical case study using Australian general insurance data in order to highlight the benefits of the proposed approach.
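As a rough illustration of the mechanism this abstract describes, the following sketch simulates daily claim counts whose Poisson intensity is modulated by a hidden two-state Markov chain, scaled by a known exposure measure. It is a discretised simplification, not the paper's model, and every parameter value is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state hidden environment ("quiet" vs "turbulent").
# All parameter values below are illustrative assumptions, not from the paper.
intensities = np.array([1.0, 4.0])      # claims per unit exposure in each state
transition = np.array([[0.95, 0.05],    # daily transition matrix of the hidden chain
                       [0.20, 0.80]])
exposure = 10.0                          # known/modelled exposure measure per day

def simulate_mmpp(n_days, state=0):
    """Simulate daily claim counts from a discretised Markov-modulated Poisson process."""
    counts = np.empty(n_days, dtype=int)
    for t in range(n_days):
        counts[t] = rng.poisson(intensities[state] * exposure)
        state = rng.choice(2, p=transition[state])
    return counts

counts = simulate_mmpp(1000)
print(counts.mean(), counts.var())
```

Because the counts mix two Poisson regimes, the sample variance exceeds the sample mean: exactly the overdispersion the abstract says the framework captures in a structural, interpretable way.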

On the Distribution of the Excedents of Funds with Assets and Liabilities in Presence of Solvency and Recovery Requirements
Avanzi, B; Henriksen, LFB; Wong, B (Cambridge University Press (CUP), 2018-05-01)
We consider a profitable, risky setting with two separate, correlated asset and liability processes (first introduced by Gerber and Shiu, 2003). The company considered is allowed to distribute excess profits (traditionally referred to as dividends in the literature), but is regulated and subject to particular regulatory (solvency) constraints. Because of the bivariate nature of the surplus formulation, such distributions of excess profits can take two alternative forms. They can originate from a reduction of assets (and hence a payment to owners), but also from an increase of liabilities (when these represent the wealth of owners, such as in pension funds). The latter is particularly relevant if distributions of assets do not make sense because of the context, such as in regulated pension funds where assets are locked until retirement. In this paper, we extend the model of Gerber and Shiu (2003) and consider recovery requirements for the distribution of excess funds. Such recovery requirements are an extension of the plain vanilla solvency constraints considered in Paulsen (2003), and require funds to reach a higher level of funding than the solvency level (if and after it is triggered) before excess funds can be distributed again. We obtain closed-form expressions for the expected present value of distributions (asset decrements or liability increments) when a distribution barrier is used.
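To make the interplay of barrier, solvency level, and recovery requirement concrete, here is a heavily simplified discrete-time sketch. It tracks a single surplus (assets minus liabilities) as a drifted random walk rather than the paper's bivariate continuous-time model, and all levels and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative levels (assumptions, not from the paper):
solvency_level = 0.0    # breaching this triggers the recovery requirement
recovery_level = 5.0    # after a breach, funding must first recover to here
barrier = 10.0          # excess above the barrier is distributed to owners

def simulate_distributions(n_steps, surplus=8.0):
    """Total distributed excess under a barrier strategy with a recovery requirement."""
    distributed = 0.0
    locked = False  # True while recovering from a solvency breach
    for _ in range(n_steps):
        surplus += 0.5 + rng.normal(0.0, 2.0)  # profitable but risky increments
        if surplus < solvency_level:
            locked = True                       # distributions suspended
        elif locked and surplus >= recovery_level:
            locked = False                      # recovery requirement met
        if not locked and surplus > barrier:
            distributed += surplus - barrier    # pay out the excess
            surplus = barrier
    return distributed, surplus

total, final = simulate_distributions(1000)
```

Note how the recovery requirement bites only after a breach: distributions resume not at the solvency level itself, but at the strictly higher recovery level, which is the extension of Paulsen's plain vanilla constraint the abstract describes.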

Optimal dividends under Erlang(2) inter-dividend decision times
Avanzi, B; Tu, V; Wong, B (Elsevier, 2018-03-01)
In the classical dividends problem, dividend decisions are allowed to be made at any time. Under such a framework, the optimal dividend strategies are often of barrier or threshold type, which can lead to very irregular dividend payments over time. In practice, however, companies distribute dividends on a periodic basis. In that spirit, "Erlangisation" techniques have been used to approximate problems with fixed inter-dividend decision times. When studying the optimality of such strategies, the existing literature focuses exclusively on the special case of exponential, that is, Erlang(1), inter-dividend decision times. Higher-dimensional models are surprisingly difficult to study due to the implicit nature of some of the equations. While some of this difficulty continues to exist in higher dimensions, in this paper we provide a stepping stone to the general Erlang(n) problem by providing a detailed analysis of the optimality of periodic barrier strategies when inter-dividend decision times are Erlang(2) distributed. Results are illustrated.
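The key idea (decisions allowed only at random epochs, here Erlang(2) distributed, rather than continuously) can be sketched in a few lines. This is a toy simulation with invented parameters; it uses a Brownian surplus and, unlike the actual control problem, ignores ruin entirely.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumptions, not from the paper):
mu, sigma = 1.0, 2.0     # drift and volatility of the surplus process
gamma = 4.0              # rate of each exponential phase; mean interval = 2/gamma
barrier = 5.0            # periodic barrier strategy: pay the excess at decision times

def simulate_periodic_dividends(horizon, surplus=3.0):
    """Pay max(surplus - barrier, 0) at Erlang(2) decision epochs until the horizon."""
    t, paid = 0.0, 0.0
    while True:
        # Erlang(2) inter-decision time = sum of two Exponential(gamma) phases.
        dt = rng.exponential(1 / gamma) + rng.exponential(1 / gamma)
        if t + dt > horizon:
            return paid
        t += dt
        surplus += mu * dt + sigma * np.sqrt(dt) * rng.normal()
        if surplus > barrier:
            paid += surplus - barrier   # dividend decision: distribute the excess
            surplus = barrier

paid = simulate_periodic_dividends(100.0)
```

Between decision epochs the surplus evolves freely, so payments arrive on an irregular but roughly periodic schedule, in contrast to the continuous-monitoring barrier strategies of the classical problem.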

Common Shock Models for Claim Arrays
Avanzi, B; Taylor, G; Wong, B (Cambridge University Press (CUP), 2018-09-01)
The paper is concerned with multiple claim arrays. In recognition of the extensive use by practitioners of large correlation matrices for the estimation of diversification benefits in capital modelling, we develop a methodology for the construction of such correlation structures (to any dimension). Indeed, the literature does not document any methodology by which practitioners, who often parameterise those correlations by means of informed guesswork, may do so in a disciplined and parsimonious manner. We construct a broad and flexible family of models, where dependency is induced by common shock components. Models incorporate dependencies between observations both within arrays and between arrays. Arrays are of general shape (possibly with holes), but include the usual cases of claim triangles and trapezia that appear in the literature. General forms of dependency are considered, with cell-wise, row-wise, column-wise, diagonal-wise, and other forms of dependency as special cases. Substantial effort is applied to the practical interpretation of the correlation matrices generated by the models constructed here. Reasonably realistic examples are examined, in which an expression is obtained for the general entry in the correlation matrix in terms of a limited set of parameters, each of which has a straightforward intuitive meaning to the practitioner. This maximises the chance of obtaining a reliable matrix. The construction is illustrated by a numerical example.
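The basic common-shock mechanism behind such correlation structures is simple to demonstrate: two cells that share a common shock component inherit a correlation determined by the shock's share of total variance. The two-cell example below is a toy version with invented variances, not the paper's general construction for arrays of arbitrary shape.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy common-shock construction (variances are illustrative assumptions):
n_sims = 200_000
var_shock, var_idio = 1.0, 3.0

Z = rng.normal(0, np.sqrt(var_shock), n_sims)        # common shock component
X1 = Z + rng.normal(0, np.sqrt(var_idio), n_sims)    # cell in one claim array
X2 = Z + rng.normal(0, np.sqrt(var_idio), n_sims)    # cell in another array

# Theoretical correlation = var_shock / (var_shock + var_idio) = 0.25
rho = np.corrcoef(X1, X2)[0, 1]
print(rho)
```

Each entry of a large correlation matrix built this way reduces to a ratio of a few variance parameters, each with a direct intuitive meaning, which is the disciplined alternative to informed guesswork that the abstract advocates.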