Economics - Research Publications
Now showing 1 - 10 of 755 results
On the optimality of joint periodic and extraordinary dividend strategies
Avanzi, B ; Lau, H ; Wong, B (Elsevier, 2021-08-13)
In this paper, we model the cash surplus (or equity) of a risky business with a Brownian motion with drift. Owners can take cash out of the surplus in the form of “dividends”, subject to transaction costs. However, if the surplus hits 0 then ruin occurs and the business can no longer operate. We consider two types of dividend distributions: (i) periodic, regular ones (that is, dividends can be paid only at countably many points in time, according to a specific arrival process); and (ii) extraordinary dividend payments that can be made immediately at any time (that is, the dividend decision time space is continuous and matches that of the surplus process). Both types of dividends attract proportional transaction costs, but extraordinary distributions also attract fixed transaction costs, which is a realistic feature. A dividend strategy that involves both types of distributions (periodic and extraordinary) is referred to as “hybrid”. We determine which strategies (periodic, immediate, or hybrid) are optimal, that is, which maximise the expected present value of dividends paid until ruin, net of transaction costs. Sometimes a liquidation strategy (which pays out all monies and stops the process) is optimal. Which strategy is optimal depends on the profitability of the business and the level of (proportional and fixed) transaction costs. Results are illustrated.
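For readers who want to experiment with this setting, the following is a minimal Monte Carlo sketch of a periodic barrier strategy on a Brownian surplus with drift, of the kind the paper studies. All parameter values, the barrier level, and the Poisson rate of dividend decision times are illustrative assumptions, and the extraordinary (fixed-cost) dividend component is omitted for brevity; this is not the paper's solution method.

```python
import numpy as np

rng = np.random.default_rng(42)

def pv_periodic_barrier(x0=5.0, mu=0.5, sigma=1.0, delta=0.05,
                        gamma=2.0, barrier=8.0, prop_cost=0.02,
                        dt=0.01, horizon=100.0):
    """PV of dividends on one path under a periodic barrier strategy:
    at Poisson(gamma) decision times, surplus above `barrier` is paid
    out net of proportional costs; ruin (checked on the grid) stops
    the business."""
    x, t, pv = x0, 0.0, 0.0
    next_decision = rng.exponential(1.0 / gamma)
    while t < horizon:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x <= 0.0:                      # ruin
            break
        if t >= next_decision:
            if x > barrier:               # periodic dividend opportunity
                pv += np.exp(-delta * t) * (x - barrier) * (1.0 - prop_cost)
                x = barrier
            next_decision += rng.exponential(1.0 / gamma)
    return pv

est = np.mean([pv_periodic_barrier() for _ in range(500)])
print(f"Estimated expected PV of periodic dividends: {est:.3f}")
```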
Stochastic loss reserving with mixture density neural networks
Al-Mudafer, MT ; Avanzi, B ; Taylor, G ; Wong, B (Elsevier, 2022-07-01)
In recent years, new techniques based on artificial intelligence, and machine learning in particular, have been revolutionising the work of actuaries, including in loss reserving. A particularly promising technique is that of neural networks, which have been shown to offer a versatile, flexible and accurate approach to loss reserving. However, applications of neural networks in loss reserving to date have focused primarily on the (important) problem of fitting accurate central estimates of the outstanding claims. In practice, properties regarding the variability of outstanding claims are equally important (e.g., quantiles for regulatory purposes). In this paper we fill this gap by applying a Mixture Density Network (“MDN”) to loss reserving. The approach combines a neural network architecture with a mixture Gaussian distribution to achieve simultaneously an accurate central estimate and flexible distributional choice. Model fitting is done using a rolling-origin approach. Our approach consistently outperforms the classical over-dispersed Poisson model, both for central estimates and for quantiles of interest, when applied to a wide range of simulated environments of various complexity and specifications. We further propose two extensions of the MDN approach. Firstly, we present a hybrid GLM-MDN approach called “ResMDN”. This hybrid approach balances the tractability and ease of understanding of a traditional GLM model on one hand with the additional accuracy and distributional flexibility provided by the MDN on the other. We show that it can successfully improve on the errors of the baseline ccODP, although there is generally a loss of performance compared to the MDN in the examples we considered. Secondly, we allow for explicit projection constraints, so that actuarial judgement can be directly incorporated into the modelling process. Throughout, we focus on aggregate loss triangles, and show that our methodologies are tractable and out-perform traditional approaches even with relatively limited amounts of data. We use simulated data to validate properties, and real data to illustrate and ascertain the practicality of the approaches.
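The following PyTorch sketch shows the generic mixture density network idea the abstract describes: a network outputs the weights, means and standard deviations of a Gaussian mixture, fitted by minimising the negative log-likelihood. The architecture, toy data and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Minimal mixture density network: maps features to the weights,
    means and standard deviations of a K-component Gaussian mixture."""
    def __init__(self, in_dim, hidden=32, K=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, K)     # mixture weights (pre-softmax)
        self.mu = nn.Linear(hidden, K)         # component means
        self.log_sigma = nn.Linear(hidden, K)  # component log-sd's

    def forward(self, x):
        h = self.body(x)
        return self.logits(h), self.mu(h), self.log_sigma(h)

def mdn_nll(logits, mu, log_sigma, y):
    """Negative log-likelihood of y under the predicted mixture."""
    log_w = torch.log_softmax(logits, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_p = comp.log_prob(y.unsqueeze(-1))     # shape (n, K)
    return -torch.logsumexp(log_w + log_p, dim=-1).mean()

# Toy fit; in reserving, features could encode accident and development periods.
x = torch.rand(256, 2)
y = 1.0 + x.sum(dim=1) + 0.1 * torch.randn(256)
model = MDN(in_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = mdn_nll(*model(x), y)
    loss.backward()
    opt.step()
print(f"final NLL: {loss.item():.3f}")
```

Because the fitted object is a full conditional distribution, quantiles of interest can be read off the mixture rather than just a central estimate, which is the gap the abstract highlights.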
SynthETIC: An individual insurance claim simulator with feature control
Avanzi, B ; Taylor, G ; Wang, M ; Wong, B (Elsevier, 2021-07-07)
Recent years have seen a rapid increase in the application of machine learning to insurance loss reserving. These methods yield most value when applied to large data sets, such as individual claims, or large claim triangles. In short, they are likely to be useful in the analysis of any data set whose volume is sufficient to obscure a naked-eye view of its features. Unfortunately, such large data sets are in short supply in the actuarial literature. Accordingly, one needs to turn to synthetic data. Although the ultimate objective of these methods is application to real data, the use of synthetic data containing features commonly observed in real data is also to be encouraged. While a number of claims simulators exist, each valuable within its own context, the inclusion of a number of desirable (but complicated) data features requires further development. Accordingly, in this paper we review those desirable features and propose a new simulator of individual claim experience called SynthETIC. Our simulator is publicly available, open source, and fills a gap in the non-life actuarial toolkit. The simulator specifically allows for desirable (but optionally complicated) data features typically occurring in practice, such as variations in rates of settlement and development patterns, superimposed inflation, and various discontinuities, and also enables various dependencies between variables. The user has full control of the mechanics of the evolution of an individual claim. As a result, the complexity of the data set generated (meaning the level of difficulty of analysis) may be dialed anywhere from extremely simple to extremely complex. The default version is parameterized so as to bear a broad (though not numerically precise) resemblance to the major features of experience of a specific (but anonymous) Auto Bodily Injury portfolio, but the general structure is suitable for most lines of business, with some amendment of modules.
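SynthETIC itself is an R package; purely to illustrate the modular occurrence-to-payments pipeline the abstract describes, here is a toy Python sketch. Every distribution and parameter below is an illustrative placeholder, not SynthETIC's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_claims(n_periods=12, freq=50):
    """Toy modular claim generator: occurrence -> notification delay
    -> claim size -> settlement delay -> partial payments. Larger
    claims settle more slowly, echoing one of the abstract's
    'desirable features'. All distributions are placeholders."""
    claims = []
    for occ in range(n_periods):
        for _ in range(rng.poisson(freq)):
            size = rng.lognormal(mean=8.0, sigma=1.2)
            notif = 2.0 * rng.weibull(0.9)                 # notification delay
            settle = rng.weibull(1.5) * (2.0 + np.log1p(size / 1e4))
            n_pay = max(1, rng.poisson(2.0 + size / 5e4))  # number of payments
            split = rng.dirichlet(np.ones(n_pay))          # payment proportions
            times = occ + notif + np.sort(rng.uniform(0, settle, n_pay))
            claims.append({"occurrence_period": occ, "size": size,
                           "pay_times": times, "pay_amounts": split * size})
    return claims

claims = simulate_claims()
print(len(claims), "claims;", sum(len(c["pay_times"]) for c in claims), "payments")
```

The design point the sketch mirrors is that each module (frequency, delays, severity, payment pattern) can be swapped out independently, which is how the complexity dial described in the abstract works.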
On the surplus management of funds with assets and liabilities in presence of solvency requirements
Avanzi, B ; Chen, P ; Henriksen, LFB ; Wong, B (Taylor and Francis Group, 2022-10-16)
In this paper, we consider a company whose assets and liabilities evolve according to a correlated bivariate geometric Brownian motion, as in Gerber and Shiu [(2003). Geometric Brownian motion models for assets and liabilities: From pension funding to optimal dividends. North American Actuarial Journal 7(3), 37–56]. We determine which dividend strategy maximises the expected present value of dividends until ruin in two cases: (i) when shareholders will not cover surplus shortfalls and a solvency constraint [as in Paulsen (2003). Optimal dividend payouts for diffusions with solvency constraints. Finance and Stochastics 7(4), 457–473] is consequently imposed; and (ii) when shareholders always fund any capital deficiency with capital (asset) injections. In the latter case, ruin never occurs and the objective is to maximise the difference between dividends and capital injections. Developing and using appropriate verification lemmas, we show that the optimal dividend strategy is, in both cases, of barrier type. Both value functions are derived in closed form. Furthermore, the barrier is defined on the ratio of assets to liabilities, which mimics some of the dividend strategies that can be observed in practice among insurance companies. Existence and uniqueness of the optimal strategies are shown. Results are illustrated.
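A minimal simulation sketch of the barrier-on-ratio idea: assets and liabilities follow correlated geometric Brownian motions, and dividends reflect the asset-to-liability ratio at a barrier. The parameters and barrier level are illustrative assumptions; the paper derives the optimal barrier and value function in closed form rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def pv_ratio_barrier(a0=1.2, l0=1.0, mu_a=0.06, mu_l=0.03,
                     s_a=0.15, s_l=0.08, rho=0.3, delta=0.04,
                     barrier=1.4, dt=1/250, horizon=30.0):
    """One path of correlated GBM assets/liabilities. Whenever the
    asset-to-liability ratio exceeds `barrier`, the excess assets are
    paid as (discounted) dividends; ruin when assets hit liabilities."""
    a, l, t, pv = a0, l0, 0.0, 0.0
    chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    while t < horizon:
        z = chol @ rng.standard_normal(2)
        a *= np.exp((mu_a - 0.5 * s_a**2) * dt + s_a * np.sqrt(dt) * z[0])
        l *= np.exp((mu_l - 0.5 * s_l**2) * dt + s_l * np.sqrt(dt) * z[1])
        t += dt
        if a <= l:                        # ruin (checked on the grid)
            break
        if a / l > barrier:               # reflect the ratio at the barrier
            pv += np.exp(-delta * t) * (a - barrier * l)
            a = barrier * l
    return pv

print(np.mean([pv_ratio_barrier() for _ in range(200)]))
```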
SPLICE: a synthetic paid loss and incurred cost experience simulator
Avanzi, B ; Taylor, G ; Wang, M (Cambridge University Press, 2022-05-23)
In this paper, we introduce a simulator of case estimates of incurred losses, called SPLICE (Synthetic Paid Loss and Incurred Cost Experience). In three modules, case estimates are simulated in continuous time, and a record is output for each individual claim. Revisions of the case estimates are also simulated as a sequence over the lifetime of the claim, in a number of different situations. Furthermore, some dependencies in relation to case estimates of incurred losses are incorporated, particularly recognizing certain properties of case estimates that are found in practice. For example, the magnitude of revisions depends on ultimate claim size, as does the distribution of the revisions over time. Some of these revisions occur in response to the occurrence of claim payments, and so SPLICE requires input of simulated per-claim payment histories. The claim data can be summarized by accident and payment “periods” whose duration is an arbitrary choice available to the user (e.g., month, quarter, etc.). SPLICE is built on an existing simulator of individual claim experience called SynthETIC (introduced in Avanzi, Taylor, Wang, and Wong, 2021b, c), which offers flexible modelling of occurrence and notification, as well as the timing and magnitude of individual partial payments. This is in contrast with incurred losses, which constitute the additional contribution of SPLICE. The inclusion of incurred loss estimates provides a facility that almost no other simulator offers. SPLICE is a fully documented R package that is publicly available and open source (on CRAN). SPLICE, combined with SynthETIC, provides eleven modules (occurrence, notification, etc.), any one or more of which may be re-designed according to the user's requirements. It comes with a default version that is loosely calibrated to resemble a specific (but anonymous) Auto Bodily Injury portfolio, as well as data generation functionality that outputs alternative data sets under a range of hypothetical scenarios differing in complexity. The general structure is suitable for most lines of business, with some re-parameterization.
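SPLICE is an R package; the toy Python sketch below only illustrates the kind of case-estimate revision sequence the abstract describes, in which revisions are triggered by payments and their noisiness grows with ultimate claim size. The revision rule and all numbers are invented for illustration and do not reflect SPLICE's internals.

```python
import numpy as np

rng = np.random.default_rng(2)

def case_estimate_path(ultimate, pay_times, pay_amounts):
    """Toy sequence of incurred estimates (paid + outstanding case
    estimate) for one claim. A revision occurs at each payment time,
    drifting toward the true remaining cost, and larger claims get
    noisier revisions. Everything here is invented for illustration."""
    paid = np.cumsum(pay_amounts)
    estimate = ultimate * rng.uniform(0.4, 0.8)    # initial case estimate
    noise_sd = 0.1 + 0.05 * np.log1p(ultimate / 1e4)
    incurred = []
    for t, p in zip(pay_times, paid):
        target = max(ultimate - p, 0.0)            # true remaining cost
        estimate = 0.5 * estimate + 0.5 * target   # partial correction
        estimate *= np.exp(noise_sd * rng.standard_normal())
        incurred.append((t, p + estimate))
    return incurred

for t, inc in case_estimate_path(50_000.0,
                                 pay_times=[0.5, 1.2, 2.0, 3.1],
                                 pay_amounts=[5_000, 15_000, 20_000, 10_000]):
    print(f"t = {t:.1f}  incurred = {inc:,.0f}")
```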
On the modelling of multivariate counts with Cox processes and dependent shot noise intensities
Avanzi, B ; Taylor, G ; Wong, B ; Yang, X (Elsevier, 2021-04-03)
In this paper, we develop a method to model and estimate several dependent count processes using granular data. Specifically, we develop a multivariate Cox process with shot noise intensities to jointly model the arrival process of counts (e.g. insurance claims). The dependency structure is introduced via multivariate shot noise intensity processes which are connected with the help of Lévy copulas. In aggregate, our approach allows for (i) over-dispersion and auto-correlation within each line of business; (ii) realistic features involving time-varying, known covariates; and (iii) parsimonious dependence between processes without requiring simultaneous primary (e.g. accident) events.
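A univariate shot-noise Cox process is easy to simulate and shows where the over-dispersion and auto-correlation come from; the sketch below does exactly that on a time grid. The multivariate extension via Lévy copulas, which is the paper's contribution, is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def shot_noise_cox(T=10.0, shock_rate=1.0, jump_mean=2.0,
                   decay=0.8, dt=1e-3):
    """One path of a shot-noise Cox process on a grid: shocks arrive
    at Poisson times, each adding an exponentially decaying bump to
    the intensity; claim counts are then Poisson given the intensity.
    This produces over-dispersion and auto-correlation in the counts."""
    t = np.arange(int(T / dt)) * dt
    lam = np.zeros_like(t)
    for s in rng.uniform(0, T, rng.poisson(shock_rate * T)):
        mask = t >= s
        lam[mask] += rng.exponential(jump_mean) * np.exp(-decay * (t[mask] - s))
    counts = rng.poisson(lam * dt)        # arrivals per grid cell
    return lam, t[counts > 0]

lam, arrivals = shot_noise_cox()
print(f"{len(arrivals)} arrivals; mean intensity {lam.mean():.2f}")
```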
Optimal periodic dividend strategies for spectrally negative Lévy processes with fixed transaction costs
Avanzi, B ; Lau, H ; Wong, B (Taylor and Francis Group, 2021-02-04)
Maximising dividends is one classical stability criterion in actuarial risk theory. Motivated by the fact that dividends are paid periodically in real life, periodic dividend strategies were recently introduced (Albrecher et al. 2011). In this paper, we incorporate fixed transaction costs into the model and study the optimal periodic dividend strategy with fixed transaction costs for spectrally negative Lévy processes. The value function of a periodic (b_u, b_l) strategy is calculated by means of exit identities and Itô's excursion theory when the surplus process is of unbounded variation. We show that a sufficient condition for optimality is that the Lévy measure admits a density which is completely monotone. Under such assumptions, a periodic (b_u, b_l) strategy is confirmed to be optimal. Results are illustrated.
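As a concrete instance, a Cramér-Lundberg surplus (premium drift minus compound Poisson claims) is a spectrally negative Lévy process, and exponential claim sizes give a completely monotone Lévy density, as in the paper's sufficient condition. The event-driven sketch below evaluates one periodic (b_u, b_l) strategy by Monte Carlo; all parameters are illustrative, and none of the paper's analytical machinery (exit identities, excursion theory) is used.

```python
import numpy as np

rng = np.random.default_rng(4)

def pv_bu_bl(x0=10.0, premium=1.5, claim_rate=1.0, claim_mean=1.0,
             gamma=1.0, b_u=12.0, b_l=8.0, fixed_cost=0.5,
             delta=0.05, horizon=100.0):
    """One path of a Cramer-Lundberg surplus under a periodic
    (b_u, b_l) strategy: at Poisson(gamma) decision times, a surplus
    at or above b_u is brought down to b_l, and the payout net of a
    fixed transaction cost is collected."""
    x, t, pv = x0, 0.0, 0.0
    t_claim = rng.exponential(1.0 / claim_rate)   # next claim time
    t_div = rng.exponential(1.0 / gamma)          # next decision time
    while t < horizon:
        t_next = min(t_claim, t_div)
        x += premium * (t_next - t)               # linear drift between events
        t = t_next
        if t_claim <= t_div:
            x -= rng.exponential(claim_mean)      # downward jump
            if x < 0.0:                           # ruin
                break
            t_claim = t + rng.exponential(1.0 / claim_rate)
        else:
            if x >= b_u:
                pv += np.exp(-delta * t) * (x - b_l - fixed_cost)
                x = b_l
            t_div = t + rng.exponential(1.0 / gamma)
    return pv

print(np.mean([pv_bu_bl() for _ in range(5000)]))
```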
Asymmetric versus Symmetric Binary Regression: A New Proposal with Applications
Gomez-Deniz, E ; Calderin-Ojeda, E ; Gomez, HW (MDPI, 2022-04-01)
The classical logit and probit models explain a dichotomous dependent variable as a function of factors or covariates that can influence the response variable. This paper introduces a new skew-logit link for item response theory by considering the arctan transformation over the scobit logit model, yielding a very flexible link function from a new class of generalized distributions. This approach assumes an asymmetric model, which reduces to the standard logit model for a special case of the parameters that control the distribution's symmetry. The proposed model is simple and allows us to estimate the parameters without resorting to Bayesian methods, which would require implementing Markov chain Monte Carlo methods. Furthermore, no special function appears in the formulation of the model. We compared the proposed model with the classical logit specification using three datasets. The first deals with a well-known dataset, widely studied in the statistical literature, concerning the mortality of adult beetles after exposure to gaseous carbon disulphide; the second considers an automobile insurance portfolio; and the third examines touristic data related to tourist expenditure. For these examples, the results illustrate that the new model changes the significance level of some explanatory variables and the marginal effects. For the latter example, we have also modified the definition of the intercept in the linear predictor to prevent confounding.
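The paper's arctan-based link is not reproduced here, but the scobit model it builds on is standard (Nagler, 1994): P(y = 1 | x) = 1 - (1 + e^(x'beta))^(-alpha), which collapses to the ordinary logit when alpha = 1. The sketch below fits a scobit by plain maximum likelihood with scipy, echoing the abstract's point that no Bayesian machinery is needed; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def scobit_prob(X, beta, log_alpha):
    """Scobit link: P(y=1|x) = 1 - (1 + exp(x'beta))^(-alpha);
    alpha = 1 recovers the ordinary logit."""
    eta = np.clip(X @ beta, -30, 30)              # guard against overflow
    return 1.0 - (1.0 + np.exp(eta)) ** (-np.exp(log_alpha))

def negloglik(params, X, y):
    p = np.clip(scobit_prob(X, params[:-1], params[-1]), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Synthetic data with an intercept column.
n = 1000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
p_true = scobit_prob(X, np.array([-0.5, 1.0]), np.log(0.5))
y = (rng.uniform(size=n) < p_true).astype(float)

fit = minimize(negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
print("beta_hat:", fit.x[:-1], " alpha_hat:", np.exp(fit.x[-1]))
```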
Price Discrimination by Negotiation: A Field Experiment in Retail Electricity
Byrne, David P ; Martin, Leslie A ; Nah, Jia Sheen (Oxford University Press, 2022-04-19)
We use a field experiment to study price discrimination in a market with price posting and negotiation. Motivated by concerns that low-income consumers do poorly in markets with privately negotiated prices, we built a call center staffed with actors armed with bargaining scripts to reveal negotiated prices and their determinants. Our actors implement sequential bargaining games under incomplete information in the field. By experimentally manipulating how information is revealed, we generate sequences of price offers that allow us to identify price discrimination in negotiations based on retailer perceptions of consumers' search and switching costs. We also document differences in price distributions between entrants and incumbents, reflecting differences in the captivity of their respective consumer bases. Finally, we show that the higher prices paid by lower-income subsidy recipients in our market are not due to discriminatory targeting; they can be explained by variation in consumers' willingness and ability to search and bargain.
A Copula-Type Model for Examining the Role of Microbiome as a Potential Tool in Diagnosis
Calderin-Ojeda, E ; Lopez-Campos, G ; Gomez-Deniz, E ; Li, X (Hindawi, 2022-06-06)
Continuous advancements in biotechnology are generating new knowledge and data sources that might be of interest to the insurance industry. A paradigmatic example of these advancements is genetic information, which can reliably signal the future appearance of certain diseases, making it an element of great interest to insurers. However, this information is placed by regulators at the highest confidentiality level and is protected from disclosure. Recent investigations have shown that the microbiome can be correlated with several health conditions. In this paper, we examine the potential use of microbiome information as a tool for cardiovascular diagnosis. Using a recent dataset, we analyze the relationship between some variables associated with coronary illnesses and several components of the microbiome in the organism, using a new copula-based multivariate regression model for compositional data in the predictor. Our findings show that the co-abundance group associated with Ruminococcaceae-Bifidobacteriaceae has a negative impact on age for non-sedentary individuals. However, one should be cautious with this conclusion since environmental conditions also influence the baseline microbiome.
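The paper's copula-based regression is not reproduced here; the sketch below only illustrates the standard difficulty its model addresses, namely using compositional (relative-abundance) predictors, via the centred log-ratio transform and an ordinary least-squares fit on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(6)

def clr(compositions, eps=1e-6):
    """Centred log-ratio transform: maps compositional rows
    (non-negative, summing to one) to unconstrained coordinates
    usable as regression predictors."""
    z = np.log(compositions + eps)
    return z - z.mean(axis=1, keepdims=True)

# Toy 'microbiome' data: relative abundances of four taxa groups.
n = 200
raw = rng.gamma(shape=2.0, size=(n, 4))
comp = raw / raw.sum(axis=1, keepdims=True)
age = 50.0 - 8.0 * comp[:, 0] + rng.normal(0.0, 3.0, n)  # synthetic response

# CLR coordinates sum to zero, so drop one column to avoid collinearity
# with the intercept.
X = np.column_stack([np.ones(n), clr(comp)[:, :-1]])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
print("OLS coefficients:", beta)
```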