Economics - Research Publications


Search Results

Now showing 1 - 10 of 729
  • Item
    Replication: Belief elicitation with quadratic and binarized scoring rules
    Erkal, N ; Gangadharan, L ; Koh, BH (Elsevier, 2020-12-01)
    Researchers increasingly elicit beliefs to understand the underlying motivations of decision makers. Two commonly used methods are the quadratic scoring rule (QSR) and the binarized scoring rule (BSR). Hossain and Okui (2013) use a within-subject design to evaluate the performance of these two methods in an environment where subjects report probabilistic beliefs over binary outcomes with objective probabilities. In a near replication of their study, we show that their results continue to hold with a between-subject design. This is an important validation of the BSR given that researchers typically implement only one method to elicit beliefs. In favor of the BSR, reported beliefs are less accurate under the QSR than under the BSR. Consistent with theoretical predictions, risk-averse subjects distort their reported beliefs under the QSR.
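    As a concrete illustration of the two elicitation rules compared above, here is a minimal Python sketch of their payoff structures as they are standardly defined (the function names and the prize normalisation are ours, not the authors'):

    ```python
    import random

    def qsr_payoff(report: float, outcome: int) -> float:
        """Quadratic scoring rule: deterministic reward 1 - (report - outcome)^2.
        A risk-averse subject has an incentive to shade reports toward 0.5."""
        return 1.0 - (report - outcome) ** 2

    def bsr_payoff(report: float, outcome: int, prize: float = 1.0) -> float:
        """Binarized scoring rule (Hossain and Okui, 2013): the subject wins a
        fixed prize with probability 1 - (report - outcome)^2, which makes
        truthful reporting optimal regardless of risk attitudes."""
        loss = (report - outcome) ** 2
        return prize if random.random() > loss else 0.0

    # A subject who believes the event occurs with probability 0.7, and it does:
    print(qsr_payoff(0.7, 1))   # deterministic reward 0.91
    print(bsr_payoff(0.7, 1))   # the full prize with probability 0.91
    ```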
  • Item
    Intra-industry spill-over effect of default: Evidence from the Chinese bond market
    Hu, X ; Luo, H ; Xu, Z ; Li, J (Wiley, 2021-09)
    We investigate the intra-industry spill-over effect of defaults in the Chinese bond market by using a sample of public corporate debt securities for the period 2014-2018. We find that both industry portfolios and individual firms witness a strong contagion effect, which further spreads to the primary bond market, triggering a surge in debt financing costs for the defaulting industries. Moreover, this contagion effect is stronger for low-competition industries and regulated industries, as well as when a default happens to state-owned enterprises. Better information access and higher bond liquidity alleviate the contagion effect, lending support to the information-update and liquidity-dry-up hypotheses.
  • Item
    How to proxy the unmodellable: Analysing granular insurance claims in the presence of unobservable or complex drivers
    Avanzi, B ; Taylor, G ; Wong, B ; Xian, A (Institute of Actuaries, Australia, 2018)
    The estimation of claim and premium liabilities is a key component of an actuary's role and plays a vital part in any insurance company's operations. In practice, such calculations are complicated by the stochastic nature of the claims process as well as the impracticality of capturing all relevant and material drivers of the observed claims data. In the past, computational limitations have promoted the prevalence of simplified (but possibly sub-optimal) aggregate methodologies. However, in light of modern advances in processing power, it is viable to increase the granularity at which we analyse insurance data sets so that potentially useful information is not discarded. By utilising more granular and detailed data (which is usually readily available to insurers), model predictions may become more accurate and precise. Unfortunately, detailed analysis of large insurance data sets in this manner poses some unique challenges. Firstly, there is no standard framework to which practitioners can refer, and it can be challenging to tractably integrate all modelled components into one comprehensive model. Secondly, analysis at greater granularity or level of detail requires more intense levels of scrutiny, as complex trends and drivers that were previously masked by aggregation and discretisation assumptions may emerge. This is particularly an issue with claim drivers that are either unobservable to the modeller or very difficult or expensive to model. Finally, computation times are a material concern when processing such large volumes of data, as model outputs need to be obtained in reasonable time-frames. Our proposed methodology overcomes the above problems by using a Markov-modulated non-homogeneous Poisson process framework. This extends the standard Poisson model by allowing for over-dispersion to be captured in an interpretable, structural manner. The approach implements a flexible exposure measure to explicitly allow for known or modelled claim drivers, while the hidden component of the hidden Markov model captures the impact of unobservable or practicably non-modellable information. Computational developments are made to drastically reduce calibration times. Theoretical findings are illustrated and validated in an empirical case study using Australian general insurance data in order to highlight the benefits of the proposed approach.
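    The core modelling device described above can be sketched in a few lines. The following Python fragment simulates a discrete-time analogue of a Markov-modulated (non-homogeneous) Poisson claim count process: a hidden two-state regime modulates the claim rate, while known drivers enter through an exposure measure. All parameters are illustrative and are not the authors' calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hidden two-state Markov chain; state 1 is an unobserved high-claim regime.
    P = np.array([[0.95, 0.05],          # daily transition probabilities
                  [0.10, 0.90]])
    base_rate = np.array([5.0, 12.0])    # claims per day in each hidden state

    def exposure(t):
        """Known or modelled claim drivers enter through a flexible exposure
        measure (here, an illustrative weekly seasonality)."""
        return 1.0 + 0.3 * np.sin(2 * np.pi * t / 7)

    T, state = 365, 0
    counts = np.empty(T, dtype=int)
    for t in range(T):
        state = rng.choice(2, p=P[state])                 # hidden regime evolves
        counts[t] = rng.poisson(exposure(t) * base_rate[state])

    # The hidden regime induces over-dispersion relative to a plain Poisson fit:
    print(counts.mean(), counts.var())   # variance noticeably exceeds the mean
    ```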
  • Item
    On the Impact, Detection and Treatment of Outliers in Robust Loss Reserving
    Avanzi, B ; Taylor, G ; Wong, B ; Lavender, M (Actuaries Institute, 2016)
    The sensitivity of loss reserving techniques to outliers in the data, or to deviations from model assumptions, is a well-known challenge. For instance, it has been shown that the popular chain-ladder reserving approach is highly sensitive to such aberrant observations, in that reserve estimates can be significantly shifted by the presence of even one outlier. In this paper we first investigate the sensitivity of reserves and mean squared errors of prediction under Mack's Model. This is done through the derivation of impact functions, which are calculated by taking the first derivative of the relevant statistic of interest with respect to an observation. We also provide and discuss the impact functions for quantiles when total reserves are assumed to be lognormally distributed. Additionally, comparisons are made between the impact functions for individual accident year reserves under Mack's Model and the Bornhuetter-Ferguson methodology. It is shown that the impact of incremental claims on these statistics of interest varies widely throughout a loss triangle and is heavily dependent on other cells in the triangle. We then put forward two alternative robust bivariate chain-ladder techniques (Verdonck and Van Wouwe, 2011) based on Adjusted Outlyingness (Hubert and Van der Veeken, 2008) and bagdistance (Hubert et al., 2016). These techniques provide a measure of outlyingness that is unique to each individual observation, rather than relying largely on graphical representations as is done under the existing bagplot methodology. Furthermore, the Adjusted Outlyingness approach explicitly incorporates a robust measure of skewness into the analysis, whereas the bagplot captures the shape of the data only through a measure of rank. Results are illustrated on two sets of real bivariate data from general insurers.
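    The impact functions described above are analytical derivatives; a quick way to see what they measure is to approximate one numerically. The sketch below computes the chain-ladder reserve for a toy cumulative triangle and bumps a single cell by a small amount (the triangle and step size are ours; the paper works analytically, under Mack's Model, and with incremental claims):

    ```python
    import numpy as np

    def chain_ladder_reserve(tri):
        """Total chain-ladder reserve for a cumulative run-off triangle
        (NaN below the diagonal)."""
        n = tri.shape[0]
        f = np.ones(n - 1)
        for j in range(n - 1):
            rows = ~np.isnan(tri[:, j + 1])      # accident years observed at j+1
            f[j] = tri[rows, j + 1].sum() / tri[rows, j].sum()
        reserve = 0.0
        for i in range(1, n):
            last = n - 1 - i                     # last observed development period
            reserve += tri[i, last] * (np.prod(f[last:]) - 1.0)
        return reserve

    tri = np.array([[100., 150., 160.],
                    [110., 170., np.nan],
                    [120., np.nan, np.nan]])

    # Finite-difference approximation to the impact of one observation:
    eps = 1e-4
    bumped = tri.copy()
    bumped[0, 1] += eps
    print((chain_ladder_reserve(bumped) - chain_ladder_reserve(tri)) / eps)
    ```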
  • Item
    On the optimality of joint periodic and extraordinary dividend strategies
    Avanzi, B ; Lau, H ; Wong, B (Elsevier, 2021-08-13)
    In this paper, we model the cash surplus (or equity) of a risky business with a Brownian motion with drift. Owners can take cash out of the surplus in the form of “dividends”, subject to transaction costs. However, if the surplus hits 0 then ruin occurs and the business can no longer operate. We consider two types of dividend distributions: (i) periodic, regular ones (that is, dividends can be paid only at countably many points in time, according to a specific arrival process); and (ii) extraordinary dividend payments that can be made immediately at any time (that is, the dividend decision time space is continuous and matches that of the surplus process). Both types of dividends attract proportional transaction costs, but extraordinary distributions also attract fixed transaction costs, which is a realistic feature. A dividend strategy that involves both types of distributions (periodic and extraordinary) is qualified as “hybrid”. We determine which strategies (periodic, extraordinary, or hybrid) are optimal; that is, we show which strategies maximise the expected present value of dividends paid until ruin, net of transaction costs. Sometimes a liquidation strategy (which pays out all monies and stops the process) is optimal. Which strategy is optimal depends on the profitability of the business and the level of (proportional and fixed) transaction costs. Results are illustrated.
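    The trade-off described above lends itself to Monte Carlo exploration. The sketch below values one particular hybrid strategy for a drifted Brownian surplus: at Poisson-arriving periodic dates the surplus is paid down to a periodic barrier (proportional costs only), while whenever it exceeds a higher barrier an extraordinary distribution (proportional plus fixed cost) is made immediately. The barriers and cost levels are illustrative; the paper characterises the optimal strategies analytically.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def hybrid_strategy_value(x0=5.0, mu=0.5, sigma=1.0, delta=0.05,
                              b_per=6.0, b_ext=12.0, b_low=6.0, gamma=3.0,
                              prop_cost=0.02, fixed_cost=0.5,
                              dt=0.01, horizon=50.0, n_paths=200):
        """Expected present value of dividends until ruin under one hybrid
        (periodic + extraordinary) strategy, by Euler simulation."""
        values = np.empty(n_paths)
        for p in range(n_paths):
            x, t, pv = x0, 0.0, 0.0
            while t < horizon and x > 0:
                x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
                if x > b_ext:          # extraordinary: fixed + proportional cost
                    pv += np.exp(-delta * t) * ((1 - prop_cost) * (x - b_low) - fixed_cost)
                    x = b_low
                elif rng.random() < gamma * dt and x > b_per:   # periodic date
                    pv += np.exp(-delta * t) * (1 - prop_cost) * (x - b_per)
                    x = b_per
            values[p] = pv
        return values.mean()

    print(hybrid_strategy_value())
    ```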
  • Item
    SynthETIC: An individual insurance claim simulator with feature control
    Avanzi, B ; Taylor, G ; Wang, M ; Wong, B (Elsevier, 2021-07-07)
    Recent years have seen a rapid increase in the application of machine learning to insurance loss reserving. These methods yield most value when applied to large data sets, such as individual claims, or large claim triangles. In short, they are likely to be useful in the analysis of any data set whose volume is sufficient to obscure a naked-eye view of its features. Unfortunately, such large data sets are in short supply in the actuarial literature. Accordingly, one needs to turn to synthetic data. Although the ultimate objective of these methods is application to real data, the use of synthetic data containing features commonly observed in real data is also to be encouraged. While there are a number of claims simulators in existence, each valuable within its own context, the inclusion of a number of desirable (but complicated) data features requires further development. Accordingly, in this paper we review those desirable features and propose a new simulator of individual claim experience called SynthETIC. Our simulator is publicly available, open source, and fills a gap in the non-life actuarial toolkit. The simulator specifically allows for desirable (but optionally complicated) data features typically occurring in practice, such as variations in rates of settlement and development patterns, as well as superimposed inflation and various discontinuities, and also enables various dependencies between variables. The user has full control of the mechanics of the evolution of an individual claim. As a result, the complexity of the data set generated (meaning the level of difficulty of analysis) may be dialled anywhere from extremely simple to extremely complex. The default version is parameterised so as to bear a broad (though not numerically precise) resemblance to the major features of experience of a specific (but anonymous) Auto Bodily Injury portfolio, but the general structure is suitable for most lines of business, with some amendment of modules.
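    SynthETIC itself is an open-source R package; as a language-neutral illustration of the modular structure the abstract describes (occurrence, then notification delay, then settlement delay, then partial payments), here is a toy Python generator. None of the names or distributions below are SynthETIC's; they are placeholders showing how such modules chain together and how individual features can be dialled up or down.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_claims(n_periods=40, freq=30):
        """Toy modular individual-claim generator: each module below could be
        swapped out independently to control a single data feature."""
        claims = []
        for period in range(n_periods):
            for _ in range(rng.poisson(freq)):             # occurrence module
                occur = period + rng.uniform()
                size = rng.lognormal(mean=8.0, sigma=1.5)  # claim size module
                notify = 2.0 * rng.weibull(0.9)            # notification delay module
                settle = rng.weibull(0.8) * (1 + np.log1p(size / 1e4))  # larger claims settle slower
                n_pay = max(1, rng.poisson(2))             # partial payment module
                shares = rng.dirichlet(np.ones(n_pay))
                times = occur + notify + np.sort(rng.uniform(0, settle, n_pay))
                claims.append({"occurrence": occur, "size": size,
                               "payments": list(zip(times, shares * size))})
        return claims

    claims = simulate_claims()
    print(len(claims), claims[0]["payments"][0])
    ```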
  • Item
    On the modelling of multivariate counts with Cox processes and dependent shot noise intensities
    Avanzi, B ; Taylor, G ; Wong, B ; Yang, X (Elsevier, 2021-04-03)
    In this paper, we develop a method to model and estimate several dependent count processes using granular data. Specifically, we develop a multivariate Cox process with shot noise intensities to jointly model the arrival process of counts (e.g. insurance claims). The dependency structure is introduced via multivariate shot noise intensity processes, which are connected with the help of Lévy copulas. In aggregate, our approach allows for (i) over-dispersion and auto-correlation within each line of business; (ii) realistic features involving time-varying, known covariates; and (iii) parsimonious dependence between processes without requiring simultaneous primary events (e.g. accidents).
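    To make the ingredients concrete, the sketch below simulates two claim count series driven by exponentially decaying shot-noise intensities that share a fraction of their jumps. The shared-jump device is a deliberately simple stand-in for the Lévy copula construction used in the paper, and all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def shot_noise_cox(T=100.0, dt=0.1, jump_rate=0.2, decay=0.5, common=0.5):
        """Bivariate Cox process: each line's intensity is a baseline plus
        exponentially decaying shots; some shots hit both lines at once."""
        n = int(T / dt)
        lev = np.zeros(2)                        # current shot-noise levels
        counts = np.zeros((2, n), dtype=int)
        for k in range(n):
            lev *= np.exp(-decay * dt)           # past shots decay away
            if rng.random() < jump_rate * dt:    # a new shot arrives
                jump = rng.exponential(2.0)
                if rng.random() < common:        # common shock: both intensities jump
                    lev += jump
                else:                            # idiosyncratic shock: one line only
                    lev[rng.integers(2)] += jump
            lam = 0.5 + lev                      # baseline + shot noise
            counts[:, k] = rng.poisson(lam * dt)
        return counts

    c = shot_noise_cox()
    print(np.corrcoef(c)[0, 1])    # positive cross-correlation from shared shots
    ```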
  • Item
    Optimal periodic dividend strategies for spectrally negative Lévy processes with fixed transaction costs
    Avanzi, B ; Lau, H ; Wong, B (Taylor and Francis Group, 2021-02-04)
    Maximising dividends is one classical stability criterion in actuarial risk theory. Motivated by the fact that dividends are paid periodically in real life, periodic dividend strategies were recently introduced (Albrecher et al. 2011). In this paper, we incorporate fixed transaction costs into the model and study the optimal periodic dividend strategy with fixed transaction costs for spectrally negative Lévy processes. The value function of a periodic (b_u, b_l) strategy is calculated by means of exit identities and Itô's excursion theory when the surplus process is of unbounded variation. We show that a sufficient condition for optimality is that the Lévy measure admits a density which is completely monotonic. Under such assumptions, a periodic (b_u, b_l) strategy is confirmed to be optimal. Results are illustrated.
  • Item
    Is it a fallacy to believe in the hot hand in the NBA three-point contest?
    Miller, JB ; Sanjurjo, A (Elsevier BV, 2021-09-01)
  • Item
    Behavioral Constraints on the Design of Subgame-Perfect Implementation Mechanisms
    Fehr, E ; Powell, M ; Wilkening, T (American Economic Association, 2021-04)
    We study subgame-perfect implementation (SPI) mechanisms that have been proposed as a solution to incomplete contracting problems. We show that these mechanisms, which are based on off-equilibrium arbitration clauses that impose large fines for lying and the inappropriate use of arbitration, have severe behavioral constraints because the fines induce retaliation against legitimate uses of arbitration. Incorporating reciprocity preferences into the theory explains the observed behavioral patterns and helps us develop a new mechanism that is more robust and achieves high rates of truth-telling and efficiency. Our results highlight the importance of tailoring implementation mechanisms to the underlying behavioral environment. (JEL C92, D44, D82, D86, D91)