

Mitigating exposure risk through multi-dimensional view of probabilistic losses & deterministic exposures

12.10.21 Simon Fagg

We are living in unprecedented times. Climate change is driving year-on-year increases in the severity and frequency of losses to insured (and uninsured) assets. There is much debate about whether loss modeling remains relevant to a future state that no longer resembles historical loss patterns.

For example, flooding is the costliest insured peril for US insurers, and one of the most complex to model, as surge, tidal, fluvial (river- or stream-driven), and pluvial (rain-derived) sources all contribute to wider and deeper flooding outcomes. Flood risk modeling nonetheless illustrates the transition from deterministic accumulation analysis to probabilistic modeling, now significantly loaded to apply a more realistic risk rating. Take FEMA's recent rewrite of flood risk mitigation, triggered by climate change, in the Flood Risk Rating 2.0 update. Assessment is no longer limited to flood zones but is performed at site level, with more specific focus on loss drivers: land topography, erosion, potential storm surge, river overflow, and rebuilding costs, which for some policyholders will mean premium increases of up to 18% per annum, year-on-year. Arguably, this risk assessment means 23.5 million US properties are subject to some flood risk over the next 30 years, yet the National Flood Insurance Program covers only 5 million of them.

For the vast majority of (re)insurers, the pre-bind submission process includes several standard steps that must enable rapid, precise evaluation of a submission's worthiness (and its exposures), importantly within an operationally tolerable cost. These usually follow a workflow of data ingestion, data quality assessment, and augmentation, before a flavor of pricing workflow that includes a technical appraisal of property risk. Underwriters consider unmodeled and modeled risk dimensions through a mixture of integrated system pricing tools and manual tacit knowledge.

As part of this process, experience from loss history is a key dimension when considering the future performance of a submission. While flood modeling may have a significant number of data points, hurricane data spans 400 years. Yet given the increasing volatility in the severity and frequency of landfall named storms, the models are arguably missing the mark in accurately determining the financial costs actually incurred by insurers.
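To make the scale of the FEMA Risk Rating 2.0 change concrete, the compounding effect of the "up to 18% per annum" increases cited above can be sketched in a few lines. The starting premium and horizons below are illustrative assumptions, not figures from the article.

```python
# Sketch: compounding effect of annual flood-premium increases under
# FEMA Risk Rating 2.0. The 18% per-annum figure is the article's
# upper bound; the $1,000 starting premium is an assumption.
def projected_premium(current: float, annual_increase: float, years: int) -> float:
    """Premium after `years` of compounded annual increases."""
    return current * (1 + annual_increase) ** years

premium = 1_000.0  # assumed current annual flood premium (USD)
for year in (1, 5, 10):
    print(year, round(projected_premium(premium, 0.18, year), 2))
```

At 18% per annum the premium more than doubles within five years, which is why the capped annual increase is material to so many of the 23.5 million exposed properties.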

In building probabilistic loss models, executing simulated events and evaluating return periods spanning hundreds of years is now less reliable, because the objective loss history actually attainable, on which the model is based, is far shorter. Probabilistic catastrophe models are low-frequency, high-severity models that use past events at various magnitudes to develop a predictive model of future events. The model typically starts from a stochastic event set (plausible events representing the risk), each event with a hazard associated at location level. Damage is calculated in the form of vulnerability, and the financial terms and conditions of the underlying contracts (from individual risk through treaty and outwards covers) are applied in inuring order to return a net outcome for selected events, expressed as return-period annual aggregate losses.

Typically, the focus is on a peak occurrence (largest single loss) and the annual aggregate. Higher-frequency but lower-severity attritional and tail events have not generally been expected to create enough low-level losses to meaningfully impact the premium, until now. Insurers have been able to use annual aggregate reinsurance excess-of-loss cover to smooth out the increasing incidence of such losses. But with modeled losses failing to predict actual outcomes, insurers are committing to underpriced policies and the corresponding outwards reinsurance cover; the reinsurance market saw a 15% rise in premiums in 2021 against a normalized 94% combined ratio.

A fundamental problem likely to become more prevalent is that attritional losses, tail losses, and high- and moderate-frequency events are not well catered for in current probabilistic loss models, despite 10 of the last 12 years having set new records in frequency and severity. Probabilistic catastrophe loss models began making an impact on the industry some twenty-five to thirty years ago.
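The workflow described above, a stochastic event set, per-event losses, and return-period summaries on an annual-aggregate and peak-occurrence basis, can be sketched as a toy simulation. All frequency and severity parameters below are illustrative assumptions, not a calibrated model, and the lognormal severities stand in for the full hazard/vulnerability/financial-terms chain.

```python
import math
import random

# Toy probabilistic cat-model sketch: sample events per simulated year,
# assign each a loss, then read off empirical return-period losses on an
# annual-aggregate (AEP-style) and peak-occurrence (OEP-style) basis.
random.seed(42)

N_YEARS = 10_000     # number of simulated years
EVENT_RATE = 0.8     # assumed mean cat events per year

def poisson(lam: float) -> int:
    """Knuth's inversion sampler for small-lambda Poisson counts."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

annual_aggregate, annual_peak = [], []
for _ in range(N_YEARS):
    n_events = poisson(EVENT_RATE)
    # Lognormal severities as a stand-in for hazard x vulnerability x terms.
    losses = [random.lognormvariate(15.0, 1.2) for _ in range(n_events)]
    annual_aggregate.append(sum(losses))
    annual_peak.append(max(losses, default=0.0))

def return_period_loss(annual_losses: list, rp_years: int) -> float:
    """Empirical loss exceeded on average once every rp_years."""
    ranked = sorted(annual_losses, reverse=True)
    return ranked[len(annual_losses) // rp_years]

for rp in (10, 100, 250):
    print(rp, return_period_loss(annual_aggregate, rp),
          return_period_loss(annual_peak, rp))
```

The limitation the article raises is visible even in this toy: the severity distribution and event rate are fitted to history, so if climate change shifts either, every return-period figure read off the simulation is biased low.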
Regulation has bolstered the dependence on stochastic outcomes: over the years, these models proved accurate enough to let insurers predict potential losses at risk and portfolio levels. Until recently, that is, when things changed. The sophistication and increasing complexity of model science, aligned to reasonably acceptable outcomes, gave confidence in risk evaluation for certain geographies and perils, whether at the point of underwriting or, post-bind, through realistic disaster scenarios. Along the way, probabilistic loss modeling has had issues with the science itself. For example, model calculations using a Beta distribution (rather than simulation) have meant that temporal clauses (such as a 48-hour business-interruption clause in a property wording) could not be simulated.
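The Beta-distribution limitation mentioned above can be illustrated: a closed-form damage distribution has no notion of time, so a clause defined in hours (such as a 48-hour business-interruption waiting period) cannot be applied to it directly, whereas simulating each event's outage duration makes the clause trivial to apply. All parameters below are assumptions for illustration only.

```python
import random

# Illustrative contrast: a temporal clause (48h BI waiting period) can
# only be applied when outage *duration* is simulated per event; a
# closed-form damage-ratio distribution cannot represent it.
random.seed(1)

DAILY_BI_VALUE = 10_000.0   # assumed BI exposure per day of outage
WAITING_HOURS = 48.0        # assumed policy waiting period

def simulated_bi_loss(outage_hours: float) -> float:
    """BI loss with the waiting period applied to a simulated outage."""
    covered_hours = max(0.0, outage_hours - WAITING_HOURS)
    return covered_hours / 24.0 * DAILY_BI_VALUE

# Per-event simulation: sample outage durations, then apply the clause.
outages = [random.expovariate(1 / 72.0) for _ in range(10_000)]  # mean 72h
with_clause = sum(simulated_bi_loss(h) for h in outages) / len(outages)
without_clause = sum(h / 24.0 * DAILY_BI_VALUE for h in outages) / len(outages)
print(with_clause, without_clause)
```

A Beta fit over the final damage ratio would collapse the duration information away before the clause could be applied, which is precisely the gap the article points to.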

Probabilistic loss models are refactored to include updates from recent events, and are usually updated once a year to reflect the vendor's latest scientific view of events up to 6-12 months prior. Additionally, a large reinsurer may take a further 6-9 months to evaluate and adopt a new model, implying that calculations in production are based on data that is at least a year out of date.
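Taking the pessimistic ends of the two ranges above, the latency arithmetic looks like this (the specific values chosen are the article's upper bounds, not measured figures):

```python
# Model-latency arithmetic from the ranges cited above (worst case).
events_to_model_update = 12    # vendor view covers events 6-12 months prior
evaluation_and_adoption = 9    # reinsurer evaluation and adoption, 6-9 months
total_lag_months = events_to_model_update + evaluation_and_adoption
print(total_lag_months)        # months from event to use in production
```

Nearly two years can therefore elapse between an event occurring and its lessons influencing a production pricing calculation.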

While this latency has not historically been an issue, the onset of climate change and the increasing frequency of high-severity storms are making it one. Firstly, the gap between updating the model view and applying it in a production environment is increasingly risky; secondly, the historical model science is not predictive of the future. Relying on this data without due consideration of climate change therefore leaves insurers with an uncertain view of the true nature of their risk exposure.

Probabilistic models will always have a place in objective risk assessment. Still, the industry should not forget that the pace of climate change has increased risk-assessment volatility, and maintaining a multi-dimensional view of probabilistic losses AND deterministic exposures builds a more complete picture, one that balances out some of the probabilistic models' errors and omissions brought on by climate change. Over-reliance on loss model science was not a concern until now. Any insurer would be prudent to factor in a forward-looking view of risk that includes deterministic and probabilistic models, loaded for climate-change factors, rather than relying on historical data to predict future events.
