Probabilistic and deterministic modeling for a data-driven, analytical underwriting approach
The world of insurance has always relied on historical data, models, and other forms of insight to build the trends and predictions that ultimately drive the risk profiles and pricing insuring our global risk. Over the past several hundred years, history has provided an incredibly valuable dataset that our industry has used to great success. But over the past 15 years, changes in our climate, both environmental and social, have revealed cracks in how we predict the future of risk. We can no longer rely solely on historical data feeding probabilistic models. Even blending multiple models has proven troublesome given their reliance on history to predict future events; the time has come to reevaluate how insurers assess risk in order to measure exposure with greater precision.
We need look no further than this year's Atlantic hurricane forecast to understand that we are dealing with a "new normal" in losses from primary perils. Experts across the board predict another above-average hurricane season that will test the capitalization of the global (re)insurance markets. To be fair, the market's recent hardening has provided some much-needed cushion for this season, but that does not change the trend of annual predictions of increased frequency and severity of these natural catastrophes. At the time of writing, Canada's unprecedented heatwave and the catastrophic flooding in Germany and Belgium have been tied to global warming. These year-over-year increases reveal trends that can be used to build resilience into pricing models and guard against adverse competitive pressure on market pricing.
On top of these unsettling forecasts for the primary category, one must face the very tough reality that secondary perils are the REAL issue, causing 60% of all losses, and they are particularly impactful to direct markets because attritional losses do not necessarily reach the thresholds that pass them on to reinsurance markets. In 9 of the last 11 years, secondary perils have outpaced primary perils in total losses. Severe convective storms are increasing in frequency and severity across the globe, but it is the localized nature of hailstorms, tornados, derechos, floods, and wildfire (which has seen a year-on-year rise of 500 fire incidences in H1 alone) that makes them so difficult to manage. Evidence supports this being driven by record U.S. droughts, which are increasingly becoming a common occurrence. "The realities of climate change are nowhere more apparent than in the increasingly frequent and severe drought challenges we face in the West and their devastating impacts on our communities, businesses, and ecosystems," said California Governor Gavin Newsom in a statement.
These events have driven up premiums across the industry and motivated many (re)insurers to adjust their entire portfolio to recalibrate their exposures and ensure they can withstand the events to come. So, how can we move forward to protect insurers' portfolios and our clients' interests?
- Portfolio resilience – given the adverse, changing nature of the risk environment, the ability to rapidly adapt the portfolio to mitigate accumulation exposure in any geography or peril.
- Embrace technology to measure real-time exposure, enrich data points, and provide pre-bind triage, pricing, and post-bind event response (see the triage sketch after this list).
- Recalibrate portfolios to reduce the industry's exposure to high-risk natural catastrophe perils.
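To make the pre-bind triage idea concrete, here is a minimal sketch of an accumulation check: a new submission is accepted, referred, or declined depending on how much total insured value the portfolio already holds in that geography/peril cell. The Policy structure, the limit figures, and the 90% referral threshold are all hypothetical illustrations, not a reference to any particular system.

```python
# Hypothetical pre-bind accumulation triage. All names and figures
# are illustrative assumptions, not a real insurer's appetite.
from dataclasses import dataclass

@dataclass
class Policy:
    region: str   # e.g. "US-FL"
    peril: str    # e.g. "hurricane"
    tiv: float    # total insured value

# Assumed per-cell limits: the max total insured value the portfolio
# may accumulate for a given (region, peril) combination.
ACCUMULATION_LIMITS = {
    ("US-FL", "hurricane"): 500_000_000,
    ("US-TX", "hail"): 250_000_000,
}

def triage(portfolio: list[Policy], submission: Policy) -> str:
    """Pre-bind triage: accept, refer, or decline based on accumulation."""
    cell = (submission.region, submission.peril)
    limit = ACCUMULATION_LIMITS.get(cell)
    if limit is None:
        return "refer"  # no defined appetite for this cell -> human review
    in_force = sum(p.tiv for p in portfolio
                   if (p.region, p.peril) == cell)
    if in_force + submission.tiv > limit:
        return "decline"  # binding would breach the accumulation limit
    if in_force + submission.tiv > 0.9 * limit:
        return "refer"    # close to the limit -> escalate to an underwriter
    return "accept"
```

In practice, the limits would express the portfolio-resilience appetite from the first bullet above, and the referral band routes borderline risks to an underwriter rather than auto-declining them.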
The magical data lake
Like the fountain of youth, there is a mythical, magical data lake out there that contains every single piece of information you need to run your business successfully – all organized, connected, and ready to provide any and all answers you need! In reality, today's data warehouses and data lakes are evolving organisms that require significant resources to manage and keep organized. Moreover, we have only just begun to see these oceans of data used throughout organizations to drive better underwriting decisions and more effective exposure management. These projects are like working on an airplane in mid-flight: every day, terabytes of new data are added to the existing dataset, and on top of that, we are looking to add new fields and datasets that drive more insights and better analytics.
There is hope, as technology continues to evolve rapidly, allowing for better API connectivity and faster data exchange across sources. The key to success on this front is partnering with the right solutions to harness all of your current IP, data, and insights to drive better decision-making within the organization.
The time has come for the industry to begin sharing its siloed data and harness the power of an aggregated, anonymized dataset. This can only improve each participating organization's risk profiling and ultimately deliver a more accurate, higher-quality pricing model. The reluctance has come from data-privacy concerns and a sense that organizational secrets are embedded in the data; yet anonymization answers the privacy concern, and the collective reward is substantial: pooled data gives (re)insurers advantages in consumer offerings and distribution, and can reduce costs dramatically. Furthermore, a single source of truth and enhanced, trusted datasets will serve the greater good in a landscape being reshaped by climate change.
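As a sketch of how anonymized sharing could work in practice, the snippet below replaces policyholder identifiers with keyed hashes before records leave the organization, so a shared pool can aggregate losses without exposing who is insured. The field names, sample records, and secret-key handling are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical pseudonymization step before contributing records to an
# industry data pool. The key never leaves the contributing organization,
# so tokens are stable for trending but not reversible by the pool.
import hashlib
import hmac

SECRET_KEY = b"org-private-salt"  # assumed: kept inside the organization

def pseudonymize(record: dict) -> dict:
    """Strip direct identifiers; keep only the fields the pool needs."""
    token = hmac.new(SECRET_KEY,
                     record["policyholder_id"].encode(),
                     hashlib.sha256).hexdigest()
    return {
        "token": token,              # stable pseudonym, not reversible
        "region": record["region"],  # coarse geography only
        "peril": record["peril"],
        "loss": record["loss"],
    }

# Illustrative internal claims, ready to be shared once pseudonymized
claims = [
    {"policyholder_id": "PH-1001", "region": "DE-BY",
     "peril": "flood", "loss": 12_500.0},
    {"policyholder_id": "PH-1002", "region": "DE-NW",
     "peril": "flood", "loss": 48_000.0},
]
shared = [pseudonymize(r) for r in claims]
```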
In addition to internal data sources, third-party data providers are becoming an essential piece of the puzzle in delivering proactive, value-adding risk mitigation measures to your customers. Real-time data is beginning to emerge that can notify the insured of impending storms or perils, giving them the crucial time to act and avoid the claim or loss. Competitive advantage for insurers will rely on the ability to assess all parameters in context at the micro-level, and third-party providers will facilitate more accurate real-time forecasting that matches your organizational risk appetite to your portfolios, informing why, where, and when you make underwriting decisions.
Real-time data also allows organizations to act quickly on moratoriums or rate adjustments that influence decisions and support bottom-line operational growth – imperative where highly localized weather conditions can impact one property but not the one next door. All of this ultimately leads to more effective and efficient underwriting decision-making and proactive loss mitigation.
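A minimal sketch of that localization point, assuming a simplified circular peril footprint: match a forecast event against insured locations to see exactly which properties need an alert or a temporary binding moratorium, and which next-door properties do not. The data structures and footprint geometry are illustrative assumptions.

```python
# Hypothetical matching of a forecast peril footprint to insured locations.
from dataclasses import dataclass
import math

@dataclass
class Location:
    policy_id: str
    lat: float
    lon: float

@dataclass
class PerilFootprint:
    peril: str
    center_lat: float
    center_lon: float
    radius_km: float  # simplified circular footprint

def km_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in km (haversine formula)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def exposed(locations: list[Location],
            event: PerilFootprint) -> list[Location]:
    """Locations inside the footprint: alert insureds, pause binding there."""
    return [loc for loc in locations
            if km_between(loc.lat, loc.lon,
                          event.center_lat, event.center_lon)
               <= event.radius_km]
```

Only the policies returned by `exposed` would trigger an insured alert or fall under a moratorium; the rest of the book, including neighboring properties outside the footprint, keeps writing as normal.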
Next-generation predictions
Ingesting all of this data into a tool that gives your modeling teams the ability to adjust and adapt it, building custom models based on their experience and insights, is the next generation of predicting events and the exposures you could be responsible for next. Finding tools in the market today with this level of configurability and agility is essential to competing over the next 5-10 years as new markets, new programs, and new risks continue to emerge. Finding these opportunities and exploiting first-mover advantages could be the difference between success and failure.
So the question becomes: how does one aggregate internal data, third-party data from multiple sources, underwriting data, and any other input needed to provide a holistic view of real-time risk at the underwriter's desktop? The answer is new technology in the market that can deliver a data-driven, analytical underwriting approach supported by probabilistic modeling AND forward-looking deterministic modeling. It captures what our industry has supported and trusted for hundreds of years while also looking into the future, letting teams stress and explore these models to drive deeper insights that ultimately lead to better portfolio management, lower expense ratios, and in turn lower combined ratios.
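As a toy illustration of that blended approach, the sketch below puts a probabilistic average annual loss derived from a historical event-loss table next to a forward-looking deterministic scenario for the same portfolio. The event table, climate-uplift factor, and scenario loss are hypothetical placeholders for an insurer's own models and data.

```python
# Probabilistic view: a historical event-loss table, each entry giving
# an annual occurrence rate and the loss if the event occurs.
# (All figures are invented for illustration.)
event_loss_table = [
    {"event": "hurricane_cat3_FL", "annual_rate": 0.08, "loss": 120_000_000},
    {"event": "hail_TX",           "annual_rate": 0.40, "loss": 15_000_000},
    {"event": "flood_DE",          "annual_rate": 0.10, "loss": 60_000_000},
]

# Average annual loss implied by history alone
aal_historical = sum(e["annual_rate"] * e["loss"] for e in event_loss_table)

# Deterministic view: a forward-looking "what if" the modeling team
# defines directly, e.g. a repeat of a recent event against today's
# exposure, plus an assumed climate trend uplift on the historical AAL.
climate_uplift = 1.25          # assumed frequency/severity trend factor
scenario_loss = 180_000_000    # single defined event, assumed to occur

aal_adjusted = aal_historical * climate_uplift

print(f"Probabilistic AAL (historical):       {aal_historical:>15,.0f}")
print(f"Probabilistic AAL (climate-adjusted): {aal_adjusted:>15,.0f}")
print(f"Deterministic scenario loss:          {scenario_loss:>15,.0f}")
```

Seeing both numbers side by side is the point: the historical view anchors pricing in trusted experience, while the deterministic scenario shows what the portfolio would actually bear if the forward-looking event landed, and the gap between them is where recalibration decisions get made.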