Insights

How data transparency is influencing insurers' risk appetite

Insurwave Team
8 mins read

Summary

  • As demand for coverage for natural disasters rises, accurate and timely risk analytics hold a great deal of importance for insurers seeking to assess, mitigate, and manage potential losses. 
  • But while the frequency of these events is rising, available insurance and reinsurance capital isn't keeping pace, owing to poor property insurance and reinsurance results in recent years.
  • There is a shared industry-wide understanding of the need for greater data transparency and accuracy to help improve CAT modelling across the board. 
  • From a technology perspective, artificial intelligence and data ingestion capabilities will be integral to helping ensure the data is complete prior to its placement in the models themselves.
  • The continued push from insurance providers for better, more open data standards will also help raise the bar for risk models, avoiding compromised risk assessments and improving risk appetite in the market.

Every time we turn on the news, we are faced with more headlines detailing natural catastrophes, the wildfires in Maui being the most recent example. As demand for coverage for natural disasters rises, accurate and timely risk analytics hold a great deal of importance for insurers seeking to assess, mitigate, and manage potential losses.

Global professional services firm Aon noted that as the number of extreme weather events rises – with more than 400 natural catastrophes causing $313bn of global economic damage in 2022, less than half of it insured – it is increasingly crucial for companies to adequately quantify the impact of climate risk.

But while the frequency of these events is rising, available insurance and reinsurance capital isn't keeping pace due to poor property insurance and reinsurance results in recent years. The result is a reduced appetite from insurers and widely held perceptions that risk assessments are underestimating actual loss experience.

Despite the wealth of available data and improved catastrophe (CAT) modelling capabilities, there is still a lack of confidence in the data underpinning the models. The need for improved modelling and data transparency is therefore apparent. In this article, we will explore the spectrum of currently available CAT models, the importance of the data quality and transparency that underpins them, and what is being done to improve both.

Data transparency

Catastrophe (CAT) models allow insurers, reinsurers, companies, financial institutions and more to evaluate and manage natural and man-made catastrophe risk from perils ranging from earthquakes and hurricanes to floods and wildfires. 
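
Conceptually, most CAT models chain together a hazard module, an exposure database, a vulnerability module and a financial module. Below is a minimal Python sketch of that structure; the names and the toy damage curve are hypothetical simplifications, not any vendor's implementation.

```python
from dataclasses import dataclass

@dataclass
class Location:
    """A single insured property in the exposure database."""
    latitude: float
    longitude: float
    total_insured_value: float  # TIV, e.g. in USD

def hazard_intensity(event_footprint: dict, loc: Location) -> float:
    """Hazard module: look up the peril intensity (e.g. flood depth in
    metres) that a simulated event produces at this location."""
    key = (round(loc.latitude, 1), round(loc.longitude, 1))
    return event_footprint.get(key, 0.0)

def damage_ratio(intensity: float) -> float:
    """Vulnerability module: map intensity to a mean damage ratio.
    A toy linear curve; real models use peril-, construction- and
    occupancy-specific functions."""
    return min(1.0, intensity / 5.0)

def ground_up_loss(event_footprint: dict, portfolio: list[Location]) -> float:
    """Financial module (simplified): ground-up loss before policy terms
    such as deductibles and limits are applied."""
    return sum(
        damage_ratio(hazard_intensity(event_footprint, loc)) * loc.total_insured_value
        for loc in portfolio
    )

# Toy usage: one event footprint cell covering a single location
portfolio = [Location(-29.9, 31.0, 25_000_000)]
footprint = {(-29.9, 31.0): 2.0}  # 2 m flood depth at that grid cell
print(f"Ground-up loss: ${ground_up_loss(footprint, portfolio) / 1e6:.1f}m")
```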

As demand for coverage for natural disasters has grown, so has the need for models that can be relied upon during risk assessment. While CAT models have improved in quality and modelling capabilities, poor property reinsurance underwriting results in recent years have contributed to widely held perceptions that risk assessments are underestimating actual loss experience. 

Last year, flooding in the Durban area of South Africa created one of the deadliest disasters in recent memory for the nation, impacting both commercial and residential properties and resulting in insured losses of USD 1.5 billion. 

Insurance penetration was low because South Africa had been considered insulated from natural disasters such as tropical storms and earthquakes. Combined with a lack of accurate data, this meant an unexpected level of losses for both insurers and insureds. Insurers were left overexposed and suffered losses they did not anticipate, while businesses and homeowners were forced to absorb a larger proportion of the total damage, which at $3.5 billion was more than double the insured amount.

In real terms, the damage at Toyota South Africa halted the production of 45,000 cars, generating loss claims of between $353 million and $400 million, while the country's biggest beer maker, South African Breweries (SAB), suffered as much as R700 million in property damage.

To ensure these mistakes are not repeated, the quality of data must improve alongside the expansion of modelling capabilities, as must the ability to bring future climate modelling data into the standard framework of catastrophe models.

Speaking to The Insurer TV, Aon’s Head of Climate Risk Advisory Liz Henderson explained:

“Climate models [...] they're global, they're continental at scale, they're forward looking. You have to be able to take that kind of global, forward data and link it back into that foundational framework of what is it actually going to mean in terms of event frequency, severity and behaviour, and then take those learnings and look at how exposures are going to change as the hazards change.”


Open data standards

With the alarming increase in severity of environmental disasters across the globe, many organisations are looking for ways to collaborate and improve data standards. Open data standards in risk modelling are believed to be the most effective and transparent – and there are many initiatives designed to help organisations meet them.

In 2020, Moody’s RMS introduced the Risk Data Open Standard (RDOS), a flexible modern data schema designed to help drive value and innovation throughout the industry. To promote the collection of high-quality exposure data throughout the insurance value chain, Verisk also committed to making its exposure data formats open with its Cyber Exposure Data Standard, which is flexible, allowing organisations to grow the types of data they collect over time.

More broadly, the Oasis Loss Modelling Framework provides an open-source platform for developing, deploying and executing catastrophe models. It uses a complete simulation engine and does not restrict the modelling approach. Models are packaged in a standard format, and their components can come from any source, including model vendors, academic groups and research institutions.
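
As a rough illustration of what such a simulation engine does – drawing events from a stochastic catalogue and aggregating them into simulated annual losses – here is a generic Monte Carlo sketch with hypothetical numbers, not the Oasis API:

```python
import random

def simulate_annual_losses(event_catalogue: list[tuple[float, float]],
                           n_years: int = 10_000) -> list[float]:
    """Generic year-loss simulation. Each catalogue entry is an
    (annual_rate, mean_loss) pair; occurrence per simulated year is a
    Bernoulli approximation of a low-rate Poisson process."""
    annual_losses = []
    for _ in range(n_years):
        total = 0.0
        for rate, mean_loss in event_catalogue:
            if random.random() < rate:  # does the event occur this year?
                total += random.expovariate(1.0 / mean_loss)  # sampled severity
        annual_losses.append(total)
    return annual_losses

# Hypothetical two-event catalogue: a 1-in-50-year flood and a 1-in-200-year quake
losses = simulate_annual_losses([(0.02, 150e6), (0.005, 800e6)])
aal = sum(losses) / len(losses)  # average annual loss
print(f"Simulated AAL: ${aal / 1e6:.1f}m")
```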

Some organisations have also joined forces to help drive better standards. Led by RenaissanceRe, SCOR, Hannover Re, Swiss Re and Aon, the Open Exposure Data (OED) Standard is a non-commercial venture designed to test and support the hypothesis that open data standards in risk modelling are more effective than proprietary and commercial standards at improving operational efficiency, reducing costs and increasing transparency and consumer choice.

The OED Standard will also lower entry barriers for model developers, including commercial vendors, third-party data providers, academia and other research institutions.
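
To give a flavour of what an open exposure format captures, here is a hypothetical location record using OED-style field names; treat the exact fields and codes as illustrative, since the authoritative schema is the one published by the standard itself.

```python
# Hypothetical location record in an OED-style flat format.
# Field names and codes are illustrative of the attributes such standards define.
location_record = {
    "PortNumber": "P1",          # portfolio identifier
    "AccNumber": "A001",         # account (policy) identifier
    "LocNumber": "L001",         # location identifier
    "CountryCode": "ZA",         # ISO country code
    "Latitude": -29.8587,
    "Longitude": 31.0218,        # Durban, South Africa
    "OccupancyCode": 1050,       # illustrative occupancy classification
    "ConstructionCode": 5000,    # illustrative construction classification
    "BuildingTIV": 25_000_000,   # total insured value: building
    "ContentsTIV": 10_000_000,   # total insured value: contents
    "LocPerilsCovered": "ORF",   # illustrative peril code (river flood)
}
```

A record like this is portable between tools: any model or exposure management platform that understands the standard can consume it without bespoke mapping.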

Speaking to industry publication Global Reinsurance, Martin Bertogg, head of Cat Perils, Cyber and Geo at Swiss Re, said: “Globally shared data formats are an important step to overcome barriers for consistency, processing efficiency and a transparent cat risk dialogue”.

In the US, insurer groups are lobbying the California government to change its stance on the use of modelling in ratemaking, arguing that restrictions on the use of catastrophe modelling are preventing carriers from adequately pricing wildfire risks. State regulations direct wildfire insurers to set rates using historical losses as a guide, prohibiting models that can account for land-use changes, the state of vegetation or weather trends.

“Unlike every other state, California regulations prohibit the use of forward-looking climate models to project future losses, and instead require wildfire risk to be priced using an insurer’s average wildfire losses over the last 20 years,” explained Michael D’Arelli, executive director of the American Agents Alliance.
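
The gap this creates can be seen with a toy calculation (all figures hypothetical): the regulation's backward-looking rate averages the last 20 years of losses, while a forward-looking model prices projected frequency and severity.

```python
# Hypothetical wildfire loss history for one portfolio, USD millions, trending upward.
historical_losses = [2, 0, 1, 0, 0, 3, 0, 0, 5, 0,
                     1, 0, 0, 8, 0, 0, 12, 0, 20, 35]

# Backward-looking rate: the 20-year average the regulation requires.
historical_rate = sum(historical_losses) / len(historical_losses)

# Forward-looking rate: a hypothetical model projecting frequency and severity
# under current land use, vegetation and weather conditions.
projected_frequency = 0.25  # expected events per year
projected_severity = 60.0   # expected loss per event, USD millions
model_rate = projected_frequency * projected_severity

print(f"Historical average: ${historical_rate:.1f}m per year")  # ~$4.4m
print(f"Model projection:   ${model_rate:.1f}m per year")       # $15.0m
```

If losses are trending upward, the historical average systematically lags the risk the insurer is actually writing.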

Solutions such as Insurwave are using the power of technology to help contribute to a future of open data standards.


The power of AI

Using advanced data extraction capabilities to capture schedule and slip information, AI can transform the required submission data set into the appropriate standard, ready for processing.

By analysing large datasets and reformatting and refining them before they are placed into a CAT model or exposure management platform, AI can flag oversights or issues, such as those witnessed in South Africa, before they impact the risk assessment itself.
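
A minimal sketch of that kind of pre-model check, with hypothetical field names and rules rather than Insurwave's actual implementation:

```python
def validate_exposure_record(record: dict) -> list[str]:
    """Flag data-quality issues in an ingested exposure record before it
    reaches a CAT model. The rules shown are illustrative only."""
    issues = []
    if record.get("Latitude") is None or record.get("Longitude") is None:
        issues.append("missing geocoding: hazard lookup will be unreliable")
    if not record.get("BuildingTIV"):
        issues.append("missing or zero building TIV: losses will be understated")
    if record.get("OccupancyCode") is None:
        issues.append("missing occupancy: vulnerability curve cannot be selected")
    return issues

# Hypothetical ingested submission with one complete and one incomplete record.
ingested_records = [
    {"LocNumber": "L001", "Latitude": -29.86, "Longitude": 31.02,
     "BuildingTIV": 25_000_000, "OccupancyCode": 1050},
    {"LocNumber": "L002", "Latitude": None, "Longitude": None,
     "BuildingTIV": 0, "OccupancyCode": None},
]

# Records with issues are routed for review rather than silently modelled.
flagged = {rec["LocNumber"]: problems
           for rec in ingested_records
           if (problems := validate_exposure_record(rec))}
print(flagged)  # only L002 is flagged, with three issues
```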

“We believe transformation is impossible without effective and robust data ingestion practices. Nor will companies generate the optimum value from their investments in advanced technology unless they can streamline the process by which data enters those systems,” explained Tom Williams, Senior Product Manager at Insurwave.

Using the intelligent capture of information at the front-end, Insurwave AI can transform processes and decision-making and build confidence in the quality of data. The value of quality, structured data also advances a broader transformational agenda – the opportunity to reimagine business models for the future and fundamentally re-engineer processes across the value chain. 

“High-quality data are the foundation for understanding natural hazards and underlying mechanisms providing ground truth, calibration data and building reliable AI-based algorithms,” said Monique Kuglitsch, Innovation Manager at Fraunhofer Heinrich-Hertz-Institut and Chair of a new Focus Group on AI for Natural Disaster Management, supported by the International Telecommunication Union (ITU) together with the World Meteorological Organisation (WMO) and UN Environment.

Transparency and technology

Although the storyline for 2023 has mostly been dominated by growing climate change-related exposure and shrinking insurance and reinsurance capital, there is a shared industry-wide understanding of the need for greater data transparency and accuracy to help improve CAT modelling across the board.

From a technology perspective, artificial intelligence and data ingestion capabilities will be integral to ensuring the data is complete before it is placed into the models themselves. The continued push from insurance providers for better, more open data standards will also help raise the bar for risk models, avoiding compromised risk assessments and improving risk appetite in the market.
