Across the insurance industry, as new technologies challenge incumbents and consumers demand increasingly nuanced coverage, the companies that stay ahead are looking to external data and data exhaust streams to better inform their underwriting processes, product development, customer experience and more.
“It is becoming increasingly clear that external data – combined with industry knowledge, process expertise and sophisticated analytics – will be the basis for success for insurers going forward.” – Accenture report: Harnessing the Data Exhaust Stream
Underwriting methods for many insurance companies have traditionally been based on internal or locally accessible data, with structured information from various bureaus and agencies used to make decisions. According to a new report from Accenture, however, today “insurers are finding that they must creatively explore, mine and harness external data to remain competitive, to convert new opportunities for growth and to achieve improvements in the loss ratio.”
Some of this external data can come from the so-called “data exhaust stream,” which refers to the typically unstructured data generated as a byproduct of information gathered for a product’s primary purpose. This information is often discarded but can contain potentially valuable insights. For instance, according to a study by Wharton, IoT devices will generate 400 zettabytes (or 400 trillion gigabytes) of data a year. “Despite this huge growth in data from IoT devices, only a small amount (8.6 Zettabytes) will actually be sent to data centers for storage and subsequent analysis — the ‘data exhaust’ is much bigger than what’s actually being analyzed for insights.”
The external data increasingly analyzed by insurance companies today comes from sources such as publicly available government and third-party databases, social media, IoT devices, weather patterns, call centers, company websites and more. Incumbents are quickly realizing the value of implementing digital systems that can discover and analyze this data, as they face a growing number of new competitors innovating on specific elements of the insurance value chain and eating away at their competitive advantage.
As a result, insurance companies are becoming creative about using this data in the algorithms that inform many of their processes and products, to “more precisely measure risk in underwriting, anticipate and prevent losses with real-time monitoring, and increase sales via more targeted distribution strategies.” This can lead to a substantial increase in profitability – upwards of 16 to 21 percentage points.
According to The Actuary Magazine, many insurance companies are beginning to incrementally feed in data from third-party sources within particular verticals of their business, in addition to leveraging traditional sources like policy applications, property records, motor vehicle records, prescription drug records, bankruptcy records, voter records and more.
As a result, they’re able to send this data through an algorithm “to score each individual and create an estimate of future mortality for each individual.” This enhances the traditional actuarial assumption-setting process, providing deeper insights, improving the customer experience and opening avenues for new types of product offerings that can confer an advantage in an increasingly competitive market.
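As a rough illustration only – the article does not describe any insurer’s actual model – a scoring step like the one above might combine traditional and third-party attributes into a single risk estimate via a logistic-style weighting. The attribute names, weights and model form below are invented for this sketch:

```python
import math

# Hypothetical illustration: blend traditional underwriting inputs with
# third-party data points into one mortality risk score. All weights and
# attributes here are invented, not drawn from any real insurer's model.
WEIGHTS = {
    "age": 0.045,            # per year of age
    "prior_claims": 0.30,    # per prior claim on record
    "prescriptions": 0.15,   # per chronic-condition prescription
    "bankruptcy": 0.25,      # flag derived from bankruptcy records
}
INTERCEPT = -5.0

def mortality_score(applicant: dict) -> float:
    """Map weighted inputs through a logistic transform to a 0-1 score."""
    z = INTERCEPT + sum(WEIGHTS[k] * applicant.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"age": 52, "prior_claims": 1, "prescriptions": 2, "bankruptcy": 0}
score = mortality_score(applicant)
print(f"estimated risk score: {score:.3f}")
```

In practice, insurers would fit such weights statistically on historical outcomes rather than set them by hand; the sketch only shows how disparate data fields can collapse into one actuarial input.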
Having the data alone, however, is not enough to derive value. These organizations will need to understand how to pull real insights from it to make it work hard for them. To do so, team members across the company will need to understand how to manipulate these data sets and glean insights from them. As Chief Executive Officer of IAG Peter Harmer noted, “Our emerging view is that data is ubiquitous and there’s limited value in the data itself. The value resides in the insights that you can draw from the data.”
Beyond underwriting, use cases extend into product development, claims management, pricing models and more flexible options to suit growing customer demand and needs.
According to Accenture’s report, this shift will require an “out with the old, in with the new” mentality: “The old industry model – with data and technology operating in their own controlled, isolated environment – will no longer work. Instead, teams working throughout the enterprise should be able to obtain new, unstructured data sets and the technologies required to obtain valuable insights.”