Progressive's Secret Weapon: Big Data Segmentation

At Progressive, big data is defined as data sets too large to store, organize, and share. To manage that, the insurer recycles its data through segmentation.

Much as Apple’s robot “Liam” recycles old iPhones for parts, Progressive uses segmentation to sort data from sensors and social media into usable segments, according to Pawan Divakarla, Progressive’s big data business leader.
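Divakarla did not detail the mechanics, but a segmentation step of this kind can be sketched in a few lines of Python. The field names, thresholds, and segment labels below are illustrative assumptions for the sake of the example, not Progressive’s actual rules.

```python
# A minimal, hypothetical sketch of sensor-data segmentation: summarize raw
# telematics readings per trip, then bucket each trip into a coarse segment
# that downstream pricing or claims models can consume.
from dataclasses import dataclass

@dataclass
class TripSummary:
    driver_id: str
    miles: float
    hard_brakes_per_100mi: float   # derived from accelerometer events
    night_share: float             # fraction of miles driven midnight-4 a.m.

def segment(trip: TripSummary) -> str:
    """Assign a trip to a usage segment (labels and cutoffs are assumptions)."""
    if trip.hard_brakes_per_100mi > 8 or trip.night_share > 0.25:
        return "high_risk"
    if trip.hard_brakes_per_100mi < 2 and trip.night_share < 0.05:
        return "low_risk"
    return "standard"

trips = [
    TripSummary("d1", 42.0, 1.3, 0.01),
    TripSummary("d2", 18.5, 9.7, 0.30),
]
for t in trips:
    print(t.driver_id, segment(t))
```

The point of the pattern is that raw sensor streams are never analyzed whole; they are reduced to compact segment labels that are cheap to store and share.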

“We recently surpassed $20 billion in premiums. That means we have a lot of data that has to be baked in,” said Divakarla. “It’s on us to use all the data we collect to give back to customers. Innovation is built into the way we think.”

Progressive’s challenge is scaling big data, Divakarla acknowledged at INN’s DigIn Conference this week in Austin. Progressive’s Snapshot usage-based insurance option alone has collected more than 15 billion miles of driving data over the past six years.

Distributed computing, however, has helped many facets of the insurer’s business, including claims, fraud detection and its UBI offering, he said. The approach also tells the insurer which information has real value and which is just noise.

“We turned to distributed computing in 2013 so that we could process data in shorter amounts of time,” Divakarla said. “What used to take us a month to process started taking nine hours, increasing speed to insight.”
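Progressive has not disclosed its processing stack, but the speedup Divakarla describes is characteristic of any map-reduce-style pipeline: partition the data, process partitions in parallel, then combine the results. The sketch below illustrates the general idea with Python’s standard multiprocessing module; the data and summary function are stand-ins.

```python
# A generic sketch of parallel partition processing: each worker reduces one
# partition of per-trip mileage records, and the partial results are merged.
from multiprocessing import Pool

def summarize_partition(miles: list[float]) -> tuple[float, int]:
    """Reduce one partition to (total miles, trip count)."""
    return sum(miles), len(miles)

def combine(parts):
    """Merge the per-partition summaries into one global summary."""
    total = sum(p[0] for p in parts)
    trips = sum(p[1] for p in parts)
    return total, trips

if __name__ == "__main__":
    # Stand-in for billions of telematics records split across workers.
    partitions = [[float(i % 50) for i in range(100_000)] for _ in range(8)]
    with Pool(processes=4) as pool:
        parts = pool.map(summarize_partition, partitions)
    total_miles, trip_count = combine(parts)
    print(f"{total_miles:.0f} miles across {trip_count} trips")
```

Because each partition is independent, wall-clock time falls roughly in proportion to the number of workers, which is how a month-long job can shrink to hours.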
