How computer vision is changing insurance


The National Association of Insurance Commissioners recently created the Innovation and Technology Task Force to “explore the technological developments in the insurance sector.” It cited the rise of artificial intelligence as one of the key issues the task force would address, noting the technology’s “potential to transform the insurance industry in a number of ways.” The message is being received loud and clear: adapt or be left behind.

The insurance industry has historically lagged other industries in adopting new technologies. It deserves credit, however, for embracing recent advances, most notably computer vision (a form of AI) and drones, to automate or assist with tasks that would previously have been arduous, dangerous, or both:

Assessing Roof Damage with Drones
Traditionally, after a policyholder files a claim with their insurance company for roof damage, a property adjuster must climb a ladder to assess the damaged roof, a dangerous task. According to the United States Department of Labor, property adjusters experience an average of 78 injuries per million hours worked on site, almost four times the injury rate of the average construction worker. Drones using computer vision technology help property adjusters assess roof damage in a faster, safer, and more efficient manner.

Kespry, an industrial drone manufacturer, produces drones that fit into a suitcase-size carrying case. The adjuster still travels to the insured’s property, then uses an iPad to draw a polygon around the property and set its coordinates. After the adjuster selects an area to inspect and a flight pattern, the Kespry drone takes flight autonomously.

As the drone conducts its inspection, it transmits data and imagery to the adjuster. The drone uses photogrammetric imaging: it takes many overlapping pictures of the site and uses them to build a 2D or 3D model of it. The drone can measure the dimensions of a roof with great precision, enabling it to assess roof damage more accurately and safely than was previously possible.
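To make the measurement step concrete, here is a minimal sketch (not Kespry’s actual pipeline) of how a drone image can be converted from pixels to real-world roof dimensions using the ground sampling distance, a standard photogrammetry quantity. All camera parameters and pixel counts below are assumed example values.

```python
# Sketch: converting drone-image pixel measurements to real-world lengths.
# Camera specs and altitude here are illustrative assumptions, not Kespry's.

def ground_sampling_distance(sensor_width_mm, image_width_px,
                             focal_length_mm, altitude_m):
    """Real-world ground distance covered by one image pixel, in metres."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

def roof_dimension_m(length_px, gsd_m):
    """Convert a length measured in image pixels to metres."""
    return length_px * gsd_m

# Hypothetical 13.2 mm-wide sensor, 5472 px images, 8.8 mm lens, 40 m altitude
gsd = ground_sampling_distance(13.2, 5472, 8.8, 40.0)
ridge_px = 1200                                 # ridge length measured in the image
print(f"GSD: {gsd * 100:.2f} cm/px")            # ~1.10 cm per pixel
print(f"Ridge length: {roof_dimension_m(ridge_px, gsd):.1f} m")
```

Lower flight altitude or a longer lens shrinks the ground sampling distance, which is why low, overlapping passes yield the most precise roof measurements.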

Drones can also inspect a wide variety of property damage. After Hurricane Harvey struck Southeast Texas in August 2017, both Allstate and Farmers Insurance used drones to assess the resulting roof damage. The CEO of Farmers said that, by using drones, the company cut the time it took to assess hurricane-damage claims from three homes per day to three homes per hour. The drones also improved accessibility by reaching areas humans could not, and they helped not only with damage assessments but with finding stranded people and pets.

Professional drone use has grown since the FAA eased its restrictions. Insurers accounted for 17 percent of commercial drone use in 2018, and Goldman Sachs estimates that businesses and civil governments will spend $13 billion on drones between 2016 and 2020.

Photo Claim Service for Damaged Vehicles
Today, large insurance companies like Allstate, MetLife and Esurance offer photo claim services that let drivers submit photos of their vehicle’s damage from a mobile phone; a human then assesses the damage. Since 2013, Allstate has offered a QuickFoto Claim feature on its Allstate® Mobile App. When an Allstate customer sustains minor damage in an accident, they can open the app on their mobile device and upload photos of both the damage and their entire vehicle. An Allstate claims adjuster then uses the photos to assess the vehicle’s damage and determine the payout. This service makes the claims process faster and easier.

Tractable, a U.K.-based company founded in 2014, uses artificial intelligence and computer vision to assess images of vehicle damage and estimate repair costs. Tractable’s photo-estimating system is trained through what Chief Commercial Officer Adrien Cohen calls ‘supervised learning’: the model learns from large numbers of human-labeled examples. This type of machine learning is especially common in computer vision because of the sheer number of samples required.
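The idea behind supervised learning can be shown with a toy sketch. This is not Tractable’s model; the features, labels, and nearest-centroid classifier below are all invented for illustration. The point is the workflow: humans label examples, the system learns from them, and it then predicts labels for unseen claims.

```python
# Toy supervised learning: classify claims as "minor" or "severe" from
# human-labeled examples. Features and data are invented for this sketch.
from math import dist

# Human-labeled training data: (damaged area m^2, repair hours) -> label
training = [
    ((0.1, 1.0), "minor"),
    ((0.2, 2.0), "minor"),
    ((1.5, 12.0), "severe"),
    ((2.0, 16.0), "severe"),
]

def centroids(samples):
    """Average feature vector per label: the 'learned' model."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in acc) for lbl, acc in sums.items()}

def predict(model, features):
    """Assign the label whose centroid is closest to the new sample."""
    return min(model, key=lambda lbl: dist(model[lbl], features))

model = centroids(training)
print(predict(model, (0.15, 1.5)))   # -> minor
print(predict(model, (1.8, 14.0)))   # -> severe
```

A production system replaces the centroid rule with a deep neural network and the two hand-picked features with raw image pixels, but the supervised training loop, labeled examples in, predictions out, is the same.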

Because Tractable’s system has seen millions of claims, it can often predict when minor exterior damage is likely concealing hidden damage. In addition, the program’s accuracy only improves over time. Tractable’s founders believe their system can assess 70 percent of auto collision claims.

Risk Management and Employee Safety
Businesses that practice comprehensive risk management face lower levels of risk and pay less for coverage. Computer vision is helping companies reduce the probability of claims occurring in the first place, which gives insurers greater confidence in underwriting.

Mobileye, maker of an advanced driver assistance system, offers collision avoidance technology using computer vision. Mobileye places cameras on the front of vehicles, and its system monitors the input from these cameras, alerting drivers when it detects threats such as lane departures or speeding. The system also includes automated braking technology.
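The core of a forward-collision warning like this can be sketched in a few lines. This is a simplified illustration, not Mobileye’s actual algorithm; the 2.7-second threshold is an assumed example value. The vision system estimates the gap to the lead vehicle and the closing speed, and an alert fires when the time to collision drops too low.

```python
# Simplified forward-collision warning logic (illustrative, not Mobileye's).

def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:        # not closing: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def should_warn(gap_m, closing_speed_mps, threshold_s=2.7):
    """Warn when time to collision falls below the (assumed) threshold."""
    return time_to_collision(gap_m, closing_speed_mps) < threshold_s

print(should_warn(40.0, 10.0))   # 4.0 s to impact -> False
print(should_warn(20.0, 10.0))   # 2.0 s to impact -> True
```

In practice the gap and closing speed come from the camera feed itself, estimated frame to frame from how quickly the lead vehicle grows in the image.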

Mobileye technology is highly effective: after Dish Network installed it across their fleet of vehicles, they experienced an 88 percent reduction in collisions. Businesses and individuals can adopt this technology to reduce automobile accidents and lower insurance costs., founded in 2015, uses artificial intelligence to improve safety in the architecture, engineering and construction (AEC) industry. The company’s collection of artificially intelligent engines, collectively named VINNIE, uses computer vision technology to enhance safety on industrial sites. VINNIE can rapidly analyze, tag and flag photos and videos that contain potential safety risks. For example, VINNIE can assess whether workers are wearing hard hats and safety vests. It can also identify potentially risky environments or practices, like workers standing on scaffolding.

During a recent test, VINNIE sorted through 1,080 images of construction sites in under ten minutes, correctly detecting 446 images containing people and flagging 32 images of workers missing hard hats and 106 images of workers missing safety-colored clothing. A human attempting the same task took four and a half hours and identified only 414 images containing people. Today, VINNIE recognizes only a few categories of safety issues, but new categories are being added to enable the system to recognize additional potential safety risks.
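The flagging step that follows detection can be illustrated with a toy rule. This is an assumed sketch of the kind of check described above, not VINNIE’s actual code: a vision model emits labels per image, and a rule flags images containing a person without the required protective equipment.

```python
# Toy PPE-violation flagging over per-image detection labels (illustrative).

def flag_ppe_violations(detections_by_image):
    """detections_by_image: {image_id: set of detected labels}.
    Returns {image_id: sorted list of missing PPE} for images with people."""
    flags = {}
    for image_id, labels in detections_by_image.items():
        if "person" not in labels:
            continue                              # PPE rules apply only to people
        missing = {"hard_hat", "safety_vest"} - labels
        if missing:
            flags[image_id] = sorted(missing)
    return flags

detections = {
    "img_001": {"person", "hard_hat", "safety_vest"},
    "img_002": {"person", "safety_vest"},          # missing hard hat
    "img_003": {"scaffolding"},                    # no person detected
}
print(flag_ppe_violations(detections))   # {'img_002': ['hard_hat']}
```

The hard part, of course, is the detector producing those labels reliably; the rule layer on top stays simple, which is what makes new safety categories easy to add.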
