Data and Gen AI play large role in cybersecurity for insurance, WTW expert says

Willis Towers Watson's U.S. headquarters, 800 N. Glebe Road, Arlington, Virginia. (Photo: Cooper Carry)

As cybercriminals capitalize on Gen AI to step up their attacks on insurance companies, which hold troves of valuable customer data, carriers are learning that they can also use Gen AI to fight back.

Sean Scranton, a consultant on the cyber risk solutions team for financial and executive risk at Willis Towers Watson, responded in written form to questions from Digital Insurance, providing more details on the methods and means insurers need to defend against cyber attacks, including AI-powered ones. 

What practices are insurance companies implementing to keep client/customer apps secure?

Sean Scranton, consultant, cyber risk solutions team for financial and executive risk, Willis Towers Watson
Insurance companies should set the standard for safeguarding customer information. Attacks on insurance companies in recent years, with attempts to exfiltrate insureds' information, have put this topic at the forefront for our insurance clients. Data classification, access controls and provisioning, technological protection methods, and data retention procedures help to ensure information is accessible only to those with a need to know, and destroyed when no longer needed. Insurance is a relationship and reputational business, and the ability to safeguard this information is critical.

What new methods are insurers using to keep customer/client data secure?

As information continues to move to the cloud, there are quite a few cloud-based "new" technologies, such as Cloud Access Security Brokers, to enforce security. On the AI side, user behavioral analytics (UBA) learns "normal" user patterns and detects anomalous behavior that might indicate an intruder. More broadly, as the physical network perimeter dissolves into the cloud, companies are moving toward a zero trust network architecture, in which no access attempt is implicitly trusted and every request must be verified.
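At its core, UBA-style anomaly detection compares new activity against a statistical baseline of each user's past behavior. A minimal sketch of the idea, using a simple z-score test on hypothetical login hours (real UBA products use far richer behavioral models than this):

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a new observation whose z-score against the user's
    historical baseline exceeds the threshold (illustrative only)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in the baseline: anything different is anomalous.
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Hypothetical baseline: a user's typical login hour (24h clock).
baseline_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]
print(is_anomalous(baseline_hours, 9))   # a typical morning login: False
print(is_anomalous(baseline_hours, 3))   # a 3 a.m. login: True
```

The same pattern extends to other behavioral signals, such as data volumes downloaded, geolocation of access, or resources touched per session.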

What are customers seeing as security challenges in the new world of ChatGPT / Gen AI?

Understanding the basic models of AI, from simple data mining to neural networks to large language models, is the first challenge. The best place to start, as with most technologies, is with a comprehensive risk assessment. From there, controls may be developed to mitigate the risks.

Challenging risks outside the technological realm include potential inherent bias in these models, the ethical ramifications of that bias, and the implicit trust people may put into these models.

Looking externally, attackers' use of AI in developing cyber attack methods is at the forefront for security professionals. The ability to refine fraudulent requests so that they are nearly impossible to discern from valid ones is terrifying. Some clients have already encountered voice deepfakes used to execute wire transfer fraud.

What other cybersecurity challenges are you seeing for clients?

The ability to find security talent is paramount; by many estimates there are 3.5 million unfilled cybersecurity jobs. Other challenges include keeping pace with changing attack methods (which are mostly people-focused) and changing technologies, while at the same time supporting and securing legacy systems.

How are cyber risks changing for companies as they “digitize” for an industry that has struggled to catch up to the digital age?

The use of AI shows promise as a new technology, and some will see it as the maturation of data mining. But this evolution further emphasizes the need for data classification: where is your data, who has access to it, and how is it used? This is one of the more difficult domains of cybersecurity, and when new digitization projects are started, it must be addressed holistically throughout the project.