How to avoid becoming the victim of AI scams


Transcription:

Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Patti Harman (00:06):

Welcome to the latest edition of the Dig-In Podcast. I'm Patti Harman, editor-in-chief of Digital Insurance. The adoption of artificial intelligence is permeating many aspects of our personal and business lives. While there may be many benefits to its adoption, it also poses some risks, such as the creation of deepfakes and more sophisticated cyberattacks. Joining me today to discuss this and more is Jennifer Wilson, cyber leader at Newfront. Thank you so much for coming on the Dig-In podcast, Jennifer.

Jennifer Wilson (00:39):

Thank you so much for having me, Patti. I'm pleased to be part of this discussion.

Patti Harman (00:44):

Well, I was at a conference last week and this is a huge issue, so I'm wondering, are you seeing a rise in any particular types of cyber crimes or any interesting trends at this point?

Jennifer Wilson (00:59):

Yes. I always say that the only constant in cyber is that it's constantly changing, so there's always something new coming at us. Ransomware continues to be the headliner and has maintained the top position year over year. However, the past two years have shown a significant increase in third-party privacy litigation, specifically wrongful collection of data. These claims are top of mind for most insurers. Some examples are pixel tracking, which is tracking users' movements on your website; wiretapping, which is intercepting electronic communications without consent, chat messages being an example; and of course biometrics. We're seeing a lot of biometric claims. That's the unlawful collection of fingerprints, eye scans, face scans, et cetera. The issue is the collection, storage, or transfer of protected or confidential information without consent. A driving force in these claims is the increase in privacy laws, where we're already seeing cut-and-paste allegations from plaintiff attorneys.
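
To make pixel tracking concrete for readers: a page embeds an invisible one-pixel image, and every time that image loads, the visitor's browser quietly reports back to a server. Here is a minimal Python sketch of the server side, assuming a Flask endpoint; the route name and logged fields are illustrative, not any vendor's actual tooling.

# Minimal sketch of a tracking pixel's server side (illustrative only).
# A page would embed something like:
#   <img src="https://tracker.example.com/pixel.gif?page=/pricing">
# and every load of that invisible image reports the visitor to the server.
from flask import Flask, request, send_file
import datetime
import io

app = Flask(__name__)

# A 1x1 transparent GIF, the classic "tracking pixel".
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

@app.route("/pixel.gif")
def pixel():
    # This logging step is the "collection" that wrongful collection
    # suits focus on: IP, browser, and page viewed, with no consent step.
    print(datetime.datetime.utcnow().isoformat(),
          request.remote_addr,
          request.headers.get("User-Agent", ""),
          request.args.get("page", ""))
    return send_file(io.BytesIO(PIXEL), mimetype="image/gif")

The point of the sketch is how little machinery is involved; the consent and notice questions Jennifer raises sit entirely outside the code.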

(02:16):

And while these are long-tail claims, the legal costs add up, and that makes it a lucrative, attractive area for plaintiff attorneys to delve into. And insurers are shifting the way they're looking at insureds or potential insureds, and they're using AI tools to identify the tracking practices of an insured. So what they'll do is they'll first go onto an organization's website to identify their privacy statements, and then they'll use the AI tools to identify the tracking practices and make sure they're in compliance. And you want to know who else is using these tools? Yes, plaintiff attorneys. Plaintiff attorneys are using the AI tools as well to help identify their next target for wrongful collection litigation.

Patti Harman (03:21):

Wow. That'll give people some sleepless nights.

Jennifer Wilson (03:24):

Right? Yeah. I recently heard an insurer say they're more concerned about plaintiff attorneys at this point than they are about cyber criminals, because they're just all over these types of claims. And due to the increase in these claims and the uncertainty around the settlement exposure, because they are long-tail claims, insurers are finding ways to limit or exclude the coverage related to privacy almost across the board. There are still a few that are providing the coverage, but the ones willing to offer it are looking to confirm, one, that you're in compliance with applicable privacy laws; two, that you're providing notices of your practices; and three, that you're obtaining consent in advance.

Patti Harman (04:16):

Wow. I was at a conference last week and we were talking about social inflation, and I see this as possibly another aspect that comes into play here with all of that. And it's a little bit scary, because technology is changing so quickly, so it's incumbent on insurance carriers, agents and brokers to stay abreast of what's going on and how it affects coverage. And I remember having a conversation with someone in the insurance industry several years ago where I said, well, cyber claims are long-tail claims, and they looked at me like they had never considered that possibility. But as you see this going forward, that is very much what they are, and it affects companies that are up for sale that have had breaches. There are just so many different facets, I think, that come into play here.

Jennifer Wilson (05:13):

I completely agree, Patti, and that's why I think cyber is so interesting, because it has both components. It's got the first-party, quick, short-tail claims like ransomware, but then it also has the long-tail claims on the third-party side. So yeah, cyber is fascinating. And to your point, AI is advancing the scope and scale of cyberattacks at an unprecedented rate. Automation with AI reduces the timeline for tasks such as scanning for vulnerabilities across thousands of systems from months down to seconds. And AI is being leveraged to give unskilled hackers the capabilities to pull off sophisticated attacks. And I'm sure you are familiar with the $25 million deepfake wire fraud claim from last year. I think that's just a harbinger of what we're going to see in the future.

(06:19):

And with the wire fraud types of claims, insurers have implemented what they call the callback requirement. So when you get a wire transfer request, if it exceeds a certain threshold or if it's changing wiring instructions, the insurers want to make sure that you are calling that individual using the contact information you already have, not the contact provided in the email making the request. But with things like deepfakes, that makes the callback moot. Why on earth are you going to call your CFO to authenticate his wire transfer request when you're looking right at him on a Zoom and you're hearing his voice? So I think that we have to find other ways to authenticate these requests. There are tools to identify whether someone's using a deepfake, but that's not happening in real time. You have to record the video and then upload it into the technology, and that's not going to help you when your CFO is saying, transfer this money now, it's urgent.
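
The callback control Jennifer describes reduces to a simple rule: large transfers, or any change in wiring instructions, must be verified out of band using contact details already on file, never the contact supplied with the request. A minimal Python sketch of that rule follows; the dollar threshold, field names, and directory lookup are hypothetical.

# Hedged sketch of the "callback requirement" as a rule (illustrative).
from dataclasses import dataclass

CALLBACK_THRESHOLD = 25_000  # hypothetical dollar threshold

@dataclass
class WireRequest:
    amount: float
    changes_instructions: bool  # new or modified wiring instructions?
    requester: str              # who the request claims to be from

def requires_callback(req: WireRequest) -> bool:
    # Large transfers or changed instructions trigger the callback.
    return req.amount >= CALLBACK_THRESHOLD or req.changes_instructions

def verify(req: WireRequest, internal_directory: dict) -> str:
    if not requires_callback(req):
        return "process normally"
    # Key point: dial the number already on file internally, never a
    # number or video link supplied with the request itself. A deepfake
    # can satisfy "seeing and hearing" the requester; it cannot answer
    # the phone line you already trust.
    phone = internal_directory.get(req.requester)
    if phone is None:
        return "hold: no trusted contact on file"
    return f"hold: call {phone} from the internal directory before releasing funds"

print(verify(WireRequest(180_000, True, "cfo"), {"cfo": "+1-555-0100"}))

As Jennifer notes, even this control fails once the callback channel itself (a live video call) can be faked, which is why the rule insists on a channel the attacker does not choose.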

Patti Harman (07:39):

Very true.

Jennifer Wilson (07:40):

Yeah. AI is also being used to improve the hit ratio of phishing attacks. No longer are we seeing the awkward and choppy language in phishing emails that we used to see. It's now being replaced with grammatically correct language; that's how they're using AI to improve the cadence of the emails. And threat actors are also using AI to scrape public data, via LinkedIn for example, to help them craft convincing emails that are tailored specifically to that individual. So it's making it more and more challenging to identify those phishing emails.

Patti Harman (08:29):

It's very scary. And they're even doing it, I think, with phone calls as well, because they have so much information available. I had gotten a call a couple of weeks ago, and they had all of this information, and I'm like, how do you verify in that split second that they are who they say they are? And I'm thinking I'll have to do some follow-up on that, because there were some things that happened where I was like, this is very, very strange; this is not the way this type of business would operate or what they would say. And that kind of leads into my other question about how cyber crimes are changing and how they've evolved, because like you said, it's changing almost overnight.

Jennifer Wilson (09:20):

It's true, Patti. And just take a step back: you and I are in this world, so we're aware of all this. Imagine all of the unsuspecting victims out there who aren't aware of these evolving schemes. Your head could spin thinking about how many people become victims of these phishing schemes and cyber crimes. It's mind-boggling. Let's go back to what happened following the pandemic and move forward through the different ways the claims have evolved. The pandemic brought on the onslaught of ransomware claims; we all know that. Encryption of critical data was the focus, and that was leveraged to extort ransom payments. But as we all learned how the attacks were happening, insurers started to require more stringent security controls. And we all know that because all of the companies out there had to invest a lot of money to improve their security, and that included backups.

(10:43):

And suddenly organizations were able to get back up and running without having to pay for the decryption key, so they were less likely to pay the ransom. That forced the threat actor groups to pivot to data exfiltration. Now they weren't locking the files; they were taking the critical data and leveraging it in a variety of ways: threatening to sell it to the highest bidder on the dark web, or using the data to pressure ransom payments. Let me give you an example of what we saw with one of our clients. One of our clients was hit with a ransomware attack, and they made the business decision not to pay the ransom and to rely on their backups. And almost immediately after they refused to pay the ransom, they started getting harassing calls from their customers. The threat actors were calling our client's customers and urging them to persuade our client to pay the ransom, or the threat actor group would leak the customers' confidential information.

Patti Harman (11:59):

Oh wow. That takes it to a whole new level then.

Jennifer Wilson (12:04):

And so now our client is calling us and saying, okay, we need to revisit our approach on this. So yes, we're also hearing that threat actor groups are combing through the data to identify family members of C-suite executives and their home addresses, and using threats that way. Yeah, they are getting very crafty and finding ways around everything. As soon as they find a way to attack and we find a solution, they pivot and find a new angle, and then we have to pivot. It's this constant chase and constant evolution: as they evolve, cybersecurity has to evolve, insurance has to evolve. So it's this crazy cat-and-mouse game. Another thing we're seeing, and we've seen three of these types of claims already.

(13:19):

I think we're going to see more and more of this in the future as well. Cyber criminals are interviewing for IT jobs under false identities. They're creating completely false identities, complete LinkedIn profiles with fake references and fake companies, and they're doing this so that they can get hired just to get the login credentials. As soon as they get the login credentials, they pass them along to a bigger threat actor group that pays them for the credentials, and then they just disappear. And by the time the company realizes that the person they hired has gone MIA, it's too late; the threat actor group that purchased the login credentials has already gotten in, pulled the data, and gone. Another angle is that they're interviewing, getting the jobs, negotiating high salaries, and just banking the salary so that they can fund their own criminal operations.

Patti Harman (14:37):

That's why I've loved covering insurance fraud because my mind doesn't go there. And when you look at how creative and innovative some of these schemes are, it surprises me no end. So we're going to take a short break right now. We'll be back in a few minutes.

Welcome back to the Dig-In podcast. We're chatting with Jennifer Wilson, cyber leader at Newfront, about AI risks for companies, cybersecurity and all sorts of other exciting things here.

So how is AI influencing the cyber insurance space then? Because it has to be, it's affecting every other area. What is it doing in terms of cyber insurance?

Jennifer Wilson (15:28):

Well, Patti, you kind of alluded to this earlier: the advancements in AI are moving at rapid speed, and the insurance industry notoriously moves at a glacial pace. What that means is that the coverage is often struggling to keep up with the exposures. We're already seeing AI-specific claims, and the insurance policies lack affirmative AI language. So we are stuck relying on what I call silent AI, which means that if it's not excluded and it falls within the definition of claim and wrongful act, then you can assume it's covered. But that's not acceptable. We want to provide our clients with certainty of coverage. So we're working with underwriters to endorse AI-specific language to address these emerging risks. And some insurance companies out there are coming out with their own language, but it's coming out piecemeal. Some of the claims that give us pause and concern include hallucinations, bias, deepfakes, and AI regulatory coverage for compliance or training controls.

(16:51):

I'll give you an example of bias. One of our clients caught the situation before it evolved into a claim. One of our clients is a national grocery chain, and they wanted to use an AI tool as the first pass for interviewing candidates. So the AI tool, it was a robot that would interview candidates via Zoom, and if they passed the first interview, they would go on to the human interview. And the head of HR at that company noticed that they were just getting all men; women and people of color were not being referred. And they didn't know: were they applying and not getting through, or were they not applying at all? So they decided to test the tool. To test it, they took long-term employees, women and people of color who were exceptional at their jobs, and had them interview for their actual jobs. And you want to guess what happened? They were declined by the AI tool.
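
The test that HR director improvised can be formalized. One widely used heuristic, not mentioned in the episode, is the EEOC's four-fifths rule: if one group's selection rate falls below 80 percent of the highest group's rate, the tool deserves scrutiny. Below is a small Python sketch with invented counts echoing the scenario.

# Hypothetical audit of an AI screening tool using the four-fifths rule.
# All counts are invented for illustration.
def selection_rates(outcomes):
    # outcomes maps group -> (candidates screened, candidates passed)
    return {g: passed / screened for g, (screened, passed) in outcomes.items()}

def four_fifths_check(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # A group fails the heuristic if its pass rate is under 80% of the best.
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Strong incumbent employees interviewing for their own jobs should pass
# at similar rates; a large gap is the red flag the HR director spotted.
audit = four_fifths_check({
    "men":   (100, 60),   # 60% pass rate
    "women": (100, 20),   # 20% pass rate; 0.20 / 0.60 = 0.33, under 0.8
})
print(audit)  # {'men': True, 'women': False} -> flag the tool for review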

Patti Harman (18:16):

Right. I mean, you hear about bias in AI and in other things. This is just such a tangible example of what that could look like.

Jennifer Wilson (18:26):

Yeah, that was a very smart HR director who caught that before it spiraled into an actual bias claim.

Patti Harman (18:38):

Yes.

Jennifer Wilson (18:38):

Yeah, it could have been. So yeah, because of all of these, we know there are new and emerging risks, and insurance companies are starting to look at language and consider how they can cover the AI-specific claims. I think in the future we may get to the point where we see standalone AI policies, because the coverage as it stands doesn't contemplate the breadth of AI-related claims. Take the HR example I just gave you. If claims resulted from that tool, they would be alleging discrimination, failure to hire, or, if it was happening internally, failure to promote, and those claims would fall under an employment practices liability policy. I don't know that line too well, but my guess is the employment practices liability policy doesn't contemplate tech E&O, and that's where the tool sits, in the tech E&O.

(20:07):

On the flip side, the tech E&O policy excludes employment-related matters. So that type of claim falls right smack in the middle of two different policies. Another example would be AI software for an automobile that causes a 10-car pileup that results in fatalities. Well, that's bodily injury, and tech E&O isn't intended to cover bodily injury, but the auto liability policy isn't intended to cover E&O. Right? So what I hope is that insurance companies are going to put together standalone AI policies that cover the whole gamut of AI-related claims.

Patti Harman (21:04):

Right? Just listening to you explain all of those, I'm like, oh my goodness, I never thought of that. And you're right, because how do you buy insurance to insure your AI? And there's so much pressure on companies today to implement it across their entire ecosystem. How does the implementation of AI in a company change its risk profile, then? Because think about it: you're using it to collect information, you're using it to underwrite policyholders, you're using it for claims. There are just so many different ways. So how is it changing their risk profiles?

Jennifer Wilson (21:48):

It's so interesting, because the level of AI dictates the risk level from an insurance perspective, but from a marketing perspective, the more you tout AI technologies in your business model, the more attractive you are to customers and consumers. So companies that are barely using AI of any sort whatsoever are plastering it all over their websites: we are using AI in our business operations, AI this, AI that. And from the broker's perspective, we're saying temper that down a little bit, because you're not really using AI to the degree you're claiming and you're just increasing your risk. The more AI you utilize in your business operations, the less attractive you become to insurers, because it increases your risk. Right? So insurers look at AI risk in three different buckets. One is general AI consumption, which covers the plug-and-play solutions.

(23:01):

Think of ChatGPT. These are tools that are embedded into a SaaS product, for example. This is the lower-level exposure and something most companies deploy already. The next step up is AI integration. This is where you integrate AI tools into your internal platform, and this brings another level of exposure because it increases your need for data security. And then finally, you have AI development. This is when you're building your own generative AI model on your proprietary data, and this is the highest exposure. So that's how insurers are looking at it. As a business, you want to make sure that you are identifying your level of AI risk when you're going out for insurance.
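
Jennifer's three buckets can be summarized as a simple tiering. The Python sketch below is illustrative; the yes/no questions stand in for a real underwriting questionnaire.

# Sketch of the three AI exposure tiers described above (illustrative).
from enum import Enum

class AIRiskTier(Enum):
    CONSUMPTION = 1  # plug-and-play tools embedded in SaaS (think ChatGPT)
    INTEGRATION = 2  # AI tools wired into internal platforms and data
    DEVELOPMENT = 3  # building generative models on proprietary data

def classify(builds_own_models: bool, integrates_into_platform: bool) -> AIRiskTier:
    if builds_own_models:
        return AIRiskTier.DEVELOPMENT   # highest exposure
    if integrates_into_platform:
        return AIRiskTier.INTEGRATION   # elevated data-security needs
    return AIRiskTier.CONSUMPTION       # lowest, most common exposure

print(classify(builds_own_models=False, integrates_into_platform=True))
# AIRiskTier.INTEGRATION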

Patti Harman (23:54):

Are there tools available then to help them kind of identify what some of these risks are?

Jennifer Wilson (24:01):

There are a lot of tools out there. We talked earlier about how insurance companies are using AI tools to identify the privacy exposures that a company has. There are other tools out there that identify vulnerabilities. So insurance companies and brokers are using external risk scans to identify vulnerabilities of their insureds. We use them to help our clients understand what their risk profile looks like and what their exposures are, and to help them address any vulnerabilities in advance, before we even submit them to market. The insurance companies are looking at this to understand what the vulnerabilities are, and this can impact price, coverage availability, and coverage options. But threat actors are using those same external risk scans to identify vulnerabilities of their potential targets. So it's interesting that we're all using these tools for different purposes, and then plaintiff attorneys are using tracking scans to identify targets for their non-compliance litigation. Everyone is finding a way to use these tools to get what they want. But if you're not looking at the results, or you're not using the tools internally, you should be, because you want to stay ahead of it.
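
To give a flavor of what an external risk scan observes, here is a toy Python example that fetches a site's homepage and notes which common security response headers are missing. Real scanning products check far more (open ports, TLS configuration, leaked credentials), and the URL below is a placeholder, not a real target.

# Toy version of an "external risk scan" (illustrative only).
import urllib.request

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def scan(url):
    # Everything checked here is publicly observable, which is exactly
    # why insurers, brokers, and threat actors can all run the same scan.
    with urllib.request.urlopen(url, timeout=10) as resp:
        present = {k.lower() for k in resp.headers.keys()}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

missing = scan("https://example.com")  # placeholder target
print("Missing security headers:", missing or "none")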

Patti Harman (25:48):

Wow. This has been such an eye-opening conversation for me, and we've covered a lot in the last few minutes. Is there anything that I haven't asked you that you think our listeners should know or be aware of regarding AI, cybersecurity or cyber crimes? Just anything that you want to make sure they take away from this conversation.

Jennifer Wilson (26:12):

I think the biggest takeaway for me is that cyber impacts multiple sectors of a business. It's no longer just about cybersecurity. Organizations should be viewing cyber through a wide lens. They should be looking at, yes, cybersecurity, but also at their business practices, their consent practices, and their employee training. Are they training employees and alerting them to phishing attacks and how they've evolved? And yes, they should be looking at their insurance and risk transfer, and their contracts with their vendors. Think of the Change Healthcare event: do you have the proper language in there so that if one of your dependent vendors has an attack, you have protections? And compliance and legal: the SEC came out with cybersecurity disclosure requirements, and you want to make sure that you're in compliance with those disclosure requirements. So keep in mind that cyber is one area where the exposures continue to evolve, and as those exposures evolve, everything else must evolve with it.

Patti Harman (27:36):

Wow. Thank you so much, Jennifer, for sharing your insights with our audience. Thank you for listening to the Dig-In podcast. I produced this episode with audio production by Adnan Khan. Special thanks this week to Jennifer Wilson of Newfront for joining us. Please rate us, review us, and subscribe to our content at www.dig-in.com/subscribe. From Digital Insurance, I'm Patti Harman, and thank you for listening.