Transcription:
Patti Harman (00:06):
Welcome to this edition of the Dig In Podcast. I'm Patti Harman, editor-in-chief of Digital Insurance. The adoption of AI across the insurance industry brings numerous benefits as well as some unexpected risks. It's changing companies' risk profiles while also providing new tools for identifying fraud and cybersecurity threats. As cyber hazards evolve and new technologies affect data exposures and increase the opportunities for threat actors, companies have to be more proactive than ever before about managing these risks.

Here to discuss all of this and more with me today is Michelle Worrall, Global Director of Insurance Product at Resilience. Michelle has over 30 years of experience in cyber insurance, tech E&O, and professional liability insurance, and has an extensive background in claims coverage, underwriting, and product development. Thank you so much for joining us today, Michelle.
Michelle Worrall
Thanks, Patti. I'm so delighted to be here.
Patti Harman
So the hot topic for the day is definitely AI, and I wanted to ask, how is AI changing the rules of data privacy for companies and their customers?
Michelle Worrall (01:22):
That's an excellent question. The use of AI to gather and share personal information presents really the same exposure as pixels, cookies, and other web tracking technology used without transparency and user consent. Simply put, wrongful collection by AI is really old wine in a new bottle. The technology is different, but the exposure is the same.
For example, AI systems scoop up vast amounts of personal data, which may be used for purposes other than what the individual consented to. AI systems can also be vulnerable to hacking, leading to unauthorized access and theft of personal data. We're seeing a resurrection of dusty statutes such as the California Invasion of Privacy Act, a 1967 law meant to prevent unlawful wiretapping that is being used some 60 years later to challenge how websites collect and share user data. Likewise, the Video Privacy Protection Act, which was enacted in 1988 after a reporter published Judge Robert Bork's video viewing habits during his Senate confirmation hearings for the U.S. Supreme Court, has become what we call a blockbuster statute with respect to wrongful tracking. But the driving force behind the use of these statutes is that there is a private right of action, meaning a civil action that can be brought by plaintiff attorneys on behalf of a class of plaintiffs rather than by a regulatory body.
(02:59):
It also allows for the recovery of statutory damages, which can be as high as $5,000 per violation, as well as attorney's fees.
Patti Harman (03:08):
Wow, that is a great overview of some of the risks and some of the considerations with the use of AI. Are consumers worried about how information is gathered and might be used, then?
Michelle Worrall (03:21):
Oh, absolutely. Consumers have a visceral negative reaction to invasions of privacy from an early age. We recall the shame and the anger when a sibling or parent reads or overhears something that's private, whether it's a conversation with a close friend, discovering and reading a diary or journal, or going through text messages. Many times, PII is gathered without consent and shared with ad tech companies to target ads, and frankly, this just adds insult to injury. Flo Health is a prime example of an egregious privacy violation involving AI. The app used AI to track menstrual cycles and sexual activity, make ovulation predictions, and track pregnancy and other health data, and it was used by millions of women around the world. And so you think, well, what could possibly go wrong? Well, Flo Health shared user data through software development kits (SDKs), such as the Meta Pixel, with third parties like Google and Meta.
(04:29):
And then this data was used for AI, machine learning, and targeted advertising, which were key points of contention in this lawsuit. Members of the class action alleged that the sharing of this very personal information occurred despite the app's public privacy policy, which claimed that data would remain private. Ultimately, Flo Health, the developer of the app, and Google settled for $56 million last summer, but the plaintiffs proceeded to trial against Meta, which did not settle, and no surprise, the jury found liability. Meta has appealed, and the damages portion of the trial is still pending. Now you take the negative reaction to an invasion of privacy and combine that with the use of AI, and it's really a double whammy for companies who misuse PII. Statistically, 69% of consumers do not trust AI, according to a September 2025 Gallup poll. They worry about the use of AI to handle personal data and make decisions, and they're concerned that it's being used in an unethical manner. This trust level varies, with younger generations being more tolerant, and also by gender, with women being more concerned than men. And certainly no user trusts AI to make a decision, especially a consequential decision, on his or her behalf. And I suspect that as more AI-related cases go to trial, it will be an uphill battle with juries. They aren't going to like company use of AI that causes harm in any way to an individual.
Patti Harman (06:16):
Right. I was not aware of all of this, and it's interesting because, as I talk to friends, you're scrolling for something on one device and all of a sudden ads for it pop up on your laptop or your tablet or whatever, and you just realize how interconnected all of this information is. I think AI is going to make that even easier going forward.
Michelle Worrall (06:44):
Yes, and absolutely. We all think that, oh, some of the social media platforms are there to help us connect and share information, and really their purpose is to monetize data and sell targeted ads.
Patti Harman (06:59):
Yes, I have seen that firsthand for the last week. I looked at one thing and now that's all that I'm getting ads for, even when I'm playing games. It's just a little unsettling. How important is transparency, then, for companies that are collecting consumers' information?
Michelle Worrall (07:17):
Well, transparency and informed consent from the data subject, which usually means the ability to opt out, are critically important with respect to the collection of PII. These are the foundational principles of data collection. Individuals must have control over how their personal information is used.
Now, many countries include the right to privacy in their constitutions, such as Germany, Chile, South Africa, and Mexico. The U.S., on the other hand, lacks an explicit constitutional right, but we have legal precedent and legislation such as the California Consumer Privacy Act, the Illinois Biometric Information Privacy Act, and GIPA, the Illinois Genetic Information Privacy Act. You've got to love all the acronyms. Those are all state legislation. And then we also have, of course, the federal HIPAA, which protects health information, and that's all legislation that protects PII. Now, in July of 2024, the Texas Attorney General reached a $1.4 billion settlement with Google to resolve claims that Google had unlawfully collected and used biometric data, biometrics meaning fingerprints, facial geometry, and voice, from Texans without their permission.
(08:41):
And it was the largest settlement obtained by a single state in a privacy regulatory action, but we expect more to come. In this particular case, Google collected millions of biometric identifiers, including voiceprints and records of facial geometry, through its AI products and services such as Google Photos and Google Assistant. Texas sued under the Capture or Use of Biometric Identifier Act, or CUBI, and this case really should have been a wake-up call, but unfortunately privacy actions are on the rise. Companies need to adopt a top-down approach with respect to data privacy. There's also exposure arising from a security failure involving properly gathered data that then leads to a breach. Here, however, the rise in data privacy actions arises from company use of PII without permission, and that exposure is entirely within organizational control.
Patti Harman (09:42):
I think a lot of times we just do things without thinking and you don't see what happens like 10 steps further on down the road, so to speak, and how that information can be used.
Michelle Worrall (09:58):
Well, exactly. Think about using a QR code. They're ubiquitous; anytime you go to a restaurant, you're clicking on a QR code. There are QR codes that are static and QR codes that are dynamic, and the dynamic ones are tracking your geolocation data, what you are looking at, and where you are, and then drawing conclusions from that, of course, to sell ads.
Patti Harman (10:26):
Right. Yes. But it's just, again, it's the gathering and how they're using all of that data that's a little bit unsettling. So more and more companies are using chatbots as part of their customer-facing interactions. Are you finding that consumers are comfortable with this change, or maybe not? I know in some cases I find it easier, and in others it's like I just want a real person.
Michelle Worrall (10:51):
Well, absolutely, and there is growing enthusiasm, of course, on the business side to use chatbots to help answer questions, and a growing percentage of e-commerce business-to-consumer companies have implemented AI chatbots into consumer-facing operations. Now, customers are significantly less enthusiastic. I know personally, I feel highly annoyed when a peppy chatbot pops up when I'm trying to gather answers for something. And statistically, more than half of customers do prefer interacting with a human agent. They report that a chatbot often fails to understand the issue, which gives rise to frustration and then ultimately the need to talk to a human, which slows down the entire process, leading to even more frustration. Now, from an age standpoint, millennials and Gen Z are more comfortable sharing information with a chatbot compared to older generations; about 60% of boomers and Gen Xers prefer talking to a human representative, and they cite more concerns because they're not interested in sharing information with AI.
(12:03):
And the efficiency of a chatbot definitely depends on the situation and the complexity of the issue. Chatbots may be effective with respect to something that's low complexity, but issues of higher complexity or more sensitivity often require a human touch. Now, companies who utilize AI chatbots need to recognize that under the law, a chatbot is an agent acting on behalf of the company that has deployed it. The chatbot's conduct, whether it's giving incorrect information or hallucinating in its advice, will bind the company. So a chatbot is not considered some rogue independent contractor acting outside of the company. If a chatbot gives incorrect information that a customer relies upon to his or her detriment, the company is responsible and cannot point a finger at the negligent chatbot.
Air Canada is a great example. It lost a small claims court case over advice its chatbot gave to a passenger about a bereavement fare, which was incorrect. We know that, again, AI is prone to incorrect answers and hallucination, and inexplicably, Air Canada tried to dispute the advice provided by its own chatbot. Of course, it was found liable and was required to pay the full bereavement fare. And as an aside, it is important to note that these emerging state AI regulations do require full transparency when AI is being used to engage with website visitors.
Patti Harman (13:45):
That was one of the questions that I had and wondered whether or not there was liability if your chatbot gives misinformation. So thanks for clarifying that. What other data privacy risks come into play then when companies are using chatbots or other technology to collect customer information? And by that I mean are there legal issues? And you have mentioned some state and federal regulations, so what are some of the other risks that their use could encompass?
Michelle Worrall (14:16):
Well, yes, I already mentioned the privacy laws that are already in place. Now, Europe has the EU AI Act, which requires a risk-based approach to the use of AI. So AI absolutely cannot be used to make a consequential decision that impacts a person, and that could be in the area of housing, such as denying a mortgage application, education, job hiring, medical diagnoses, mental health counseling, or insurance decisions such as decisions about the claims process. So we are seeing all sorts of AI laws being passed. Again, we've got a patchwork quilt in the United States, with all sorts of states starting to introduce AI regulations that will have significant impacts on companies that either develop or deploy AI.
Patti Harman (15:23):
It'll be interesting to see what happens in the courts too, because if you have a company that operates in multiple states, they're going to have different regulations that they have to respect and adhere to across the country, which could be very interesting.
Michelle Worrall (15:42):
Oh, absolutely. Companies have to set the bar high. They have to set a high watermark with respect to network security, privacy, and AI. If they set this high watermark, then they will not have to constantly create new governance and requirements internally when new laws are passed. We saw that with the GDPR taking effect in Europe in 2018. And so I think that's what companies absolutely need to do, and they need to think from the top down and consider having a chief AI officer to help mitigate the exposure.
Patti Harman (16:29):
Right. Yes, that would be a really good plan. We're going to take a short break. We'll be back in a few minutes.
Welcome back to the Dig In podcast. We're chatting with Michelle Worrall, Global Director of Insurance Product at Resilience. So we've been talking a lot about AI. How is AI changing cyber coverage?
Michelle Worrall (16:54):
Well, this is an exciting time to be in insurance because AI is having a significant impact. Insurers are addressing the silent AI exposure that exists in cyber coverage agreements as well as other policies, and either affirmatively covering AI or excluding it. And there is immense pressure from the insurance brokerage community for insurers to address AI for contract certainty. And certainly before we can do this, of course, insurers need to identify, anticipate, and quantify AI risk, which is starting to emerge. I already talked about some of those emerging statutes, and we already know that AI is supercharging privacy exposure. Insurers need to determine what risk is within appetite and how to price for it, and of course, against the backdrop of the soft market, which really has no bearing on cyber, and that is significant. So this is a heavy lift for carriers. We also need to have underwriting guidelines in place and supplemental AI applications to better identify risk.
(18:02):
Once this framework is in place, there is the need to draft policy language, which is where I come in. There's this wonderful quote from Jack Clark, co-founder of Anthropic, which makes the foundation AI model Claude, and he said it best: "We should be technologically optimistic and appropriately afraid." And with respect to the exposure, insurers are rightfully concerned because many policyholders do not have a handle on their AI. They don't have proper governance in place to legally and ethically govern and monitor whatever they're doing with AI. The big unknown is the exposure from these emerging, developing AI-specific regulations. The EU AI Act, which I already identified, will be fully applicable in the summer of 2026, as will the landmark Colorado AI Act. As for coverage expansions, from an insurance standpoint, if the market determines, okay, we're going to expand coverage to these AI-specific regulations, which are not silent AI, it will require an expansion of coverage, and carriers need to decide, do we want to undertake this significant exposure?
(19:23):
And if so, I anticipate that coverage expansions will be subject to additional premium and likely supplements until there is an enhanced comfort level. And with respect to cyber policies, they have often become what I call the dog's breakfast in so many instances, in that the insuring agreements are extremely diverse and keep expanding. And I think sometimes there are questions: does AI really belong in the cyber policy? Now, ISO certainly doesn't think it belongs in the CGL policy with respect to products-completed operations or with respect to the personal and advertising injury coverage grant within Coverage B of the CGL. And my understanding is that ISO has filed absolute AI exclusions for CGL policies, so that's going to put more pressure on tech errors and omissions and cyber policies to pick up this exposure. We're also seeing AI risk arising from companies' business operations: if they're using it, they have significant and growing exposure to network security threats such as cyber extortion and social engineering fraud, because threat actors are becoming more efficient through their use of AI. And then, as I mentioned before, privacy is a very strong theme. Companies have a privacy liability exposure if they don't protect PII, if they gather it through AI without permission, or if they use PII that they've gathered for a different purpose to train their large language models.
Patti Harman (21:03):
Wow, that's a great segue to my next question. I wanted to know if you were seeing new risks because of the ways that threat actors are using AI now?
Michelle Worrall (21:13):
Well, AI is being used to supercharge and facilitate attacks by making it easier and faster for threat actors to carry out traditional methods of cyberattack. So we're seeing more sophisticated social engineering and deepfakes through video and audio, and the use of AI is also accelerating the discovery of network vulnerabilities and software vulnerabilities, and then the exploitation of those vulnerabilities. AI may also lower the barrier to entry for wannabe threat actors and increase the sophistication of current cybercriminals. Now, at Resilience, our network security team expects to see increased automation and adaptability of network attacks through the use of AI. They also think that AI malware will be used to automate all phases of the cyberattack lifecycle: reconnaissance, scanning, gaining access, elevating privileges, delivering a larger payload of, for example, ransomware, and then using AI to cover the threat actors' tracks. So AI will drastically reduce the time it takes to launch an attack.
(22:29):
And AI is also being integrated into the malware itself, creating intelligent malware and ransomware that can learn from the target's defense mechanisms and then dynamically alter its behavior to evade detection. Now, you just let that sink in, and it's frightening. And AI deepfake technology is making it increasingly difficult to identify social engineering fraud through highly realistic and personalized deepfake voice and image impersonation. Video impersonation allows for adaptive and real-time interactions. Additionally, email phishing attacks have become more polished. They don't have the misspellings and are more effective, as generative AI can draft convincing email messages after automating the research process into the target. So they'll use AI to really investigate the target by scanning social media and other public information about their target. Now, while employees, who are often the first line of defense, have become adept at identifying misspellings and issues with suspicious email addresses, and yes, they can spot the fake Nigerian prince from a mile away, the ability to spot an AI deepfake video is well outside the ability of most individuals.
(23:57):
Now, I received the most excellent advice from an attorney in Chicago with expertise in all things cyber, and he shared with me two tips which I think are very helpful for identifying a deepfake video. The first one, which I think is just fantastic, is to direct the person asking for the wire transfer, for example, to turn around. Deepfakes are two-dimensional, and the threat actor, the fake, the synthetic media cannot turn around, cannot rotate, cannot do the hokey pokey, as we were saying. And secondly, it's important to have a prearranged code word or safe word that is not written down anywhere and that only the key employees who transfer money know. Now, while we haven't experienced this yet, we have heard of claim scenarios where the negotiation of a cyber extortion attack is carried out by an AI agent on behalf of the threat actor.
(25:02):
Now, we know that AI has a goal but no soul and is interested in landing at a final number quickly and solving the task; with respect to cyber extortion, they just want that money in hand. Now, AI may become frustrated with the back-and-forth negotiations with a human negotiator and certainly will not be moved by any sort of emotional arguments. Tactics like "I need to take a break" or "we need to see if we can get a loan" are not going to be effective. But importantly, AI doesn't need to rest and can go at it 24/7, unlike human negotiators. So I think extortion negotiators who represent companies facing cyber extortion may need to fight fire with fire by utilizing AI to better negotiate with AI threat actors.
Patti Harman (25:57):
That was just a fascinating explanation and really scary. And as you were talking, one of the things going through my mind was, and people think that insurance is boring; it's like they have no concept of all of these different things. So last year we had an event and our media team created a deepfake of me, just 30 seconds long, based on a very short interview I had done. I'd been trying to explain to my mom what deepfakes were, and I played it for her, and she looked at it and she said, "I'm your mother. You're right here with me. I can't tell the difference." And that is really scary when you think of how all of these videos and audio recordings could be used going forward. So I loved the suggestions from the attorney because they're very simple and very practical, but easy to implement for sure.
We've touched on this a bit, but does the adoption of AI by companies open them up to new risks then? I think you mentioned this, protecting proprietary data and other things, but are there other risks that they should be aware of as well?
Michelle Worrall (27:14):
Oh yes. Use of AI by companies does open them up to new risks, particularly if these companies don't know how their AI is being used or how it may degrade over time and skew results. And this is the black box nature of AI: we really don't know how deep learning systems make their decisions. Within an organization, there's also what we call shadow AI exposure, and this arises from unauthorized use of AI by employees. And it's frightening to note that a Gallup survey from just this year found that only 30% of companies have internal AI guidelines. This is good news, I guess, in that it is up from 10% a year ago. Companies are under immense pressure to adopt AI, and they might not have the governance framework in place to monitor and govern the use of such a dynamic resource.
(28:19):
It's one that is constantly changing, and because of the expense of developing AI, most companies do not develop their own; rather, they lease or purchase the technology from a vendor. As a result, there are all sorts of unknowns with respect to the way the foundation AI model was developed. Importantly, companies should also use a secure, customized AI environment hosted on dedicated infrastructure, either on premises or in a private cloud, rather than a shared public AI, because if you put in confidential information, suddenly you have violated confidentiality requirements to clients and are also compromising PII. This helps ensure that sensitive data remains confidential and under company control, which helps address concerns regarding data privacy, security, and also regulatory compliance with all of these state AI laws that are emerging. And clearly, the many regulatory statutes both in the U.S. and abroad create significant exposures for companies that use AI to interact with customers or use AI to make decisions that, as I already talked about, have a consequential impact on the customer. This is both the EU AI Act and Colorado's AI Act, and again, you cannot use a deep learning system where the potential for consumer harm is high: finance, employment, housing, insurance decisions, and criminal justice. Now, California, on the other hand, in part because of the intense tech lobby, has a more safety-based AI approach that's based upon significant financial loss and harm to consumers from AI.
Patti Harman (30:12):
Are companies at least able to use AI to better identify and maybe manage some of their cyber threats then?
Michelle Worrall (30:20):
Yes, security companies are rolling out their own AI agents to quickly analyze data, draw conclusions, and even take predefined actions, the goal being faster, more accurate outcomes to help keep up with the accelerating pace of network security attacks.
Now, at Resilience, we use algorithms to predict the likelihood and severity of potential exposure across our portfolio of risk. We're using large language models to better understand the security postures of our clients. Now, this one's really interesting, and you know how I feel about data privacy: we are also using AI to closely review a client's public-facing privacy policy and compare those results against a scan that identifies that company's use of pixels and other tracking technology. From a privacy standpoint, we want to make sure that the client's privacy policy lines up with its actual behavior, and we have found that sometimes they line up, but sometimes our clients say in their app that they are not using pixels, yet we find out that they are using pixels to track customers on their websites. And we also use large language models to better understand trends within our own data, which we then use to inform our policyholders of the importance of actions and security controls that we recommend.
Patti Harman (31:46):
We've spoken a lot about different types of legislation that are out there. How could AI legislation help carriers maybe with mitigating some of these cyber risks?
Michelle Worrall (31:57):
Well, AI regulations are going to force the hand of companies. They're going to require them to take their risks seriously and comply with guidelines or face expensive regulatory proceedings, fines, and sometimes even more expensive reputational damage. Companies will have to adopt the resources necessary, in other words, spend the money, hire a chief AI officer, and adopt a governance framework to identify, monitor, manage, and make transparent their AI risk, because so many of these statutes require transparency about the data training sets, impact statements, how AI is being used, and how it's responding with respect to bias, discrimination, transparency, and data quality. Some of these statutes require both internal and external audits. And again, we already talked about the statutes that prohibit consequential decision-making, so there are these very high standards that are required on a state-by-state basis. Companies must also mandate the same AI scrutiny of their vendors, because many of them, again, have purchased or acquired the technology from their vendors, and a company is only as strong as its weakest AI vendor. I think the legislative mandate of AI maturity will make this exposure significantly more tolerable for insurers. Now, it's going to take some time. Companies will have to comply, as we talked about with the high watermark, with the most rigid requirements of the states or countries where they're doing business, and state and international legislation is absolutely forcing the issue of AI responsibility, which insurers simply can't drive given the soft insurance market.
Patti Harman (33:55):
I've been covering the insurance industry for probably about 30 years, and I've been covering cybersecurity for well over a decade, and I want to know, are you seeing more complex cyber claims? From where I'm sitting, I think that they have definitely changed, but I'm thinking about this in terms of network or cloud outages, attacks on infrastructure, or even a change in the data security risks. Are those types of claims becoming more complex now?
Michelle Worrall (34:28):
Well, we've certainly heard a lot about the AWS system failure, and there was a very prominent vendor last year that also had a system failure, and so many companies are interconnected now. Most cyber policies do have very broad exclusions regarding attacks on infrastructure because the cybersecurity market simply cannot bear a significant accumulation loss. That's why, for example, I don't know if you've talked about it on your show before, but the war exclusion has become so prominent over the past several years. At Resilience, fortunately, we have not experienced more complex cyber claims, but we're seeing significant financial damage from successful cyberattacks, and that has increased according to our 2025 midyear cyber risk report, which is available online to anyone who's interested. Ransomware attacks now average more than $1.18 million in damages, which is up from $705,000 in 2024. And again, it's the evolution of ransomware tactics and the use of AI.
(35:48):
And now we're finding out that threat actors are dwelling for longer periods of time on the network. They are actually locating the victim's cyber policy and aligning their extortion demands with policy limits. Double extortion is also a standard practice, with criminals demanding payment for data decryption and to prevent public release of such data, which is often PII or trade secrets. I've also heard of triple extortion, where you add to the double extortion the threat that the cyber extortion, the actual payment, and the data release will be sent to the media for publication. So that's triple extortion. Social engineering attacks were responsible for 88% of our losses, with AI-powered phishing attacks achieving a 54% success rate compared to just 12% for traditional attempts. This is a startling development, and we know, again, as I mentioned, that cybercriminals are using AI to create more convincing phishing campaigns and voice and video synthesis for social engineering fraud.
Patti Harman (37:05):
It's enough to make your head spin when you think of all the things they can do and how creative they are. So what risks or issues are you watching over the next six to 12 months? Is there anything in particular that you expect to change or evolve, or things that you're seeing now that maybe aren't as readily identified by the average businessperson, but that they need to at least be keeping an eye on for the future?
Michelle Worrall (37:38):
Well, I think, again, network security exposure such as cyber extortion is not going away, because it is financially lucrative for the threat actor, nor is social engineering fraud with respect to the synthetic media and trickery being used for wire transfers. And we anticipate these exposures to increase with threat actor adoption of AI and, again, the lowering of that threshold to entry for new cybercriminals. Now, that being said, companies that use AI as part of their network security arsenal may be better protected if threat actors are using AI to compromise networks. Companies need to work with vendors and their own IT teams to shore up their defenses in this very high-stakes game of cat and mouse. And again, an ongoing theme: non-breach privacy class actions are likely going to continue as well. We don't know what's going to happen with this, but California recently adopted a one-click privacy statute that goes into effect in early 2027. So companies really need to better manage PII and show restraint with respect to collecting web and application visitor data without consent.
Patti Harman (38:59):
Wow. Yes. They will have to be very careful for sure. As we wrap everything up now, is there anything that you want our listeners to know or consider when it comes to managing the risks associated with cyber claims?
Michelle Worrall (39:13):
As always, the first line of defense continues to be well-trained employees who exercise discipline with respect to their email and show restraint with respect to clicking on suspicious links, or what we call clickitis. We don't want them to have that. It's also important to practice proactive risk mitigation, such as the use of multifactor authentication and always updating software. At Resilience, we assist with risk mitigation by sending security notification reports almost on a daily basis to customers within our risk operations system to alert a policyholder of vulnerabilities. Now, as we move into the holiday season, it's critically important to note that threat actors attack during the holidays because organizations are more vulnerable due to reduced staff and slower response times, and threat actors have the ability to operate undetected before the organization can mount a response. Social engineering tactics become more effective because employees, again the first line of defense, are often rushed or distracted.
Patti Harman (40:22):
Thanks for mentioning the holidays, because I think everybody kind of lets their guard down over the next couple of months, and think of how much time people spend online. And you're right, when you're talking about clickitis, it's really easy to say, oh yeah, let me do this, and you end up somewhere you didn't intend.
Michelle Worrall (40:41):
Yes, and employees certainly should not be using their employer's devices for holiday shopping, because oftentimes malware is embedded in some of these shopping sites, so it's important to be very careful.
Patti Harman (40:56):
Well, great. Thank you so much, Michelle, for joining us today. I learned so much. This was just fascinating for me. Thank you for listening to the Dig In podcast. I produced this episode with audio production by Wen-Wyst Jeanmary. Special thanks this week to Michelle Worrall of Resilience for joining us. Please rate us, review us, and subscribe to our content at www.dig-in.com/subscribe. From Digital Insurance, I'm Patti Harman, and thank you for listening.