Artificial intelligence is changing the pace of cyber risks and how companies defend against them. Keeping employees a strong line of defense against emerging threats requires a flexible approach to training.
Nicole Jiang is co-founder and CEO of Fable Security, which focuses on managing human risk in the cybersecurity space. Previously, she was a founding team member and head of product at Abnormal Security, and she has held product and engineering roles at Mixpanel, Microsoft, Palantir Technologies and Pixlee.
While technology is changing how many risks are identified, humans play a key role in mitigation, and a bespoke approach to training ensures that employees have the knowledge required to be proactive.
In this Q&A with Digital Insurance, Jiang shares insights on how Fable is using AI to customize the training experience to individual employees and the threats most likely to target them.
Cyber risks continue to evolve, and AI is changing the ability to identify them sooner. How are you using AI to help employees better recognize different types of cyber risks?
At Fable, we use AI in three ways to help employees recognize and act on different types of cyber risks:
- Identify and measure employee risk, such as weak MFA hygiene, outdated OS software, and exposed data, so security teams can prompt people to improve their security posture.
- Synthesize threat data, applying intelligence to the incoming stream to project which threats will target particular companies and types of employees.
- Auto-generate hyper-relevant, mass-customized phishing simulations, TikTok-style briefing videos, and crisp nudges on-the-fly based on either risky behavior or targeted threats.
Are there certain types of attacks or scams that you're finding humans are more vulnerable to? e.g., phishing, deepfakes, etc.
Yes, and it's not just traditional phishing anymore. We still see high vulnerability to phishing itself.
We're also seeing growing exposure to executive impersonation, malicious browser extensions, and deepfake-enabled social engineering, including voice cloning used in financial authorization scams. What makes these attacks effective isn't necessarily technical sophistication — although they are way more sophisticated than in the past. Rather, it's that they exploit urgency, authority, and distraction. Even experienced professionals can fall for them when they're busy.
As AI risks change, how does your training for humans adjust to help them become better risk managers?
Today's landscape doesn't just require a minor adjustment in how we train people. We have to completely upend the model. Legacy training, which is generic, static, and one-size-fits-all, is irrelevant to people and easy to ignore. For it to work, it needs to be targeted, super short, and delivered just-in-time so it engages people. And of course, security teams need to measure if it's working and adjust it if it's not.
Here's what I mean: One of our financial services customers recently ran a campaign to address a data handling issue. They're a pretty big target, and once threat actors penetrate a company's systems they can reach wayward data through lateral movement, so data hygiene is incredibly important. The security team had found a bunch of PII alerts in their system, which they traced back to a developer observability tool, and then to a bad software code parser, which about 150 of their nearly 1,000-person engineering team was using. Instead of broadcasting a generic warning to the entire engineering organization, they targeted the 150 with a 90-second, AI-generated Fable video briefing campaign. It was highly specific, along these lines: "Hi Bob, you've inadvertently logged PII to [application]. This exposes sensitive customer data and puts us at risk of violating privacy laws. Here's the process for remediating, and here's what to do in the future."
Recipients said they appreciated the crisp, to-the-point message, and in the first month the company saw a 60% decline in these alerts, with 100% remediation and zero recidivism in the months since. Yesteryear's training paradigm would have had all 1,000 developers (or, worse, everybody in the company!) take a generic training that might not even have mentioned the particular issue. People would have tuned it out, and they'd still be seeing those data violations.
One last thing I want to mention about human risk campaigns: you must measure and validate! One critical capability we baked into Fable was the ability to measure not just campaign engagement (Did they watch the video?) but action (Did they stop logging PII to observability tools?). That kind of closed-loop validation is essential, so you go from hoping to knowing you changed behavior — and reduced risk.
What have been some of the more unusual scams or attacks that you've seen a company encounter? How do you help them recognize and protect against them?
One incredible scam we've been seeing across our customer base (and even experienced ourselves!) is fake candidates trying to get a job. Here's an example: On paper, the candidate looked solid, with relevant experience, a coherent portfolio, and a strong early technical screen.
During the virtual onsite, it became apparent that something was very wrong. The candidate had a straight out-of-stock-photography virtual background. His WiFi was super-glitchy and there was background noise that sounded a bit like a call center. But it was the VPN slip that gave it away: Our Zoom logs showed his IP address hopping from the U.S. to Germany to Vladivostok, a Russian border city not known for its thriving remote-tech workforce. Needless to say, we passed on the candidate! But you'd be surprised—there's a thriving industry of folks from places like Russia and North Korea applying for jobs in our technology, financial services, healthcare, and other critical industries—especially in roles like IT and development, where they'll have outsized access to our business-critical systems and source code.
What should businesses keep in mind when they're trying to ensure that their employees are well-versed in cyber risks and how to protect against them?
I think the big takeaway here is that awareness is not enough. When security teams stop at measuring training completion rates and phishing failures, they're not measuring the right behaviors…or risk. Also, one-size-does-not-fit-all! Generic training doesn't work. People tune it out and that leaves security teams with a false sense of safety.
Here's what I recommend:
- Target training based on real, observed risk signals.
- Measure behavior change, not just engagement.
- Reinforce positive behaviors (like reporting) rather than only punishing mistakes.
- For requested behaviors around data handling, authentication hygiene, and device posture, close the loop on your campaigns to make sure people didn't just hear you; they actually took action.
When employees see that training is relevant to their work and helps them avoid real-world risk, they'll listen and do what you ask.
What are some of the common mistakes employees make that create greater vulnerabilities or cyber exposures for their firms?
Most mistakes stem from small, usually well-intentioned shortcuts, such as:
- Reusing or sharing passwords
- Ignoring requests to adopt secure technology like password managers or MFA
- Acting on urgent requests to be "helpful" (e.g., resetting an account, disbursing payment)
- Approving MFA push notifications without verifying the request
- Uploading or pasting sensitive content in unsanctioned AI tools
- Installing unvetted browser extensions
Here are some common threads:
- Urgency. Attackers deliberately create time pressure: "This needs to be done in 10 minutes," "The CEO is waiting," "Your account will be locked." When people are juggling meetings, emails, Slack messages, and deadlines, they default to speed.
- Authority. Many scams impersonate executives, IT teams, banks, or vendors. Humans are wired to respond quickly to perceived authority, especially in hierarchical organizations. When a request appears to come from leadership or a trusted system, employees are less likely to challenge it.
- A desire to be helpful. Most employees who fall for scams aren't careless; they're trying to do their jobs well. They want to fix problems, move quickly, and support colleagues. Attackers exploit that instinct.
What concerns you the most about the use of AI to perpetrate cyberattacks?
Enterprises have always faced cyber risks, but AI changes the speed, scale, and personalization of attacks. Whereas just a few years ago, targets were hit primarily with poorly written, easy-to-spot lures over email, today they're targeted in dozens of realistic, sophisticated ways: over email, phone, messaging apps like WhatsApp, workplace tools like Slack, even over Zoom. AI makes cyberattacks especially nefarious because with it, attackers can do deep research on key people on the cheap, create hyper-realistic lures (fake tool sign-in pages, MFA reset pages, deepfake voices, etc.), and run sophisticated A/B experiments at scale to see which campaigns work best. Given how much employees are juggling, it's especially hard for them to take a beat and ask themselves whether a legitimate-seeming request is real.
What opportunities do you see for companies to proactively use AI to defend against cyberattacks?
Enterprises can fight fire with fire. AI can detect anomalous behavior faster; identify which employees are most at risk; predict which emerging threats are likely to target specific industries or roles; and automatically generate highly relevant simulations and briefings.
It can also help security teams prioritize their efforts by focusing on the intersection of technical risk and human behavior. The real opportunity is combining AI-driven detection with AI-driven behavior shaping.
Is there anything else that our readers should know about using AI to protect against cyber risks?
One thing I want to make clear because I believe it strongly, and we built Fable around this principle: AI doesn't replace humans; it elevates their role. As automation improves, the human becomes both the last line of defense and the most adaptable control in the system. AI-enabled attackers will continue to innovate, but so can defenders.
The organizations that win won't be the ones with the best or longest policy manuals. They'll be the ones that:
- Continuously adapt to emerging threats
- Measure and act on real risk
- Close the loop and measure success by validating actions
- Treat employees as security partners, not liabilities