
AI Deepfakes and Phishing: How To Upgrade Your Employee Training for AI and GRC


[Image: blurred, glowing email envelope icons in various colors floating against a dark background]

Traditional cybersecurity hasn’t kept pace with AI. Generative AI tools are producing phishing emails that are more advanced, harder to detect, and more effective than ever before, and AI deepfakes can impersonate humans with alarming accuracy. That means even cutting-edge cybersecurity tools may not be enough to safeguard your operations from some AI attacks. The best defense is a workforce that knows how to spot and respond to a fake.  

To strengthen your risk management strategies, you need more than technical safeguards. You need employees who can identify deception, flag anomalies, and make informed decisions before damage occurs.

This article explores how AI awareness training improves cyber risk resilience, examines a real-world deepfake attack, and outlines practical steps for strengthening human-layer defenses within your governance, risk and compliance (GRC) programs.

Deepfakes and Phishing: Smarter (and More Effective) Than Ever

Generative AI (GenAI) can now personalize and imitate content with such accuracy that distinguishing reality from fraud has become a genuine challenge. For example, AI tools create deepfakes by generating audio or video based on real people, impersonating executives or other stakeholders and convincing employees to perform fraudulent acts. Even more concerning, deepfake-based fraud incidents have increased by 1,740% since 2022.

AI can also be used to create compelling phishing messages that are custom-tailored to your organization. Employees are likely to respond to an email that looks like it came from a manager or executive, and one recent study showed that 60% of participants fell for AI-automated phishing attempts. That’s roughly the same response rate as emails written by human experts, showing that many teams need better training on how to detect an AI-driven attack. 

A Real Example: How Arup Was Fooled by a Deepfake

Arup is proof that even the most sophisticated technical enterprises can fall prey to a deepfake. In February 2024, an employee at this UK-based engineering firm's Hong Kong branch joined what appeared to be a legitimate leadership video call, during which executives requested multiple financial transfers. The voices, mannerisms, and cadence all looked and sounded real.

It wasn't. The meeting was a deepfake. By the time the firm realized the threat, the employee had already sent US$25 million to the cybercriminals' requested accounts, leaving the company with significant financial damage.

As CIO Rob Grieg explained, Arup has faced thousands of cyberattacks, but this social engineering attempt didn't fit the usual profile. Rather than hacking into the network and exfiltrating sensitive data, the attackers simply tricked the employee into believing they were someone else. Bad actors have used this tactic since the beginning of time; as Grieg notes, the technology behind it has just become more sophisticated. That's why AI governance and enhanced risk education must become core elements of modern risk mitigation.

Three Steps for Better AI Awareness Training

Grieg prefers to call deepfakes “technology-enhanced social engineering” instead of a cyberattack. The distinction shifts the focus away from software solutions and towards the same strategies that employers have always adopted to stop a social engineering attempt. AI-enabled threats demand new training tactics that align with regulatory requirements, internal controls, and existing risk assessments. Add these elements to your AI governance framework:

  • Host simulations. Practice makes perfect when spotting a fake, so equip your team with resources such as exercises or simulations demonstrating AI’s most common tells. Examples include awkward lighting or voice glitches in the case of deepfakes, or strange language, font types, or link errors in the case of a phishing attempt. 

    Strong simulations reduce third-party risks, fraud, and policy management failures by preparing employees to validate identity before acting.
  • Think critically. Was that transaction request consistent with the rest of your operations? Is there any information you could request that only the real person would know? A healthy skepticism can help you identify a deepfake or social engineering attack, so do a “gut check” and think critically about whether a request is real or a fraud. 
  • Resist reactions. Just like other social engineering attempts, AI deepfakes and phishing often prey upon your emotions. The most successful attacks give their victims a false sense of urgency or compel them to act out of fear, so build a culture where employees pause and verify before approving a transfer, sharing credentials, or bypassing internal controls.
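The pause-and-verify habit described above can also be reinforced in tooling. As an illustrative sketch (the function names, threshold, and verification channels here are hypothetical, not drawn from any specific product), a transfer-approval helper might refuse to proceed until an out-of-band identity check has been recorded:

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    # Channels through which the requester's identity was independently
    # confirmed -- e.g., a callback to a number on file, NOT the same
    # channel (email, video call) the request arrived on.
    verified_via: set = field(default_factory=set)

# Hypothetical policy: transfers above this amount need out-of-band verification.
OUT_OF_BAND_THRESHOLD_USD = 10_000

def may_approve(req: TransferRequest) -> bool:
    """Approve only small transfers or independently verified large ones."""
    if req.amount_usd <= OUT_OF_BAND_THRESHOLD_USD:
        return True
    # Large transfer: require at least one independent verification channel.
    return bool(req.verified_via)

# A convincing video call alone (the channel the deepfake controlled in the
# Arup case) is not sufficient:
urgent = TransferRequest("CFO", 25_000_000)
assert not may_approve(urgent)

# Only after an out-of-band check does the request clear the policy:
urgent.verified_via.add("callback to number on file")
assert may_approve(urgent)
```

The design point is that verification must travel over a channel the attacker does not control; a policy like this turns the "pause and verify" training habit into an enforced internal control.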

Where Technology Fits In

While training is essential, automated safeguards reduce reliance on individual judgment alone.

Tools that strengthen cyber resilience include:

  • Multifactor authentication (MFA) for sensitive actions and financial workflows
  • GRC platforms that centralize policies, regulatory compliance monitoring, and incident response paths
  • AI-powered anomaly monitoring to detect unexpected behavior
  • Agentic AI constraints that enforce boundaries on AI autonomy and access
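To make the anomaly-monitoring idea concrete, here is a minimal sketch assuming nothing beyond the Python standard library. Real monitoring tools use far richer models; this hypothetical z-score check just illustrates the principle that behavior far outside the historical norm should be escalated to a human before money moves:

```python
import statistics

def flag_anomalies(amounts, history, z_threshold=3.0):
    """Return the amounts that sit far outside the historical distribution.

    Computes a z-score for each candidate against the mean and standard
    deviation of past transfers; anything beyond the threshold is flagged.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [a for a in amounts if abs(a - mean) / stdev > z_threshold]

# Typical transfers for this (hypothetical) account cluster in the low thousands...
history = [1_200, 950, 1_800, 2_100, 1_400, 1_650, 980, 2_300]

# ...so a routine payment passes quietly, while a $25M request stands out.
flagged = flag_anomalies([1_500, 25_000_000], history)
# flagged == [25_000_000]
```

Even a crude statistical gate like this would have paused the Arup-style transfer for review, which is why anomaly monitoring belongs alongside, not instead of, employee training.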

The right technology doesn’t replace people. It supports smarter human decisions.

Making AI Work for Your GRC Program

Despite the cyber threat that AI deepfakes and phishing can pose, the technology is still invaluable for carrying out your GRC workflows. AI fits best in data-intensive tasks involving high volumes of data or repetitive work, and GRC can have plenty of both. That means there are many applications of AI for GRC, including:

  • Reading complex documents, such as contracts or business proposals
  • Improving data integrity across internal controls
  • Creating GRC plans and documents automatically, improving your productivity
  • Overcoming “blank box syndrome” by offering prompts or templates to help get you started

Human intelligence is still essential for tasks involving emotional judgment, critical thinking, limited data, or legal and ethical concerns. The key is to combine your human and digital workforces to harmonize your AI and GRC operations: evaluate GRC software features to see which can assist with authentication and anomaly detection, then train your employees to apply the intuition and critical thinking that only they have.

The strongest organizations combine human intuition with AI governance, continuous monitoring, and policy management to prevent misuse.

Onspring: For Smarter GRC Processes

From highly accurate deepfakes to personalized phishing attempts, AI is already transforming the way we process information around us. Navigating a world with such compelling deception takes a level head, a keen eye, and experience looking for signs of a spoof. That’s a lot to ask of any employee, so equipping them with the knowledge and simulations they need to sniff out a fake is a must. With the right training and awareness in place, your team can respond to AI-driven threats in a way that minimizes your business risk, keeping AI in its place as a tool that elevates your GRC processes. 

Onspring delivers business process automation (BPA) software that puts AI on your side. Our tools help you generate GRC documents and develop a comprehensive AI governance framework, putting the pieces in place to ward off even the most sophisticated cyberattacks. Onspring's intuitive, flexible GRC platform, coupled with our dedicated customer support, has made us the top GRC software provider in Info-Tech Research Group's Leader Quadrant for five years straight. For more on how to build AI into your operations, download our e-book today.
