AI-powered attacks, such as deepfake phone calls, manipulated emails and altered code, are more sophisticated than you might think. Powered by artificial intelligence, machine learning and deep learning, these attacks are designed to closely mimic legitimate behavior. As a result, many employees may not realize that something’s suspicious until it’s far too late.
Take a deepfake phone call, for example. It comes from a known number, with a voice a team member recognizes. Behind the scenes, an AI model has been trained on real audio samples to replicate speech patterns and tone. The caller engages in conversation as usual, but something seems off. In situations like these, employees need to pause and verify the request through a separate, trusted channel before taking any action.
Technology, such as email content filters and code-scanning software, helps you avoid an AI attack like the one above. But another vital defense is employee training. Teaching team members how to deal with suspicious communications and code can improve regulatory compliance, keeping you on the right side of the law. Learn how to get started below.
What Is an AI Attack?
AI can be both a help and a hindrance to your organization. While tools like ChatGPT and Google Gemini use generative AI to enhance productivity and performance, attackers also misuse them, often for financial gain.
An AI cyberattack is any threat that uses generative AI and machine learning to trick people or breach security defenses. It usually hides in plain sight, appearing as a trusted email, a seemingly legitimate phone call or authentic-looking code.
This makes employee awareness and prevention more important than ever, especially if you work in Governance, Risk and Compliance (GRC). AI attacks often bypass traditional security tools, resulting in legal and regulatory risks that you need to manage. Ignoring the problem could jeopardize your business reputation.
Here are some examples of artificial intelligence-driven attacks:
- Deepfake vishing calls: AI-generated audio mimics a real person’s voice to trick someone into sharing sensitive data. Sixty-two percent of organizations experienced some kind of deepfake attack in the 12 months leading up to September 2025.
- Manipulated emails: Fake or altered email messages created by AI tools often look genuine, but attackers use them for social engineering and fraud. These phishing attacks often appear to come from executives or trusted partners, which makes them difficult to detect.
- Altered code: Malicious code generated by an AI system usually appears legitimate, but it’s designed to compromise systems and steal information. Even AI code that’s not intentionally harmful can have errors or vulnerabilities that may be exploited.
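To make the altered-code risk concrete, here’s a minimal, hypothetical Python sketch of the kind of subtle flaw reviewers should watch for in generated code. It isn’t drawn from any real incident; it simply shows why code that looks legitimate isn’t necessarily safe.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks clean and plausible, but building SQL with string formatting
    # lets a crafted username rewrite the query (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # A parameterized query keeps user input out of the SQL itself.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return the same results for normal input, which is exactly why this kind of flaw slips past a quick review of AI-generated code.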
Traditional Security Training Won’t Cut It
Most cybersecurity training programs teach employees the following:
- How to identify phishing emails with spelling and grammar errors
- How to spot suspicious links and attachments
- Basic password hygiene and device security
While all of the above are still important, they aren’t enough to prevent AI cyberattacks. Deepfake calls, AI-generated emails, altered code and other threats are different beasts. Preventing them requires upskilling so employees understand how attackers use AI tools to manipulate people and cause damage.
For instance, an attacker might use ChatGPT to create an email that mimics your CEO’s writing style and tone of voice, asking your finance department to wire funds for an emergency. Traditional training, which focuses on spotting typos and checking the sender’s email address, might miss this completely.
To combat AI threats, your organization needs training that goes beyond the basics. Employees, regardless of their role, need to adopt a skeptical mindset and recognize the subtle signs of AI deception. By combining awareness with the latest security measures, you can reduce the risk of data breaches and reputational damage.
What Role Does GRC Play in AI Attacks?
Many organizations think that preventing an AI attack is solely the responsibility of IT or cybersecurity teams. This isn’t the case at all. GRC teams like yours also play a critical role, especially when it comes to managing the compliance and regulatory risks that often come from these threats.
GRC bridges the gap between cybersecurity defenses and organizational oversight. While IT teams detect cybersecurity vulnerabilities and respond to threats in real time, you’re tasked with looking at the wider implications of AI-driven manipulation. For example, you should understand how an AI-driven attack could affect internal policies or regulatory compliance in your industry. Doing so can prevent fines and keep you out of legal hot water.
You’ll also want to get involved in employee cybersecurity training programs. When you help team members understand the broader context of an AI attack, they can recognize threats before those threats cause real damage.
What Does AI Awareness Training Look Like?
AI awareness training programs are a relatively new concept, but they’re becoming increasingly important for businesses in almost every sector. Eighty-seven percent of security professionals said their organization experienced an AI-driven cybersecurity attack in the year leading up to March 2025, while 91% anticipate an imminent ‘significant surge’ in these threats. Only 25% say they are highly confident in their ability to detect them.
Here are some of the topics to cover in team training to prevent AI attacks:
- AI attack mitigation: Teach employees across your organization to verify unexpected email or phone requests to mitigate AI phishing attacks and deepfakes. For example, conduct vishing and video-call drills using cloned voices so they can better identify and respond to attacks. IT team members, on the other hand, need to understand that automatically generated or altered code might contain hidden vulnerabilities.
- Information sharing: Teach staff to question unexpected requests, especially if they seem urgent or emotionally manipulative. They could use verification methods, such as safe words or personal knowledge questions, to confirm identities before escalating suspicious communications via the right channels. A ‘verify-first’ mindset is imperative.
- Regulatory and legal implications: Highlight the consequences of AI-driven attacks on your business as a whole and how they can affect internal policies and regulatory compliance. You’ll want to explain how cyber threats might violate data protection laws or lead to legal action and fines.
- Incident reporting: Encourage employees to log suspected AI attacks and other threats. This can support your audits and regulatory reviews. Staff should recognize and report even the most subtle AI-generated attempts, such as slightly altered voices and emails that appear legitimate at first glance.
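To show what incident reporting might look like in practice, here is a minimal sketch of a structured log entry for a suspected AI-driven attack. The field names are illustrative assumptions, not a standard schema; your actual fields should follow your own incident-reporting procedures and GRC tooling.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class SuspectedAIIncident:
    """A single logged report of a suspected AI-driven attack."""
    reported_by: str        # employee who flagged the contact
    channel: str            # "email", "phone", "video call", "code"
    description: str        # what seemed off (voice, tone, urgency)
    claimed_identity: str   # who the attacker appeared to be
    verified: bool          # was the identity confirmed via a second channel?
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def to_record(self) -> dict:
        """Flatten to a dict for an audit log or GRC platform."""
        record = asdict(self)
        record["reported_at"] = self.reported_at.isoformat()
        return record

# Example: logging a deepfake vishing attempt
incident = SuspectedAIIncident(
    reported_by="j.doe",
    channel="phone",
    description="Caller sounded like the CFO but pushed an urgent wire transfer",
    claimed_identity="CFO",
    verified=False,
)
print(incident.to_record())
```

Capturing reports in a consistent structure like this makes it easier to spot attack patterns and to produce evidence during audits and regulatory reviews.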
Don’t forget to align your training with your specific GRC policies. For example, if someone from your HR department receives a suspicious email that appears to come from a senior executive, they may need to properly log and report it based on your incident reporting procedures. You can then investigate the event, see if there’s an attack pattern and work out the best course of action.
You’ll also want to tailor training for different roles. Legal, finance, HR and other high-risk departments, in particular, require specialized procedures for identity verification and handling sensitive requests in order to prevent fraud.
Limits of AI Tools
Employees often assume that prompts and responses in everyday AI tools, including chatbots, are private. Your training must correct these assumptions. Make it clear that tools like ChatGPT can share inputs and outputs, creating security and compliance risks.
Cover the following in your programs:
- Which third-party AI platforms and large language models meet security, privacy and AI security compliance requirements, and which ones don’t
- How to handle sensitive business information safely when using AI tools
- What information to avoid entering into tools
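As one way to operationalize the last two points, the sketch below flags obviously sensitive strings before a prompt goes to an external AI tool. The patterns are illustrative assumptions; a real control would follow your own data classification policy and would more likely live in a proxy or data loss prevention layer than in a standalone script.

```python
import re

# Illustrative patterns only; extend these to match your data classification policy.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive_content(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this complaint from jane.doe@example.com about invoice 4452."
findings = flag_sensitive_content(prompt)
if findings:
    print(f"Review before sending to an external AI tool: {', '.join(findings)}")
```

Even a simple check like this reinforces the habit your training should build: pause and review what you’re about to share before it leaves the organization.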
Learn More About Risk Management When Using AI
While AI can boost efficiency and productivity, it also poses risks that could affect compliance with government and industry regulations. When employees understand how an AI attack works, you can prevent worst-case scenarios and maintain your organization’s reputation.
Dive deeper into this topic with Onspring’s latest ebook, Using AI in Risk Management for GRC Teams. Download and explore practical strategies that support your team.