Many organizations are struggling to mitigate the risk associated with artificial intelligence tools. Across industries, employees are experimenting with generative AI models, often without training or oversight. For federal agencies and contractors who handle sensitive data, this creates serious exposure.
AI tools deliver significant benefits, including greater productivity and faster workflows. It’s unrealistic to stop using the new technology. Instead, organizations need to find solutions to make AI safe. That’s the issue we’ll address in this article: how to create an effective AI governance program to mitigate shadow artificial intelligence risks and protect sensitive data.
The Spread of AI Tools
A recent study by the National Security Alliance found that 65% of people are using AI models to do their work. That’s hardly surprising: AI tools are ubiquitous these days, from search engines to email solutions. Machine learning is already integrated into the digital tools that most office workers use every day, like Microsoft Word, Excel and PowerPoint.
The problem is that most workers haven’t been trained in AI data security. In fact, according to the National Security Alliance study, 58% of AI users say they received no training at all in maintaining data security while using the new technology. That’s a key blind spot that can lead to significant problems, especially when it comes to sensitive data and personally identifiable information, or PII.
Frequently, employees use AI tools without even consulting their supervisors. Not only do employees often have no training in AI compliance obligations, but they also have no risk management oversight. Their managers are likely to be blindsided by the resulting problems.
This unauthorized use of AI is known as shadow AI (or shadow GenAI), and it’s one of the most persistent problems connected to AI in the workplace.
What is Shadow AI, and Why Does It Matter?
Shadow AI refers to employees’ unauthorized use of large language model technology. In most cases, it refers specifically to the use of open-source generative AI tools to analyze and summarize data or create code.
Employees may use shadow AI to increase productivity and get faster results, often without understanding the potential consequences, or the shadow AI risks. Some of the most common use cases for shadow AI include:
AI-Powered Chatbots
Chatbots offer fast, frictionless answers to queries. Employees may be tempted to ask chatbots questions when they need a quick response. This may come up when employees in public-facing roles get questions from consumers, for example.
Data Analytics
Generative AI can rapidly ingest and analyze data from a wide range of sources. It can be tempting to feed information into AI and then use the result; the rapid turn-around time is hard to resist. Of course, AI’s analytics always carry some level of risk, especially when sharing private data with an open-source tool.
Data Visualizations
Generative AI can rapidly put together impressive charts and graphs, and it can do so in response to natural language inputs. This can be tempting, especially for employees who lack enough training in creating their own visualizations.
Gauging the Scale of Shadow AI
It’s difficult to gauge the exact extent of the shadow AI penetration. A recent MIT report found that employees at 90% of companies were using LLMs in their work. The National Security Alliance survey found that 65% of overall employees were using shadow AI tools. Whatever the actual number, both reports make it clear that shadow AI is common.
Without rules and guardrails in place, there’s no way to ensure employees avoid unsafe AI tools. Shadow AI is a major problem for any agency lacking monitoring tools or AI governance training.
The risks posed by shadow AI
AI-powered chatbots pose significant risks for private and sensitive data. They’re easy to use, so users might forget that they’re using a public AI tool.
AI agents often require access to internal databases and documents to operate correctly. This can lead to compromising sensitive data, in the event of a data breach. It can also lead to PII being stored and used by a third party for agent model training.
Whenever data is stored in a non-secure location, there’s a risk that it can be compromised down the line. There’s also a risk that private data will be identified and exposed, so that sensitive or privileged data is leaked.
When it comes to data protection, 43% of those surveyed by the National Security Alliance said they have already shared sensitive details with AI tools. That includes company financial data and client information. It’s impossible to say whether even more people are sharing private data without admitting to doing so. Federal agencies and contractors must take this figure as a wake-up call and a reminder to put protections in place.
Legal and reputational risks
Many companies today are subject to strict rules about private data use. If your employees are sharing sensitive information with open-source AI, you’re likely failing to comply with those rules. Non-compliance can lead to stiff penalties and fines.
If your organization is involved in a data breach, it can also create a significant public backlash, harming your reputation and damaging trust. It can take a long time to recover from this kind of reputational damage.
Shadow AI and the risk of cyberattack
Beyond the risk of data leaks, there’s a real danger that hackers will use AI models as a point of entry into a protected database.
Using open-source AI agents increases an organization’s attack surface; the larger and more complex a network becomes, the more potential vulnerabilities it has. Adding AI agents results in a larger network, which requires more protection from hackers.
Would-be hackers use tools like direct and indirect prompt injection to embed malicious code into the data ingested by an AI agent. The code then forces the agent to execute an attack or other malicious attack on your network.
Shadow AI exponentially increases risk of exposure and hacking, as compared to using vetted AI tools. When employees use unauthorized AI agents, the organization has no way of knowing what to plan for or how to set up safeguards. The IT department also has no way of knowing what kinds of protections are necessary for network security.
Correcting for shadow AI risks
The first step to preventing shadow AI is talking to your employees openly and honestly. Find out how people have used unauthorized AI tools, and if possible, find out why they’re using the technology.
Is there a proprietary or closed-source version of the tool they can use instead? Do they need further training so they can complete their work without these AI tools? Are they overburdened with work, and therefore turning to AI?
The next step is to begin a rigorous employee training program. Wherever possible, institute cross-department training sessions. Bring in people from your IT team, your legal department and your security team.
Create a strict employee AI use policy, and ensure that all of your employees understand the rules. Build a system to monitor your staff, as needed, and institute guardrails to protect your team from temptation. It’s important to regularly re-train your employees and contractors and to issue clear and specific guidelines.
With Onspring, organizations can govern AI safely, risk assessment tools and data management. We can help you with every step, from evaluating GRC software features to setting up guardrails. Download our eBook to learn how to get started.