Sound the Alarm About Shadow AI
- NopalCyber
- Sep 24

It’s almost inevitable: People in your organization are using AI in ways they shouldn’t.
They may be working with AI tools they prefer to the tools your company uses. Or feeding company data into AI without telling the security or IT departments. Or using authorized tools in countless unauthorized ways.
All of these are examples of “shadow AI.” Like shadow IT, the term refers to any AI usage that happens without approval or oversight. Shadow AI is not only more common than you may realize; it’s a fast-growing liability that has already caused serious incidents at multiple companies.
That’s why we think it’s time to sound the alarm. As AI adoption continues to surge, we can’t let the rewards obscure the risks. And if approved AI creates risks, imagine the dangers of AI hiding in the shadows.
That’s what we cover in this blog, followed by tips for preventing shadow AI when you can and remaining secure when you can’t.
A Closer Look at Shadow AI
Across professions, people have been very enthusiastic about using AI to make their jobs more efficient, productive, and seamless. At the same time, the number of easy and affordable (often free) AI apps has exploded. With demand and supply both surging, it’s no surprise that usage has spilled out of bounds.
How pervasive is shadow AI? One study suggested that 50% of employees are using it, but of course it’s hard to count what’s being hidden, so the true numbers are probably higher. Microsoft found that 78% of AI users are bringing their own AI tools to work, many of which fall into the shadow AI bucket.
Some instances of shadow AI, like uploading proprietary data sets into unauthorized AI tools without telling anyone, are obvious. But less obvious examples are even more common. In one case, a non-native English speaker used Grammarly to practice discussing work topics in English, unknowingly handing sensitive information to a system that could store and potentially reuse that data.
Shadow AI happens both accidentally and intentionally, and both kinds will only become more common as AI grows more ubiquitous. Unfortunately, it only takes one instance of shadow AI to unleash complete chaos.
What Are the Risks of Shadow AI?
“Your IT team’s worst nightmare” was how a recent article from the Cloud Security Alliance described shadow AI. That seems appropriate given all the ways rogue AI can undermine security and amplify risk while remaining hard to discover, monitor, and manage. More than just another issue to address, shadow AI looks poised to become one of the biggest problems security teams face. Here’s why:
Data Breach Exposure:
Most of the data fed into AI tools gets stored, some gets used to train the AI, and some can even get recycled into answers for other users. That data could be breached, stolen, or abused in countless ways. And because it lives in a third-party system, there’s little to no way for teams to stop misuse or even know when it occurs. Once the data goes in, it’s out of your control for good.
Compliance Chaos:
At the same time that AI is advancing everywhere, authorities around the world are taking data privacy and protection more seriously and enforcing regulations such as GDPR and CCPA. If data fed into AI ends up in the wrong hands, whether that’s hackers or just other users, it could lead to compliance violations that escalate quickly. The unpredictable risks of shadow AI make every aspect of governance, risk, and compliance more complicated.
Corrupted Tools:
Hackers have found countless ways to hijack consumer-grade AI tools, and attacks involving weaponized AI are on the rise. An employee could easily be using an AI tool they believe is safe while it’s actually exploiting its API access to launch attacks inside the organization or manipulate users into making bad decisions. Because no one knows there’s a problem, it continues until serious consequences occur.
These risks aren’t hypothetical. IBM estimates that shadow AI played a part in 20% of global data breaches and adds $670,000 to the average breach cost. What’s more, this list is far from complete. Like AI itself, the risks are just starting to emerge, and more will surface over time. With AI’s ability to act independently, sometimes unpredictably, using access and privileges across multiple systems and data sets, the chances of something going wrong are high. And when AI is unknown, unseen, and unmanaged, those risks multiply.
That’s why teams everywhere must take action to rein in shadow AI before the problem grows too big to control.
What to Do About Shadow AI
Stopping shadow AI won’t be easy. One study showed that 46% of workers would refuse to give up personal AI tools if their companies banned them, which would only drive more AI usage into the shadows. Instead of expecting to eliminate shadow AI entirely, take a risk management approach that can scale as AI usage, authorized and unauthorized, grows.
Create AI Policies
All companies need to be developing AI policies, including acceptable use policies that dictate how people can and can’t use AI. Given the unique risks of this technology, err on the side of caution by limiting AI usage to as few tools, data sources, and use cases as possible, in line with the zero-trust principle of least-privilege access.
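One way to keep such a policy from being merely aspirational is to express the allowlist in machine-readable form so tooling can check requests against it. Here’s a minimal Python sketch of a default-deny check in the spirit of least privilege; the tool names, data classifications, and use cases are hypothetical placeholders, not recommendations:

```python
# Minimal sketch of a machine-readable AI acceptable-use allowlist.
# All tool names, data classes, and use cases below are hypothetical
# placeholders, not recommendations.

# Allowlist: tool -> (permitted data classifications, permitted use cases)
AI_POLICY = {
    "approved-copilot": ({"public", "internal"}, {"code-review", "drafting"}),
    "approved-chatbot": ({"public"}, {"research", "drafting"}),
}

def is_permitted(tool: str, data_class: str, use_case: str) -> bool:
    """Default-deny: anything not explicitly allowed is refused,
    in line with least-privilege access."""
    entry = AI_POLICY.get(tool)
    if entry is None:
        return False  # unapproved tool = shadow AI
    data_classes, use_cases = entry
    return data_class in data_classes and use_case in use_cases

# An unapproved free app is denied even for harmless-looking use.
assert not is_permitted("random-free-app", "internal", "drafting")
assert is_permitted("approved-copilot", "internal", "code-review")
```

The point of the default-deny shape is that new tools and use cases must be added explicitly, which forces them through an approval process like the one described under “Remain Open to AI” below.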
Train and Educate Users
Educate users on the risks of shadow AI, emphasizing that even small policy violations can lead to major fallout. Then train people how to use and experiment with AI safely. Start an ongoing awareness campaign to keep the risks, rewards, and requirements of AI usage top of mind for everyone at the company.
Scan the Network
One way to detect shadow AI is with network monitoring and runtime security tools that can detect, flag, and stop data flows to unauthorized AI services. Data loss prevention (DLP) tools can further keep sensitive information from ending up inside unknown AI systems.
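As a concrete illustration, here’s a minimal Python sketch that flags outbound connections to known AI services in an exported proxy log. The domain list and CSV log format are assumptions made for the example; adapt both to whatever your proxy, DNS resolver, or scanner actually emits:

```python
import csv
from collections import Counter

# Hypothetical starter list of AI service domains; a real deployment
# would maintain a much larger, continuously updated inventory.
AI_DOMAINS = {
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "grammarly.com",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, domain) pair for traffic to AI services.

    Assumes a CSV proxy log with 'user' and 'host' columns; adjust the
    parsing to match your environment's actual log schema.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the domain itself and any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    # Surface the heaviest users of unapproved AI services for follow-up.
    for (user, host), count in flag_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

Flagged traffic is a starting point for a conversation, not proof of wrongdoing; cross-reference hits against your approved-tool list before acting.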
Remain Open to AI
Give people an avenue to suggest new AI tools and use cases for approval rather than shooting down every proposal. If possible, build a secure sandbox where people can try using new tools in new ways. Not only does this keep morale high, it also reveals novel ways to use AI and gain early advantages.
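One hedged example of what a safer sandbox could look like: route experimental AI usage through a thin gateway that redacts obviously sensitive patterns before anything leaves your network. The patterns and the send_to_ai_tool() stub below are illustrative assumptions, not a complete DLP solution:

```python
# Minimal sketch of a redacting gateway for a sandboxed AI tool trial.
# The patterns and the send_to_ai_tool() stub are illustrative
# assumptions; real DLP uses far more robust detection.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with labeled placeholders before the
    prompt ever leaves the sandbox."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def send_to_ai_tool(prompt: str) -> None:
    # Placeholder: in a real sandbox this would call the tool under trial.
    print("Outbound prompt:", prompt)

if __name__ == "__main__":
    send_to_ai_tool(redact(
        "Summarize feedback from jane@example.com; key sk-abc123def456ghi789"
    ))
```

A gateway like this lets people experiment with unvetted tools while keeping the most obviously sensitive data out of third-party systems.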
Stay Secure in the Age of AI
Shadow AI will create complicated challenges for security and IT teams for the foreseeable future. Yet it’s just one of many risks created by the rapid adoption of AI. Prepare to abandon the existing playbook and take a drastically different approach to securing IT infrastructure.
How is your company adapting to the AI era? How will you secure AI, in or out of the shadows, without compromising innovation? NopalCyber is here to help companies use AI confidently and creatively, knowing the right policies, protections, and compliant practices are in place.
Contact our team of AI cybersecurity experts.