Shadow AI governance: strategies to rein in rogue models
Banning ChatGPT won't stop shadow AI. Building better alternatives will.

Someone needs a quick analysis, the data team's backlog is weeks deep, and ChatGPT is right there. Within minutes, customer data gets pasted into a browser tab, and the results are shared in Slack. Problem solved — until it isn't.
This is shadow AI in action. Roughly a third of employees now use AI tools without IT approval. Intentions are usually good, but without any kind of governance, AI use makes it difficult for most organizations to maintain control over metrics, data quality, and compliance.
The good news? Shadow AI isn't inevitable. With the right governance approach, organizations can give business users the speed and autonomy they're looking for while keeping data secure and metrics consistent.
Shadow AI vs. shadow IT: a quick distinction
Traditional shadow IT involved employees using unauthorized cloud storage or project management tools. The main concern was straightforward: where does the data live, and who can access it?
Shadow AI, on the other hand, refers to employees using AI tools like ChatGPT without IT approval, creating both data security and data quality problems.
When an employee pastes customer information into ChatGPT, or any other unregulated AI tool, that data potentially becomes part of how the model learns and responds. Sensitive customer details, proprietary business logic, or confidential strategies could surface in outputs for other users or end up in training data you have no control over.
There's also a decision quality problem. When someone uses an unauthorized AI tool to analyze sales figures, they're making business decisions based on calculations that haven't been validated by your data team.
In short, with shadow IT, you had to worry about data access. With shadow AI, you're dealing with data transformation and automated decision-making, often with zero visibility into what's actually happening.
The real risks of ungoverned AI
Many organizations are already experiencing data incidents, compliance violations, and reputational damage caused by sensitive information flowing into AI tools without oversight. As AI adoption accelerates, these risks grow, making governance an urgent priority rather than a future consideration.
Data exposure at scale
Corporate data flowing into AI tools increased 485% over a single year. Even more concerning: that data is now 27.4% sensitive, up from 10.7% — a 2.5x increase in sensitive data exposure.
The Samsung incident in April 2023 illustrates exactly how this plays out. Within three weeks, Samsung experienced three separate shadow AI incidents when employees accidentally leaked sensitive data through unauthorized ChatGPT usage:
Engineers pasted proprietary semiconductor source code for debugging help
Employees uploaded confidential meeting recordings for AI summarization
Employees shared technical specifications for analysis
The employees involved were simply trying to do their jobs faster. That's what makes shadow AI so difficult to address through policy alone.
Metric inconsistency and decision quality
Beyond security, shadow AI makes metrics inconsistent. When business users bypass data teams for quick answers from AI, they often work with different definitions, stale data, or calculations nobody has validated. "Revenue" means one thing to finance, another to sales, and something else entirely to the AI tool that just hallucinated a trend line.
This erodes trust in analytics across the organization. Teams start maintaining their own spreadsheets "just to be safe." Leadership gets conflicting numbers in the same meeting. The data team spends more time arbitrating disputes than delivering insights.
Best practices for shadow AI governance
Effective shadow AI governance balances control with speed by addressing the root causes driving business users to unauthorized tools.
Build governance into your platform architecture
Governance works best when it's built into infrastructure rather than bolted on as an afterthought. Modern analytics platforms should embed governance principles directly into their core architecture through automated policy enforcement, making governed analytics easier than creating shadow solutions.
Semantic Modeling is foundational here, and with AI on the rise, teams are investing in it more heavily. HubSpot, for example, needs to ensure data trust for more than 8,000 employees. Defining metrics once in a central location means "active users" or "churn" has one reliable meaning everywhere, eliminating the need for business users to create their own definitions or ask AI tools to calculate them.
Calendly's Go-to-Market analytics team took this approach, building a Standardized Metric Library that serves as company-wide KPI documentation. The library helps tie-break conflicting reports and gives everyone confidence they're working from the same definitions, reducing the temptation to seek validation from external AI tools. Definitions living in governed Semantic Modeling rather than scattered spreadsheets mean both data teams and business users operate from a shared truth. This foundation makes every subsequent governance control more effective.
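To make this concrete, here's a minimal sketch of what a single source of truth for metric definitions might look like. It's illustrative Python, not any particular semantic layer's API; the MetricDefinition class and the example metrics are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical central metric registry: each metric is defined once,
# with an owner and a validated calculation, so every downstream
# consumer (dashboards, notebooks, AI assistants) resolves "churn"
# to the same logic.
@dataclass(frozen=True)
class MetricDefinition:
    name: str
    description: str
    sql: str    # the single, validated calculation
    owner: str  # the team accountable for this definition

METRICS = {
    "active_users": MetricDefinition(
        name="active_users",
        description="Distinct users with a session in the trailing 28 days",
        sql="COUNT(DISTINCT user_id) FILTER (WHERE last_seen >= CURRENT_DATE - 28)",
        owner="data-platform",
    ),
    "churn_rate": MetricDefinition(
        name="churn_rate",
        description="Share of accounts canceling within the period",
        sql="COUNT(*) FILTER (WHERE canceled) * 1.0 / COUNT(*)",
        owner="finance-analytics",
    ),
}

def resolve(metric_name: str) -> MetricDefinition:
    """Every consumer, human or AI, goes through this one lookup."""
    if metric_name not in METRICS:
        raise KeyError(f"No governed definition for '{metric_name}'; request one.")
    return METRICS[metric_name]
```

The specifics matter less than the pattern: when an AI assistant answers a question about churn, it resolves the metric through the same definition finance uses instead of improvising one.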
Implement risk-based approval processes
Not all AI use cases carry equal risk. A marketing analyst exploring campaign performance trends is different from an engineer debugging proprietary code. Your governance framework should reflect that.
Low-risk sandbox: Pre-approved AI tools with self-service access and 24- to 48-hour approval for new tools. Think AI data visualization and routine analytics automation.
Medium-risk use cases: Expedited review within five to ten business days, with limited production deployment and monitoring. Customer service chatbots and internal process automation fall here.
High-risk use cases: Complete risk assessment, pilot deployment with extensive logging, and continuous monitoring. Credit scoring systems, medical diagnosis support, and anything needing HIPAA or SOX compliance.
Speed at the low-risk end matters most. Organizations that provide rapid approval windows for low-risk AI tool categories let business users operate within approved channels. Make governance slower than using ChatGPT directly, and you've created the incentive for shadow AI adoption.
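To show how lightweight this tiering can be in practice, here's a hedged sketch of a routing rule that sorts requests by risk so low-risk ones never queue behind high-risk reviews. The signals, categories, and SLAs are hypothetical placeholders, not a prescribed framework.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # pre-approved sandbox, self-service access
    MEDIUM = "medium"  # expedited review, limited deployment
    HIGH = "high"      # full risk assessment, continuous monitoring

def classify(touches_sensitive_data: bool, affects_external_decisions: bool) -> RiskTier:
    """Hypothetical triage: a real framework would also weigh regulatory
    scope, data volume, and blast radius."""
    if affects_external_decisions:  # e.g., credit scoring, diagnosis support
        return RiskTier.HIGH
    if touches_sensitive_data:      # e.g., customer service chatbots
        return RiskTier.MEDIUM
    return RiskTier.LOW             # e.g., routine analytics automation

# Target review windows in business days; None means an open-ended review.
REVIEW_SLA_DAYS = {RiskTier.LOW: 2, RiskTier.MEDIUM: 10, RiskTier.HIGH: None}

tier = classify(touches_sensitive_data=False, affects_external_decisions=False)
print(tier.value, REVIEW_SLA_DAYS[tier])  # low 2 -> approved within ~48 hours
```

The design choice worth copying is the default: anything that doesn't trip a risk signal lands in the fast lane automatically, rather than waiting for a human to decide it's harmless.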
Align with regulatory frameworks
Your governance program should connect to broader regulatory needs. The NIST AI Risk Management Framework provides a structured approach to identifying and mitigating AI risks through four core functions: Govern, Map, Measure, and Manage.
For organizations operating in Europe, the EU AI Act introduces tiered compliance needs based on risk classification. High-risk AI systems need technical documentation, data governance measures, and quality management systems. Building your internal governance tiers to mirror these external frameworks simplifies compliance audits and ensures your policies remain defensible as AI regulation matures. The EU AI Act's phased compliance deadlines extend through August 2027, giving organizations time to align — but starting now prevents scrambling later.
Provide approved alternatives that actually work
Offering sanctioned AI tools with proper governance discourages shadow AI adoption more effectively than policies or restrictions alone. This means curating an enterprise-approved AI tool catalog with clear data handling policies, enterprise admin access, training resources, and transparent communication about what's available.
Samsung responded to concerns about AI tool misuse by banning external generative AI services and later introducing internal AI models with increased oversight. But there's no evidence that these alternatives fully replaced unauthorized shadow tool use, or that ChatGPT access was ever formally reinstated.
Denying access to well-known tools like ChatGPT or Claude can push employees toward less centralized, less well-documented AI solutions. Provide access to thoroughly reviewed AI products rather than implementing blanket bans that drive adoption underground.
Hex — an AI-assisted platform where data teams and business users work side-by-side — addresses this by combining SQL, Python, and AI-assisted features in one governed environment. Business users can ask questions in natural language and get answers backed by your actual data warehouse, with full lineage and audit trails. This makes them less likely to paste that same question into an ungoverned external tool.
Create visibility and enable self-service
You can't govern what you can't see. Effective shadow AI governance needs detection infrastructure combining network monitoring through Cloud Access Security Brokers (CASB), Data Loss Prevention (DLP) tools tracking sensitive data movement, and behavioral analysis watching for unusual data access patterns.
But this isn't about catching people doing something wrong. It's about understanding that shadow AI adoption stems from legitimate business needs going unmet. Understanding usage patterns lets you build approved alternatives that match the speed and convenience of shadow tools.
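As a rough illustration of the DLP layer, the sketch below shows the shape of a pattern check applied to outbound text before it reaches an external AI endpoint. The patterns and function names are invented for illustration; production DLP tools use far richer classifiers and content fingerprinting.

```python
import re

# Illustrative outbound-prompt scan: flag obviously sensitive patterns
# before text leaves for an external AI tool. Real DLP combines
# classifiers, fingerprinting, and context; this only shows the shape.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_outbound_prompt(text: str) -> list[str]:
    """Return the labels of sensitive patterns found in the prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this: contact jane.doe@example.com, card 4111 1111 1111 1111"
hits = scan_outbound_prompt(prompt)
if hits:
    print(f"Flagged for review: prompt contains {', '.join(hits)}")
```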
The best defense against shadow AI is making governed analytics faster and easier than the alternatives. Data teams that invest in self-service analytics platforms with governed data access, pre-approved AI tool catalogs, and Semantic Modeling address root causes rather than merely blocking unauthorized tools, and they free engineering capacity from firefighting in the process.
Building a sustainable governance culture
Shadow AI governance isn't a one-time project. It's an ongoing practice that needs attention to both technical controls and organizational culture.
Track metrics that matter: time from AI request to approved deployment, self-service request fulfillment rate, shadow AI detection rate, and incident response time. These tell you whether your program is actually working or just creating theater.
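If your approval workflow logs timestamps, these numbers fall out of a few lines of code. A minimal sketch, assuming a hypothetical record format; substitute whatever your ticketing system actually exports.

```python
from datetime import datetime
from statistics import median

# Hypothetical approval-request log pulled from a ticketing system.
requests = [
    {"requested": datetime(2024, 5, 1), "approved": datetime(2024, 5, 3), "self_service": True},
    {"requested": datetime(2024, 5, 2), "approved": datetime(2024, 5, 10), "self_service": False},
    {"requested": datetime(2024, 5, 4), "approved": datetime(2024, 5, 5), "self_service": True},
]

cycle_days = [(r["approved"] - r["requested"]).days for r in requests]
self_service_rate = sum(r["self_service"] for r in requests) / len(requests)

print(f"Median time from AI request to approved deployment: {median(cycle_days)} days")
print(f"Self-service fulfillment rate: {self_service_rate:.0%}")
```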
Training matters as much as tooling. Roll out education programs that show employees how to use approved AI tools effectively, not just what's prohibited. Create clear escalation paths so business users who need capabilities outside the approved catalog have a fast way to request them rather than going underground. Review your approved tool catalog quarterly to ensure it keeps pace with rapidly evolving AI capabilities. And communicate proactively about what's available: if employees don't know governed alternatives exist, they'll default to whatever they find on their own.
Build feedback loops between your governance team and the business. Someone requesting access to a new AI tool is telling you what capabilities your current stack lacks. The organizations that handle shadow AI best aren't the ones with the strictest policies; they're the ones that respond fastest to emerging needs with governed solutions.
And remember that your business users aren't the enemy. They're trying to do their jobs, often under time pressure. If your governance program treats them as adversaries to be monitored and restricted, you'll lose. If it treats them as partners who need better options, you'll build something sustainable.
Making governance work for everyone
Shadow AI represents a genuine challenge, but it's also an opportunity. Organizations that figure out how to enable AI-assisted productivity while maintaining data quality and compliance will have a significant advantage over those still fighting losing battles with blanket bans.
Address the root causes: business users turn to shadow AI because governed alternatives don't meet their needs for speed and autonomy. Fix these service delivery problems, and the governance challenge becomes much more manageable.
Hex approaches this by building AI capabilities directly into a governed platform where data teams and business users work side-by-side. Data teams maintain control over definitions, lineage, and access, while business users get the fast, self-serve experience they're looking for. It's governance that helps rather than restricts.
If you're ready to see how this works in practice, you can sign up for free or request a demo.