Shadow AI is not a fringe problem anymore. It is what happens when employees adopt AI tools faster than IT, finance, and security can approve them. A personal ChatGPT account, an AI browser extension, a trial of Cursor, or an unauthorized Copilot install can all become shadow AI when the company has no visibility into usage.
The risk is not just compliance. The cost is duplicated spend, untracked client data exposure, and AI budgets that grow without a clear owner. Every untracked tool is a line item that finance cannot forecast and security cannot audit. Rize detects shadow AI with ATT (Agent Token Tracking), which automatically measures the AI tools employees use across apps, URLs, and project work.
Shadow AI Costs $412K Per Year
HelpNetSecurity reported that 78% of workers use unapproved AI tools and that shadow AI costs companies an average of $412,000 per year. Thirty-four percent of that spend duplicates tools the company already pays for.
According to the Federal Reserve Bank of Atlanta, firms are spending an average of $2,068 per employee on AI tools. Most of that budget is planned and visible. The problem is the spending that falls outside procurement. A company may be paying for approved ChatGPT Team seats while employees also expense personal ChatGPT Plus, use Claude on individual cards, or run work through unapproved AI browser tools.
| Shadow AI signal | What it means |
|---|---|
| Approved tool exists, but personal account usage continues | Rollout or training gap |
| Many tools solve the same workflow | Consolidation opportunity |
| High usage in unapproved tool | Buying signal or security review needed |
| Low usage in paid seats | License waste |
The goal is not to shut down every unapproved tool. The goal is to see what is happening and make a deliberate decision.
Why ACO Tools Cannot See Shadow AI
AI Cost Optimization tools track token spend, API calls, and model costs. They are useful when AI usage flows through instrumented endpoints. They do not see personal ChatGPT tabs, desktop AI apps, browser extensions, or AI features embedded in third-party software.
That leaves a measurement gap. According to the FinOps Foundation, 98% of organizations have adopted some form of AI cost tracking. But those trackers only cover what flows through instrumented pipelines. A FinOps dashboard may say API spend is under control while employees are using unapproved tools outside the system entirely.
The Uber AI budget story is the cautionary example. A widely circulated internal account described 6,500 engineers consuming $500 to $2,000 per month each and burning through the full 2026 AI budget in four months. The core problem was not only spend. It was the lack of per-engineer attribution.
ACO answers "what did the API cost?" Shadow AI detection requires "who used which AI tool, for how long, on which work?"
Adoption Tracking Is Not Enough
Many companies now track AI adoption. CNBC reported that large employers are monitoring AI use at work. Adoption tracking can show whether employees are trying AI, but it does not automatically show ROI, duplication, or project impact.
Counting logins is a start, but login data alone cannot tell you whether the tool is creating value or duplicating spend. It does not answer:
- Which AI tools are outside the approved list?
- Which tools duplicate approved spend?
- Which teams are heavy users?
- Which projects are absorbing the most AI-assisted work?
- Which tools are used enough to justify renewal?
Those questions need employee-level and project-level usage data, not only vendor dashboard totals.
The Budget Split That Creates Shadow AI
93% of enterprise AI budgets go to infrastructure and model costs, according to the Deloitte State of AI in the Enterprise report. Only 7% goes to change management, training, and tool governance. That 93/7 split is where shadow AI starts. When IT spends nearly the entire budget on model infrastructure but allocates almost nothing to tool approval workflows, employees fill the gap themselves.
The financial impact of getting AI adoption right is significant. According to PwC's analysis of AI performance, the top 20% of companies capture 74% of all AI-related revenue gains. Those companies share a common trait: they measure AI usage at the employee and project level, not just at the API gateway.
Meanwhile, research from Anthropic found that AI tools can reduce task completion time by up to 80% for certain knowledge work. That productivity gain only counts, though, when the organization knows which tools are being used and by whom. If a developer cuts coding time in half with an unapproved tool, the company gets the benefit without knowing the source. When that tool gets blocked in a security sweep, the productivity gain vanishes overnight.
Shadow AI is not just a compliance problem. It is a planning problem. Without per-employee usage data, finance cannot forecast AI spend accurately. Without per-project attribution, operations cannot calculate which AI investments produce returns and which are waste.
Shadow AI Discovery Methods: Survey vs ATT vs CASB
There are three primary approaches to discovering shadow AI in an organization. Each has a different scope, cost, and detection speed. Most companies start with surveys, hit the ceiling within one quarter, and need to decide between ATT and CASB for continuous monitoring.
Employee surveys. The simplest approach: ask employees which AI tools they use. Surveys are free and fast to distribute. The problem is accuracy. According to McKinsey's State of AI survey, 72% of organizations reported using AI in at least one function in 2024. But internal surveys at those same organizations consistently undercount actual tool usage because employees forget tools, underreport unapproved usage, or do not recognize embedded AI features as separate tools. Surveys also produce point-in-time data that goes stale within weeks.
CASB (Cloud Access Security Broker). CASB tools like Netskope, Zscaler, and Microsoft Defender for Cloud Apps monitor network traffic and can flag when employees access SaaS AI tools. According to Gartner, by 2026 more than 80% of enterprises will have deployed generative AI applications, making CASB-based AI discovery increasingly relevant. CASBs work well for web-based AI tools routed through corporate networks. They miss desktop apps (Cursor, Claude Desktop), VPN-bypassed traffic, mobile device usage, and AI features embedded in approved software.
ATT (Agent Token Tracking). ATT operates at the device level, capturing application metadata regardless of network path. It detects desktop AI apps, browser-based tools, IDE extensions, and CLI tools equally. The data includes time spent, project context, and tool identity. ATT requires device deployment but produces the most complete shadow AI inventory of the three approaches.
| Method | Detection scope | Time to first data | Ongoing accuracy | Cost |
|---|---|---|---|---|
| Survey | Self-reported only | 2-4 weeks | Low (degrades fast) | Free |
| CASB | Network-routed web tools | 1-2 weeks | Medium (misses desktop) | $5-15/user/month |
| ATT | All apps, browser, desktop, CLI | 1 day | High (continuous) | Rize subscription |
The practical recommendation: use surveys for the initial baseline if you have no other data. Deploy ATT for continuous monitoring. Use CASB if your primary concern is network-level data loss prevention rather than usage measurement. For shadow AI discovery specifically, ATT produces the most complete picture because it captures the desktop and IDE tools that CASB and surveys both miss.
ATT Detects Shadow AI Automatically
ATT gives teams a different measurement layer. Rize captures app names, URLs, window titles, timestamps, and project context. That lets Rize identify AI tools by name even when they were not purchased through IT.
For example, Rize can detect that a designer spent time in Midjourney, a developer spent time in Cursor, or a marketer spent time in ChatGPT. It can then roll that data up by employee, team, client, and project. See the weekly AI tools rankings for the full list of tools Rize tracks automatically.
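The roll-up described above can be sketched in a few lines. This is a minimal illustration, not Rize's actual data model: the event tuples, names, and field order are invented for the example.

```python
from collections import defaultdict

# Hypothetical usage events in the shape device-level tracking could produce:
# (employee, team, tool, project, minutes). Illustrative only.
events = [
    ("dana", "design",      "Midjourney", "brand-refresh", 95),
    ("raj",  "engineering", "Cursor",     "checkout-api",  240),
    ("raj",  "engineering", "Cursor",     "billing",       60),
    ("mia",  "marketing",   "ChatGPT",    "q3-campaign",   45),
]

def rollup(events, key_index):
    """Sum minutes per grouping key (employee=0, team=1, tool=2, project=3)."""
    totals = defaultdict(int)
    for event in events:
        totals[event[key_index]] += event[-1]
    return dict(totals)

by_tool = rollup(events, 2)  # e.g. {'Midjourney': 95, 'Cursor': 300, 'ChatGPT': 45}
by_team = rollup(events, 1)
```

The same function handles every roll-up dimension, which is the point: once usage is captured as events with employee, team, tool, and project attached, any aggregation is a one-liner.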
This is the same foundation used for AI productivity metrics. It works without screenshots, keystroke logging, browser-only tracking, or SDK instrumentation. The system measures activity metadata, then turns it into reporting that finance and operations can use.
The difference between ATT and a FinOps dashboard is scope. A FinOps tool tracks what the company provisions. ATT tracks what employees actually use, regardless of whether IT approved it. That distinction matters most during budget season. If Deloitte's data is right that 93% of AI budgets go to infrastructure, the remaining 7% for governance needs a data source that covers the full tool surface, not just the instrumented slice.
Shadow AI Policy Design
A shadow AI policy should make it easier to use approved tools than to use unapproved ones. Policies that only say "do not use unapproved AI" fail because they offer no path forward for employees who need AI tools to do their work.
According to Harvard Business Review, organizations that take a blanket prohibition approach to generative AI see higher rates of covert usage compared to those that provide approved alternatives with clear guardrails. Prohibition creates shadow AI. Governed access reduces it.
A practical shadow AI policy has four components:
1. Approved tool list with rationale. List every AI tool the company endorses, the approved use cases for each, and the data handling rules. Employees should know which tool to use for which task without asking IT every time. Update the list monthly as new tools emerge.
2. Fast-track approval process. When an employee finds a new AI tool that works better than approved options, they need a path to request approval that takes days, not months. According to Forrester, organizations with a tool request-to-approval cycle under 10 business days see 40% less shadow AI than those with cycles over 30 days. If the approval process is slower than signing up for a free trial, employees will choose the trial.
3. Data classification rules. Not all shadow AI carries the same risk. An employee using ChatGPT to rewrite internal meeting notes is a different risk level than one pasting client financial data into an unvetted AI tool. The policy should classify data by sensitivity and specify which AI tools (if any) can process each category.
| Data classification | AI tool policy | Monitoring approach |
|---|---|---|
| Public (blog drafts, marketing copy) | Any approved tool | Monthly ATT review |
| Internal (meeting notes, project plans) | Approved tools with enterprise agreements | Weekly ATT review |
| Confidential (client data, financials) | Approved tools with SOC 2 + data residency | Real-time ATT alerts |
| Restricted (PII, health data, legal holds) | No AI tool usage permitted | ATT block-and-alert |
4. Continuous monitoring with ATT. The policy is only enforceable when violations are detectable. ATT provides the monitoring layer that turns policy into practice. Without it, the policy is a document employees read once during onboarding and forget by the second week.
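The classification rules in component 3 can be expressed as an enforceable check. This is a sketch under assumptions: the tool names, attribute flags, and rule structure are invented for illustration, not a real policy engine.

```python
# Policy rules keyed by data classification, mirroring the table above.
POLICY = {
    "public":       {"any_approved": True},
    "internal":     {"requires": {"enterprise_agreement"}},
    "confidential": {"requires": {"enterprise_agreement", "soc2", "data_residency"}},
    "restricted":   {"forbidden": True},
}

# Hypothetical approved-tool registry with vetted attributes.
APPROVED_TOOLS = {
    "ChatGPT Team": {"enterprise_agreement", "soc2", "data_residency"},
    "Copilot":      {"enterprise_agreement", "soc2"},
}

def is_allowed(tool: str, classification: str) -> bool:
    rule = POLICY[classification]
    if rule.get("forbidden"):
        return False
    attributes = APPROVED_TOOLS.get(tool)
    if attributes is None:  # unapproved tools are never allowed
        return False
    if rule.get("any_approved"):
        return True
    return rule["requires"] <= attributes  # required attrs must all be present
```

Note the default: a tool absent from the registry fails every classification, which encodes the "approved list or nothing" stance without needing a denylist.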
The companies that reduce shadow AI most effectively combine a permissive approved list, a fast approval process, and continuous monitoring. Strict policies with slow processes and no monitoring produce the highest rates of shadow usage.
A Three-Step Shadow AI Discovery Playbook
One week of data is enough to surface the majority of shadow AI in most organizations. Use a short audit cycle first:
- Deploy automatic tracking. Install Rize and collect one week of ATT data.
- Compare discovered tools against the approved list. Flag tools that are unapproved, duplicated, or missing from procurement.
- Decide tool by tool. Consolidate duplicate spend, approve high-value tools, reassign unused seats, and block tools that create unacceptable risk.
The review should produce a simple decision table:
| Tool | Finding | Action |
|---|---|---|
| Personal ChatGPT | Duplicates approved Team plan | Move users to approved workspace |
| Cursor | Heavy usage by engineering | Review for official purchase |
| AI browser extension | Unknown data handling | Block pending security review |
| Copilot | Paid seats with low usage | Reassign or cancel seats |
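Steps 2 and 3 of the playbook reduce to a comparison between the discovered inventory and the approved list. A minimal sketch with invented user counts and a hypothetical five-user threshold:

```python
approved = {"ChatGPT Team", "Copilot"}
# discovered: tool -> active users from one week of tracking (example numbers)
discovered = {"ChatGPT Team": 40, "Personal ChatGPT": 25, "Cursor": 12, "Copilot": 3}
# tools known to duplicate an approved workflow (assumption for the example)
duplicates_of = {"Personal ChatGPT": "ChatGPT Team"}

def classify(tool: str, users: int, min_users: int = 5) -> str:
    """Map one discovered tool to an action from the decision table."""
    if tool in duplicates_of:
        return f"consolidate into {duplicates_of[tool]}"
    if tool in approved:
        return "license waste" if users < min_users else "approved, in use"
    return "review for purchase" if users >= min_users else "block pending review"

findings = {tool: classify(tool, users) for tool, users in discovered.items()}
```

Running this against the example data reproduces the decision table: the duplicate gets consolidated, the heavily used unapproved tool gets a purchase review, and the underused paid seats surface as waste.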
After the first audit cycle, set a recurring monthly review. Shadow AI is not static. New tools launch weekly, and employees experiment constantly. A one-time audit catches the current state. A monthly review catches drift before it compounds into the next budget surprise.
Connect Shadow AI to Budget Planning
Shadow AI data should feed directly into AI budget planning. If 15% to 25% of licensed spend is happening outside the approved stack, finance needs that buffer visible before renewals. Most companies set AI budgets based on vendor invoices alone. That approach misses the personal subscriptions, free-tier tools, and expensed accounts that make up the shadow layer.
It should also feed into AI cost management. ACO helps control API and vendor spend. ATT helps find employee-level usage. AYO connects that usage to productivity yield. The goal is a single view that shows both provisioned and discovered AI tools, with cost and usage data for each.
The Federal Reserve Bank of Atlanta's $2,068 per-employee figure is a useful benchmark. If your company has 200 employees and your visible AI spend is under $200,000, the gap between that number and $413,600 (200 times $2,068) is likely flowing through personal accounts, expensed subscriptions, and free tiers that carry data risks.
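The benchmark arithmetic is simple enough to keep as a reusable check. The $2,068 figure is the Atlanta Fed average cited above; the headcount and visible-spend numbers are the example values, not real data.

```python
BENCHMARK_PER_EMPLOYEE = 2068  # USD/year, Atlanta Fed average

def shadow_spend_gap(employees: int, visible_ai_spend: int) -> int:
    """Expected total AI spend at benchmark rates minus what procurement sees."""
    return employees * BENCHMARK_PER_EMPLOYEE - visible_ai_spend

gap = shadow_spend_gap(200, 200_000)  # 413,600 - 200,000 = 213,600
```

A positive gap is not proof of shadow AI, but it sizes the spend that could be flowing through personal accounts and free tiers, which is the buffer finance should make visible before renewals.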
According to McKinsey's State of AI survey, organizations that track AI usage at the individual level report 20% higher confidence in their AI budget forecasts compared to those relying on department-level estimates. That confidence translates into better capital allocation: finance teams can set AI budgets with a shadow AI buffer built in rather than discovering overruns after the quarter closes.
Shadow AI also creates compliance exposure that has direct financial consequences. If an employee processes client data through an unapproved AI tool that experiences a data breach, the company bears the liability regardless of whether IT approved the tool. According to IBM's Cost of a Data Breach Report, the average data breach cost reached $4.88 million in 2024. Shadow AI increases breach surface area by routing sensitive data through tools that have not been vetted for security, compliance, or data retention policies.
The companies that handle shadow AI well will not be the ones with the strictest policy. They will be the ones with the clearest usage data and the fastest approval loop for tools that are actually helping teams work.
Related Reading
- AI Productivity Metrics. How Rize detects shadow AI automatically with ATT per-employee tracking
- AI Cost Management: ACO to AYO. Why FinOps tools miss shadow AI and what ATT adds
- AI Budget Planning for Teams. Include a shadow AI buffer in your AI budget
- How to Run an AI Tool Audit. Discover shadow AI in one week with automatic tracking
- Reduce AI Costs Without Cutting Usage. Use usage data to cut duplicate AI spend without blocking high-value tools
