AI Governance Starts With Visibility

Jonathan Wu · May 12, 2026

AI governance policies are only as good as the usage data behind them. Most companies write AI acceptable-use policies, distribute them to employees, and assume compliance. Then 78% of workers use unapproved AI tools anyway. The problem is not policy; it is visibility.

Key Takeaway

AI governance requires knowing which AI tools employees actually use, not which tools IT approved. ATT (Agent Token Tracking) creates the automatic usage inventory that compliance, security, and legal teams need. Without it, governance is a policy document, not a control.

The Visibility Gap in AI Governance

According to HelpNetSecurity, 78% of workers use unapproved AI tools. That is not a compliance failure; it is an adoption reality that governance frameworks need to account for.

Employees adopt AI tools because they work. A developer tries Cursor for code generation. A marketer pastes text into Claude for rewriting. A designer uses Midjourney for concepts. A sales rep feeds call transcripts into an AI summarizer. Each decision is rational at the individual level. At the organizational level, it creates an ungoverned AI footprint that grows every week.

Traditional governance assumes a known tool inventory. AI breaks that assumption because:

  • New AI tools launch weekly and employees try them immediately
  • Many AI tools are browser-based and require no IT installation
  • Personal accounts (ChatGPT Plus, Claude Pro) bypass corporate procurement
  • AI features embedded in existing tools (Notion AI, Canva AI) expand the footprint invisibly

A governance framework that only covers approved tools governs a fraction of actual AI usage. The remaining tools, the ones employees chose on their own, carry the highest risk because no one has reviewed their data handling, security posture, or compliance status.

What Regulators Are Starting to Require

Regulatory bodies are now requiring AI usage inventories. According to the EU AI Act, which entered enforcement in stages starting August 2025, organizations must maintain inventories of AI systems, document risk assessments, and ensure transparency in AI-assisted decisions. For companies with EU employees or customers, "we don't know which AI tools are in use" is no longer an acceptable position.

The requirements are practical, not theoretical:

| Requirement | What it means operationally |
|---|---|
| AI system inventory | Know every AI tool in your org, not just approved ones |
| Risk classification | Assess each tool's risk level (minimal, limited, high, unacceptable) |
| Transparency obligations | Document when AI assists decisions that affect people |
| Record-keeping | Maintain logs of AI system usage for audit |
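
To make the requirements concrete, an inventory record covering all four rows can be as small as a dictionary plus a risk enum. This is a hypothetical sketch; the field names and the `RiskLevel` type are illustrative, not part of the regulation or of any product schema:

```python
from enum import Enum

# The four risk tiers named in the EU AI Act.
class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# One illustrative inventory entry per discovered AI system, touching each
# operational requirement: inventory, risk classification, a transparency
# flag, and a pointer to the usage log for record-keeping.
inventory_entry = {
    "system": "Claude",
    "risk_level": RiskLevel.LIMITED,
    "assists_decisions_about_people": False,
    "usage_log_ref": "att-export-2026-05",
}

print(inventory_entry["risk_level"].value)
```

The point is not the data structure but its completeness: every discovered tool gets an entry, not only the tools that went through procurement.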

The US has followed a parallel path. According to the White House Blueprint for an AI Bill of Rights, organizations should provide transparency and accountability in AI use. Sector-specific guidance from the SEC, EEOC, and FTC reinforces these principles with enforcement actions.

None of these frameworks work without knowing which AI tools employees actually use. A compliance team cannot classify risk on tools they cannot see.

Survey-Based Governance Does Not Scale

The common first step is an employee survey: "Which AI tools do you use?" The problem is that surveys produce point-in-time, self-reported data that is stale before the spreadsheet is finished.

According to UC Today, KPMG built an internal AI dashboard for 10,000 employees. That approach works for tracking adoption at scale, but adoption counts are not the same as governance controls.

Governance needs continuous answers to specific questions:

| Governance question | Survey answer | ATT answer |
|---|---|---|
| Which AI tools are in use? | Whatever employees remember and report | Every AI tool detected per employee per week |
| Which teams use unapproved tools? | Depends on honesty and memory | Automatic flagging against approved list |
| Is client data flowing through AI? | Unknown unless someone discloses | Project-level AI usage shows which client work touches AI |
| Are high-risk tools in regulated workflows? | Self-assessment | Continuous monitoring by project and department |
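
With usage records in hand, flagging unapproved tools reduces to a set lookup rather than a survey question. A hypothetical sketch; the record shape, tool names, and `APPROVED` list are assumptions for illustration, not Rize's actual data model:

```python
from collections import defaultdict

# Hypothetical usage records of the kind ATT-style metadata could yield:
# (employee, ai_tool, project, hours_this_week)
usage = [
    ("dev-14", "Cursor", "client-acme", 6.5),
    ("mkt-03", "Claude", "website-copy", 2.0),
    ("dev-14", "UnvettedAI", "client-acme", 1.2),
    ("sales-07", "CallSummarizerX", "pipeline-q3", 3.1),
]

APPROVED = {"Cursor", "Claude"}  # the org's approved tool list

# Flag every unapproved tool automatically, grouped by project.
flags = defaultdict(list)
for employee, tool, project, hours in usage:
    if tool not in APPROVED:
        flags[project].append((employee, tool, hours))

for project, items in flags.items():
    print(f"{project}: {items}")
```

The same pass answers the second and fourth governance questions in the table: which teams use unapproved tools, and whether they appear on regulated work.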

Surveys tell you what employees think they use. Automatic tracking tells you what they actually use. For governance, the difference matters.

ATT as a Governance Layer

ATT (Agent Token Tracking) gives governance teams the automatic usage inventory they need. Rize captures application names, URLs, window titles, and project context: metadata that identifies every AI tool in use, per employee and per project, without manual logging or browser-only tracking.

ATT maps directly to governance requirements:

1. AI system inventory. ATT discovers AI tools automatically, including tools employees adopted outside the procurement process. The result is a live inventory of AI tools, not a quarterly spreadsheet.

2. Risk classification input. Once you know which tools are in use, compliance can classify each one by risk level. ATT data shows usage volume and project context, so risk assessment can prioritize tools with heavy usage in sensitive workflows.

3. Policy enforcement. An approved tool list becomes enforceable when you can detect violations automatically. ATT flags unapproved tools in real time instead of waiting for the next survey cycle.

4. Audit trail. ATT creates time-stamped records of which employees used which AI tools on which projects. That is the record-keeping layer auditors and regulators look for.
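
Point 4 amounts to filterable, time-stamped records. A minimal sketch, assuming an illustrative export schema (the field names are invented for this example, not Rize's actual API):

```python
from datetime import datetime

# Hypothetical time-stamped usage records of the kind an ATT export
# could contain; field names are illustrative.
audit_log = [
    {"ts": datetime(2026, 5, 4, 9, 15), "employee": "dev-14",
     "tool": "Cursor", "project": "client-acme", "minutes": 42},
    {"ts": datetime(2026, 5, 4, 14, 2), "employee": "legal-02",
     "tool": "ChatGPT", "project": "contract-review", "minutes": 18},
    {"ts": datetime(2026, 5, 6, 11, 30), "employee": "dev-14",
     "tool": "Cursor", "project": "client-acme", "minutes": 55},
]

def records_for_audit(log, tool, since):
    """Return time-stamped usage records for one tool after a cutoff date."""
    return [r for r in log if r["tool"] == tool and r["ts"] >= since]

cutoff = datetime(2026, 5, 5)
print(records_for_audit(audit_log, "Cursor", cutoff))
```

An auditor's question ("who used this tool on this project, and when?") becomes a filter over existing data rather than a reconstruction exercise.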

This is the same measurement foundation used for shadow AI detection and AI cost management. Governance, cost management, and productivity measurement all start with the same data: who used which AI tool, for how long, on which work.

Data Exposure Risk: The Governance Priority

The biggest governance risk is not cost; it is data. When an employee pastes client data into a personal ChatGPT account, the company may have no record of the exposure. When a developer uses an AI coding tool on a regulated codebase, the code may be processed by a third-party model with unknown data retention policies.

ATT does not inspect the content of AI interactions; it captures metadata only (application names, URLs, timestamps, project context). But that metadata is enough to answer the critical governance question: "Is AI being used on sensitive work?"

| Signal | Governance action |
|---|---|
| Personal AI account used on client project | Review data handling policy with employee |
| Unapproved coding AI on regulated codebase | Security review of tool's data retention |
| High AI usage in finance/legal department | Verify tools meet compliance requirements |
| New AI tool appears across multiple teams | Fast-track procurement review or block |
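
The table behaves like a small rule engine: each metadata signal maps to a governance action. A hypothetical sketch, with predicates and record fields invented for illustration:

```python
# Illustrative rules mirroring the table: each pairs a predicate over a
# usage record with the governance action it should trigger.
RULES = [
    (lambda r: r["account"] == "personal" and r["project_type"] == "client",
     "Review data handling policy with employee"),
    (lambda r: not r["approved"] and r["category"] == "coding"
     and r["project_type"] == "regulated",
     "Security review of tool's data retention"),
    (lambda r: r["department"] in {"finance", "legal"},
     "Verify tools meet compliance requirements"),
]

def governance_actions(record):
    """Return every governance action triggered by a single usage record."""
    return [action for predicate, action in RULES if predicate(record)]

record = {"account": "personal", "project_type": "client",
          "approved": False, "category": "coding", "department": "finance"}
print(governance_actions(record))
```

Because the rules run over metadata only, no interaction content ever needs to be inspected to produce the action list.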

The goal is not to prevent AI adoption. It is to make adoption visible so governance decisions are informed, not reactive. Companies that can identify exactly which teams use AI on regulated work can act quickly when a policy gap appears, rather than learning about exposure during an external audit.

AI Governance Frameworks: NIST, EU AI Act, and ISO 42001

Three major frameworks now define how organizations should govern AI. Each requires usage visibility as a prerequisite, and none works without knowing which tools employees actually use.

NIST AI Risk Management Framework (AI RMF 1.0). Released by the National Institute of Standards and Technology, the AI RMF organizes governance into four functions: Govern, Map, Measure, and Manage. The "Map" function requires organizations to catalog AI systems in use and assess context. The "Measure" function requires ongoing monitoring of AI system performance and impact. Both assume a live inventory of active AI tools. According to NIST, organizations should maintain continuous awareness of AI systems operating in their environment. ATT provides the automatic discovery that makes NIST's Map and Measure functions operational.

EU AI Act. The regulation classifies AI systems by risk level: unacceptable, high, limited, and minimal. High-risk AI systems require conformity assessments, technical documentation, and human oversight. For organizations with EU operations, the classification process requires knowing which AI tools are in use before any risk assessment can begin. A company cannot classify a tool it does not know about. The Act entered staged enforcement starting August 2025, with full compliance required by 2027.

ISO/IEC 42001. This international standard specifies requirements for an AI management system (AIMS). It follows the Plan-Do-Check-Act cycle and requires organizations to define the scope of AI use, establish objectives, implement controls, and monitor performance. According to ISO, the standard requires organizations to determine internal and external issues relevant to AI systems, including identification of all AI applications in operation. ATT data feeds directly into the "Check" phase by providing continuous monitoring of AI tool usage.

| Framework | Key governance requirement | ATT contribution |
|---|---|---|
| NIST AI RMF | Map all AI systems, measure impact | Automatic AI tool discovery and usage metrics |
| EU AI Act | Classify AI by risk level, maintain records | Complete tool inventory for risk classification |
| ISO 42001 | Plan-Do-Check-Act for AI management | Continuous monitoring data for the Check phase |

Without automatic visibility, compliance teams spend weeks assembling manual inventories that are outdated before they are reviewed. With ATT, the inventory is live, and each framework's requirements become a filter applied to existing data rather than a new data collection project.

Governance Maturity Model: From Ad-Hoc to Optimized

Most organizations operate at ad-hoc governance maturity, reacting to AI incidents instead of preventing them. Moving to optimized maturity requires progressing through four stages, each building on the visibility layer beneath it.

Stage 1: Ad-Hoc. No formal AI governance. Employees choose their own tools. IT discovers shadow AI only through incident reports or audit findings. According to McKinsey's State of AI survey, 44% of organizations reported at least one negative consequence from AI use in 2024, up from 34% the prior year. Ad-hoc governance means these consequences surface only after damage occurs.

Stage 2: Reactive. The organization has an AI acceptable-use policy and an approved tool list. Compliance is self-reported. Shadow AI is detected through periodic surveys or security reviews. The policy exists, but enforcement depends on employee honesty. Most companies with governance programs are at this stage.

Stage 3: Managed. AI tool usage is tracked automatically. The approved tool list is enforced against real usage data. Shadow AI is flagged within days, not quarters. Risk classification is applied to all discovered tools, not just the ones that went through procurement. ATT provides the measurement layer that makes Stage 3 possible. According to Gartner, by 2026 more than 80% of enterprises will have used generative AI APIs or deployed GenAI-enabled applications. Managed governance ensures that usage is visible, classified, and auditable.

Stage 4: Optimized. Governance data feeds directly into budget planning, risk management, and productivity measurement. The governance team recommends tool consolidations, identifies high-ROI AI workflows, and adjusts policy based on usage trends. Governance becomes a strategic function, not a compliance checkbox.

| Maturity stage | Visibility method | Response time | Risk posture |
|---|---|---|---|
| Ad-Hoc | None | Months (incident-driven) | Unknown |
| Reactive | Surveys, manual review | Weeks | Partially assessed |
| Managed | ATT automatic tracking | Days | Classified and monitored |
| Optimized | ATT + budget + productivity data | Real-time | Continuously adjusted |

The jump from Stage 2 to Stage 3 is the hardest because it requires a technology investment in automatic tracking. But it is also where the largest reduction in governance risk occurs. A company at Stage 3 knows its full AI tool surface within a week of any new adoption. A company at Stage 2 may not discover a new tool for months.

Building an AI Governance Framework With Visibility

A practical AI governance framework has four layers. Most companies have the first two but not the last two:

Layer 1: Policy. Define which AI tools are approved, restricted, or prohibited. Set data handling rules for each category. Most companies already have this.

Layer 2: Communication. Distribute the policy, train employees, and create a process for requesting new tools. Most companies do this, though enforcement lags.

Layer 3: Visibility. Know which tools employees actually use, on which projects, and how often. This is the layer most companies lack. ATT provides it.

Layer 4: Enforcement. Use visibility data to flag violations, adjust the approved list based on actual demand, and provide evidence for audits. This layer requires Layer 3 to function.

Without Layer 3, governance is aspirational. With it, governance becomes operational. Most compliance programs stall at Layer 2 because they have no automated way to verify what employees actually do after reading the policy.

Implementation Steps: Zero to Governed in 30 Days

A governance program does not need to be a six-month initiative. Teams can move from zero visibility to managed governance in 30 days using a phased approach.

Week 1: Deploy and discover. Install Rize on employee devices. ATT begins capturing AI tool metadata immediately; no configuration is needed. By the end of week one, you have a complete list of every AI tool in active use across the organization. According to Forrester, organizations that classify AI tools by risk level within the first 30 days of discovery reduce data exposure incidents by 60% compared to those that wait for quarterly reviews.

Week 2: Classify and prioritize. Compare discovered tools against your approved list. Classify each unapproved tool by risk level using the EU AI Act categories or your own framework. Prioritize review for tools with heavy usage on sensitive projects.

Week 3: Decide and communicate. For each discovered tool, make an explicit decision: approve, restrict, or prohibit. Communicate decisions to employees with clear rationale. Add newly approved tools to procurement. Establish a fast-track review process for future tool requests so employees have a path to approval that is faster than shadow adoption.

Week 4: Monitor and iterate. Set up recurring ATT reports for governance review. Configure alerts for new AI tools appearing in usage data. Establish a monthly governance review cadence. The goal is not a one-time audit. It is a continuous control that adapts as AI tool adoption evolves.

The 30-day timeline works because ATT eliminates the longest phase of traditional governance rollouts: data collection. Instead of spending 8 to 12 weeks surveying employees and reconciling vendor invoices, the governance team starts with complete usage data on day one.
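
Week 2's "classify and prioritize" step can be sketched as a scoring pass that weights usage volume by project sensitivity. The weights, record fields, and tool names here are illustrative assumptions, not a prescribed methodology:

```python
# Hypothetical sensitivity weights: heavier usage on more sensitive
# work should be reviewed first.
SENSITIVITY = {"regulated": 3.0, "client": 2.0, "internal": 1.0}

# Illustrative tools discovered in week 1 that are not on the approved list.
discovered = [
    {"tool": "UnvettedAI", "hours_per_week": 14, "project_type": "regulated"},
    {"tool": "CallSummarizerX", "hours_per_week": 30, "project_type": "internal"},
    {"tool": "SketchGen", "hours_per_week": 8, "project_type": "client"},
]

def review_priority(entry):
    """Score a discovered tool: usage volume times project sensitivity."""
    return entry["hours_per_week"] * SENSITIVITY[entry["project_type"]]

for entry in sorted(discovered, key=review_priority, reverse=True):
    print(entry["tool"], review_priority(entry))
```

Note that raw usage volume alone would put `CallSummarizerX` first; weighting by sensitivity surfaces the regulated-codebase tool instead, which matches the data-exposure priority discussed earlier.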

Connect Governance to Cost and Productivity

AI governance is not a standalone compliance exercise. The same visibility data that powers governance also feeds AI budget planning and AI ROI measurement.

A governance review that finds 12 unapproved AI tools in use is also a cost finding: those tools represent unbudgeted spend. A governance review that finds heavy AI usage on client work is also a productivity finding: that usage either accelerates delivery or introduces rework.

According to Deloitte's State of AI 2026 report, 93% of AI budgets go to technology and only 7% toward the people and workflows expected to drive value. Governance teams that have visibility into actual usage can redirect that imbalance, approving tools that employees already use productively and cutting tools that exist only on a procurement spreadsheet.

The companies that govern AI well will not be the ones with the strictest policies. They will be the ones with the clearest picture of what is actually happening.


Jonathan Wu, Head of Growth

Jonathan leads growth at Rize, focusing on AI productivity measurement, go-to-market strategy, and helping teams prove ROI on their AI investments with time data.

Frequently Asked Questions

What is AI governance?

AI governance is the set of policies, processes, and controls an organization uses to manage AI adoption, usage, and risk. It covers which AI tools employees may use, how data flows through those tools, who has access, and how the organization measures compliance. Governance requires visibility: you cannot enforce policies on tools you do not know employees are using.

Why do AI governance frameworks fail without usage visibility?

Most AI governance frameworks assume the organization knows which tools are in use. But 78% of workers use unapproved AI tools according to HelpNetSecurity. Without automatic visibility into which employees use which AI tools on which projects, governance policies are enforced against an incomplete inventory. ATT (Agent Token Tracking) closes this gap by detecting all AI tools automatically.

What does the EU AI Act require of AI tool inventories?

The EU AI Act requires organizations to maintain inventories of AI systems in use, document risk assessments, and ensure transparency in AI-assisted decisions. For companies with EU employees or customers, this means knowing every AI tool in the workflow, not just the ones IT approved. Automatic tracking with ATT provides the usage inventory the regulation requires.

How is AI governance different from AI cost management?

AI cost management (ACO) tracks how much you spend on AI infrastructure. AI governance tracks which tools are in use, who uses them, whether usage complies with policy, and whether data flows meet regulatory requirements. Cost management is a subset of governance. Both require visibility into actual usage, which is why ATT serves as the foundation for both.

How does ATT support AI governance?

ATT (Agent Token Tracking) captures which employees use which AI tools, for how long, and on which projects, automatically and without surveys. This creates the usage inventory that governance and compliance teams need to enforce approved tool lists, flag shadow AI, audit data exposure risk, and meet regulatory requirements like the EU AI Act.
