Sixty-seven percent of enterprises estimate AI ROI instead of measuring it. The gap is not laziness. It is infrastructure. Most AI cost tools track tokens and API calls but not the people using them. Without per-employee attribution, ROI is a guess dressed up as a metric.
Key Takeaway
AI ROI requires three inputs: AI tool cost per employee, net hours saved per project (gross minus 40% rework), and loaded hourly cost. Most companies have the first but not the second. ATT (Agent Token Tracking) closes the gap by capturing per-employee AI usage automatically.
The 67% Estimation Problem
InformationWeek reports that 67% of enterprises still estimate AI ROI rather than measuring it. ModelOp calls this the "AI value illusion": spending is tracked, returns are guessed.
The reason is a measurement gap. FinOps tools like Helicone and Langfuse track API-level spend. HR systems track headcount. Nobody connects the two automatically. A CFO can tell you total AI spend and total headcount, but not which employee's AI usage produced which project outcome.
That leaves companies estimating: "We spent $X on AI, revenue grew Y%, so AI must have contributed Z%." That math does not hold up under scrutiny.
The Actual AI ROI Formula
AI ROI per employee equals the net hours saved times loaded hourly cost, minus AI tool cost, divided by AI tool cost. Net hours means gross savings minus the 40% rework discount that InformationWeek documents across enterprise AI deployments.
Monthly AI ROI = ((net hours saved × loaded hourly cost) − AI tool cost) / AI tool cost
Here is the math for a developer using Copilot at $30/month who saves 3 gross hours per week (per Worklytics):
| Input | Gross model | Net model (40% rework) |
|---|---:|---:|
| Weekly hours saved | 3.0 | 1.8 |
| Monthly hours saved | 12.99 | 7.79 |
| Value at $75/hr loaded cost | $974 | $584 |
| AI tool cost | $30 | $30 |
| Monthly ROI | 31x | 18x |
The net ROI is still strong at 18x. The point is that finance should use the net number, and they need per-employee data to calculate it.
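The formula and table above can be sketched as a short calculation. The inputs ($30/month tool cost, 3 gross hours per week, $75/hr loaded cost, the 40% rework discount, and 4.33 average weeks per month) come from the article; the function name is illustrative.

```python
WEEKS_PER_MONTH = 4.33  # average weeks in a calendar month

def monthly_roi(gross_hours_per_week, loaded_hourly_cost, tool_cost, rework_rate=0.40):
    """Monthly AI ROI: ((net hours saved * loaded hourly cost) - tool cost) / tool cost."""
    net_hours = gross_hours_per_week * (1 - rework_rate) * WEEKS_PER_MONTH
    value_recovered = net_hours * loaded_hourly_cost
    return (value_recovered - tool_cost) / tool_cost

# Copilot developer from the table: $30/month, 3 gross hours/week, $75/hr loaded cost
print(round(monthly_roi(3.0, 75, 30, rework_rate=0.0)))   # gross model: 31
print(round(monthly_roi(3.0, 75, 30, rework_rate=0.40)))  # net model: 18
```

Running both the gross and net models side by side makes the 40% discount an explicit parameter rather than a silent assumption.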
Why Vendor Dashboards Cannot Show ROI
Vendor dashboards answer "how much did we use?" They cannot answer "what did we get back?"
GitHub shows Copilot suggestion acceptance rates. OpenAI shows token usage. Anthropic shows conversation counts. None of them show how much human time was saved on a specific project, or whether that saved time produced better output.
| What vendors report | What ROI requires |
|---|---|
| Token usage per account | Hours saved per employee per project |
| Suggestion acceptance rate | Net time saved after rework |
| Monthly active users | AI time vs non-AI time on same work |
| Total API cost | Cost per project (AI + human blended) |
The Federal Reserve Bank of Atlanta reports the average company spends $2,068 per employee on AI in 2026. The top 10% spend $2,800 or more. But without per-employee attribution, companies cannot tell whether the heavy spenders are the most productive or the most wasteful.
The Rework Discount Most Teams Ignore
Anthropic's research across 100,000 conversations shows AI reduces task completion time by 80% on average. But InformationWeek reports that 40% of those savings are offset by rework and corrections: debugging AI-generated code, reviewing suggestions, fixing edge cases.
That makes the net gain closer to 48%, not 80%. Still significant. But a CFO who budgets around 80% savings will overspend, and a CFO who hears "maybe 48%" will underinvest. The fix is measurement, not better guessing.
Rework varies by task type and tool:
| Task type | Typical rework rate | Net time gain |
|---|---|---|
| Boilerplate code generation | 15-20% | ~65% |
| Novel architecture work | 50-60% | ~25% |
| Content drafting | 30-40% | ~45% |
| Data analysis | 20-30% | ~55% |
Teams that measure rework per task type can allocate AI tools where the net gain is highest, instead of deploying blanket licenses and hoping for the best.
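The rework arithmetic is simple enough to verify directly. A minimal sketch, using the article's 80% gross savings and 40% rework figures (the function name is illustrative):

```python
def net_time_gain(gross_gain, rework_rate):
    """Net fraction of task time saved after subtracting rework.

    gross_gain:  fraction of task time AI removes before corrections
    rework_rate: fraction of the gross savings lost to fixing AI output
    """
    return gross_gain * (1 - rework_rate)

# The 80% gross savings with the 40% rework discount cited above:
print(round(net_time_gain(0.80, 0.40), 2))  # 0.48 -> the ~48% net gain
```

Swapping in task-specific gross savings and rework rates gives the per-task net gains that drive tool allocation decisions.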
Per-Employee Attribution With ATT
ATT (Agent Token Tracking) captures per-employee AI tool usage automatically. Unlike FinOps tools that instrument API endpoints, ATT records which person used which AI tool, for how long, on which project, without manual logging, surveys, or browser extensions.
ATT produces the data that makes AI ROI calculable:
| ATT captures | ROI use |
|---|---|
| Hours per AI tool per employee | Numerator of ROI formula |
| Project-level AI time | Per-project ROI comparison |
| Shadow AI tool discovery | True cost denominator |
| AI time vs total work time | Utilization and efficiency ratios |
For example, if a team of 10 developers uses Copilot, Cursor, and ChatGPT (all tracked in the AI tools rankings), ATT shows each developer's time in each tool per project. A manager can see that Developer A saves 4 net hours per week with Copilot on the billing project, while Developer B uses Copilot for 15 minutes per week. That is a seat reallocation decision, not a tool cancellation decision.
Rize's ATT connects AI tool time with project work time automatically. This is the same foundation used for shadow AI detection and AI cost management.
A Simple AI ROI Review Process
Run this quarterly, or monthly for teams spending more than $5,000/month on AI tools:
- Collect AI tool costs. Aggregate all AI subscriptions, API spend, and compute costs per team.
- Capture per-employee AI time. Use ATT to measure which employees use which AI tools, for how many hours, on which projects.
- Estimate rework rates. Track correction time per task type. Start with the 40% default and adjust as you gather data.
- Calculate net hours saved. Gross AI-assisted hours minus rework hours equals net hours saved.
- Compute ROI per team and per project. Net hours saved times loaded hourly cost, minus AI tool cost. Compare across teams.
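The five steps above reduce to a small calculation once the per-team inputs exist. A sketch using the figures from the decision table below (team names and numbers are the article's; the value-to-cost multiple is the ROI convention that table uses):

```python
LOADED_HOURLY = 75  # blended loaded hourly cost from the article

teams = {  # team: (monthly AI cost in $, net hours saved per month)
    "Engineering": (3200, 142),
    "Marketing": (800, 38),
    "Design": (450, 12),
    "Support": (200, 28),
}

for team, (cost, net_hours) in teams.items():
    value = net_hours * LOADED_HOURLY
    roi = value / cost  # value-to-cost multiple
    print(f"{team}: ${value:,} recovered, {roi:.1f}x")
```

Running this quarterly with fresh ATT data turns the review process from a spreadsheet exercise into a repeatable report.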
The output is a decision table:
| Team | AI cost/mo | Net hrs saved/mo | Value at $75/hr | ROI (value ÷ cost) |
|---|---:|---:|---:|---:|
| Engineering | $3,200 | 142 | $10,650 | 3.3x |
| Marketing | $800 | 38 | $2,850 | 3.6x |
| Design | $450 | 12 | $900 | 2.0x |
| Support | $200 | 28 | $2,100 | 10.5x |
That table tells a CFO where to increase AI investment and where to reallocate. Support gets 10.5x ROI at low spend, likely a scaling opportunity. Design gets 2.0x, worth investigating whether the tools match the workflow.
ROI Measurement by Stakeholder
Different stakeholders need different views of AI ROI. A CFO, a CTO, and a team lead each ask a different question, and the measurement framework must answer all three.
The CFO view: cost-to-value ratio. A CFO needs AI ROI expressed in dollars. The question is: "For every dollar we spend on AI tools, how many dollars of productive output do we get back?" That requires total AI cost (subscriptions, API spend, compute) divided by the dollar value of net hours saved. According to McKinsey's State of AI survey, organizations that have moved beyond the pilot phase and are seeing revenue impact from AI have increased from 8% to 13% year over year. The CFO needs to know whether their company is in that 13% or still burning budget on pilots.
| CFO metric | Data source | ATT contribution |
|---|---|---|
| Total AI spend | Procurement + API billing | Discovers shadow AI spend outside procurement |
| Dollar value recovered | Net hours saved × loaded cost | Per-employee time in AI tools by project |
| Cost per productive AI hour | Total spend / net hours saved | Denominator from ATT usage data |
| Budget forecast accuracy | Actual vs projected ROI | Continuous measurement vs quarterly estimates |
The CTO view: productivity per tool. A CTO cares about which tools accelerate engineering output and which create drag. The question is: "Which AI tools make my teams ship faster, and which ones generate rework?" That requires per-tool comparison of time saved vs rework created. ATT shows time in each AI tool by developer. Combined with sprint velocity or cycle time data, the CTO can compare Copilot ROI against Cursor ROI against Claude Code ROI for the same team.
The team lead view: utilization and skill gaps. A team lead needs to know which people on the team use AI effectively and which need support. The question is: "Are my team members getting value from the AI tools we pay for?" ATT shows per-developer usage patterns. A developer using Copilot for 8 hours a week is different from one who opens it once a month. That usage gap is often a training opportunity, not a performance issue.
The three views feed the same data pipeline. ATT provides the raw usage metrics. The CFO applies cost data. The CTO layers in tool comparisons. The team lead focuses on individual patterns. One measurement system, three decision contexts.
Common ROI Calculation Mistakes
Most AI ROI calculations contain at least one of five common errors. Each one inflates or distorts the number in ways that lead to bad budget decisions.
Mistake 1: Using gross savings instead of net. The 40% rework discount exists for a reason. Teams that report "AI saved us 100 hours this month" without subtracting correction time are overstating by 40 hours. According to InformationWeek, the rework rate is consistent across industries and tool types.
Mistake 2: Counting adoption as ROI. "90% of our team uses AI tools" is an adoption metric, not an ROI metric. A team with 90% adoption and zero measured time savings has spent money without proving return. The Federal Reserve Bank of Atlanta found that the top 10% of companies spend $2,800 per employee on AI while the median spends under $200. Without ROI data, neither group knows if their spending level is right.
Mistake 3: Ignoring shadow AI in the cost denominator. If your ROI calculation includes only approved tool costs but your employees also spend on personal AI subscriptions, the cost denominator is too low. That makes ROI look better than it is. ATT discovers shadow AI tools so the full cost base enters the formula.
Mistake 4: Using vendor benchmarks instead of internal data. "GitHub says Copilot saves 55% of coding time" is a vendor claim, not your team's reality. Internal measurement may show 20% or 60%, depending on codebase, team seniority, and task mix. According to Anthropic's productivity research, AI time savings vary significantly by task type, from 80% reduction on routine tasks to minimal gain on novel work.
Mistake 5: Averaging ROI across the org. Organization-wide averages hide the distribution. One team at 15x ROI and another at 0.5x ROI average to 7.75x, which tells the CFO nothing useful. Per-team and per-project ROI surfaces the actual spread so budget decisions target the right groups.
| Mistake | Impact | Fix |
|---|---|---|
| Gross instead of net | Overstates ROI by ~40% | Apply task-specific rework rates |
| Adoption as ROI | Confuses activity with value | Measure hours saved, not login counts |
| Missing shadow AI costs | Understates true AI spend | Use ATT to discover all tools |
| Vendor benchmarks | May not match your workflow | Measure internal data with ATT |
| Org-wide averages | Hides team-level variance | Report ROI per team and per project |
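Mistake 5's arithmetic is worth seeing concretely. Using the article's two-team example (team names are illustrative):

```python
team_roi = {"team_a": 15.0, "team_b": 0.5}  # ROI multiples per team

# The org-wide average looks healthy but describes neither team:
naive_average = sum(team_roi.values()) / len(team_roi)
print(naive_average)  # 7.75

# Per-team reporting surfaces what the average hides:
underperformers = [team for team, roi in team_roi.items() if roi < 1.0]
print(underperformers)  # ['team_b'] -> ROI below break-even
```

The average answers "is AI working here?" with a number that is true for no team, which is why the fix is per-team and per-project reporting.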
From Estimation to Measurement
PwC found that 20% of companies capture 74% of AI-driven returns. The differentiator is measurement discipline, not spending level. The companies that measure AI ROI per employee and per project can reallocate spend to high-yield workflows. The companies that estimate cannot.
The progression is straightforward: ACO tells you what you spent. ATT tells you who spent it and on what. AYO (AI yield optimization) tells you whether it was worth it. Most companies are stuck at step one.
If your AI ROI conversation still starts with "we think" instead of "the data shows," the missing piece is per-employee attribution. ATT provides that automatically.
Why ROI Measurement Changes Budget Decisions
The difference between estimated and measured ROI is not academic. It changes where money goes.
According to Deloitte's State of AI 2026, 93% of AI budgets go to technology and only 7% toward the people and workflows expected to drive value. That split persists because most companies cannot attribute returns to specific teams or projects.
With measured ROI, a CFO can see that engineering gets 3.3x return while support gets 10.5x. That data justifies increasing AI spend in support and investigating why engineering's return is lower. Without measured ROI, every team gets the same blanket allocation and nobody knows which investments are working.
The FinOps Foundation reports that 98% of organizations practice some form of AI cost management. But cost management without ROI measurement is like tracking mileage without knowing where you drove. ATT adds the destination: which projects, which employees, which outcomes.
According to Gartner, by 2026 more than 80% of enterprises will have used generative AI APIs or deployed GenAI-enabled applications. That means the measurement gap is widening, not closing. More tools, more employees, more spend, but no more visibility into per-employee returns. The CFOs who build ROI measurement now will be the ones who can defend their AI budgets when the board asks for proof of value.
The transition from estimation to measurement is not a technology problem. It is a data architecture problem. The data exists in AI vendor dashboards, HR systems, project management tools, and finance ledgers. The missing piece is connecting employee identity across those systems. ATT provides that connection by capturing per-employee AI usage at the device level, which is the one layer that touches all tools regardless of vendor.
Related Reading
- AI Productivity Metrics. Track AI tool usage per employee automatically with ATT
- AI Cost Management: ACO to AYO. The full framework from token tracking to AI yield optimization
- GitHub Copilot ROI: What the Data Shows. Apply the rework discount to real Copilot usage data
- AI Spending Per Employee Benchmark. The $2,068 benchmark and the 14x gap between companies
- Shadow AI: The $412K Hidden Cost. Find the AI spending your ROI formula is missing
Start tracking time automatically
Join thousands of professionals who stopped guessing where their time goes. Free for 7 days.
“Rize has been a no-brainer for me.” — Ali Abdaal
