AI Adoption Trends: 2024-2026 Data


Jonathan Wu · May 13, 2026 · 9 min read

AI adoption between 2024 and 2026 followed three distinct phases: experimentation, adoption, and infrastructure. Each phase shows up in the data differently. GitHub star growth reveals what developers are building with. Enterprise spending data shows where budgets are moving. And time tracking data from Rize's 30,000 active users shows which AI tools are actually used day to day -- not just purchased.

Key Takeaway

AI adoption is no longer about whether teams use AI. It is about whether they measure it. GitHub star data shows coding tools growing at 3,100 stars per week, agents at 1,700, and observability at 650 -- the monitoring layer is the fastest-growing category by percentage. Enterprise AI spending hit $2,068 per employee in 2026 (Federal Reserve Bank of Atlanta), up 50% YoY. The companies that track AI usage per employee will capture disproportionate returns. The ones that estimate will keep spending without a feedback loop.

Three Phases: Experiment, Adopt, Build

AI adoption from 2024 to 2026 moved through three phases, each visible in different data sources: experimentation (2024), formal adoption (2025), and infrastructure buildout (2026).

2024 -- Experimentation. Teams trialed ChatGPT, Copilot, and Midjourney with no formal budget. Usage was individual, untracked, and often invisible to IT. Most companies had no AI line item in their budget. The hype cycle peaked with AutoGPT hitting 100,000+ GitHub stars in months, but few production deployments followed.

2025 -- Adoption. AI tools got formal budgets. Microsoft Copilot rolled out enterprise licensing at $30/seat. ChatGPT Enterprise launched at $60/seat. According to the Federal Reserve Bank of Atlanta, average AI spending reached $1,358 per employee. Companies moved from "let people try AI" to "deploy AI to teams."

2026 -- Infrastructure. Companies now build AI into their operating stack. Agent frameworks (CrewAI, LangChain) power automated workflows. Observability tools (Langfuse, Arize Phoenix) monitor AI performance. And the measurement question shifted from "are people using AI?" to "is AI producing returns per employee?" AI spending hit $2,068 per employee -- up 50% from 2025 -- with Gartner projecting $2.52 trillion in global AI spending.

The phase shift matters because it changes what you should measure. In the experiment phase, login counts were enough. In the adoption phase, seat utilization mattered. In the infrastructure phase, only per-employee yield data -- hours saved per AI dollar spent -- tells you whether the investment is working.
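The infrastructure-phase yield metric reduces to simple division. A minimal sketch -- the function name and the 120-hour figure are illustrative, not from Rize's product:

```python
def ai_yield(hours_saved: float, ai_spend_usd: float) -> float:
    """Hours saved per AI dollar spent -- the infrastructure-phase metric."""
    if ai_spend_usd <= 0:
        raise ValueError("AI spend must be positive")
    return hours_saved / ai_spend_usd

# Illustration: 120 hours saved against the $2,068 per-employee average
print(f"{ai_yield(120, 2068):.3f} hours saved per dollar")  # 0.058
```

The point of the metric is the feedback loop: tracked hours saved in the numerator, actual spend in the denominator, compared period over period.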

GitHub Stars as a Developer Adoption Proxy

GitHub star growth is a leading indicator of developer adoption, and between April and May 2026 (weeks 17-20), AI coding tools averaged 3,100 new stars per week across eight projects -- the fastest-growing category by absolute volume.

Stars are not users. But star growth correlates with production deployments 6-12 months later. When developers star a repo, they are bookmarking it for future use. The pattern in W17-W20 data from Rize's AI tools tracker reveals which categories are accelerating and which are plateauing.

| Tool | Category | Total Stars | Weekly Growth | Signal |
|---|---|---|---|---|
| AutoGPT | Agents | 184,170 | +193 | Plateauing |
| LangChain | Agents | 136,424 | +690 | Steady |
| Claude Code | Coding | 122,519 | +2,249 | Accelerating |
| Cline | Coding | 61,622 | +277 | Steady |
| CrewAI | Agents | 51,163 | +587 | Accelerating |
| Aider | Coding | 44,656 | +346 | Steady |
| Continue | Coding | 33,109 | +153 | Steady |
| Langfuse | Observability | 26,984 | +445 | Accelerating |

Claude Code is the standout: 2,249 new stars per week, 3.3x the next highest mover (LangChain at 690). It had 10,772 open issues at the time of measurement, suggesting not just interest but active production usage generating bug reports and feature requests.

AutoGPT tells the opposite story. At 184,170 total stars it remains the most-starred AI project, but weekly growth has slowed to 193 -- a 97% decline from its 2023 peak. The autonomous agent hype gave way to practical coding assistants that solve immediate problems.
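The Signal labels in the table can be approximated from the two numeric columns by treating weekly growth as a share of the existing star base. The thresholds below are reverse-engineered guesses that happen to reproduce all eight labels, not a published definition:

```python
def classify(weekly_growth: int, total_stars: int) -> str:
    """Label a repo's momentum from weekly stars relative to its base."""
    rate = weekly_growth / total_stars
    if rate < 0.002:       # under 0.2% of the base per week
        return "Plateauing"
    if rate < 0.010:       # between 0.2% and 1.0% per week
        return "Steady"
    return "Accelerating"  # over 1% of the base per week

print(classify(193, 184_170))    # AutoGPT -> Plateauing
print(classify(2_249, 122_519))  # Claude Code -> Accelerating
print(classify(445, 26_984))     # Langfuse -> Accelerating
```

Normalizing by the base is what makes AutoGPT's +193 read as a plateau while Langfuse's +445 reads as acceleration.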

Category Breakdown: Coding vs. Agents vs. Observability

AI tool adoption splits into three categories with distinct growth trajectories: coding tools lead by volume, agent frameworks lead by diversity, and observability tools lead by growth rate.

| Category | Projects Tracked | Combined Stars | Weekly Growth | Growth Rate |
|---|---|---|---|---|
| Coding | 8 | ~308,000 | +3,100/wk | ~1.0%/wk |
| Agents | 7 | ~282,000 | +1,700/wk | ~0.6%/wk |
| Observability | 6 | ~55,000 | +650/wk | ~1.2%/wk |
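As a sanity check, the growth-rate column follows directly from the other two: weekly growth divided by combined stars, a per-week share of the base.

```python
# Category totals from the table above (combined stars, weekly growth)
categories = {
    "Coding":        (308_000, 3_100),
    "Agents":        (282_000, 1_700),
    "Observability": (55_000,    650),
}
for name, (stars, weekly) in categories.items():
    print(f"{name}: {100 * weekly / stars:.1f}% of base per week")
```

Observability's smaller base is exactly why it posts the highest percentage rate despite the lowest absolute growth.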

Coding tools (Claude Code, Cline, Aider, Continue, Cursor, and others) dominate absolute growth. Developers adopt coding assistants first because the value loop is immediate: write code, get suggestions, ship faster. These tools require no organizational buy-in. A single developer can start using them today.

Agent frameworks (LangChain, CrewAI, AutoGPT, Composio, E2B) represent the next wave. Agents automate multi-step workflows -- data pipelines, customer support triage, content generation. But they require infrastructure: API keys, orchestration logic, error handling. That is why agent growth is slower than coding tools. Adoption depends on team-level decisions, not individual ones.

Observability tools (Langfuse, Arize Phoenix, Portkey Gateway) are the smallest category by stars but the fastest-growing by percentage. This is the signal that matters most for the adoption curve. When companies invest in monitoring AI, they have moved past experimentation and into production. You do not observe something you are testing. You observe something you depend on.

Track which AI tools your team actually uses

Rize captures AI tool usage per employee automatically. No surveys, no login counts. See adoption depth, tool overlap, and time spent per tool per project.

Start Free Trial

The Observability Signal: Monitoring AI Means Running AI

Langfuse added 445 GitHub stars per week in W17-W20 -- the fourth-fastest AI project overall and the fastest by category growth rate. Companies do not deploy LLM observability unless they are running LLMs in production.

The observability category tells you something the coding and agent categories do not: where AI has become infrastructure. A developer stars Cursor because it might be useful. A team deploys Langfuse because they have agents running in production and need to monitor latency, cost, and accuracy.

Three observability tools tracked in the W17-W20 window:

| Tool | Stars | Weekly Growth | What It Monitors |
|---|---|---|---|
| Langfuse | 26,984 | +445 | LLM traces, costs, evals |
| Portkey Gateway | 11,675 | +98 | API routing, fallbacks, spend |
| Arize Phoenix | 9,613 | +98 | Model performance, drift |

Combined, these three tools added 641 stars per week. That is smaller than any single top coding tool. But the growth rate -- 1.2% weekly -- exceeds both coding (1.0%) and agents (0.6%). When the monitoring layer grows faster than the thing it monitors, adoption has crossed from discretionary to operational.

This pattern mirrors what happened with cloud computing. AWS launched in 2006. Datadog (cloud monitoring) launched in 2010 and grew faster than most cloud services it observed. The monitoring company becomes essential when the underlying technology becomes infrastructure. The same transition is happening with AI right now.

For teams measuring AI adoption internally, the equivalent question is: do you have observability on your AI usage? If the answer is "we check vendor dashboards monthly," you are in the 2025 adoption phase. If the answer is "we track per-employee AI tool usage and yield automatically," you are in the 2026 infrastructure phase.

What Rize Users Tell Us About Real-World AI Adoption

Rize tracks 30,000 active users' work patterns automatically. In May 2026, AI tools appeared across every category of work -- from code editors to meeting assistants to autonomous agents -- with ChatGPT, Grok, and Manus as the most widely used.

Rize's time tracking data captures which applications and websites users spend time in, without surveys or self-reporting. This data shows AI adoption at the individual level -- not enterprise deployments or GitHub stars, but actual time spent in AI tools during work hours.

| AI Tool | Active Users (Sample) | Category |
|---|---|---|
| ChatGPT (app + web) | 46 | LLM Chat |
| Grok | 40 | LLM Chat |
| Perplexity | 30 | AI Search |
| Manus (app + web) | 45 | AI Agent |
| Copilot (all variants) | 26 | AI Assistant |
| Cursor | 18 | AI Code Editor |

Three patterns stand out in the data.

Multi-tool adoption is the norm. Users do not pick one AI tool. They use ChatGPT for general queries, Cursor or Copilot for coding, Perplexity for research, and Manus for autonomous tasks. The average AI-active user in the Rize dataset touches 2-3 distinct AI tools per week.

Agent tools are arriving. Manus, an autonomous AI agent, showed 45 combined active users (app and web) -- more than Cursor (18) or Copilot (26). This lines up with the GitHub data: agent frameworks are the second-fastest growing category by stars, and real users are beginning to adopt agent-style tools in daily workflows.

Shadow AI is real. Rize captures tools that never appear in IT's procurement system. Users running Grok (40 active users), Lovable (30 users), or LM Studio (6 users) are adopting these tools without formal approval. According to HelpNetSecurity, shadow AI costs companies an average of $412,000 per year. Without automatic time tracking, these tools stay invisible in adoption statistics.

Enterprise Spending: The 13x Acceleration

AI token spend per firm is up 13x since January 2025, according to Ramp's public data, and four of the top ten trending software vendors are now AI inference platforms.

That 13x is not AI budgets growing 13x. It is the amount companies spend on API tokens and inference compute -- the variable cost of actually using AI, not just licensing it. When Ramp's Economics Lab tracks vendor spending trends and four of the top ten are AI inference providers, it means companies have moved from "buy seats" to "run workloads."

The Federal Reserve Bank of Atlanta's survey puts the per-employee figure at $2,068 for 2026, up from $1,358 in 2025 -- a 50% year-over-year increase. But the distribution is extreme:

| Spending Segment | AI Spend Per Employee | Share of Companies |
|---|---|---|
| Top 10% | $2,800+ | ~10% |
| Average | $2,068 | — |
| Median | Under $200 | ~50% |

The 14x gap between the top 10% and the median means half of all companies are still in the experimentation phase -- spending less than $200 per employee on AI while the leaders spend $2,800+. This is where the adoption curve bends. Companies that track per-employee AI spend and yield will separate from the ones that estimate.
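Both headline figures fall straight out of the Atlanta Fed numbers above; the exact year-over-year increase is about 52%, reported as "up 50%":

```python
prev_year, this_year = 1_358, 2_068   # per-employee AI spend, 2025 vs. 2026
top_decile, median = 2_800, 200       # distribution endpoints from the table

yoy = 100 * (this_year - prev_year) / prev_year
print(f"YoY growth: {yoy:.0f}%")                          # 52%
print(f"Top-10% vs. median gap: {top_decile / median:.0f}x")  # 14x
```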

For a full breakdown of AI spending benchmarks by industry, team size, and ROI calculation, see our AI spending per employee benchmark analysis.

2026 H2 Predictions from the Trend Lines

Three predictions for the second half of 2026, grounded in the data above rather than speculation: coding assistants consolidate, observability becomes a budget line, and AI measurement tools become the bottleneck.

Prediction 1: Coding assistant consolidation. Claude Code's 2,249 stars per week is 6.5x the next coding tool (Aider at 346). Cursor's growth has slowed to 35 stars per week. The pattern suggests developers are consolidating around fewer, better coding tools rather than spreading across many. Expect 2-3 winners in AI coding by year-end, not 8-10.

Prediction 2: Observability becomes a standard budget line. At a 1.2% weekly growth rate, AI observability tools are on track to grow their combined star count by more than 40% by year-end. Companies deploying agents cannot afford to run them blind. Monitoring AI cost, latency, and accuracy will become as standard as monitoring uptime for web services. Langfuse at 27,000 stars is where Datadog was in its early years.

Prediction 3: AI measurement becomes the bottleneck. Deloitte's State of AI 2026 found that 93% of AI budgets go to technology and only 7% to measurement. PwC reports that 20% of companies capture 74% of AI-driven returns. The correlation is clear: companies that invest in measuring AI usage -- per employee, per project, per dollar -- will be the 20% that capture the returns. The other 80% will keep buying tools without knowing whether they work.

This is where AI efficiency measurement fits the adoption curve. The experiment phase needed tools. The adoption phase needed budgets. The infrastructure phase needs measurement. Tracking which AI tools each employee uses, for how long, and with what output is the missing layer between spending $2,068 per employee and knowing whether that spend produces returns.

Methodology

All data in this article comes from three sources: GitHub star tracking, Rize usage data, and published third-party research with named sources.

GitHub star data. Tracked weekly (W17-W20, April-May 2026) across 35+ AI projects in four categories: coding, agents, observability, and infrastructure. Stars are a measure of developer interest and intent, not production adoption. Weekly growth rates are calculated as the difference between consecutive weekly snapshots. Category totals are rounded to the nearest thousand.
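A minimal sketch of this snapshot-diff method. The `star_count` helper reads the public GitHub REST API's `stargazers_count` field (a real endpoint, though tracking 35+ repos at scale would want authentication and rate-limit handling); the snapshot values in the example are illustrative:

```python
import json
import urllib.request

def star_count(repo: str) -> int:
    """Current stargazer count for an 'owner/name' repo (requires network)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["stargazers_count"]

def weekly_growth(snapshots: list[int]) -> list[int]:
    """Difference between consecutive weekly star-count snapshots."""
    return [b - a for a, b in zip(snapshots, snapshots[1:])]

# Three weekly snapshots in the Langfuse range -> two growth figures
print(weekly_growth([26_094, 26_539, 26_984]))  # [445, 445]
```

One snapshot per repo per week is enough; the growth series is derived, never stored.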

Rize usage data. Captured from Rize's automatic time tracking across 30,000 active users. AI tool usage is identified by application name and website domain. User counts represent a point-in-time sample from May 2026 and are not annualized. Usage data reflects individual adoption patterns, not enterprise-wide deployments.

Third-party sources. Enterprise spending data from the Federal Reserve Bank of Atlanta. Global AI spending projections from Gartner. AI budget allocation data from Deloitte. AI ROI distribution from PwC. Shadow AI costs from HelpNetSecurity. AI token spend trends from Ramp's public Economics Lab data.

No statistics in this article are fabricated or extrapolated beyond the stated sources. Growth percentages and rates are calculated directly from the underlying data.

For a complementary view of how these adoption trends play out across sectors, see our AI adoption by industry breakdown. For the per-employee cost implications of rising AI adoption, see our AI spending per employee benchmark. And for teams tracking which specific AI tools employees pair together in daily workflows, our AI tool pairings data shows the five most common combinations from 30,000 users.

Start tracking time automatically

Join thousands of professionals who stopped guessing where their time goes. Free for 7 days.

“Rize has been a no-brainer for me.” — Ali Abdaal

Jonathan Wu, Head of Growth

Jonathan leads growth at Rize, focusing on AI productivity measurement, go-to-market strategy, and helping teams prove ROI on their AI investments with time data.

Frequently Asked Questions

How fast is AI adoption growing in 2026?

AI adoption is growing at infrastructure speed in 2026, not just tool speed. GitHub star growth for AI coding tools averages 3,100 new stars per week across eight major projects. AI agent frameworks add 1,700 stars per week. AI observability tools -- the monitoring layer -- add 650 stars per week and are the fastest-growing category by percentage. Enterprise AI spending per employee reached $2,068 in 2026 according to the Federal Reserve Bank of Atlanta, up 50% from 2025.

What is the AI adoption rate in 2026?

The AI adoption rate in 2026 varies by measure. Microsoft reports 75% of knowledge workers use AI tools at work. Rize tracking data from 30,000 active users shows AI tools appearing in daily workflows across ChatGPT (46 active users in a sample), Grok (40), Perplexity (30), Cursor (18), and Copilot (26). Enterprise AI spending is up 13x per firm since January 2025 according to Ramp public data. The gap is no longer access but usage depth and measurement.

What were the three phases of AI adoption from 2024 to 2026?

AI adoption from 2024 to 2026 followed three phases: experimentation (2024), where teams trialed ChatGPT and Copilot with no budget or measurement; adoption (2025), where AI tools got formal budgets, seat licenses, and team rollouts; and infrastructure (2026), where companies deploy AI agents, build observability stacks, and measure per-employee yield. The infrastructure phase is marked by monitoring tools like Langfuse growing faster by percentage than the AI tools they observe.

What is the most widely used AI tool in 2026?

ChatGPT remains the most widely used AI tool in 2026. Rize time tracking data from 30,000 active users shows 46 users actively spending time in ChatGPT during a May 2026 sample, with ChatGPT Atlas (the desktop app) logging 26 hours across 27 users. Combined with web and app usage, ChatGPT is the highest-adoption AI tool by user count in the Rize dataset, followed by Grok (40 users) and Perplexity (30 users).

What is the fastest-growing AI tool in 2026?

Claude Code is the fastest-growing AI tool by GitHub stars in 2026, adding 2,249 stars per week to reach 122,519 total as of May 2026. LangChain adds 690 stars per week, CrewAI adds 587, and Langfuse (observability) adds 445. AutoGPT, the 2023 breakout at 184,000 total stars, has slowed to 193 new stars per week -- a sign that developer attention has shifted from autonomous agents to practical coding assistants and multi-agent frameworks.

How much do companies spend on AI per employee?

The average company spends $2,068 per employee on AI in 2026 according to the Federal Reserve Bank of Atlanta, up 50% from $1,358 in 2025. But the distribution is extreme: the median company spends under $200 while the top 10% spend $2,800 or more. Ramp public data shows AI token spend per firm is up 13x since January 2025, with AI inference platforms now dominating vendor spending charts.

What is the most telling AI productivity statistic?

The most telling AI productivity statistic is not adoption rate but usage depth. Rize tracking data shows the average AI tool session lasts under 15 minutes. AI observability tools are growing faster by percentage than any AI tool category, which suggests companies are shifting from asking whether AI is adopted to asking whether it produces returns. The 93/7 split -- 93% of AI budgets go to tools, 7% to measurement -- explains why 80% of companies still cannot prove AI ROI.

Is AI adoption speeding up or slowing down?

AI adoption is speeding up at the infrastructure layer while individual tool growth is normalizing. AutoGPT peaked at 184,000 stars but adds only 193 per week. Claude Code, launched later, adds 2,249 per week. AI observability tools are the fastest-growing category by percentage. Enterprise AI spending is up 13x per firm since January 2025. The pattern is not slowdown but maturation: companies are building measurement infrastructure around AI, not just buying more tools.

What do GitHub stars indicate about AI adoption?

GitHub stars are a leading indicator of developer adoption. Between April and May 2026 (W17-W20), AI coding tools averaged 3,100 new stars per week across eight projects, agent frameworks averaged 1,700, and observability tools averaged 650. Star growth correlates with production deployments 6-12 months later. The shift from agents (slowing) to coding tools (accelerating) and observability (fastest by percentage) signals that AI is moving from experimentation into daily workflow infrastructure.

How can teams track AI adoption?

Track AI adoption with automatic time tracking that captures per-employee AI tool usage without surveys or login counts. Agent Token Tracking (ATT) records which AI tools each person uses, for how long, and on which projects. This reveals adoption depth (hours per user, not just headcount), tool overlap (shadow AI), and yield (hours saved per AI dollar). Login-based metrics count access, not usage, and miss 20-40% of actual AI spend from unapproved tools.
