Hello fellow keepers of numbers,
After reflecting on 2025 and giving thought to 2026, I want to use this newsletter to cover my 2026 predictions in detail. So this format is going to look a bit different. My hope is that these aren’t your typical predictions. I’ve put some real thought into them, and there’s a heavy splash of realism with statistics supporting the trends. That being said, I love a good debate. If you disagree, let me know why.
MY PREDICTIONS
Prediction 1: The agent skills framework leads to highly successful AI implementations
The conversation in 2025 has largely been about AI agent and AI implementation failures. Firms bought products and services promising autonomous workflows, deployed them, and watched as they failed.
Why would 2026 be any different? Because lessons learned the hard way are now paired with a new framework that opens the way to much more reliable and accurate AI agents.
The Proof
The assumption was that if you deploy a really smart model into your current processes, the agent will help you do your work. AI providers, vendors, and partners like to pretend AI agents are magic.
What makes a great magician? They painstakingly plan and test their illusions and tricks to make it appear as though it's magic.
The way AI has been deployed thus far has been disastrous. According to one report, more than 80% of AI projects fail (RAND). This is twice the rate of failure for IT projects that don’t involve AI. The infamous MIT study reported that 95% of AI pilots were failing.
Even major vendors are pivoting. Salesforce recently recalibrated Agentforce, adding deterministic controls through a new scripting layer and shifting from autonomous agents to a hybrid approach with rules-based outcomes.
Why? Whether firms deployed pre-built AI agents or built their own with broad instructions, the results were the same. The AI didn't fit firm-specific workflows. It hallucinated. It needed constant supervision. You end up spending more time fixing mistakes than you'd spend just completing the tasks. Without continuous evaluation, results drift as models change, edge cases arise, and firm requirements shift.
The Why
Anthropic launched Claude Skills as a framework for building agents with extremely predictable outcomes. Think of a skill as an automation the AI agent can run, then review the results. For example, a skill that reads a PDF to extract specific data and populates an Excel template. The agent has access to scripts (the automation); it runs them, reviews the output, determines whether the task is complete, and then responds.
Chaining tasks together can be extremely powerful. Take the example above one step further: run that skill across 10 PDFs, then pivot the Excel data and create an executive summary. Chaining the pivot task to the summary task might've just saved 15-30 minutes.
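To make the idea concrete, here's a minimal sketch of the kind of deterministic script a skill might wrap. The file names, regex patterns, and two-column layout are assumptions I made up for illustration, not Anthropic's actual skill format; the point is that the agent runs scripted, repeatable steps and only reasons about the results.

```python
# Illustrative sketch of the scripted steps a skill could run, then chain.
# Assumes invoice-style PDFs with "Vendor:" and "Total:" lines; the regexes,
# file names, and column layout are hypothetical, not a real firm workflow.
import re
import pdfplumber                     # PDF text extraction
from openpyxl import Workbook         # Excel output

def extract_invoice_fields(pdf_path: str) -> dict:
    """Step 1: pull a vendor name and total out of one PDF."""
    with pdfplumber.open(pdf_path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    vendor = re.search(r"Vendor:\s*(.+)", text)
    total = re.search(r"Total:\s*\$?([\d,]+\.\d{2})", text)
    return {
        "vendor": vendor.group(1).strip() if vendor else "UNKNOWN",
        "total": float(total.group(1).replace(",", "")) if total else 0.0,
    }

def populate_template(rows: list[dict], out_path: str) -> None:
    """Step 2: drop the extracted rows into a simple Excel sheet."""
    wb = Workbook()
    ws = wb.active
    ws.append(["Vendor", "Total"])
    for row in rows:
        ws.append([row["vendor"], row["total"]])
    wb.save(out_path)

def pivot_by_vendor(rows: list[dict]) -> dict:
    """Step 3 (chained): total the extracted amounts by vendor for a summary."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["vendor"]] = totals.get(row["vendor"], 0.0) + row["total"]
    return totals

if __name__ == "__main__":
    pdf_files = [f"invoice_{i:02d}.pdf" for i in range(1, 11)]   # the 10 PDFs
    extracted = [extract_invoice_fields(p) for p in pdf_files]
    populate_template(extracted, "extracted_invoices.xlsx")
    # The agent reviews these totals, then drafts the executive summary.
    for vendor, total in pivot_by_vendor(extracted).items():
        print(f"{vendor}: {total:,.2f}")
```

The agent's job shrinks to choosing which script to run, checking the output, and writing the summary, which is exactly what keeps the outcome predictable.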
Anthropic open-sourced this framework. OpenAI has already adopted it. I expect that skills will work in ChatGPT very soon, similar to how Claude Skills function.
The Difference
This isn't magic. It requires significant time and effort upfront. Firms need to map workflows and create SOPs. They need to brainstorm and plan how skills and agentic workflows can help with their current processes. They need to write very specific instructions. They need to build scripts to connect systems. They need to define what a good output looks like. They need to test and iterate.
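"Define what a good output looks like" can be as literal as a small evaluation script that runs after the skill and flags anything outside tolerance. Here's a hedged sketch, assuming the hypothetical invoice workbook from the earlier example; the expected values and tolerance are made up for illustration.

```python
# Hypothetical output check for the invoice skill sketched earlier.
# EXPECTED and TOLERANCE stand in for whatever "good output" means at your firm.
from openpyxl import load_workbook

EXPECTED = {"Acme Supplies": 1250.00, "Globex": 340.50}   # made-up ground truth
TOLERANCE = 0.01                                          # acceptable rounding drift

def evaluate_output(xlsx_path: str) -> bool:
    """Compare the skill's Excel output against known-good values."""
    ws = load_workbook(xlsx_path).active
    actual = {row[0]: row[1] for row in ws.iter_rows(min_row=2, values_only=True)}
    failures = []
    for vendor, expected_total in EXPECTED.items():
        got = actual.get(vendor)
        if got is None or abs(got - expected_total) > TOLERANCE:
            failures.append((vendor, expected_total, got))
    for vendor, want, got in failures:
        print(f"FAIL {vendor}: expected {want}, got {got}")
    return not failures

if __name__ == "__main__":
    print("Output OK" if evaluate_output("extracted_invoices.xlsx") else "Needs review")
```

Run a check like this every time the underlying model changes and drift stops being a surprise.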
The Prediction
By the end of 2026, the narrative shifts dramatically from failed AI implementations to success stories from firms that have implemented agent skills and narrowly defined agentic workflows built around their own processes.
Prediction 2: Firm rollout of AI licenses explodes, but staff adoption lags
2026 will be the year accounting firms finally commit to firm-wide AI licenses. But most will fumble it because they’ll skip the firm-wide training. Then they’ll wonder why internal AI adoption is lagging.
The Proof
The share of firms with no AI plans dropped from 49% to 25% in one year (AuditBoard). But only 37% of accounting firms invest in AI training (Karbon).
The Why
There’s a consistent pattern. Firms buy software. They roll it out and assume people will use it. Six months later, they find the adoption rate sucks and the staff hate it.
It's not always because the software is bad (okay, maybe sometimes). It's usually because staff aren't trained and change management support is poor. It's no different for AI.
The Difference
Most software is purpose-built for one specific problem. AI is different. It's stochastic: the same input can produce different outputs each time. And it's flexible: it can be used across service lines for a wide variety of tasks.
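If "stochastic" feels abstract, here's a quick way to see it, using the Anthropic Python SDK. The model name and prompt are placeholders: send the same prompt twice at a nonzero temperature and the two answers will usually be worded differently.

```python
# Demo of stochastic output: identical input, two runs, usually different wording.
# Assumes the anthropic SDK is installed and ANTHROPIC_API_KEY is set; the model
# name is a placeholder for whatever model your firm licenses.
import anthropic

client = anthropic.Anthropic()
prompt = "Summarize the purpose of a bank reconciliation in one sentence."

for attempt in (1, 2):
    response = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model ID
        max_tokens=100,
        temperature=1.0,             # higher temperature = more run-to-run variation
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Run {attempt}: {response.content[0].text}")
```

That variability is exactly why "when to trust the output" belongs in the training, not the fine print.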
Staff need training to understand when and when not to use it. They need to learn best practices and use cases. They need easy access to documentation on these best practices and use cases, plus the firm’s AI policy, FAQs, and more.
Most importantly, unlike training for typical software, AI training doesn't end. AI models and tools continually improve and evolve. We all learn new things each week. It's an ongoing process, and firms must treat it as such.
The Prediction
By the end of 2026, firm-wide AI access will hit 70%+, but weekly usage by staff will lag at less than 50% because firms skip the training.
Prediction 3: Model progress slows down, but… that’s a good thing
No major AI breakthroughs in 2026. Claude, ChatGPT, and Gemini will improve incrementally, but nothing like the jumps we've seen. And that’s good news for us.
The Proof
The proof is in what AI providers are shipping. Anthropic launched Claude Code as a product for existing paid subscribers. They launched Claude Skills in October 2025 as a product for existing paid subscribers. OpenAI launched ChatGPT Health in January 2026 as a product for existing paid subscribers. The list goes on.
Same models underneath. Different delivery mechanisms.
The Why
The honest truth: this is an educated guess. Model progress has slowed over the last year. We've had good releases, but nothing like the leap from GPT-3 to GPT-4 or from the Claude 3 models to the Claude 4 models.
I think this pushes AI providers to lean into products that are fine-tuned for specific industries or verticals. It's hard to gain market share when the difference between your flagship product and someone else's is immaterial.
The Difference
A few things can drive major market share:
Releasing a product that makes it easy for users to take advantage of model capabilities
Releasing a fine-tuned AI model that instantly makes it more useful for a specific industry
Releasing a product that solves a specific business problem
For example, I have no doubt Claude Code has driven significant market share for Anthropic.
The Prediction
By the end of 2026, AI providers ship multiple products fine-tuned for specific industries using current model capabilities. These products become differentiators since model quality is so similar.
Prediction 4: Talent exodus precedes client exodus
AI-resistant firms won’t lose clients first. They’ll lose staff.
The narrative has been about clients leaving firms that don't adopt AI. But the data tells a different story. Staff turnover is the leading indicator, not the lagging one.
The Proof
65% of employees are excited to use AI at work (Gartner). 77% will take AI training when offered (Gartner). 79% of employees say AI skills are important for career advancement (Microsoft Work Trend Index).
Meanwhile, firms that invest in AI training unlock 7 additional weeks of capacity per employee per year (Karbon). Staff see this. They know which firms are investing in their growth and which aren't.
The Why
Talent cares about professional development. They want to build skills that matter for their careers. They want to work at firms that aren't stuck in 2015.
This isn't about replacing accountants with AI. It's about accountants with AI skills replacing accountants without them. Staff understand this even when leadership doesn't.
The Difference
In 2025, most firms struggled with AI. Everyone was failing together, so there wasn't much of a comparison point. But we're heading swiftly up the adoption curve.
Firms will start finding more success. AI rollouts with proper training will motivate staff. They’ll implement agent skills. They’ll build AI agents and automations that work.
Staff will see that another firm has automated 15 hours of the boring work they have to do on every engagement. They'll see their peers at other firms learning and building, and they'll feel the FOMO. The gap becomes visible… and unbearable to many.
The Prediction
By the end of 2026, AI-forward firms will report measurably lower staff turnover than AI-resistant firms. This becomes a standard question on exit surveys and a recruiting differentiator.
Until next week, keep protecting those numbers.
Preston
