Hello fellow keepers of numbers,
At the risk of this turning into ‘The Claude Accountant’, I’ve got another interesting nugget about how good Claude is right now. This time directly from Microsoft. Also, Thomson Reuters launches a sales and use tax AI, powered by CoCounsel, and PwC has an (interesting?) AI playbook for 2026.
Plus, an example of using Claude Code to create n8n workflows. And Wikipedia might stop asking me for money. Probably not, but maybe? But I doubt it.
THE LATEST
Microsoft’s Anthropic spending on pace to hit $500 million a year

Source: Gemini Nano Banana Pro / The AI Accountant
In November 2025, Microsoft, NVIDIA, and Anthropic announced a major AI partnership that brought Claude into Microsoft's product lineup, including GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio. Now, just two months later, the scope of that deal is becoming clearer.
The Information reported that Microsoft's spending on Anthropic's AI models is on pace to reach roughly $500 million per year, making Microsoft one of Anthropic's top customers. By mid-2025, Microsoft was already spending more than $40 million per month on Claude, and that pace has accelerated since the partnership expanded.
Microsoft reportedly told its Azure sales team that selling Anthropic models to cloud customers will now count toward their sales quotas, the same incentive structure already in place for OpenAI products. This puts Claude on equal footing with GPT inside Microsoft's sales organization.
The spending is separate from Microsoft's $5 billion investment in Anthropic announced in November. Microsoft still holds a $13 billion stake in OpenAI, but the growing Anthropic relationship signals a clear move toward a multi-model strategy across its AI products.
Why it’s important for us:
I feel a little validated here. I've spent the last couple of months aboard the Claude hype train, and now it seems like Microsoft has joined me.
For context on the spending, we don't have a great apples-to-apples comparison across AI providers. But $500 million a year clearly shows Microsoft has seen real progress with Claude models inside Copilot and Azure AI Foundry. Not surprising, since Claude is pretty much best-in-class intelligence at the moment. But it's still interesting to see Microsoft accelerating on this front.
Claude is a serious contender in this space for firms. It also shows that Microsoft is willing to improve its products in any way possible, including major spending on a direct competitor to OpenAI, where it already holds an obviously massive investment.
If you're not doing internal testing on the differences between ChatGPT and Claude, at least you can use news like this to steal someone else's benchmarks. And what Microsoft is telling you right now is that Claude is really good.
This also doesn't account for Claude Code, Claude Cowork, Claude Skills, and other things Anthropic is launching pretty much on a monthly basis at this point. All of which have been wildly successful.
Thomson Reuters goes for ‘touchless’ sales tax compliance

Source: Gemini Nano Banana Pro / The AI Accountant
Thomson Reuters announced ONESOURCE Sales and Use Tax AI on January 15, a new AI-powered module that automates sales and use tax compliance across thousands of U.S. jurisdictions. The tool, powered by CoCounsel, handles data import, validation, and tax return mapping so tax teams can generate signature-ready returns instead of building them manually. Thomson Reuters is positioning this as a move toward “touchless compliance,” where an AI agent does most of the work and humans only step in for review and sign-off.
The system supports more than 1,200 official state, county, and city returns and covers over 19,000 U.S. jurisdictions, with built-in e-filing for 33 states plus Canada. It’s cloud-based with automatic monthly content updates so forms and rates stay current. Users get full audit trails showing each automated decision, which can be used to answer auditor questions and document how positions were determined.
Early pilot results from Thomson Reuters show up to 65% less time spent on routine reporting and up to 75% reduction in audit exposure due to automated validation and complete documentation. They also claim compliance cycles for large enterprises dropped from 30 days to 11 days, with estimated annual savings of around $25,000 for small companies and $60,000+ for larger ones. ONESOURCE Sales and Use Tax AI is available now to U.S. corporations and accounting firms with sales and use tax obligations, and it plugs into the broader ONESOURCE+ indirect tax suite for calculation, certificates, VAT, and e-invoicing.
Why it’s important for us:
My entire knowledge of sales and use tax regulations boils down to this: they're painful and annoying. That's the kind of expert analysis you only get from a CPA.
In all seriousness, anything that requires knowledge of rules and regulations for each U.S. state can obviously get quite complicated. This is a genuinely cool use case for AI, and early testing by TR seems to be going well.
I'll describe my feeling as cautiously optimistic because, as many of you already know, the major accounting software providers, TR included, don't exactly have a strong track record of rolling out successful new features or offerings.
This seems like it could be useful for accounting firms handling sales and use tax compliance for a large number of clients. It’s probably not going to be that beneficial for a firm that files a handful of returns in a couple states. But you’re probably not using ONESOURCE on a small scale anyway.
The point I'm making in a long-winded way is that this still requires sales and use tax experts to review the AI's output. It's not yet going to open a new service line for firms that aren't already doing this type of compliance, and it won't be very accessible to clients looking to handle it in-house either.
I also expect to see a lot more AI use cases like this pop up in 2026.
PwC lays out its 2026 AI playbook for enterprises

Source: Gemini Nano Banana Pro / The AI Accountant
PwC published its 2026 AI Business Predictions, arguing that AI is shifting from scattered pilots to enterprise-wide programs led directly by senior leadership. Instead of crowdsourced experiments, they expect companies to run top-down AI portfolios focused on a small number of high-value workflows where leadership applies “enterprise muscle,” including talent, tech resources, and change management, to drive real outcomes.
A central piece of their roadmap is the “AI studio,” a centralized hub that houses reusable components, frameworks for evaluating use cases, sandboxes for testing, deployment protocols, and dedicated AI talent. PwC also highlights “agentic AI,” AI agents that don’t just analyze but take actions across complex workflows, with finance, HR, IT, tax, and internal audit specifically called out as prime candidates for these agents.
PwC expects these agents to be deployed on a centralized orchestration layer that connects to multiple AI models and enterprise systems, with monitoring, logging, and “agents checking each other’s work” built in. They also predict a shift in workforce shape: more junior and senior talent, a thinner mid-tier in knowledge work, and new roles focused on overseeing and orchestrating AI agents.
Responsible AI is framed as a hard requirement rather than a nice-to-have. PwC’s view is that AI ROI increasingly depends on operationalizing governance, including risk taxonomies, controls, monitoring, documentation, and often independent assurance, and that most of the value will come from workflow redesign and proprietary data, not from picking the “best” large language model.
Why it’s important for us:
I have conflicting opinions on this announcement. As a PwC alum, I know these announcements rarely convert to reality. But this is also likely the case for 95%+ of enterprises providing thought leadership.
On one hand, I strongly agree with their assessment that random AI experiments aren’t moving the needle. Too many firms are throwing money at AI and supporting software hoping that it’ll provide an unprecedented ROI just because it’s AI. It may feel like magic sometimes when you get an amazing response from an AI or it nails a file the first time you ask. But the reality is implementations that achieve real ROI are really difficult. They require the efforts of experts, whether internal or external.
On the other hand, I really disagree with how they’re oversimplifying workforce training and transitioning to agentic workflows that provide ROI. It also seems to make some major assumptions about the foundation that businesses have in place today. Most of what I see right now is hard work that firms are doing to build a strong foundation for the age of AI - better document management, better SOPs, systems that don’t lock down their data, foundation-level training for staff, etc.
Don't let PwC fool you into feeling behind. Most firms are nowhere close to implementing agentic workflows where agents complete tasks and check each other's work. We can make meaningful changes to move in that direction, but it would be irresponsible to assume this will be widely adopted by the end of 2026.
PUT IT TO WORK
Lately, I’ve been using Claude Code to build workflows in n8n. I want to share an example of how you can do the same.
The Loom video explores using Claude Code to build an n8n workflow that auto-categorizes emails and pings you in Slack.
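To give you a sense of what Claude Code actually hands back: an n8n workflow is just a JSON document of nodes and connections, which is exactly why an AI coding tool is a good fit for generating them. Here's a minimal sketch of a trigger-classify-notify skeleton like the one in the video. Treat everything in it as illustrative: the node type strings, typeVersions, and parameter shapes vary by n8n version, the classification logic is a placeholder (you'd swap in an AI node or API call), and the Slack channel name is made up.

```json
{
  "name": "Email triage (sketch)",
  "nodes": [
    {
      "name": "Gmail Trigger",
      "type": "n8n-nodes-base.gmailTrigger",
      "typeVersion": 1,
      "position": [0, 0],
      "parameters": { "pollTimes": { "item": [{ "mode": "everyMinute" }] } }
    },
    {
      "name": "Classify Email",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [220, 0],
      "parameters": {
        "jsCode": "// Placeholder: swap in an AI node or model API call here.\nconst subject = $json.subject || '';\nconst category = /invoice|payment/i.test(subject) ? 'billing' : 'general';\nreturn [{ json: { ...$json, category } }];"
      }
    },
    {
      "name": "Slack",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 2,
      "position": [440, 0],
      "parameters": {
        "channel": "#email-triage",
        "text": "New {{$json.category}} email: {{$json.subject}}"
      }
    }
  ],
  "connections": {
    "Gmail Trigger": { "main": [[{ "node": "Classify Email", "type": "main", "index": 0 }]] },
    "Classify Email": { "main": [[{ "node": "Slack", "type": "main", "index": 0 }]] }
  }
}
```

The nice part of the Claude Code approach is that you can import a file like this into n8n, test it, and then ask Claude Code to edit the JSON directly instead of clicking through the canvas.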

WEEKLY RANDOM
Wikipedia is now selling "clean" data feeds to the big AI companies.
Wikimedia Enterprise just announced partnerships with Amazon, Meta, Microsoft, Mistral AI, and Perplexity. Instead of these companies scraping the public site like everyone else, they now get structured APIs and premium data feeds designed specifically for AI systems.
This is actually kind of a big deal if you think about it. Wikipedia has always been one of those sources that AI models just... absorbed. No one really asked permission. Now they're formalizing it with licensed access, quality scores, and something called "Credibility Signals" that flag citation gaps and reference risks.
I sort of wonder how many AI hallucinations came from poorly cited or incorrect Wikipedia pages. It's crowdsourced. And while it gets reviewed, it's obviously not an authoritative source.
It makes me wonder how much weight the AI providers placed on Wikipedia information and whether they even have the ability to fine-tune that level of detail in their models. Maybe this will lead to more accurate information and fewer hallucinations if Wikipedia has historically been a problem.
My biggest question: does this mean Wikipedia can stop posting a 72-paragraph request at the beginning of every page asking me to contribute a dollar?
Until next week, keep protecting those numbers.
Preston
