Hello fellow keepers of numbers,

Lots of incremental updates this week. It’s nice of the AI companies to treat our conference season as their downtime.

Canopy released the equivalent of Claude Cowork inside their platform, and it seems to be localized to only their platform. OpenAI and Anthropic are using forward-deployed engineers to provide AI consulting services. And in the most depressing news of the week… the White House thinks they’re experts at determining the viability of AI models.

Plus, a demo of the new Claude in Word add-in.

THE LATEST

Canopy launches an AI coworker

Source: Canopy / Canopy Declares the ERA of the Autonomous Firm

Canopy launched Canopy Coworker, an AI execution layer built into its practice management platform for accounting firms. The company describes it as a move from software that organizes firm work to software that can execute parts of that work inside the firm's existing operating system.

Canopy Coworker is designed to work across the same client records, tasks, documents, transcripts, and workflow data firms already manage inside Canopy. The system can identify next steps, prepare work, route information, and help firms move client work forward without requiring staff to copy context between disconnected tools.

The launch also includes Canopy Notetaker, which automatically captures client conversations and archives them to the correct client record. That matters because Canopy is positioning Coworker around firm memory: calls, notes, tasks, deadlines, and documents all feeding the same AI layer.

Canopy has not announced separate pricing for Coworker in the launch materials. The company is rolling it out as part of a broader product evolution toward what it calls the “autonomous firm.”

Why it’s important for us:

Canopy Coworker looks like Claude Cowork inside of Canopy. It can read the firm’s context, understand the work, and take actions inside the system. That’s exactly what you would expect an AI agent inside a practice management platform to do.

For firms that already use Canopy, especially those that do everything in Canopy, this is a great offering.

The issue is that Canopy only works this cleanly if it’s your source of truth for the entire firm. Your email needs to be connected. Your documents need to be stored there. Your client information needs to be maintained there. Your tasks need to be tracked there. For some firms, that may be true. For a lot of firms, it’s not.

That has always been the tradeoff with Canopy. The product can be very good if you commit to its world, but it has never been especially friendly to firms that want to mix and match tools. Their API is limited, and the workflows generally assume Canopy is the center of gravity.

I find it interesting that they’ve chosen to build this feature inside of Canopy instead of building an MCP server to connect Canopy with Claude Cowork. For many firms, Claude Cowork is becoming the hub that pulls context from your tools and pushes work back out to them. Canopy Coworker assumes Canopy is the hub. But how do you pull context from other software into Canopy? It seems you can’t. And without that context, Canopy Coworker will be missing important information it needs to be useful.

White House considers vetting AI models before release

Source: Google Nano Banana Pro / The AI Accountant

The White House is considering government oversight of new AI models before they are released to the public. The proposal would create an AI working group made up of technology executives and government officials to examine possible oversight procedures for frontier models.

The discussions come as the administration is expanding pre-release testing of advanced AI systems. Google DeepMind, Microsoft, and xAI agreed to give the U.S. government early access to new AI models for national security testing before public release.

The reviews are focused on security risks from frontier models, including whether advanced systems could be used for cyberattacks or other national security threats. OpenAI and Anthropic were already working with the U.S. Center for AI Standards and Innovation on similar testing.

The policy is not final and remains under discussion, but the federal government is moving closer to pre-release visibility into the most capable AI models before they reach the public.

Why it’s important for us:

This feels like the first real movement toward AI regulation that I can remember. Not even necessarily the right version of regulation. But at least there’s some acknowledgement that maybe there are consequences of AI that we should consider.

A lot of this seems tied to Claude Mythos. There have been rumors that the Trump administration was concerned about the model’s cyber capabilities, and we’ve covered the tension between Anthropic and the administration. So the reaction here isn’t very surprising.

The attention is good. AI has been a free-for-all up to this point.

The problem is the method. Are we really supposed to believe the U.S. government is the best body to evaluate whether a frontier model is safe to release? Come on…

I understand the proposed group is made up of tech execs as well. But now the release decisions reside with a combination of tech execs and political appointees? Tech execs have a clear bias toward their business and shareholders. Political appointees have a clear bias toward the opinions of the administration. Seems like a disaster waiting to happen.

Still, I’m cautiously optimistic that people are paying attention. The U.S. government moves way too slowly. But hopefully they recognize the urgency this time. And hopefully they can solve these problems without creating even bigger ones.

OpenAI and Anthropic build PE-backed AI services ventures

Source: ChatGPT Images 2.0 / The AI Accountant

OpenAI and Anthropic are separately building private equity-backed ventures to buy AI services firms and help companies deploy their models, according to Reuters. OpenAI's venture is reportedly in advanced talks on three acquisitions, while Anthropic's new enterprise AI services company has secured $1.5 billion from a Wall Street investor group.

OpenAI's venture is backed by private equity firms including TPG, Advent, Bain Capital, and Brookfield. The goal is to distribute OpenAI's enterprise products across portfolio companies and eventually beyond them, pairing model access with implementation teams that can do the messy deployment work.

Anthropic announced its own enterprise AI services company with Blackstone, Hellman & Friedman, and Goldman Sachs as founding partners. The company says the new firm will help enterprises design, deploy, and scale AI systems using Claude, with additional backing from investors including Apollo, General Atlantic, GIC, Leonard Green, and Sequoia.

The structure moves both labs closer to consulting and implementation. Instead of only selling model access, OpenAI and Anthropic are building channels that combine software, deployment talent, acquisition capital, and private equity distribution into the same package.

Why it’s important for us:

Businesses will need help learning how to work with AI. Not just buying access to ChatGPT or Claude. Actually learning the skills required to use AI tools, build agents, redesign workflows, and operate in a more AI-native way.

Anthropic, OpenAI, and their very well-off PE partners seem to agree.

Being good at AI is one thing. Being good at AI and understanding a specific workflow is something else entirely. That’s where accounting firms sit in a really interesting position.

Accountants obviously understand how money moves through a business, where controls live, and what drives decision-making. That makes accounting and finance one of the most logical places for businesses to start using AI agents. It also means accountants are probably better positioned than almost anyone to advise companies on how to implement AI in those functions.

I think there are two paths here. The first looks a lot like advisory today: a firm helps a client figure out where AI belongs, how to set it up, what tools to use, and how to redesign the workflow around it. The business itself still owns the process, with the support of an accounting firm. The second is more productized: an accounting firm builds and maintains AI agents that clients actually use.

The future is still blurry, but squint and you can start to make out the shape of things. Firms learning and implementing this internally are improving their own ops, and they’re also building the skills they need to sell this as a service.

TRENDING NEWS

Anthropic partnered with SpaceX for more AI computing capacity, letting it raise Claude usage limits: This was unexpected, but great news. It goes to show how quickly Anthropic grew over the last six months. They’ve finally secured enough compute to restore their generous rate limits.

Copilot Cowork went mobile and added reusable Skills and plugins: Really great update for Copilot. Skills have been a major unlock for a lot of people utilizing agentic AI. Copilot Cowork going mobile is great news as well. This is the first time I can remember where Microsoft might actually be ahead in any area of AI (assuming the mobile experience is smooth).

Claude made its Excel, PowerPoint, and Word add-ins generally available, with Outlook now in beta: Claude is already the best AI at daily work in Microsoft Office products. Now it sits in all the major Microsoft Office software.

Anthropic released 10 financial-services agent templates for work like month-end close, GL recs, audit support, valuation review, and more: We continue to see significant progress in AI for accounting and finance. Both OpenAI and Anthropic are clearly focusing on this area.

Perplexity launched a finance version of Computer with licensed data connectors and prebuilt workflows: Perplexity continues their push in personal and professional finance. I’ve yet to see any of this catch on. Perplexity is loved by a few, but they’re fighting an uphill battle.

Anthropic added memory review, outcome scoring, webhooks, and multi-agent orchestration to Claude Managed Agents: They’ve called memory review “Dreaming.” Basically, they’re letting the agents review previous sessions during the downtime (their “sleep”). Among the other nice updates to Claude Managed Agents, this is an interesting differentiator.

Microsoft said AI demand is still exceeding available capacity, even with roughly $190B in planned 2026 capex: Even Microsoft is compute-constrained. Like we saw with Anthropic, this could ultimately impact usage limits for consumers.

The Pentagon signed classified-network AI deals with eight companies, but not Anthropic: This administration really hates Anthropic. Yet, reportedly, many departments continue to use Claude.

A Chinese court ruled that companies cannot lay off workers solely to replace them with AI: I’m not very well-versed in Chinese law, but this ruling surprises me. I anticipate we’ll see a lot of tension around this exact topic in the U.S. over the next couple of years.

PUT IT TO WORK

Claude in Word is now generally available. It comes with connectors, skills, and plugins.

I created a Form 3115 Attachment from scratch using Claude in Word. It pulled in a workpaper from SharePoint and searched the web to create the document. I can no longer imagine creating the first version of a Word doc myself.

The Claude add-ins for Microsoft products are probably the highest ROI AI tools available right now.

WEEKLY RANDOM

Coinbase cut 14% of its workforce while citing AI acceleration. Cloudflare cut about 1,100 jobs, roughly 20%, as part of a shift to an “agentic AI-first” model.

Those are huge cuts, and they feed the obvious public reaction: what exactly is the point of AI if it just continues to replace human jobs without much to show for it?

That’s frustrating, because AI really can do a lot of good. Healthcare should get better. Research should move faster. We should solve problems we haven’t been able to solve before. There are legitimate reasons to be optimistic.

But optimism gets a lot harder to sell when companies keep attaching AI to layoffs without much proof.

If Coinbase is doing an “AI-driven restructuring,” prove it. If Cloudflare is moving to an “agentic AI-first” model, prove it. Show us which workflows changed, what roles were actually replaced, and why AI specifically made the company need fewer people.

Because let’s face it, there are plenty of other possible explanations. Maybe they overhired. Maybe revenue slowed. Maybe they wanted to cut costs and found a cleaner story to tell Wall Street. Maybe AI is just a more convenient (and lucrative) explanation than “we made bad hiring decisions.”

Some jobs really will be replaced by AI. We need to plan for that seriously. But the first step is honesty. If AI made you 20% more efficient, prove it. If it didn’t, stop hiding behind it.

Until next week, keep protecting those numbers.

Preston

Keep Reading