Hello fellow keepers of numbers,
We already have a contender for launch of the year. Claude Code has been packaged into a new interface called Claude Cowork to give non-technical workers the most powerful AI available at the moment. Plus, OpenAI adopts Anthropic’s open-sourced agent skills framework.
Stick around for a tip that can make your document reviews 4-5x faster, and some AI news about Apple (finally!).
THE LATEST
Anthropic turns Claude into a “coworker”
Anthropic announced Claude Cowork, a new “research preview” feature that lets Claude act as an AI coworker instead of just a chat assistant. Cowork is available today inside the Claude macOS desktop app for Claude Max subscribers, with a waitlist for other plans. The company says it plans to improve the feature rapidly and eventually bring it to Windows.
Cowork works by giving Claude access to a specific folder on your computer, where it can read, edit, and create files directly. Once you describe a task, Claude makes a plan and executes it. It’s built on the same underlying system as Claude Code, but wrapped in a simpler interface aimed at non-developers.
Users can extend Cowork using existing “connectors,” which link Claude to external information sources like cloud drives, and Anthropic has added early “skills” to help create documents, slide decks, and other files. When paired with the Claude in Chrome extension, Cowork can also complete tasks that need browser access, such as pulling information from websites and combining it with local files.
Anthropic emphasizes that users stay in control: you choose which folders and connectors Claude can access, and it asks before taking significant actions like deleting files. At the same time, the company warns that Cowork can take potentially destructive actions if instructions are unclear. It also flags prompt injection, malicious instructions hidden in content Claude reads, as an ongoing risk in this class of agentic tools.
Why it’s important for us:
Rumor has it (from an Anthropic employee) that a team at Anthropic “vibe coded” this tool in a week and a half using Claude Code. Given how new it is, expect a few early bugs.
If you’ve followed me at all over the past few months, you know I’m a big fan of Claude Code. Claude Cowork takes the power of Claude Code and wraps it in an interface that’s much more standard for non-technical users. If you took a look at Claude Code previously and the terminal or IDE was daunting, this is the time to give Claude Cowork a look.
I thought it’d be most useful to break this down by answering 3 of the most common questions.
1) How is it different from Claude’s chatbot or ChatGPT?
Claude Code and Claude Cowork are agentic systems. They’re designed to ingest a prompt or request from a user, create a plan to complete the task, then knock out each step of the plan one-by-one using the tools available to them.
This is different from the typical Claude or ChatGPT chatbot experience, which isn’t natively built to map out an entire project plan and then complete tasks one at a time, reviewing its own work along the way.
When Claude Code and Claude Cowork come across an issue in completing the request, they’ll often adjust their project plan on the fly and attempt to solve the problem another way.
The other obvious difference is how you interact with files in Claude Cowork compared to the regular chatbot experience. In Claude Cowork, you link an entire folder on your local computer. Instead of manually finding and dropping files into the chatbot, Cowork has access to review any files within your folder. It can choose which files to use and which to ignore.
It’s hard to overstate how much more seamless working with your files becomes because of this feature. Chatting with files, creating new files, and editing existing files is very simple. And when Claude Cowork is done, those files already exist on your computer. Go to your Finder or File Explorer and they’re there to open and edit.
2) Is this secure?
Because it’s working with files locally on your computer, it’s about as secure as the regular chatbot experience. If you’re using the Claude in Chrome extension to give it access to your browser, the security considerations increase dramatically. But if you’re not using Claude in Chrome, and you’ve become comfortable with the Claude chatbot from a security standpoint, this is just as secure.
3) How well does it work compared to Claude Code?
I’ve done some early testing. Actually, I’ve made some early attempts to “break” Claude Cowork. I’m happy to report I was unable to break it in my admittedly rudimentary attempts.
The big danger in working with AI that has access to an entire folder or your entire computer is that it can run code capable of permanently deleting files. I’d recommend you still exercise caution with anything like Claude Code or Claude Cowork. Make sure you have backups saved in the cloud, or give it access to a duplicated folder instead of the original.
I attempted to trick Claude into leaving the folder I linked. I wanted it to find and edit files outside the folder. Fortunately, Claude was unable to see any of my files outside of the folder, and it performed as expected.
I also gave it a few vague requests that it might’ve interpreted as asking it to delete files. For example, I asked it to clean up a folder where I had about 8 files from 2025 projects mixed in with a few files from 2020 and 2021 that were obviously unrelated to the rest of the folder. Claude Cowork noted those old files as outliers, but instead of deleting them, it asked me how I wanted the folder organized. It gave a couple of suggestions, one of which was deleting the old files. I instead asked it to create a few new folders to organize everything, which it executed perfectly.
Overall, I’m pumped about the launch of Claude Cowork. I think it’s going to make the power of Claude Code far more accessible to millions of people who may not have been aware of its capabilities or felt too intimidated by the terminal or an IDE to try it.
OpenAI adds agent skills to Codex

Source: Gemini Nano Banana Pro / The AI Accountant
OpenAI added support for reusable “agent skills” in Codex, its AI coding assistant for the CLI and IDE extensions. A skill is a folder-based package that includes a required “SKILL.md” file with instructions and metadata, plus optional scripts, reference docs, and templates. Codex can use these skills to run repeatable workflows more reliably instead of re-deriving the process from scratch each time.
Skills follow the open Agent Skills standard, which was originally popularized in the Claude ecosystem. At startup, Codex only loads each skill’s name and description and then uses “progressive disclosure” to pull in full instructions or extra files only when a specific skill is actually needed. Skills can be invoked explicitly (via the “/skills” command or “$skill-name”) or implicitly when Codex detects that a task matches a skill’s description.
Codex discovers skills from multiple locations that define their scope: per-repository folders (e.g., “.codex/skills” in a project), user-level skills in “~/.codex/skills,” admin or system-wide skills, and bundled system skills. OpenAI ships built-in skills like “$skill-creator” to scaffold new skills and “$skill-installer” to pull curated skills from GitHub, with more skills expected over time.
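To make the structure concrete, here’s a rough sketch of what a user-level skill folder could look like. The skill name, files, and instructions below are hypothetical examples for illustration; the piece the standard actually requires is the SKILL.md file with its name and description metadata.

```
~/.codex/skills/
  tidy-workpapers/            <- hypothetical skill name
    SKILL.md                  <- required: metadata + instructions
    scripts/
      rename_pdfs.py          <- optional helper script
    reference/
      naming-convention.md    <- optional reference doc

# Inside SKILL.md (illustrative only):
---
name: tidy-workpapers
description: Rename and file workpaper PDFs using the firm's naming convention.
---
(Step-by-step instructions for the task go here. Codex only reads them
once it decides this skill matches the request.)
```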
Why it’s important for us:
I covered this briefly in the newsletter two weeks ago, but it’s worth covering in more detail. As mentioned there, Anthropic published guidance on using agent skills across AI models. They’ve essentially open-sourced their architecture for Claude Skills.
OpenAI has already adopted that architecture: Codex, its coding assistant, can now understand and run agent skills that follow the same format.
I have no doubt this will be available across all of OpenAI’s models, and inside the ChatGPT app and chatbot, within the next few weeks, similar to how Claude Skills work within the Claude chatbot.
Skills have flown a bit under the radar since they were announced just months ago. Think of skills as simple instructions on how to perform a specific task or set of tasks. Behind the scenes, the AI runs scripts and uses its knowledge to complete the tasks. Skills can be chained together to complete multiple tasks consecutively.
As you might imagine, stacking skills could lead to automation of larger workflows within the firm. When everyone talks about deploying AI agents within their firm, this is what I envision as the future. Not broad instructions for an AI to go off with free rein to complete tasks. Not some software you buy to deploy agents with very restricted capabilities. But spending the time and effort to map out your tasks, create scripts and skills, and provide guardrails so an agent can complete tasks with high success rates.
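To make that concrete, here’s a sketch of what the instructions inside a firm-specific skill might look like. The skill name, steps, and naming convention are all hypothetical; the point is that the guardrails get written down once and applied every time the skill runs.

```
---
name: organize-client-deliverables
description: Sort finished client deliverables into client/year subfolders using the firm's naming convention.
---
When asked to organize deliverables:
1. Only work inside the folder the user points you to.
2. Group files by client name and year based on the file names.
3. Create "Client/Year" subfolders and move each file into the right one.
4. If a file doesn't match any client, list it and ask instead of guessing.
5. Never delete or overwrite a file; ask the user first if a conflict comes up.
```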
I’ve been testing and deploying these in my own workflows with some impressive early results. I plan on sharing more on this in future newsletters.
PUT IT TO WORK
Stop typing your review comments. Talk them instead.
I recently met with a partner at an accounting firm who told me this was a game changer for him. I’ve been doing something similar for a while, but hearing it from someone else made me realize it’s worth sharing.
When you’re reviewing a file, launch a meeting in your typical meeting tool with a meeting recorder running. Then just talk through your review notes and thoughts. Say what needs to change, questions you have, and where things don’t make sense. When you’re done, close the meeting.
If you’re using a meeting recorder that auto-summarizes (e.g., Fireflies, Fathom, Fellow, etc.), you immediately get clean review notes and action items. Send that to your staff to make edits. Or feed the draft file plus the summarized review notes back into AI for a solid next draft.
If you don’t use a tool that auto-summarizes, just feed the transcript into AI and ask for a summary and action items. Then do the same as above.
If you use a voice transcription tool (e.g., Wispr Flow), you can do the same thing. Once you’re done talking, feed the transcribed voice notes into AI and ask for a summary of the review notes and action items. Again, send this to your staff or put this along with the draft file back into AI to get a new draft.
No matter the method, you’ve turned what could be 1-2+ hours of work into 10-15 minutes max. Plus, you can easily track version history alongside your review notes.
WEEKLY RANDOM
Apple officially announced a partnership with Google to power Siri with Gemini. This had been rumored for several months. It still comes as a bit of a surprise, since Apple had previously partnered with OpenAI to put AI into the hands of iPhone users. Whether the relationship soured or that project was put on the back burner, it seems OpenAI has fallen out of favor with Apple.
This is another indicator of how powerful the Gemini models have become. It’s also a bit of a gut punch for OpenAI. Over the last several months, they seem to be taking blow after blow. ChatGPT no longer feels like a category leader in any relevant area. It’s a shocking turn of events from just a year ago.
Apple has long been one of the biggest losers of the AI race. It’s hard to even call them a loser because they really haven’t even played the game. As a company, they’ve always been a bit slower to adopt, but they typically excel at providing an unmatched user experience once they do. Hopefully this is still true of Apple.
If Apple does a good job of integrating AI into all facets of the iPhone, I expect they’ll single-handedly create millions more AI power users. There’s still a large subset of people who rarely, if ever, use AI. If they see the power of it on the device they use every day, they may be more likely to adopt other AI products as well.
The Siri revamp is expected sometime in 2026, and some reports suggest it could arrive as early as spring.
Until next week, keep protecting those numbers.
Preston
