Hello fellow keepers of numbers,

Bad week for the underdogs. OpenAI and Anthropic dominate the headlines yet again. Anthropic gave Claude two major upgrades with skills and memory. OpenAI introduced their AI browser, Atlas, with agent mode to perform actions. I already miss the days when we all just used Chrome and never had to think about which browser to open.

THE LATEST

OpenAI launches ChatGPT Atlas browser with agent mode

OpenAI released ChatGPT Atlas on October 21, an AI-powered web browser. It is currently only available for macOS. Atlas puts ChatGPT directly into the browsing interface. The browser features a sidebar for querying ChatGPT about any webpage, browser memories that retain key details from browsing sessions, and an agent mode that can autonomously handle multi-step tasks like research and form filling. Agent mode is available in preview to Plus, Pro, and Business users and includes visible controls and stop buttons, allowing users to monitor as ChatGPT navigates sites, clicks links, and fills forms.

This comes just a few weeks after Perplexity opened their AI browser, Comet, for free to all users. Other AI browsers have been making noise recently as well. Google recently integrated Gemini deeper into Chrome. Microsoft Edge introduced Copilot Mode. Anthropic has been working on browser-based AI agents. Atlassian recently acquired The Browser Company, maker of Dia, with an eye toward enterprise use, and several other AI-first browsers are emerging. Nearly all of these browsers are built on Google's Chromium platform rather than developed from scratch.

As with Perplexity’s Comet, many cybersecurity experts and users testing the browser have flagged security risks. Prompt injection attacks, where malicious instructions hidden on webpages can trick the AI agent into accessing sensitive data or taking unintended actions, remain a critical unsolved problem.

OpenAI’s Chief Information Security Officer posted on X in response. He highlighted the guardrails, safety measures, and testing performed on Atlas prior to launch, while also noting that prompt injection attacks remain an unsolved security issue that must continue to be analyzed.

Why it’s important for us:

I considered simply copying and pasting what I said about Perplexity’s Comet browser two weeks ago.

The reality is that despite OpenAI and Perplexity being SOC 2 Type II certified, their browsers still have major security concerns, especially for accountants who work with sensitive data on a daily basis.

AI browsers offer the long-term potential of automating many useful tasks, especially things like manual data entry into forms and cloud-based software. They also offer the potential to interact with the internet in an entirely new way.

But right now, the security risks are too great and the AI features promised don’t work well enough. This will certainly change in the future. OpenAI and Perplexity are definitely not lacking money to solve these problems.

It seems to be a matter of time before we’re comfortable enough with the security measures, and the promises made by these browsers are fulfilled. Until then, I guess I’ll just keep saying the same thing every few weeks when we keep getting new browsers.

Anthropic launches Skills for Claude


Anthropic introduced Skills for Claude, a feature that packages instructions, scripts, and resources into folders that Claude automatically loads when relevant. Skills are available to Pro, Max, Team, and Enterprise users across Claude apps, Claude Code, and the API.

Skills initially consume only 30-50 tokens for metadata, keeping Claude fast while providing specialized capabilities. When Claude scans available skills and identifies relevant ones for a task, it loads only the specific files and information needed at that moment.

Users can create custom skills, and Claude provides a built-in “skill-creator” that can be used as a guide. Skills can contain instructions, code, or resources.

Anthropic warns that malicious skills could lead to data exfiltration or unauthorized system access if sourced improperly. Skills run in sandboxed environments with no network access and cannot install packages at runtime. The company advises users to treat skills like software installations: only use them from trusted sources, and thoroughly audit their contents before deploying.

Why it’s important for us:

Skills are similar to Projects, but offer more flexibility and are available to be used with automations. Think of skills like an advanced Excel macro.

Excel macro: You record a procedure once, and it runs the exact same every time you trigger it.

Skill: Claude can determine when to use it, and can adapt the instructions to a specific situation while still following the guidelines.

Likely the biggest downside with Claude right now is the strict usage limits applied to users outside the Max plan. When you use a Project, the limits hit quickly, especially if you have project files. Skills seemingly use far less context than Projects, which may keep you from hitting the limits as quickly. Additionally, skills can be combined. If you’ve ever wished you could combine two of your Projects into one, skills might be the way to go.

Without having tested them myself, here are a couple of interesting use cases I’m considering:

Deliverable prep: Grab the financials and prepare reports that are well-formatted with your own brand guidelines.

Account classification: Provide the skill with a chart of accounts and examples of how to classify transactions, including logic to flag unusual transactions or any it’s not confident about.

File conversion: Package instructions for how you want files converted (e.g., PDF bank statements to Excel with specific column headers). Claude converts and organizes files following your structure.
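To make the account classification idea more concrete, here’s a rough sketch of what such a skill folder might contain. The SKILL.md frontmatter fields (name, description) follow Anthropic’s published skill format, but everything else — the file names, the steps, the output columns — is hypothetical and would need to match your own workflow:

```markdown
---
name: account-classification
description: Classify bank and credit card transactions against our chart of
  accounts. Use when asked to code, categorize, or review transactions.
---

# Account Classification

1. Load the chart of accounts from `chart_of_accounts.csv` in this folder.
2. Match each transaction to an account using the examples in `examples.md`.
3. Flag any transaction where confidence is low, the amount is unusually
   large, or the vendor has never appeared before. Put flagged items in a
   separate "Needs review" section rather than guessing.
4. Output a table: Date, Description, Amount, Account Code, Account Name.
```

Because Claude reads only the metadata until a skill becomes relevant, the `description` field does the heavy lifting: it’s what Claude uses to decide when to load the rest of the folder.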

Claude memory rolls out to Pro and Max users

Anthropic expanded its memory feature to Claude Pro and Max subscribers on October 23, following an initial release to Team and Enterprise customers in September. Memory enables Claude to automatically generate summaries of conversations and apply that context to future chats, eliminating the need to repeatedly explain preferences and project details.

The memory feature builds on Claude's existing chat search capability, which for several months has allowed users to manually ask Claude to reference past conversations. The new memory feature goes further by automatically creating a synthesis of key insights from conversation history that is updated every 24 hours and applied to every new conversation without users needing to prompt for it.

Memory operates within Claude's Projects. Claude creates separate memory for each Project to ensure Project information is not inadvertently mixed. Regular conversations outside of Projects also have their own memory space, keeping everything compartmentalized.

Users have full control over what Claude remembers. They can review all stored memories through a summary interface, edit them using natural conversation, or disable the feature entirely. An Incognito chat mode is also available for conversations that shouldn't be stored.

For Enterprise users, admins can disable Memory organization-wide and configure custom data retention policies ranging from 30 days to indefinite storage. Memory data is never used for model training unless users provide explicit consent.

The update also includes experimental import and export memory capabilities. Users can transfer memory between Claude and other AI services.

Why it’s important for us:

Claude is somewhat quietly shipping cool features left and right. The memory feature was one of the final remaining feature differentiators between ChatGPT and Claude. A few months ago, Claude added the ability to search previous chats. Now, they likely offer even better overall memory features than ChatGPT.

This is a big deal if you use Claude Projects. The more you use a Project, the more memories it will retain. It’s similar to “training” your own AI model. Rather than continuously updating Project instructions, your Project will now know the relevant information and context based on your previous history.

Claude has also provided simple instructions on how to import memories from another AI provider. If you’re a power user of ChatGPT and want to try Claude, or even just want to ensure consistency when flipping between the two, you can try the following:

  1. Send to ChatGPT, “Write out your memories of me verbatim, exactly as they appear in your memory.”

  2. Paste the results of that response into Claude alongside the message, “This is my memory from another AI assistant. Add this information into your memory about me during your next synthesis.”

Claude currently processes memory updates once a day, so it will be updated during that day’s “synthesis.”

It’s also worth noting that Claude says its memory feature is designed for work-related topics, so it may not automatically store personal information. To manually add specific information to your memories, go to Settings → Capabilities and click “View and edit memory.”

PUT IT TO WORK

Tip or Trick of the Week

ChatGPT continues to roll out native connectors. We’re going to learn the very simple steps to use a connector, as well as how to connect an MCP from another application that isn’t already native to ChatGPT.

Connecting to a Native ChatGPT Connector

ChatGPT has a list of native connectors that is growing weekly.

To use any of these connectors, go to Settings → Apps & Connectors, select an application, and click the “Connect” button. It will ask you to log in to the respective application.

Once connected, open a new chat and select the + icon. Hover over the More option. The connectors you’ve linked with ChatGPT will show up on that list. Simply select the connector you want to use.

From there, just chat with ChatGPT like usual. Ask it questions about data it can grab from your connector. Most connectors currently allow read-only access, which means ChatGPT can see and pull data from your application but cannot make changes.

Adding a Third Party MCP to ChatGPT

ChatGPT offers a Developer Mode which can be used to add your own MCPs. It’s currently in beta, and it’ll give you a warning that the connectors you add can potentially modify or erase data. This is, in fact, one of the main purposes of an MCP. However, still use it with caution.

You can toggle Developer Mode back off once you’re done using the MCP. The MCP will be saved in your account even after toggling off. Any time you want to use the MCP, toggle Developer Mode back on.

The example below is for Fireflies’ MCP, but the process is the same for other MCPs that applications make available. You’ll need an MCP Server URL from the application you’d like to add.

Again, go to Settings → Apps & Connectors, find and click on Advanced Settings, and toggle Developer Mode on. You should see a colored outline around your chat box once it’s toggled on.

Again in Settings → Apps & Connectors, find the “Create” button next to “Enabled Connectors.”

Use the MCP Server URL provided by the application. If you’re unable to find this, try asking AI to help you search the web for the MCP documentation. In our case, Fireflies provides guidance on how to set it up in ChatGPT.

In our Fireflies example, provide a name you’ll easily recognize in ChatGPT when you want to connect to the Fireflies MCP. I’ve creatively named it “Fireflies MCP.” Then provide the MCP Server URL we got from the application. We’ll use OAuth for authentication, which means we’ll log in to the application once we’ve clicked “Create” to add the MCP.

Your New Connector should look like this:

Click on “Create.” It should ask you to log in to the application after you’ve created it. Once you’ve logged in, you’ve successfully set up the connection. This is still in beta and I’ve found it a bit buggy at times, so occasionally the connections fail. In that case, try this process again.

To use this MCP, make sure you have Developer Mode toggled on. Click the + icon, select the MCP you added, then chat with it like normal. For our example, ask it questions about specific meetings, trends it noticed over your recent meetings, and anything else your heart desires.

WEEKLY RANDOM

Amazon owns the rights to NFL’s Thursday Night Football (TNF). They’ve got an alternate TNF stream called Prime Vision where they use AI for live predictive analytics. Highly recommend. You can choose different streams when you log in to Prime, so just select the Prime Vision stream.

Prime Vision provides AI-powered insights during the game. Before the snap, it highlights which defenders are likely to blitz and which receivers are likely to be open. During the play, “Pocket Health” shows you in real-time how close the QB is to being sacked. Prime Vision also uses AI to analyze statistics and provide comeback scenarios, the best statistical coaching decisions, and more.

It’s powered by RFID chips in players’ equipment that provide tracking data, computer vision that watches the field, and AWS processing hundreds of millions of data points each season.

Prime Vision’s AI models were trained on thousands of historical NFL plays and games. Amazon’s AI is also built on a deep learning network that gets better the more plays it sees, which means it’s continuously learning.

This is a great example of how AI can be used to become an expert in a specific field and do some genuinely cool things.

Amazon has been doing this for a few seasons now, and they’re doing a great job. Which leads me to the biggest question… Where the hell is Amazon in the AI space?

Until next week, keep protecting those numbers.

Preston
