Every conversation I have had with finance leaders this month has surfaced the same question dressed up in different ways. Should we build an internal AI platform? Hire a freelancer to automate our close? Wait for our ERP vendors to catch up? Just give everyone ChatGPT?
None of these is wrong. None of them on their own is the right answer either. The useful answer is harder: you build a portfolio, and you match the tool to the work.
This week, I walk through the AI building blocks I see in most finance organizations, how they combine, and why a general-purpose LLM belongs in the stack no matter what else is running.
In the subscriber section, you will find an interactive artifact with the full role-by-role matrix for both a mid-market finance department and a PE firm, plus a three-phase prompt template for testing whether a recurring task is ready for automation.
Introducing Claude in Action
I've been building and testing Claude workflows across finance teams for the past two years. Most teams with Claude deployed are capturing maybe 20% of the value available to them. Not because Claude isn't capable. Because nobody showed them how.
Claude in Action is a three-session hands-on training program for finance teams and professional services firms. Built around your actual workflows. Delivered by a CFO who uses these tools every day.
The first three companies to sign up receive three months of additional support at no extra cost. Priority scheduling, async Q&A between sessions, and hands-on help embedding workflows after training wraps.
If this is relevant to your team, apply now.
The Building Blocks
Four building blocks show up regularly in the finance organizations I work with today:
General-purpose LLMs like Claude (my pick), ChatGPT, or Copilot used directly by individuals
External automation partners or freelancers who build custom workflow automations, inside a single function or across several
AI embedded in the ERP or existing finance software
AI-native SaaS tools built around a specific finance workflow

There are some less common paths too. I recently came across a company that launched an internal AI platform where employees can build their own apps and agents.
None of these approaches is wrong. What is interesting is that most companies are running two or three of them at once. The AI stack is layered, not singular.
The question that matters is not which of these is best. It is which combination fits a given role, a given process, a given problem.
Here is what the mix can look like for one executive in a mid-market company.
On a Tuesday morning the CFO opens her AI-enhanced ERP dashboard to review margin variance, then shifts to drafting a board memo in Claude. The dashboard is embedded AI doing what it is built for: structured data, structured use case, delivered where she already works. Claude handles the memo because the work is diverse, judgment-heavy, and pulls context from three client conversations, an industry report, and her own notes.
In the afternoon, her automated board-prep workflow runs. A freelancer built it last quarter. It pulls the financial pack, vendor commentary, and KPI tracking from five systems, formats them into the board template, and flags exceptions. Some steps use embedded AI for summarization. Some are pure rules. She reviews the output and sends it out.
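To make the split between rule-based steps and LLM steps concrete, here is a minimal sketch in Python of one board-prep step. The field names, the 5% threshold, and the model choice are all hypothetical, and the Anthropic SDK call is illustrative only; any general-purpose LLM API slots in the same way.

```python
# Minimal sketch of one board-prep step: pure rules flag exceptions,
# then an LLM call drafts the commentary around them.
# Field names and the 5% threshold are hypothetical.

def flag_exceptions(kpis, threshold=0.05):
    """Pure rules: flag any KPI whose actual value deviates from
    budget by more than `threshold` (as a fraction of budget)."""
    flagged = []
    for name, (actual, budget) in kpis.items():
        if budget and abs(actual - budget) / abs(budget) > threshold:
            flagged.append(name)
    return flagged

def draft_commentary(flagged, notes):
    """Judgment step: hand the flagged items to a general-purpose LLM.
    Shown with the Anthropic SDK; assumes ANTHROPIC_API_KEY is set."""
    import anthropic
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # hypothetical model choice
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Draft board commentary for these variances: "
                       f"{flagged}. Context: {notes}",
        }],
    )
    return msg.content[0].text

# Example month: opex is 8% over budget, the others are within 5%.
kpis = {"revenue": (980, 1000), "opex": (540, 500), "churn": (0.031, 0.030)}
print(flag_exceptions(kpis))  # → ['opex']
```

The point of the split: the rules step is cheap, deterministic, and auditable, while the LLM step handles the part that genuinely needs judgment. The freelancer's job is mostly wiring steps like these together against the five source systems.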
What is consistent across every role is the general-purpose LLM. An AP specialist may spend most of her day inside embedded ERP AI, but she still turns to Claude or ChatGPT to draft a vendor email or decode a policy question. A controller blends the ERP dashboard with a narrow custom agent for variance analysis, and uses the LLM for the commentary that wraps the numbers. An FP&A lead leans on it for modeling narratives and scenario framing. The CFO uses it across almost everything: board prep, client communication, strategic memos, synthesis across sources.
The pattern is not subtle. The higher the role, the more a general-purpose LLM is the primary tool, not a supporting one. Structured, repetitive work sits inside embedded AI and automation. Judgment, synthesis, and communication sit with the LLM. The higher you go, the more of your day is the second kind.
A simplified preview of how this maps for the CFO role:

There is one more role a general-purpose LLM plays: it is the prototyping layer for every automation that eventually gets built.
Consider how the board-prep automation above came to exist. It did not start as a freelancer build. It started months earlier with the CFO doing the work manually in Claude. Pulling the latest month's financials. Pasting in the board minutes. Iterating on the prompt. Copying the output into PowerPoint or Gamma.
Then came the discoveries. Claude can build the deck directly. Skip the copy step. Claude can connect to the document folder through Cowork. Skip the paste step. Over time, what was manual prototyping inside the LLM collapsed into a workflow stable enough to hand off to a freelancer and formalize as automation.
This is the usual path. You use the LLM to figure out whether the work can be done well by AI at all. Some tests turn into custom automations. Others do not. The ones that do are the ones where the pattern stabilized, the inputs became predictable, and the output quality held up. The LLM gave you the evidence to justify the build.
Without that prototyping layer, companies either skip automation they should build, or they build automation they should not.
Why this particularly matters now: Opus 4.7 just shipped. Every major model release lifts the ceiling on what can realistically be automated. Work that was too expensive, too unreliable, or too messy for automation two months ago may be in reach today. The people best positioned to notice this shift are the ones using a general-purpose LLM every day. They feel the change in their own work before any vendor tells them about it. That feedback is what keeps the rest of the AI portfolio current.
Without that daily signal, finance leaders are reading launch posts and guessing.
The point is not to choose between these strategies. It is to build the right combination for each role, each process, each problem. A general-purpose LLM does not compete with the rest of your AI investment. It complements it, and over time, it informs it.
We have covered the four building blocks, the worked example, and why the general-purpose LLM sits at the center of both prototyping and daily work.
The harder question is how this looks across an entire team, and how you turn that prototyping habit into something a developer can actually build.
The subscriber-only section has an interactive artifact for the first and a three-phase prompt template for the second.

Closing Thoughts
Thanks for reading. If this piece made you look at your AI stack a little differently, that was the goal.
Most finance teams I see spend money on one or two of these building blocks and wonder why the results feel patchy. The patchiness is not a sign that AI is overhyped. It is a sign that one tool cannot do the job of four.
Each layer earns its value in a different way. Embedded AI is the fastest to deploy and the cheapest per seat, and it works because it meets people inside the systems they already use. AI-native SaaS goes deeper on a single workflow, usually replacing a legacy tool that was lagging. Custom automation earns its keep on the twenty percent of tasks that run weekly and cost real hours. The general-purpose LLM does the one thing none of the others can: it handles judgment work, and it tells you which of the other three is actually worth investing in.
None of these layers substitutes for another. They compound. Automate the wrong thing and you have paid a freelancer to speed up waste. Skip the LLM and you have no way to tell which workflows are stable enough to build. Rely only on the LLM and you will do by hand what embedded AI could do for free.
The mix is what turns AI spend into operating leverage.
See you Tuesday.
Anna
We Want Your Feedback!
This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!
Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation!
Did you find this newsletter helpful? Forward it to a colleague who might benefit!
Until next Tuesday, keep balancing!
Anna Tiomina
AI-Powered CFO
