A few months ago I would have told you that the biggest risk in AI-assisted work was output quality. Wrong numbers, hallucinated data, logic that looked right but wasn't. That's what most of the governance conversation is focused on, and it's a legitimate concern.

But after a recent project—a high-stakes vendor dispute reconciliation, 1,500 line items, 1,000 documents, 24-hour deadline—I found myself worried about something different. The output was good. The client was happy. And I couldn't answer a single follow-up question without going back to the tool.

I had become the relay. Pasting questions in, pasting answers out, hoping each time that nothing would break. That experience shifted something in my thinking about what accountability actually means when AI does the work. This edition is my attempt to name what happened, explain why it matters in every stakeholder context you work in, and suggest what to do about it before you find yourself in the same position.

The Uncomfortable Feeling of Being a Relay

A few weeks ago, I took on an urgent reconciliation project for a client in a vendor dispute. Three years of complicated financial history, three parties involved, 1,500 line items, roughly 1,000 documents. The client needed it done in 24 hours.

I used Claude Cowork to do it. Matching invoices against POs, checking paid vs. unpaid, grouping and slicing the data multiple ways. It worked. The output was fast, clean, and thorough in a way that would have taken days manually. The client was impressed. I was transparent with them that AI was doing the heavy lifting.

And then the follow-up questions started coming in.

I realized quickly that I couldn't answer them. Not because the work was wrong—it wasn't. But because I hadn't been inside it. I had guided the agent at a high level, understood the general shape of what it was doing, but I hadn't built the deep context that normally comes from doing the work yourself. So I did the only thing I could: I pasted the client's question to Claude, got an answer, and pasted it back.

I was the relay. And that felt wrong in a way I'm still working through.

What makes this different

When you delegate to a person, knowledge transfers through the process. They ask questions. You push back. The work comes back in pieces, and you stay in the loop because the loop includes you. By the time the deliverable is done, both of you know something about it.

When you delegate to AI, the output arrives complete. There's no back and forth that builds your understanding. You can receive a finished, high-quality deliverable and have absorbed almost nothing about what's inside it. You weren't in the room — because there was no room.

The accountability, though, hasn't moved. The client, the board, the leadership team — they still think you know. And if you're in the relay loop, you don't. Not really.

Where this gets dangerous

This isn't just a fractional CFO problem or a client services problem. It shows up anywhere you're delivering AI-assisted work to stakeholders who expect you to stand behind it.

With clients, the exposure is relationship and reputation. Transparency expectations around AI are still forming. Follow-up questions can come days or weeks later, when the project feels closed. If you can't answer without re-querying the tool, you have a problem you didn't know you were creating.

With a board, the stakes are higher. The board assumes the CFO is fluent in the numbers, the judgments, the assumptions. If AI prepared the board report and you guided it at a high level but didn't go deep, you're not just in the relay loop—you're in a governance gap. A director asks a pointed question about an assumption in the forecast, and you're mentally reaching for the paste button.

With internal teams and leadership, decisions get made on your work, often without you present. By the time a question surfaces, it may already have been acted on. The relay loop there is invisible until it isn't.

The stakes matter

Not every project carries the same exposure. A routine internal analysis is different from a regulatory filing, an audit-adjacent deliverable, a dispute, or anything that ends up in front of a board or a court. The higher the stakes, the deeper your checking mechanism needs to be, and the more deliberately you need to stay in the loop while the work is happening, not after.

I'll be honest: I don't have a clean answer to the relay problem. I'm still working out what the right standard is. What I do know is that 'the client was happy' is not the same as 'I was accountable.' And I know that the speed advantage AI gives you can quietly erode the situational awareness you need to stand behind your own work.

What not to do

  • Don't let the output being correct convince you that you understood it. Correct and understood are not the same thing.

  • Don't let stakeholder satisfaction close the accountability gap. A happy client doesn't mean you could have answered their questions without help.

  • Don't assume speed justifies skipping the review—especially in regulated environments, high-stakes projects, or anything where follow-up exposure is real.

The disclosure question

When AI did most of the work and you performed a high-level review, what do you tell the stakeholder? The honest answer is somewhere between 'AI assisted with this' and 'I personally validated every line.' The gap between those two statements is where the real accountability question lives.

The single thing that makes this easier, before any of the draft language below, is having the conversation up front. If your firm has an explicit AI usage policy, lead with it at engagement start. If you don't, have a direct conversation before the work begins: how you use AI, what that means for your review process, and what the client or stakeholder should expect. That conversation removes the awkwardness from any later disclosure, sets realistic expectations, and sometimes reveals that the stakeholder has more context than you do.

That's exactly what happened in my reconciliation project. The client knew from the start that AI was handling the heavy lifting. When I explained that the speed meant I couldn't go deep into the three-year relationship history, they were fine with it — they had the context themselves and could validate the outputs on their end. The disclosure wasn't uncomfortable because it wasn't a surprise. They were a genuine partner in the process, not a passive recipient of a finished product.

That's the model. Transparency at the start, not damage control at the end.

The relay problem is not a reason to stop using AI for complex work. The reconciliation project needed to get done, the deadline was real, and AI made it possible. But walking into that situation without understanding what you're trading away is the part that needs to change. Speed is the headline. Context loss is the fine print. And in our role, the fine print is where the liability lives.

I don't have a tidy resolution to offer here.

What I do have is a way to recognize when you're drifting into the relay loop before it becomes a problem, a set of prompts to get back in the loop fast when you feel yourself losing the thread, and a framework for calibrating how deep your review needs to go based on what's actually at stake. That's what's in the paid section below.

Closing Thoughts

The relay problem doesn't have a clean solution yet. AI is moving faster than our professional standards for accountability, and most of us are figuring this out in real time, on real projects, with real stakeholders. That's uncomfortable, but it's also honest.

What I do know is that naming the problem is the first step. If this edition made you think about a project where you were closer to the relay than you realized, that awareness is worth something. Use the prompts, have the upfront conversation with your clients and stakeholders, and calibrate your review to what's actually at stake.

As always, I'd love to hear how you're handling this in your own work. Reply and let me know—these are the conversations that make this newsletter worth writing.

We Want Your Feedback!

This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!

Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation!

Did you find this newsletter helpful? Forward it to a colleague who might benefit!

Until next Tuesday, keep balancing!

Anna Tiomina
AI-Powered CFO
