For the last year we wrote a newsletter called Capital & Clarity, built on a simple premise: In-house finance teams produce numbers and investors pressure-test them. The two sides are looking at the same numbers from opposite chairs and reaching different conclusions. Capital & Clarity sat in the middle, where a good unit economics conversation sounds the same whether you're a CFO defending a forecast or an investor stress-testing one.

That premise was right. Clarity is still the foundation of every good finance function we walk into.

But it's no longer where the edge is.

Sharper analysis has been the primary lever for finance leaders for decades. Better data, sharper questions, tighter variance commentary. We came up trained on this:

  • Christian at a hedge fund after investment banking at Barclays.

  • Kenny at a growth equity fund after equity research at Credit Suisse.

The next decade in finance and accounting won't be won by better analysis. It will be won by better systems that produce it.

Here's what we mean.

For most of the post-Sarbanes-Oxley era, the bottleneck in a finance function has been human throughput. Closes take eleven days because eleven days of human attention is what it takes to chase down accruals, reconcile intercompany, fix the ten things that broke in the GL since last month, and write the variance narrative. FP&A forecasts are off by 12% because the analyst building the model has 60% of their week consumed by data-pulling and reformatting before they can think. The job was throughput. So the edge was the controller who threw harder, the FP&A leader who threw smarter, the CFO who managed both.

That bottleneck is moving. Not because AI replaces those people. That's the lazy framing. It's moving because the unit of work a finance team can absorb has fundamentally changed. A controller who deploys a well-designed AI agent across AP coding, bank rec, and accrual identification doesn't get faster at their job. They get a different job. They become the architect of a finance operating system instead of the operator of one. And the CFOs who are figuring that out right now are pulling away from the ones who aren't, in ways the second group can't see yet because the leading indicators don't show up in the income statement for two more quarters.

This is the AI-native CFO. Not a CFO who uses AI tools. A CFO whose finance function is designed assuming AI exists, the same way a 2010 CFO designed their function assuming spreadsheets exist and a 1995 CFO didn't.

The distinction matters because most of the AI-in-finance content you'll read this year will be about tools. "Here are the eight Copilot prompts every CFO should know." That's the AI-curious controller move. The AI-fluent move, the AI-native move, is structural. You redesign the close, the forecast cadence, the audit trail, the AP queue, and the FP&A operating rhythm assuming AI is a permanent participant in the workflow, not a temporary feature on top of the old workflow.

That's the story we want to write every week.

Concretely, we're going to spend the next six issues laying out the AI-Native CFO category. Why read-only AI deployment is the right starting move (next week). What the AI-native close actually looks like, workflow by workflow. Why audit trail architecture is the moat that separates a cool demo from production deployment.

We're writing it because we live it. We started QuantFi to build and run finance departments for scaling companies, and we regularly step into weak financial infrastructure. We deploy AI-led automations for clients on a continual basis, which keeps us at the cutting edge of AI applications in the finance department.

If that's interesting to you, we'd love to have you along for the ride.

Welcome to The AI-Native CFO. Tuesday mornings, 7am CT, every week.

— Christian & Kenny


THIS WEEK IN AI-NATIVE FINANCE

The agent pricing model is structurally broken.

Anthropic moved Claude Code to $100 and $200 monthly tiers this week, then reversed within hours. Same day, GitHub paused Copilot Individual signups, restricted Opus 4.7 to a new $39 Pro+ tier, and dropped older Opus models. GitHub's stated reason: "agentic workflows have fundamentally changed Copilot's compute demands."

Our take: CFOs evaluating AI vendors should assume per-seat pricing keeps breaking, and reverse-engineer the cost-of-AI line item against per-token usage. Budget for 20 to 40 percent upward repricing on agent-heavy SKUs over the next twelve months.
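The reverse-engineering exercise is simple enough to sketch in a few lines. Every number below (seat price, tokens per user, blended per-token rate, uplift range) is a hypothetical placeholder for illustration, not a vendor quote; swap in your own contracts and usage logs:

```python
# Hypothetical figures only: replace with your own vendor pricing and usage data.
SEAT_PRICE = 39.00            # $/user/month for an agent-heavy SKU (illustrative)
TOKENS_PER_USER = 25_000_000  # tokens/user/month observed in usage logs (illustrative)
COST_PER_M_TOKENS = 3.00      # assumed blended $/1M tokens, input + output (illustrative)

def implied_vendor_cost(tokens: int, rate_per_m: float) -> float:
    """Rough compute cost the vendor eats per seat each month."""
    return tokens / 1_000_000 * rate_per_m

def repriced_seat(price: float, uplift: float) -> float:
    """Seat price after an assumed repricing uplift (e.g. 0.2 to 0.4)."""
    return price * (1 + uplift)

cost = implied_vendor_cost(TOKENS_PER_USER, COST_PER_M_TOKENS)
# If implied compute cost exceeds the seat price, assume the price won't hold.
underwater = cost > SEAT_PRICE
budget_low = repriced_seat(SEAT_PRICE, 0.2)
budget_high = repriced_seat(SEAT_PRICE, 0.4)
```

With these placeholder numbers, the implied compute cost per seat is well above the seat price, which is exactly the "structurally broken" condition: budget against the repriced range, not the sticker price.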

Anthropic's 50th forward-deployed engineer.

Frontier AI vendors are placing senior engineers inside their largest enterprise customers. Goldman, Bridgewater, and three PE megafunds are the public ones. Direct embed model. The engineer reports to the AI lab but works in the customer's stack.

Our take: this is what the AI-native finance function looks like at the top of the market. Not a Copilot license, but an embedded model engineer inside the office of the CFO. The interesting question isn't whether your firm gets one. It's what your finance function looks like when your competitor does and you don't.

Anthropic's harness postmortem.

Two months of "Claude Code feels worse" complaints traced to three harness bugs, not the model. The notable one ran a stale-thinking-clear function every turn instead of once per idle session.

Our take: if you're piloting an AI agent in finance and output quality drifts after a few weeks, suspect the harness before suspecting the model. Memory and context-trimming logic is where silent regressions live. Log token-budget decisions so you can diff them when output changes. This is the kind of operational hygiene that separates a real deployment from a demo.
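A minimal version of that logging habit, sketched in Python. The `log_trim_decision` helper and its field names are our invention for illustration, not any vendor's schema; the point is just that every context-trimming decision lands in an append-only log you can diff between runs:

```python
import datetime
import io
import json

def log_trim_decision(logfile, turn_id, budget, used, action, reason):
    """Append one context-trimming decision as a JSON line.

    Writing one record per decision lets you diff two runs later and see
    whether the harness (not the model) changed its trimming behavior.
    """
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "turn": turn_id,
        "token_budget": budget,
        "tokens_used": used,
        "action": action,   # e.g. "none", "trim_oldest", "clear_thinking"
        "reason": reason,
    }
    logfile.write(json.dumps(record) + "\n")

# Demo against an in-memory buffer; in practice this is an append-only file.
buf = io.StringIO()
log_trim_decision(buf, 1, 180_000, 140_000, "none", "under budget")
rec = json.loads(buf.getvalue())
```

When output quality drifts, diff the `action` column across the suspect date range first. If trimming behavior changed, you have a harness regression, not a model problem.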

FROM THE FIELD

A few weeks ago we were invited to teach a private equity firm and their portfolio company executives at the firm's annual general meeting. The talk was titled "AI for the Modern Executive: From Awareness to ROI." About a hundred operators and sponsors in the room. CEOs, CFOs, deal partners.

The premise was simple. Most PE-backed companies have already deployed user-level AI licenses. That's table stakes. The next question every board is now asking is where the ROI shows up. The companies pulling ahead aren't the ones who handed out the most ChatGPT logins. They're the ones breaking processes down step by step, classifying each step as automatable, AI-assisted, or human-required, and rebuilding the workflow around what each tool actually does well.

We walked the room through one of those redesigns live, using a hypothetical month-end close from a $42M PE-backed industrial services company. Ten tasks, thirty hours a month, eight to ten days to close. The exercise everyone did at their laptop: upload the checklist into Claude, classify every task with the A / AI / H framework, calculate the hours saved at $75 an hour, and identify the top three tasks to automate first.
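The arithmetic behind that exercise fits in a few lines. Here's a sketch with made-up task names, hours, and recovery rates, not the actual checklist from the session; the recovery percentages in particular are assumptions you'd tune per process:

```python
RATE = 75  # fully loaded $/hour used in the session

# Illustrative close tasks: (task, monthly hours, A / AI / H classification).
close_tasks = [
    ("Pull bank and credit card statements", 2, "A"),   # automatable
    ("Code AP invoices to the GL", 6, "AI"),            # AI-assisted
    ("Bank reconciliation", 4, "AI"),
    ("Calculate and book accruals", 5, "AI"),
    ("Intercompany eliminations", 3, "H"),              # human-required
    ("Variance commentary for the board", 4, "H"),
]

# Assumed recovery rates: automatable ~90% of hours, AI-assisted ~50%.
RECOVERY = {"A": 0.9, "AI": 0.5, "H": 0.0}

hours_saved = sum(h * RECOVERY[c] for _, h, c in close_tasks)
dollars_saved = hours_saved * RATE

# Rank automation candidates by recoverable hours, take the top three.
ranked = sorted(close_tasks, key=lambda t: t[1] * RECOVERY[t[2]], reverse=True)
top_three = [t[0] for t in ranked[:3]]
```

The output is the same artifact the room produced at their laptops: an hours-saved number, a dollar figure, and a ranked shortlist of what to automate first.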

The room got it. The CFOs in particular got it. The follow-up conversations after the session were the real signal. Every one of them was about a different recurring process the operator wanted to redesign next.

That's the work. Not "use AI more." Pick one process. Break it into discrete steps. Decide what each tool actually does best. Rebuild.

WHAT WE'RE READING

  • "Tasteful Tokenmaxxing" (Latent Space, Apr 23). On deliberately spending more tokens per query for higher-quality reasoning. Relevant when designing F&A reasoning chains that have to be defensible, not just fast.

  • "The people do not yearn for automation" (Nilay Patel via Simon Willison, Apr 24). The most useful counter to AI triumphalism we've read this month. Worth reading before you tell your team the automation roadmap is good news for them.

  • "Salary Negotiation: Make More Money, Be More Valued" (Patrick McKenzie). Off-thesis pick. The classic essay on negotiation as a system, not a personality trait. Worth re-reading any time you're advising a portco operator on comp, or sitting on the other side of one of those conversations yourself.

— Christian & Kenny

P.S. If this issue resonates, the most useful thing you can do is reply with one sentence on what you'd want us to write about next. We read every reply, and the first six issues are still mostly clay.

About the authors

Christian Sanford and Kenny Jen are the co-founders and managing partners of QuantFi, where they help PE- and VC-backed companies build investor-grade finance functions powered by AI-native infrastructure.

Christian came up through investment banking at Barclays ($20B+ in M&A and capital markets), buy-side investing at a hedge fund, and fractional CFO work for investor-backed companies. BBA and MSA in Accounting from Texas Tech.

Kenny led financial strategy at Pilot CFO, Pure Beauty, and Emil Capital Partners after starting his career at Credit Suisse. He has overseen finance for consumer, SaaS, and manufacturing businesses, and works directly with founders on pricing, working capital, and capital allocation. BSBA in Finance from Georgetown.

They were recently invited to teach AI-native finance to 12,000 finance leaders through CFO Connect, and to a private equity firm and its portfolio company executives at the firm's annual general meeting.
