Claude went down Monday morning with Anthropic citing unprecedented demand. Paid subscribers have more than doubled since October and free users are up more than 60% since January. Working professionals across the spectrum are jumping at the promise of increased efficiency and cost savings.
However, a survey of 372 enterprises published last fall tells the other side of that story: 85% of companies miss AI cost forecasts by more than 10%, with nearly 25% missing by over 50%. The most telling finding is where the surprises are actually coming from. Data platforms rank first as the source of unexpected AI spend, cited by 56% of respondents. Network and infrastructure costs rank second. LLMs rank fifth. The thing everyone prices is not what most people get wrong.

The savings case for AI in finance is a live variable like any other. PE operators know how to stress-test it, and they will. What does not get the same scrutiny is the cost side, because the production data to challenge it does not yet exist in most rooms. Nobody in most investment committee conversations has run these workflows through a real close cycle. The cost side walks through on assumption. What follows is a breakdown of where those assumptions tend to break:
About QuantFi
We run AI‑enabled finance and accounting departments for scaling businesses.
We combine AI and specialist finance operators to run your workflows, eliminate manual choke points, and turn efficiency opportunities into shipped processes.
1. Implementation
Before any AI workflow runs reliably in a finance function, the underlying data has to be clean, permissioned, and consistently structured. At most middle market companies, it is none of those things: inconsistent chart of accounts coding across entities, manually handled intercompany eliminations, AP data living in a spreadsheet that connects to nothing.
That cleanup work is rarely budgeted as part of the AI project, but it is a prerequisite for it. A reasonable planning assumption is that total implementation cost, including data preparation, integration, and configuration, will run meaningfully higher than the software subscription alone. How much higher depends on the state of the underlying data, but treating the subscription as a proxy for project cost is where most business cases go wrong before they start.
A useful diagnostic before scoping any engagement: if you asked your team to produce a clean, consistently coded trial balance for the last 24 months by end of week, what would actually happen? That answer is a better indicator of implementation complexity than any technical evaluation.
2. Inference
Token-based API pricing does not behave like a SaaS subscription. It behaves like a utility meter with no circuit breaker. Context length, retry logic, and exception handling all drive consumption in ways that are difficult to model before a workflow runs under real conditions.
A reasonable planning assumption: test environment cost estimates should be treated as a floor, not a projection. A workflow that handles the majority of cases cleanly but requires extended reasoning on exceptions will burn tokens nonlinearly on those exceptions. The practical implication is that cost per workflow should be monitored from day one, and budgets should leave room for production variability that testing will not surface.
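To make the nonlinearity concrete, here is a minimal sketch of expected cost per workflow run when a small share of exceptions consumes far more tokens than the happy path. Every figure in it is hypothetical, chosen for illustration rather than taken from any vendor's rate card:

```python
# Illustrative sketch: blended inference cost per workflow run.
# All prices and token counts below are hypothetical assumptions.

PRICE_PER_1K_TOKENS = 0.01  # hypothetical blended input/output rate, USD

def expected_cost_per_run(clean_tokens, exception_tokens, exception_rate):
    """Blend the cheap happy path with the expensive exception path."""
    expected_tokens = (
        (1 - exception_rate) * clean_tokens
        + exception_rate * exception_tokens
    )
    return expected_tokens / 1000 * PRICE_PER_1K_TOKENS

# Happy path: 2k tokens. Exceptions: 40k tokens (long context, retries,
# extended reasoning). A 5% exception rate nearly doubles the average.
clean_only = expected_cost_per_run(2_000, 40_000, 0.00)
with_tail = expected_cost_per_run(2_000, 40_000, 0.05)
print(f"test-environment estimate: ${clean_only:.4f} per run")
print(f"with a 5% exception tail:  ${with_tail:.4f} per run")
```

The exception tail, not the happy path, is what moves the average, which is why a clean test environment systematically understates production spend.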

3. Oversight Labor
AI in finance does not eliminate labor. It changes the kind required. Someone still has to review exceptions, determine whether a flagged variance is real or a model artifact, and own the process when the output looks wrong. That person needs enough technical context to troubleshoot the system and enough accounting judgment to know when it matters.
That role does not exist yet on most portco finance teams. A reasonable planning assumption is that building it, whether through retraining, a new hire, or a fractional resource, is a real cost that belongs in the model alongside the headcount savings the AI was supposed to generate. There is also a transition period where both the old and new processes run simultaneously before the manual one is retired. That is full-cost labor with no productivity offset and belongs in year-one economics.
4. Ongoing Maintenance
AI workflows drift. New entities get added, the chart of accounts changes post-acquisition, the ERP gets upgraded. The model does not know the business changed. It produces outputs that look plausible and are wrong in ways that take time to catch.
A reasonable planning assumption is that maintenance is a recurring annual cost rather than a one-time project expense. The specific amount will vary, but the pattern is consistent: projects framed as one-time builds tend to generate unplanned follow-on work 12 to 18 months later to address drift that was not anticipated at launch. Building a maintenance line into the original approval is more accurate and easier to defend than explaining cost overruns after the fact.
What the Full Model Produces
The gap between a software subscription and the actual fully loaded cost of an AI deployment in finance is large enough and consistent enough across companies that it needs to be modeled explicitly before commitments are made. The specific numbers depend on data quality, workflow complexity, and team starting point. The point is not a precise multiplier. It is that the one-line version of the business case is not the real business case.

Most projects that fail do not fail because the technology does not work. They fail because the savings were modeled against the software cost, and the actual cost was a multiple of that.
A more defensible 2026 business case treats implementation as a multi-line budget item, models inference costs with room for production variability, includes an oversight labor line, and carries maintenance as a recurring annual expense. The resulting payback period will be longer than the one-line version. It will also be closer to what actually happens.
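The arithmetic behind that longer payback period can be sketched in a few lines. Every number below is an illustrative assumption, not a benchmark; the point is the structure of the comparison, with the four cost lines from the sections above carried alongside the subscription:

```python
# Hypothetical year-one model: one-line business case vs fully loaded.
# All dollar figures are illustrative assumptions.

subscription = 60_000   # annual software subscription
savings = 150_000       # modeled annual labor savings

# One-line case: payback measured against the subscription alone.
one_line_payback_months = subscription / savings * 12

# Fully loaded case: the cost lines the one-line version omits.
implementation = 90_000  # data cleanup, integration, configuration
inference = 25_000       # production token spend, exception tail included
oversight = 40_000       # partial FTE reviewing exceptions and drift flags
maintenance = 20_000     # recurring repair as the business changes
parallel_run = 15_000    # old and new processes running side by side

year_one_cost = (subscription + implementation + inference
                 + oversight + maintenance + parallel_run)
full_payback_months = year_one_cost / savings * 12

print(f"one-line payback:     {one_line_payback_months:.1f} months")
print(f"fully loaded payback: {full_payback_months:.1f} months")
```

Under these illustrative inputs the payback stretches from under five months to over a year and a half. The specific ratio will differ by company; the direction will not.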
Where QuantFi Comes In
We work with finance leaders and their investors to lead AI automation on the accounting and finance side, from initial scoping through implementation. Engagements typically start with a discovery sprint to map the finance function, identify which workflows are actually worth automating, and produce a cost and return estimate grounded in the specific state of the client’s data and systems. That sprint determines whether a full implementation makes sense before any significant capital is committed.
For the workflows that do clear that bar, we build the full cost picture, structure a phased roadmap that gates each stage on demonstrated return from the prior one, and stay involved through deployment. The goal is not to slow down adoption. It is to make sure the right opportunities get pursued, the investment committee approves the real number, and nobody on the team is surprised by a cost that was always going to be there.
Kenny & Christian