
On April 9th, I moderated “Buy, Sell, Hold: What Survives the AI Era?” at the Abacum Summit.
We tackled a simple question: What parts of finance are actually worth keeping — and what needs to go?
Alongside Chris Brubaker (Postscript), Fraser Hopper (PostHog) and Tania Secor (Stamford Health), we turned it into a live, audience-voted debate on what modern finance teams should buy into, sell off or hold on to as AI reshapes the function.
If you couldn’t make it to Gotham Hall, the full session is now available on demand.
BTW - I Can Help You Hire
I didn’t have “recruiting company” on my 2026 bingo card, but hey, here we are!
We’ve been filling key finance and accounting roles left and right for all my CFO friends.
If you’d like to hire a Manager, Director, or VP of Finance or Accounting from our readership, we’d love to help make the connection. We can find your right-hand person.
If you’re a candidate passively open to kick-ass roles (even if you’re in a great role now) and you haven’t entered our warm talent pool, you can do so here. Takes 3 min.

Source: Decision Intelligence
“They’re Not Confessing, They’re Bragging”
A while back OpenAI leaked / posted / "presented" a leaderboard of which companies were burning the most tokens.
As a CFO, my first reaction was: damn, these people just got doxxed. They're all about to be fired.
(Although honestly, if their CFO didn't already know someone spent $3.4M on Codex this month, the CFO should be fired too. I mean, Isaac. What. The. Fuck.)
But the weird part is the companies on the list took the leaderboard and ran with it. They turned it into marketing material, bragging about how much they were spending.
It was the start of tokenmaxxing.
It reminded me of this scene in The Big Short, where the two NINJA mortgage Chads are hanging at the office (bar) talking about all the unqualified borrowers they're writing loans for:
“So do applicants ever get rejected?”
“Seriously?”
“Look, if they get rejected, I suck at my job…”
“Can you hold on a second???”
“I don’t get it… why are they confessing?”
“They’re not confessing… they’re bragging”
That's where we are with token consumption right now.
I just got off stage moderating a panel at Rillet Recon with AJ Ljubich (SVP FP&A at Datadog), Dana Decker (VP Finance at Opendoor), and Micah Richard (AI Principal at PwC). And then I spent the rest of the day in hallway conversations with investors, controllers, and a few former CFOs of public companies.
I walked in thinking the conversation would be about how to deploy AI.
I walked out realizing nobody actually knows how to budget this stuff, control it, or measure ROI yet.

Here's what I came away with.
Just Because You're Burning Tokens Doesn't Mean You're Building Anything Good
I had a sidebar with Amy Butte, former CFO of the NYSE and Navan, and she made me laugh:
"Just because you're consuming a ton of tokens, doesn't mean you're necessarily building something worthwhile."
Reading between the lines: you can use a ton of tokens to build a ton of crap.
(or stuff that gets you in trouble with your spouse)
Token consumption gets conflated with innovation. But it’s not a proxy for innovation. It’s merely a proxy for activity.
Activity is when your engineer rips Codex 400 times to refactor a function. Innovation is when one of those runs ships something a customer pays for. So, like, revenue.
And the dispersion between engineers is wild. AJ said Datadog is tracking token usage very closely internally and they're seeing power users with consumption that dwarfs everyone else's.
This recent article shared by Alfred Lin of Sequoia was eye opening: companies are pushing their managers to be somewhere around the 50th percentile amongst their teams.
Why the 50th percentile, and not the 10th or the 75th? You want to demonstrate enough consumption that you understand how to practically use AI, as well as its limitations, so you can better direct your team. But you also don’t want to be so far out in front of your team that the consumption figures imply they aren’t adopting fast enough.
“This applies everywhere. Engineering is just where we start.
This expectation is true across the entire company. Every leader at every level should have real hands-on experience with the tools their teams use. Engineering is where it's most urgent and where we can measure it today.”
Not too hot, not too cold, but once again, a proxy metric.
Ivan Makarov, an Operating Partner at a16z who builds out finance functions for early-stage companies, sent me that article. And he gave me his opinion on consumption trends:
"In my opinion, we're too early in the cycle of tokens being used effectively. Maybe the top 10% of engineers, salespeople, and finance leaders are effective at this because they've been top 10% performers throughout their career. We should not expect this from everyone else as they catch up. They're not going to ramp as fast, and they aren't going to put the tokens to the best use."
Which tracks. The people who were good at their jobs before are still the people who are good at their jobs now.
On stage, Dana gave the example of Opendoor’s head of growth, who’s crushing it while crushing tokens (in a good way).
So when a company brags about how many tokens they're burning, the obvious follow-up question is: who's burning them? Because if it's the same five engineers who were going to ship great work anyway, Archimedes just gave them a big ass lever. If it's the other 95 who are still figuring out what a good prompt looks like, you're paying for poor gas mileage when the Strait is closed.
Tokens Are the New Benefits Load
Nobody has a clear budgeting framework for internal AI usage yet.
We’ve come up with all sorts of outcome- and usage-based pricing mechanics for selling AI features, but the sophistication for budgeting internal usage lags far behind.
Here's how I used to think about per-employee overhead.
Sales tooling — Salesforce, Outreach, Gong, ZoomInfo, the whole stack — runs about 10% to 15% of a salesperson's salary.
Benefits load (healthcare, 401k match, payroll taxes, the works) runs about 20% to 22% of cash OTE.
These are the rough ratios I'd plug into a model when I was sizing up a sales team or building a US based hiring plan.
Tokens are going to land in the same neighborhood. They're the next per-employee line item. Not necessarily COGS, not infrastructure, not some special "AI budget" the CTO controls (unless of course it’s being used to serve the product you sell). Linked to the employee, like a laptop, and closer to payroll assumptions.
Ivan from a16z told me he's already seeing the upper end of that range:
"I have heard some say as much as 25% to 50% of the R&D salaries are now budgeted for AI."
So I tested this on stage. I had 250 finance leaders in the room and ran a quick buy/sell.
First question: in five years, will the average finance employee consume the same amount of tokens as the average engineer?
Most of the room bought. Which is a really big shift. Two years ago, if you’d asked whether finance would consume tokens at engineering scale, people would have laughed. The people who still think wingtip shoes are cool? Go back to bed.
Second question: in five years, will we spend as much on tokens per employee as we spend on their salary?
Only about 10% bought. I went too high.
Third question: will we spend at least as much on tokens as we spend on benefits?
Most of the room bought.
So the consensus from a room of 250 public-company and growth-stage finance leaders is: tokens land somewhere in the 20% range of cash comp. That’s not a complete salary duplication (or replacement). As we homed in on before, they see it as a benefits-equivalent line item that scales with headcount.
This matters because the AI hype cycle keeps insisting tokens will replace people. The finance people who actually have to budget this stuff are saying: no. Tokens are an additional cost that grows with the team, not a one-for-one substitute for the team. If you're modeling a 50-person engineering org five years from now, you're not modeling 25 engineers and doubling their OTE for tokens. You're modeling 50 engineers plus a per-head token budget that looks a lot like what you spend on their healthcare today.
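The budgeting math above fits in a few lines. A minimal sketch, where the $200k average salary, the 22% benefits load, and the 20% token load are all illustrative assumptions (not anyone's actual figures):

```python
# Back-of-envelope headcount model: tokens as a benefits-style load.
# All inputs are illustrative assumptions, not real company figures.

def all_in_cost(headcount, avg_salary, benefits_load=0.22, token_load=0.20):
    """All-in annual cost of a team: salary + benefits + per-head token budget."""
    per_head = avg_salary * (1 + benefits_load + token_load)
    return headcount * per_head

# 50-person engineering org at an assumed $200k average salary
old_model = all_in_cost(50, 200_000, token_load=0.0)  # pre-AI: salary + benefits only
new_model = all_in_cost(50, 200_000)                  # salary + benefits + ~20% tokens

print(f"Old all-in: ${old_model:,.0f}")                    # $12,200,000
print(f"New all-in: ${new_model:,.0f}")                    # $14,200,000
print(f"Token line item: ${new_model - old_model:,.0f}")   # $2,000,000
```

Swap in your own per-function ratios; the point is that the token line scales with headcount, not instead of it.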
Which has a few implications worth chewing on:
You need a per-function ratio, not a company-wide pool.
Engineering's number is going to be different from finance's number is going to be different from sales' number.
Same way benefits load varies by geography and seniority.
Build the ratio, plug it into the model, and stop treating tokens like a slush fund.
You need to start tagging spend by employee.
AJ's already doing this at Datadog.
If you can't tell me what your average engineer is spending versus your top consumer, you can't budget for next year.
You need to renegotiate this in your hiring plans.
If a new hire costs you their salary plus 22% benefits plus 20% tokens, that's a meaningfully different all-in number than what you were modeling 18 months ago.
Hiring plans built on old ratios are going to get exposed like a papier-mâché suit in the rain.
The "O Word" Is Coming
I'm a bit of an earnings call historian. Actually, archaeologist is closer. I dig through old transcripts wearing my movie-quality replica Indiana Jones hat.

Here's one I think about a lot.
Snowflake's Q4 FY2022 earnings call. March 2, 2022. CFO Mike Scarpelli is reading prepared remarks and drops this line:
"We also introduced platform enhancements that improved efficiency higher than expected, which lowered credit consumption."
The street lost its mind.
Wut?
Snowflake's whole model is credit consumption. Every quarter, every guide, every analyst question came back to consumption. And here was the CFO saying: yeah, our customers are using less of the thing we sell. On purpose. Because we made it more efficient.
That's the moment optimization showed up at Snowflake. Customers had been spending freely for years and finally got around to looking at the bill. CFOs started calling it out on earnings calls. Procurement teams got involved. Eng teams got tasked with rewriting queries to use fewer credits. Optimization took on its own life, and it permeated through all parts of the cloud and database infra stack.
The same thing is coming for AI spend. Probably 6 months out.
Right now nobody is pushing back on the budget because nobody understands how to budget for it. And some companies are totally OK with no budget (for the moment): even if the efficiency isn't there yet and the costs are rising, the spend is also an investment in the people who are learning to use AI. That's how many companies can get ahead, especially if they can stomach temporary burn and margin compression.
Said another way, founders are letting their engineers burn whatever they want because the alternative is being slow, and slow is the only thing scarier than expensive. So the clamps are off. Spend whatever, just ship.

Like Uber.
Their CTO said they blew through their 2026 token budget by April this year… yes, four months in.
"I’m back to the drawing board, because the budget I thought I would need is blown away already."
That's not going to last.
So a couple of practical things to start doing today:
Tag your token spend by team and by use case. If you can't break it apart, you can't optimize it. This is the same lesson everyone learned about cloud spend the hard way. AWS bills used to come in as one giant number until somebody figured out tagging, and then suddenly everybody could see that the marketing team's analytics pipeline was costing more than their entire payroll.
Build a usage baseline before you have to. When the optimization wave hits, the CFOs who already know what their team's normal looks like will have a starting point. The ones who don't will be panic-cutting and breaking stuff.
Watch for the "O word" on earnings calls. The first public AI company to mention "optimization" or "efficiency improvements" in the same sentence as token consumption is going to mark the hard left turn. After that, every CFO in the industry has to cover the move. It’s like road racing. You see the first guy make a break and you need to cover it before they get away and leave you in the dust.
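To make the tagging point concrete, here's a minimal sketch of what a per-team, per-employee spend rollup looks like. The records, team names, and dollar figures are all made up for illustration:

```python
from collections import defaultdict
from statistics import median

# Hypothetical token-spend records, tagged by team and employee (made-up data)
spend = [
    {"team": "eng", "employee": "a", "usd": 4100},
    {"team": "eng", "employee": "b", "usd": 310},
    {"team": "eng", "employee": "c", "usd": 280},
    {"team": "finance", "employee": "d", "usd": 95},
    {"team": "finance", "employee": "e", "usd": 1200},
]

by_team = defaultdict(float)
for row in spend:
    by_team[row["team"]] += row["usd"]

# The baseline you want before the optimization wave hits:
# median per-head spend vs. your top consumer, per team
for team, total in by_team.items():
    heads = [r["usd"] for r in spend if r["team"] == team]
    print(team, "total:", total, "median:", median(heads), "top:", max(heads))
```

Even on fake data, the dispersion jumps out: one engineer is consuming more than the rest of the team combined, which is exactly the pattern AJ described at Datadog.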
Which brings me to the point of this post: ROI.

ROImaxxing
Finance folks should have some frameworks for thinking about token ROI. And not just on cost savings, but on whether AI adoption is actually moving revenue upward.
Because that is absolutely the hardest part right now: correlating your revenue growth with your AI adoption.
I don’t think we need to get cute and invent new metrics. As Micah from PwC mentioned on the panel we were on, the same efficiency metrics still hold. What we need to do is start applying them correctly.
Think about it: how would you measure employee efficiency before? Well, you’d have to break it out by department or function. You wouldn’t measure a finance person on quota attainment. And you wouldn’t measure an engineer on recruiting roles filled.
Ideally the “benefits load” of AI tokens you are adding has a non-linear impact on an employee’s productivity.
Recruiters should see an uplift in the number of candidates they are able to place because they no longer need to manually fill out scorecards
Finance folks should be able to hit the ball closer to the pin on forecasts because they can now run multiple probabilistic models in addition to the deterministic ones they used to rely on (see my post on what WellHub has been able to accomplish, bringing their mean error down from 10% to 2%)
CMOs should be able to iterate on ad creative faster and drive more pipeline from the same demand gen dollars
The more I think about it (I’m about to get nostalgic for a second here), tokens should be thought of like NOS in Need for Speed: Underground. I was obsessed with this video game growing up. You raced souped-up cars through the streets of various cities and could tap into an expensive and limited alternative fuel source (NOS) to supercharge your car and pass a competitor.

So we need to take a nuanced approach when measuring productivity at the employee level, segmenting them by department and job function. And at the company level, this should absolutely show up in revenue per employee (the GOAT of SaaS metrics, IMO). And ULTIMATELY it should flow down to profit per employee.
Because AI or no AI, the goal of business has not changed. It’s to make money.
TL;DR: Median Multiples are FLAT week over week.
The overall tech median is 3.3x (DOWN 0.1x w/w).
What Great Looks Like - Top 10 Medians:
EV / NTM Revenue = 14.7x (UP 0.1x w/w)
CAC Payback = 24 months
Rule of 40 = 50%
Revenue per Employee = $595k
Figures for each index are measured at the Median
Median and Top 10 Median are measured across the entire data set, where n = 144
Recent changes
Added: Navan, Bullish, Figure, Gemini, Stubhub, Klarna, Figma
Removed: Jamf, OneStream, Olo, Couchbase, Dayforce, Vimeo
Population Sizes:
Security & Identity = 17
Data Infrastructure & Dev Tools = 13
Cloud Platforms & Infra = 15
Horizontal SaaS & Back office = 17
GTM (MarTech & SalesTech) = 18
Marketplaces & Consumer Platforms = 18
FinTech & Payments = 28
Vertical SaaS = 17
Revenue Multiples
Revenue multiples are a shortcut to compare valuations across the technology landscape, where companies may not yet be profitable. The most standard timeframe for revenue multiple comparison is on a “Next Twelve Months” (NTM Revenue) basis.
NTM is a generous cut, as it gives a company “credit” for a full “rolling” future year. It also puts all companies on equal footing, regardless of their fiscal year end and quarterly seasonality.
However, not all technology sectors or monetization strategies receive the same “credit” on their forward revenue, which operators should be aware of when they create comp sets for their own companies. That is why I break them out as separate “indexes”.
Reasons may include:
Recurring mix of revenue
Stickiness of revenue
Average contract size
Cost of revenue delivery
Criticality of solution
Total Addressable Market potential
From a macro perspective, multiples trend higher in low interest environments, and vice versa.
Multiples shown are calculated by taking the Enterprise Value / NTM revenue.
Enterprise Value is calculated as: Market Capitalization + Total Debt - Cash
Market Cap fluctuates with share price day to day, while Total Debt and Cash are taken from the most recent quarterly financial statements available. That’s why we share this report each week - to keep up with changes in the stock market, and to update for quarterly earnings reports when they drop.
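Putting the definitions above together, the multiple is simple arithmetic. A quick sketch with made-up figures:

```python
def ev_ntm_multiple(market_cap, total_debt, cash, ntm_revenue):
    """EV / NTM revenue, where EV = Market Capitalization + Total Debt - Cash."""
    enterprise_value = market_cap + total_debt - cash
    return enterprise_value / ntm_revenue

# Illustrative company (made-up figures):
# $12B market cap, $1B debt, $3B cash, $1B NTM revenue
print(round(ev_ntm_multiple(12e9, 1e9, 3e9, 1e9), 1))  # 10.0 -> "premium" territory
```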
Historically, a 10x NTM Revenue multiple has been viewed as a “premium” valuation reserved for the best of the best companies.
Efficiency
Companies that can do more with less tend to earn higher valuations.
Three of the most common and consistently publicly available metrics to measure efficiency include:
CAC Payback Period: How many months does it take to recoup the cost of acquiring a customer?
CAC Payback Period is measured as Sales and Marketing costs divided by Revenue Additions, and adjusted by Gross Margin.
Here’s how I do it:
Sales and Marketing costs are measured on a TTM basis, but lagged by one quarter (so you skip a quarter, then sum the trailing four quarters of costs). This timeframe smooths for seasonality and recognizes the lead time required to generate pipeline.
Revenue is measured as the year-on-year change in the most recent quarter’s sales (so for Q2 of 2024 you’d subtract out Q2 of 2023’s revenue to get the increase), and then multiplied by four to arrive at an annualized revenue increase (e.g., ARR Additions).
Gross margin is taken as a % from the most recent quarter (e.g., 82%) to represent the current cost to serve a customer
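Here's that method as a small sketch. The quarterly figures are made up, and the lists are assumed to be ordered oldest to newest with at least five quarters of history:

```python
def cac_payback_months(quarterly_sm, quarterly_revenue, gross_margin):
    """CAC payback period, in months, per the method described above.

    quarterly_sm / quarterly_revenue: lists of quarterly figures, oldest -> newest,
    with at least 5 quarters each.
    """
    # S&M on a TTM basis, lagged one quarter:
    # skip the latest quarter, sum the four before it
    sm_ttm_lagged = sum(quarterly_sm[-5:-1])
    # Annualized revenue additions: YoY change in the latest quarter, times four
    rev_additions = (quarterly_revenue[-1] - quarterly_revenue[-5]) * 4
    # Gross-margin-adjust, then express as months
    return sm_ttm_lagged / (rev_additions * gross_margin) * 12

# Made-up example (in $M): $40M of S&M per quarter, revenue up $20M YoY, 80% GM
payback = cac_payback_months(
    quarterly_sm=[40, 40, 40, 40, 40],
    quarterly_revenue=[100, 105, 110, 115, 120],
    gross_margin=0.80,
)
print(payback)  # 30.0 months
```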
Revenue per Employee: On a per head basis, how much in sales does the company generate each year? The rule of thumb is public companies should be doing north of $450k per employee at scale. This is simple division. And I believe it cuts through all the noise - there’s nowhere to hide.
Revenue per Employee is calculated as: (TTM Revenue / Total Current Employees)
Rule of 40: How does a company balance topline growth with bottom line efficiency? It’s the sum of the company’s revenue growth rate and EBITDA Margin. Netting the two should get you above 40 to pass the test.
Rule of 40 is calculated as: TTM Revenue Growth % + TTM Adjusted EBITDA Margin %
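Both Rule of 40 and Revenue per Employee are simple enough to write down. The inputs below are made-up examples:

```python
def rule_of_40(ttm_revenue_growth_pct, ttm_ebitda_margin_pct):
    """Sum of TTM revenue growth % and TTM adjusted EBITDA margin %; >= 40 passes."""
    return ttm_revenue_growth_pct + ttm_ebitda_margin_pct

def revenue_per_employee(ttm_revenue, total_employees):
    """Simple division: TTM revenue over current headcount."""
    return ttm_revenue / total_employees

# Made-up company: 30% growth, 15% EBITDA margin, $600M TTM revenue, 1,000 employees
print(rule_of_40(30, 15))                 # 45 -> passes the test
print(revenue_per_employee(600e6, 1000))  # 600000.0 -> above the $450k rule of thumb
```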
A few other notes on efficiency metrics:
Net Dollar Retention is another great measure of efficiency, but many companies have stopped quoting it as an exact number, choosing instead to disclose if it’s above or below a threshold once a year. It’s also uncommon for some types of companies, like marketplaces, to report it at all.
Most public companies don’t report net new ARR, and not all revenue is “recurring”, so I’m doing my best to approximate using changes in reported GAAP revenue. I admit this is a “stricter” view, as it is measuring change in net revenue.
OPEX
Decreasing your OPEX relative to revenue demonstrates Operating Leverage, and leaves more dollars to drop to the bottom line, as companies strive to achieve +25% profitability at scale.
The most common buckets companies put their operating costs into are:
Cost of Goods Sold: Customer Support employees, infrastructure to host your business in the cloud, API tolls, and banking fees if you are a FinTech.
Sales & Marketing: Sales and Marketing employees, advertising spend, demand gen spend, events, conferences, tools.
Research & Development: Product and Engineering employees, development expenses, tools.
General & Administrative: Finance, HR, and IT employees… and everything else. Or as I like to call myself “Strategic Backoffice Overhead.”
All of these are taken on a GAAP basis and therefore INCLUDE stock-based comp, a non-cash expense.
Please check out our data partner, Koyfin. It’s dope.
Wishing you trade at a high revenue and EBITDA multiple,
CJ
