“What’s your AI strategy?”

True story - I remember the first time a VC asked me that, and I was like, well that’s a lazy question… ‘what’s YOUR AI strategy? I sell car parts online, my mans’

Didn’t go over that well…

So if that boardroom question makes you sweat, you're not alone. Many of us CFOs are still sorting through where AI actually fits into our finance orgs in a way that is practical, not just performative.

That’s why Brex created The CFO Guide to AI Strategy. It’s a straightforward look at how finance teams are using AI today to cut manual work, move faster, and make better decisions, without overcomplicating things. Get the guide:

“What’d We Break a Window or Something?”

Whenever I go to dinner with my friend Terry, the check arrives and, without fail, he’ll glance down at it, deadpan the waiter, and exclaim:

“What’d we break a window or somethin’?”

Terry, from Franklin, MA

It gets an uncomfortable chuckle every time.

I recently received the flood insurance renewal for our Florida condo. And I had the exact same reaction (except serious).

I guess the actuarial risk models are expecting a “once-in-a-hundred-year” storm every five years now.

And it got me thinking about risk, and the different flavors it comes in.

Two Types of Risk

Definitionally…

Risk = the Net Present Value of all the bad shit that is going to happen to you.

But in practice, it comes in two different forms.

The first type is actuarial risk. It's random, within predictable probability ranges based on your circumstances. Like getting hit by lightning, winning the lottery, or your server hardware failing. None of it is emotionally driven. There are no human adversaries trying to exploit your systems. There’s no ill will involved.

For actuarial risk, we have statistical models that work. Insurance companies can price policies because they understand the underlying probability distributions. When my flood insurance premium goes up, it's because the actuarial tables are adjusting to new data about storm frequency and severity. And honestly, the storms have been brutal the last ten years.
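
To put rough numbers on it, here’s a back-of-the-envelope sketch in Python. Every figure is invented for illustration (real actuaries fit full frequency and severity distributions), but it shows why stable probabilities make risk priceable, and how it ties back to the NPV definition above:

```python
# Back-of-the-envelope actuarial math. Every number is invented for
# illustration; real actuaries fit full frequency/severity distributions.

annual_storm_probability = 0.20    # a "hundred-year" storm, now every five years
expected_loss_per_storm = 150_000  # average claim if one hits
discount_rate = 0.05
horizon_years = 10

# Risk, per the definition above: the NPV of the expected bad stuff.
npv_of_risk = sum(
    (annual_storm_probability * expected_loss_per_storm) / (1 + discount_rate) ** t
    for t in range(1, horizon_years + 1)
)

# An insurer charges one year of expected loss plus a loading for costs
# and profit. When storm frequency doubles, the premium follows it up.
loading = 1.4
annual_premium = annual_storm_probability * expected_loss_per_storm * loading

print(f"NPV of expected flood losses over {horizon_years} years: ${npv_of_risk:,.0f}")
print(f"Rough annual premium: ${annual_premium:,.0f}")
```

Stable inputs, stable output. That’s the whole trick, and it’s why the renewal letter shows up like clockwork.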

The second type is behavioral risk, with probabilities that change based on what you have to lose and what a counterparty has to gain. Like your house getting robbed, your company data being held for cyber ransom, or your crypto wallet getting hacked. All of it is based on incentives.

As Andy Ellis, former CISO at Akamai, explained to me:

"When you think about how human adversaries work, they're very non-predictable. First of all, they're attacking you, not a random person. You can't use the actuarial table for 'Am I going to be in a car accident?' if somebody's trying to run you off the road, personally. Either it's happening or it's not. There's no random chance involved."

Andy on RTN (Apple | Spotify | YouTube)

Humans respond to incentives. Random chance doesn't.

Yet, we treat both types of “risk” the same in our attempts to quantify bad shit.

The Business of Kidnapping

Back in the 1970s and 80s, a wave of executive kidnappings swept Mexico. At first, it was political groups, but soon it turned transactional. Criminals realized multinational companies carried "kidnap and ransom" insurance. Pay the ransom, get your exec back.

What was the unintended consequence? The insurance market essentially created a pricing floor for abductions, turning hostage-taking into a professionalized cottage industry.

Another classic example happened in British-ruled India in the 1910s. The colonial government put a bounty on cobra skins to cut down the deadly snake population in Delhi. At first, it worked. Locals hunted and killed snakes for the reward. But then people began breeding cobras just to collect the bounty. When the government caught on and canceled the program, breeders released their now-worthless cobras into the wild, making the snake problem even worse than before.

The Getty Principle

John Paul Getty understood behavioral risk better than most. In 1973, his grandson was kidnapped in Rome. The abductors demanded $17 million, but Getty, the richest man in the world at the time, refused.

His reasoning was ice-cold game theory:

"I have 14 grandchildren. If I pay one penny, I'll have 14 kidnapped grandchildren."

John Paul Getty

Negotiations dragged on for months. The kidnappers escalated by mailing a severed piece of the boy's ear to an Italian newspaper. Only then did Getty agree to a reduced ransom of about $3 million. He even loaned part of it to his son at 4% interest.

I wonder if he used LIBOR to come up with that interest rate.

Two takeaways:

  1. John Paul Getty understood the difference between actuarial risk and behavioral risk

  2. John Paul Getty was a total dick (and a loan shark)

But he was right about the incentives. Pay one ransom, create a market for 13 more kidnappings.

Why Security Risk Quantification is Broken

This brings us to the fundamental problem with how most organizations approach cybersecurity risk. And a lot of CFOs get dragged into the math behind it, since the costs vary wildly depending on the org’s cyber posture.

In my experience, everyone spends money on cybersecurity, but no one seems to know how much is enough. So we create these elaborate mathematical equations to get comfortable with our levels of risk.

The problem is we’re applying actuarial math to behavioral problems.

When your CISO walks into your office with a spreadsheet showing "$40 million in security risk" that can be reduced with a "$5 million investment," they're making the same mistake as the cobra bounty program. They're treating intelligent human adversaries like random probability distributions.
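
If you’ve never seen the inside of one of those spreadsheets, it’s usually the textbook Annual Loss Expectancy math: assumed probability times estimated impact, summed across scenarios. A hypothetical sketch, with scenario names and numbers I made up:

```python
# The textbook ALE (Annual Loss Expectancy) math behind a "$40M of risk"
# slide. Scenario names and numbers here are hypothetical.

scenarios = {
    # name: (assumed annual probability, estimated loss if it happens)
    "ransomware outage":    (0.10, 100_000_000),
    "customer data breach": (0.05, 500_000_000),
    "insider fraud":        (0.20,  25_000_000),
}

ale = sum(p * loss for p, loss in scenarios.values())
print(f"Total 'quantified' security risk: ${ale:,.0f}")  # $40,000,000

# The catch: those probabilities assume a static world. A human adversary
# reads your defenses and adapts. Unstable inputs, unstable output.
```

The formula is fine for storms. It falls apart when the “storm” can study your defenses and change its behavior.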

As Andy puts it:

"Stop trying to quantify risk in dollars. The math isn't just wrong, it's fundamentally inappropriate for the type of risk you're dealing with."

Scenarios and Surprises

Instead of fake precision through dollar calculations, Andy suggests a different approach he calls the "Pyramid of Pain." Focus on two dimensions:

Impact Severity:

  • Disaster: Company will never be the same

  • Severity 1: All hands on deck, executives can do whatever it takes

  • Severity 2: Significant customer impact, marshal resources

  • Severity 3: Managers care, VPs might not notice

  • Severity 4: Only engineers care

Surprise Level:

  • Repeating: Already happening regularly

  • Plausible: Nobody in the company would be surprised

  • Mildly Surprising: Only the naive/optimistic people would be surprised

  • Surprising: Most of the company would be surprised

  • Shocking: Even paranoid security people would be surprised

"Don't worry about probability because probability numbers just aren't believable when you're dealing with human adversaries," Andy explains. "Just talk about who would be surprised if it happens."

This gives you what matters: directional accuracy for prioritization, without the false precision that inevitably destroys credibility.
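
You don’t need fancy tooling for this, either. Here’s a minimal sketch of ranking scenarios on those two axes. The axis orderings come straight from the lists above, but the scoring (and the example scenarios) are my own judgment call, not Andy’s:

```python
# A minimal sketch of the two-axis ranking. Axis orderings come from the
# lists above; how you trade impact against surprise is a judgment call,
# so treat this scoring as illustrative, not gospel.

IMPACT = ["Severity 4", "Severity 3", "Severity 2", "Severity 1", "Disaster"]
SURPRISE = ["Shocking", "Surprising", "Mildly Surprising", "Plausible", "Repeating"]

def priority(impact: str, surprise: str) -> int:
    # Higher score = discuss it sooner: big impact, low surprise.
    return IMPACT.index(impact) + SURPRISE.index(surprise)

# Hypothetical scenarios for illustration.
scenarios = [
    ("Laptop stolen out of a car",  "Severity 3", "Repeating"),
    ("Ransomware locks production", "Severity 1", "Plausible"),
    ("Build system backdoored",     "Disaster",   "Shocking"),
]

for name, impact, surprise in sorted(scenarios, key=lambda s: -priority(s[1], s[2])):
    print(f"{name}: {impact} / {surprise}")
```

Notice what falls out: the plausible, severe stuff ranks above the shocking, catastrophic stuff. That’s the conversation you want the board having.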

The Actuarial vs Behavioral Test

So how do you know which framework to use? Ask yourself: Are there intelligent human adversaries who benefit from exploiting this risk?

If no, use actuarial approaches. Get insurance. Calculate probabilities. Build statistical models.

If yes, use behavioral approaches. Focus on scenarios, incentives, and building robust response capabilities rather than calculating precise probabilities.

Most cybersecurity risks fall into the behavioral category. A hacker using AI to breach your systems isn't a random event. It's a calculated attack by someone who stands to gain from your loss.
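
In code, the whole test is a single branch (the wording is mine):

```python
# The one-question triage from above. Function name and phrasing are mine.

def risk_framework(adversary_profits_from_exploiting_it: bool) -> str:
    if adversary_profits_from_exploiting_it:
        return "behavioral: scenarios, incentives, response capability"
    return "actuarial: insurance, probabilities, statistical models"

print(risk_framework(False))  # a flood hitting the condo
print(risk_framework(True))   # a ransomware crew targeting your company
```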

Price of the Premium Going Up

The flood insurance actuaries can keep raising my premiums based on their models. That's actuarial risk. Storms don't care about my feelings or adapt to my defenses (and my fence ain’t that big).

But when it comes to cybersecurity, stop pretending you can mathematically model away behavioral risk. You can't put a reliable dollar figure on human creativity, adaptability, and malice.

Instead, focus on building systems that can handle surprises, respond quickly to incidents, and create better incentives for both your team and potential adversaries.

And not to oversimplify it, but as a CFO or C-suite exec, you’re often differentiating between system problems and people problems. While risk is largely unknown, it lives in the same two buckets.

Listen to the full episode with Andy: Apple | Spotify | YouTube

Run the Numbers Podcast

What’s the accounting treatment for private jets?

I sat down with Chris Brubaker, SVP of Finance at Postscript. As the company’s first finance hire, he’s helped build the finance function from the ground up and scale it through multiple funding rounds.

Chris shares how he partners with sales through deal desks, sets pricing guardrails, and makes sure finance helps close deals instead of slowing them down. We dig into his hands-on approach to automation using AI with limited engineering resources, how Postscript’s metrics evolved as the company scaled, when to trust internal data over benchmarks, and where teams get tripped up.

This guy’s got the goods!

Looking for Leverage Newsletter

How CFOs Think About Travel Spend

Today I want to talk about travel and entertainment spend (or as the QuickBooks boyz call it, “T&E”). But first, I have to get something off my chest: although you have the budget to do so, there’s no reason why one person should spend $32 at Chipotle.

Yes, I approved it, but I’m deeply worried about you.

Now that we’ve got that budgeting / medical disclaimer out of the way, here are the bus stops we’ll make on this T&E saga:

I. Internal vs External Travel

II. Flight Classes and Alcohol

III. Visibility and Approvals

IV. Budget Timing and Expense Submission

The worst approach to T&E is having no approach at all. This article will give you the building blocks for a policy of your own.

Wishing you rational guardrails around risk,

CJ
