How does AI actually make money?
Is AI accelerating a usage based pricing revolution? An interview with OV's Kyle Poyar
It’s official - my wife left me because I didn’t know “10 mind blowing prompts to maximize my work output using ChatGPT.”
It was a good run. But you know what they say - technology eventually disrupts everything.
Just kidding… But you’ve all seen the myriad of cringy threads and clickbait career advice pieces surrounding AI.
What I fear is getting lost in all of this is the monumental change to pricing and monetization models that’s occurring.
So I brought in an expert: Kyle Poyar, Operating Partner at OpenView, benchmarking aficionado, and writer behind the amazing newsletter. Similar to how Adam Neumann would brag to people that “Jamie Dimon was his personal banker”, I like to brag to people that “Kyle Poyar is my personal pricing guru”. His name carries a lot of cache in SaaS circles.
(Sidenote: did I do the cache thing right or is that like the browser type? Anyways…)
Here’s what we covered in our convo:
Part I: How does AI make money?
AI-as-a-service vs AI-enabled applications
Why we’re in the very early days of quantifying value
Part II: Free Trials and PLG
AI’s ability to deliver near instant time to value
Part III: Revenue Generation vs Cost Optimization
Right now the low-hanging fruit is around decreasing costs…
But the bigger opportunity (and where there will be the most pricing power) is around using AI to increase revenue.
Part IV: Lightning Round
Pricing has become a C-level topic
Nobody cares about your features.
They care about what your features help them accomplish.
Part I: How does AI make money?
To kick things off - at the simplest level, how does AI make money today?
At the risk of (way) over-simplifying, you can think of two buckets of AI solutions:
AI-as-a-service, where AI capabilities can be accessed via API and incorporated into other applications. OpenAI’s GPT-4 API gets the most attention, but there are a range of proprietary and open source alternatives (ex: Dolly from Databricks, AutoGPT). AI-as-a-service is typically adopted and bought by technical teams.
AI-enabled applications, which leverage AI – paired with custom language models or proprietary data – for specific use cases in the context of a software application (ex: GitHub Copilot, Notion AI, Canva Magic). AI-enabled applications make AI accessible to everyday business users without the need for custom programming.
These two buckets of AI solutions tend to monetize in different ways.
OpenAI charges on a usage-based model for GPT-4 (AI-as-a-service) even as they charge on a flat-rate subscription model for ChatGPT Plus (AI-enabled application).
The more a developer uses GPT-4, the higher the cost they’ll have to pay. GPT-4 specifically prices based on the number of tokens, where 1 token is roughly equal to three-quarters of a word (about four characters of English text). (You can enter some text here and see how it’s converted to tokens.) The cost is $0.03 per 1,000 prompt tokens and $0.06 per 1,000 completion tokens. Of course, completion maps more directly to a customer realizing value from AI, so it’s logical that these tokens are twice as expensive. This general pricing model applies across OpenAI’s other AI-as-a-service language models as well (Ada, Babbage, Curie, Davinci, GPT-3.5-Turbo, etc.), although each has its own pricing.
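To make the per-token math concrete, here’s a minimal sketch of how a developer might estimate the bill for a single API call under the rates quoted above. The rates come from the article; the token counts in the example are invented for illustration.

```python
# Per-token rates quoted above for GPT-4 (8K context, at the time of writing).
PROMPT_RATE = 0.03 / 1000      # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000  # dollars per completion token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated cost of one API call, in dollars."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# A hypothetical 500-token prompt that returns a 250-token completion
# costs the same as a 1,000-token prompt with no completion: $0.03.
print(f"${request_cost(500, 250):.3f}")
```

Notice how completion tokens dominate the bill at the same volume, which is exactly the “value-aligned” asymmetry described above.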
We’re still in the early days of monetizing AI-enabled applications, as folks are trying to figure out which AI-enabled workflows create tangible business value for customers. Regardless of how the AI-enabled apps monetize, they’re often licensing AI-as-a-service and paying real costs for it (along with paying real costs for the extra compute). With their costs scaling on the basis of usage, AI application vendors may wish to charge their own customers based on usage and thereby protect against their costs ballooning on account of a few heavy users.
That’s why I’m fascinated to see Intercom’s approach for their new Fin product, an AI bot for customer support. They’re charging $0.99 per successful AI resolution. It’s usage-based pricing that’s tied to real customer outcomes.
Why I’m digging it:
Price aligns with customer value. A human-assisted support interaction costs far more than $0.99 each. You’re able to “hire” AI to achieve the same result — only in less time and at a lower cost. And you’re not paying if the case isn’t resolved.
There’s built-in expansion. Intercom thinks they can resolve 50% of cases today. You have to imagine that figure will go up and up. As it does, Intercom makes more money instantly. There’s a powerful incentive for Intercom to keep investing to resolve more cases and to invest in customer success to drive more adoption.
There’s (almost) no barrier to trying it out. Customers don’t need to make a big upfront commitment before seeing if it works. They don’t need to waste cycles in procurement and contract negotiation. They just set it up, see value quickly, and pay as they’re successful.
Imagine if Intercom charged per-seat. They’d make less money as they resolved more interactions, as customers could make do with a smaller team.
The million dollar question is whether customers will accept it. Will they trust how Intercom classifies a “successful AI resolution”? Can they get comfortable without being able to predict their bill upfront? I can’t wait to see how this plays out.
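The incentive flip described above can be sketched in a toy model: as the AI resolves a larger share of cases, outcome-based revenue grows while seat-based revenue (tied to the remaining human team) shrinks. Every figure here (case volume, seat price, agent capacity) is invented purely for illustration; only the $0.99 per-resolution price comes from the article.

```python
CASES_PER_MONTH = 10_000        # hypothetical support volume
PRICE_PER_RESOLUTION = 0.99     # Intercom's quoted per-resolution price
SEAT_PRICE = 100.0              # hypothetical per-agent seat price
CASES_PER_AGENT = 500           # hypothetical cases one agent handles/month

def outcome_revenue(resolution_rate: float) -> float:
    """Vendor revenue when charging per successful AI resolution."""
    return CASES_PER_MONTH * resolution_rate * PRICE_PER_RESOLUTION

def seat_revenue(resolution_rate: float) -> float:
    """Vendor revenue when charging per seat for the remaining human team."""
    remaining = CASES_PER_MONTH * (1 - resolution_rate)
    seats = -(-remaining // CASES_PER_AGENT)  # ceiling division
    return seats * SEAT_PRICE

for rate in (0.5, 0.8):
    print(f"{rate:.0%} resolved: outcome=${outcome_revenue(rate):,.0f}, "
          f"per-seat=${seat_revenue(rate):,.0f}")
```

Under these made-up numbers, improving the resolution rate from 50% to 80% raises outcome-based revenue while cutting seat-based revenue, which is the misalignment the interview points at.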
So it sounds like AI costs depend heavily on usage, far more so than traditional SaaS.
Yet, consumers are used to subscription-based pricing.
Do you think usage-based pricing might pose a learning curve for consumers who expect a flat subscription, given that their costs depend heavily on their usage?
I look at this question from a different perspective. Consumers are accustomed to subscription-based pricing for many of their software products. But business users tend to see a much wider variety of pricing. Usage-based models are the norm for cloud infrastructure, developer tooling, data storage, payments/FinTech, and API products.
My perspective is that companies thinking about how to price their AI products should get back to basics and ask themselves:
Who values my product and for what use cases?
Why do they value my product? What ROI do they see?
What is their alternative?
How much do they value my product relative to the alternative?
The more you can quantify the value you deliver to customers, the better you’ll be able to price your AI product to capture that value.
My sense is that we’re in the very early days of quantifying value. Every SaaS company under the sun raced to embed AI into their products. Now they’re putting these AI capabilities into customers’ hands (usually for free) and hoping that customers will tell them what’s valuable about it. One example is ServiceNow, which essentially admitted as much in their recent earnings call.
"I’ll share more with you [about pricing] ... whether it’s an add-on, whether it’s a bundle. And we are working through the details, but we are only going to charge where we provide value for our customers, and that is the first principle we are looking at."
- ServiceNow earnings call
If your value is around individual productivity (think: AI embedded into Notion), then you may want to charge on a per-user, per-month basis. But if your AI application allows your customers to do more with a smaller team, charging on a per-user basis may backfire since your customers will spend less and less as your product delivers more and more value.
Should I think about AI similar to server usage with AWS or GCP? Or maybe Snowflake?
Among the AI-enabled application vendors who are monetizing, the general trend is toward “hybrid” pricing based on a combination of user seats and overall usage. My suspicion is that these hybrid models aligned best with folks' existing pricing, offered customers a sense of budget predictability, covered the costs of licensing ChatGPT, and were straightforward to launch quickly.
I expect this trend toward hybrid pricing will continue as it maps to a broader shift in SaaS pricing. It also protects vendors against an uncertain future as we continue to learn about how AI is valued and how much it’ll cost to embed inside applications.
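A hybrid model like the one described above typically combines a flat per-seat fee with a metered charge for usage beyond an included allowance. Here’s a minimal sketch of one such scheme; all of the prices and allowances are invented for illustration, not drawn from any real vendor.

```python
SEAT_PRICE = 20.0        # hypothetical dollars per seat per month
INCLUDED_QUERIES = 500   # hypothetical AI queries included per seat
OVERAGE_PRICE = 0.01     # hypothetical dollars per query beyond the allowance

def monthly_bill(seats: int, total_queries: int) -> float:
    """Seat fee plus metered overage: the 'hybrid' structure in a nutshell."""
    included = seats * INCLUDED_QUERIES
    overage = max(0, total_queries - included)
    return seats * SEAT_PRICE + overage * OVERAGE_PRICE

# 10 seats, light usage: the seat fee alone, fully predictable.
print(monthly_bill(10, 4_000))
# 10 seats, heavy usage: 7,000 overage queries add a metered component.
print(monthly_bill(10, 12_000))
```

The seat component gives customers the budget predictability mentioned above, while the overage component protects the vendor from a few heavy users ballooning its own AI-as-a-service costs.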
Part II: Free Trials and PLG
Will most AI tools let you try before you buy?
The trend is absolutely toward try-before-you-buy and product-led growth (PLG).
Part of the challenge for implementing PLG is being able to design your product for self-service adoption and fast time-to-value. If there’s too much friction before a user can experience their ‘aha moment’, a try-before-you-buy model could totally backfire. AI has the potential to deliver not only fast time-to-value, but near instant time to value. It can feel like magic for the end-user. Why not put that magic in the hands of more people?
I’m also finding that end-users still need to internalize how AI helps them get better at their job before they’ll be in a position to ask their boss for budget approval. Having a free offering allows you to learn from your users what are the highest value use cases, then double down on those use cases over time.
One final point on this topic. It’s becoming clear that AI is an extremely social feature. Yes, this has resulted in the cringe “10 best ChatGPT prompt” posts on LinkedIn. But the reason these posts are so ubiquitous is because there’s a real craving for human connection around AI. Users turn to the community for inspiration, education, and support. And when someone is successful, they want to show off what they’ve accomplished.
With AI products, community is becoming a potential moat and source of competitive advantage. More robust communities draw in new users, ensure those users are equipped for success, and thereby fuel even more product adoption. All the more reason to open up access for folks to try your AI product.
Where’s the most common place to put a paywall? What’s the trigger for users to go from free to paid?
The challenge for any PLG product is how to strike the right balance between offering real value for free while still being able to effectively monetize.
I’ve been a fan of usage paywalls where users get to experience a taste of premium features – which drives engagement, stickiness, and habit formation – yet still hit a compelling event that creates urgency to finally pull out the credit card. Well known examples of usage paywalls include Zoom’s 40 minute time limit on group meetings, Miro’s free limit of three editable boards, or even the New York Times’ classic article paywall.
For AI products, I’m noticing that the usage paywall tends to be tied to number of AI queries or number of AI responses (which should be roughly equal). This metric gives folks a chance to not only try out the magic of AI, but to start to develop a habit out of it. It also provides an insurance policy against AI delivering a bad or wrong output – users can get multiple bites at the apple before they need to start paying.
Part III: Revenue Generation vs Cost Optimization
Will most AI be aimed at increasing revenue or decreasing costs? Is that the best way to bucket AI’s benefits?
Right now the low-hanging fruit is around decreasing costs, i.e. being able to accomplish the same amount with a smaller team or take work that you’d normally send to a vendor and bring that in-house. Saving money is especially compelling from a customer perspective in the current environment.
In my view, the bigger opportunity (and where there will be the most pricing power) is around using AI to increase revenue. This value proposition would change the way people view AI, too, and mitigate some of the fears that AI will take folks’ jobs.
Regardless of where you land, I recommend that you avoid over-indexing on saving time as your value proposition. Sure, saving time helps get end users interested in learning more. Here’s the thing: saving time isn’t differentiated, doesn’t create urgency, and doesn’t capture real $$$. A business case built on saving time won’t stand a chance of getting past the CFO (as CJ can probably attest to!).
Author’s note: #facts
Part IV: Lightning Round
Looking back on your career, what’s changed the most in terms of what you think makes a good pricing strategy?
I’ve been in the pricing strategy world for more than 13 years now and, honestly, many of the fundamentals of good pricing strategy still apply. The biggest change I’ve observed is that pricing has become a C-level topic. That wasn’t the case five or ten years ago. There’s just greater recognition for both the business impact of pricing and for the complexity involved in getting pricing right.
What’s a financial or operating metric you think is overrated?
CAC payback period, especially when folks conceptualize customer acquisition costs as only sales and marketing expenses. In the age of product-led growth and usage-based pricing, products play a bigger and bigger role in how companies acquire, convert, and expand their customers. We need to embrace that trend. We also need to be monitoring the ROI of our product & engineering resources just as we monitor the ROI of our sales & marketing spend.
If you could put one message on a billboard for startup founders to drive by and read every day, what would it be?
Nobody cares about your features. They care about what your features help them accomplish.
A big thanks to Kyle for writing something that even the best AI software could not.
Please subscribe to his newsletter for more useful pricing and GTM strategy breakdowns.
CJ + Kyle, my two favs on SS! 💪
I’m glad to be the Jamie Dimon to your Adam Neumann, CJ! 😭