Mostly Metrics is proudly powered by Brex

Most CFOs I know didn't get into finance to chase receipts.

But that's where the time goes. Reviewing expenses that should have been blocked before they happened. Closing books that take longer to close than the month took to live. Approving spend that should have been approved automatically three days ago.

That's not a people problem. That's a systems problem.

That's why I use Brex. Agentic Finance that automates the receipts, catches out-of-policy spend before it hits the books, and closes your month in minutes instead of weeks. So your finance team stops spending on maintenance and starts spending on momentum.

35,000+ companies, including Mostly Metrics, Anthropic, DoorDash, and Coinbase, have already figured this out. See why it’s time to get Brex AF.

Cerebras IPO: S1 Breakdown

When you go in for heart surgery and get 44 Gigs of on-chip memory instead

Cerebras builds the largest computer chip ever commercialized. It is 58 times the size of NVIDIA's flagship and roughly the size of a dinner plate. Which is also like the most American thing to say: Bigger is BIGGER!

I like big chips and I cannot lie

Big as they are, the chips are the easy part of this tale. The financials are where it gets funky.

Cerebras did $510M in revenue in 2025, growing 76% y/y, and is sitting on a $24.6 billion backlog that is essentially one customer. That customer also loaned them a billion dollars and was issued 33 million shares for fractions of a penny. Who do you think it is?

They are filing to IPO this week at a range that implies a $33 billion market cap.

If you LOVED CoreWeave, you’re going to potentially maybe LIKE Cerebras.

Strap in. There’s a big chip sale going down. And I’m not talking Tostitos.

What Does Cerebras Do?

Every chip you've ever interacted with is really small for a reason. Chipmakers start with a silicon wafer (a round disc about the size of a vinyl record), stamp hundreds of identical chips onto it, and then slice the vinyl apart so each chunk can be packaged and sold individually. Your laptop and your phone each have one, and NVIDIA's flagship B200 GPU has two, mounted together.

The reason chips are small is because defects happen. Silicon is finicky AF; some part of every wafer has a flaw, and the bigger you make the chip, the higher the chance a flaw lands inside it and kills the whole thing. So the industry settled into making chips small and stamping a lot of them per wafer. For 75 years, the largest commercial chip was about the size of a postage stamp.

Cerebras kept the wafer intact. They invented a way for the chip to recognize defects and route the work around them, figured out how to cool a chip the size of a dinner plate without melting the motherboard, and worked with TSMC to develop a manufacturing process that had never existed before. Nobody else had done this commercially. They call it the Wafer-Scale Engine, or WSE, and it is 58 times the size of NVIDIA's B200.

Here's why size matters for AI:

"The enemy of speed is communication latency. And since communication is thousands of times faster on-chip than across chips, the best way to reduce latency is to keep communication on-chip."

Source: S1 Filing

AI workloads spend most of their time moving data between chips, not computing. Which is something I learned when I was today years old, writing this here S1 breakdown.

Picture a GPU cluster as a giant room of accountants who can each do math, but every time one finishes a calculation, they have to walk it down the hall to another accountant to continue the work. The math isn’t hard for them. But they walk really fucking slow, and the walking takes up a ton of their energy.

Another way to think about it: NVIDIA sells you the accountants and the cables that connect them. Cerebras puts 900,000 accountants in the same room on a single piece of silicon, so the data never has to walk anywhere (and gets no lunch breaks). They claim this makes their system up to 15 times faster than NVIDIA on inference at a fraction of the power.

That pitch did not sell in 2020. From their own S-1:

"AI was nascent. It was raw and unproven. Training was time-consuming, a black art, and the domain of a select few. GPUs were not yet the bottleneck. And our solutions struggled to find a home."

Source: S1 Filing

So they sold to a handful of life sciences customers… and did… $25M in revenue in 2022. Womp womp.

Then inference became the workload that mattered the most.

The way modern AI works has changed in a way that finance people should care about, even if they don't care about the technology.

  • Old-style AI (the stuff that ran before late 2024) did its thinking up front during training, and inference was just looking up the answer, which was both fast and cheap.

  • Modern reasoning models like Claude, GPT-5, and Gemini do most of their thinking during inference, planning steps and checking their own work and refining answers in real time before giving you a response.

    • It is the difference between a student who memorized the textbook and a student who actually works through the problem on the page, live.

    • The second student is smarter, but the second student also takes a lot longer and uses a lot more compute, on every single query, for every single user.

So inference is no longer a lookup. It is the bottleneck, and the bottleneck is exactly the thing Cerebras built their chip to solve, four years before the bottleneck existed.

That’s why this IPO is happening now. Cerebras spent five years explaining wafer-scale to a market that didn't need it, and then in early 2025 the market arrived at their doorstep. Revenue went from $78M in 2023 to $290M in 2024 to $510M in 2025. And then OpenAI signed a $20 billion deal in December and AWS signed a term sheet in March. Business is dancin like DJ Esco.

"We firmly believe that once you go fast, you can never go back."

OK I’ll bite. I'm pulling down 500 megabits a second via Xfinity in this bizatch. Let’s keep going.

Key Stats

A quick scorecard before we get into the weird stuff.

  • Revenue (FY2025): $510.0M, +76% Y/Y

    • Up from $290.3M in 2024. Also up from $78.7M in 2023 and $24.6M in 2022. So they went 20x in 3 years.

    • Quarterly trend is also accelerating: Q1 '25 was $99.5M, Q4 '25 was $171.4M.

      • Annualize Q4 and you're at a $686M run rate going into IPO.

  • Gross Margin (FY2025): 39%

    • Down from 42% the year before.

    • Hardware margin is 43%. Cloud and other services margin is 30%.

    • Cloud is the newer part they want to grow, and cloud is the part with the worse margin, which we’ll discuss later.

  • Net Income (FY2025): $237.8M

    • Don't get excited. The $238M includes a $363.3M one-time gain from extinguishing a forward contract liability, a non-cash accounting event tied to preferred stock arrangements.

    • Strip it out and they lost $75.7M on a non-GAAP basis.

    • If you dare, strip out stock-based comp too and you get a clearer view of operating performance: roughly breakeven on ops, while burning (lots of) cash on capex and working capital.

  • Operating Cash Flow (FY2025): ($10.0M)

    • For comparison, 2024 operating cash flow was positive $452.0M.

    • The reason for the swing is not that 2024 was a great year and 2025 was a bad one.

    • It's that 2024 included $640.3M in prepayments from customer (and investor) G42, which land in operating cash flow.

    • And 2025 saw those prepayments work down by $285.9M as Cerebras delivered against them.

      • Net net: Customer prepayments are funding the business, not profits.

  • Remaining Performance Obligations: $24.6B

    • This both justifies their valuation and also comes with a shit ton of asterisks.

    • For those keeping score at home, it’s 48x last year's revenue.

    • Most of it is OpenAI.

    • They expect to recognize 15% over the next 24 months (through end of 2027), 43% in the 24 months after that, and the remaining amount sometime after 2029.

      • Fun comparison: CoreWeave IPO'd in March 2025 with $1.92B in revenue and a $15.1B backlog.

      • Cerebras is going public with roughly a quarter of CoreWeave's revenue and a 63% bigger backlog!?!

  • Customer Count and Concentration

    • The S-1 doesn't give a precise number, only that the top 10 customers grew their spend with Cerebras by approximately 80% within 12 months of their initial purchase.

    • Two of the companies in the top 10 are MBZUAI and G42, both based in Abu Dhabi and related to each other.

    • In 2025, MBZUAI was 62% of revenue. In 2024, G42 was 85.0% of revenue.

    • The third major name is OpenAI, who signed in December 2025 and accounts for $0 of 2025 revenue but most of the $24.6B backlog.

  • Employees: 708

    • As of December 31, 2025. About half international, with offices in Canada, India, and the UAE.

    • Revenue per employee is $720K, which is healthy for a hardware company at this scale.

  • Target Valuation: ~$33B

    • At the $155 midpoint of the $150-$160 price range, with 215.2M shares post-offering (185.2M Class B + 30M Class A).

    • The fully diluted number is higher once you factor in the OpenAI warrant, AWS warrant, RSU pool, and options.

    • Series H in January 2026 priced at $89.02, which means new IPO buyers are paying a 74% step-up four months later.
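If you want to check the scorecard math yourself, here's my napkin in Python. All inputs are the S-1 figures quoted above; the function name and labels are mine, not anything from the filing.

```python
# Back-of-envelope checks on the Key Stats scorecard. Inputs are the S-1
# figures as quoted in this post; small rounding differences vs. the
# filing are expected.
def key_stats():
    rev_2025, rev_2022 = 510.0, 24.6       # revenue, $M
    q4_2025 = 171.4                        # Q4 '25 revenue, $M
    rpo = 24_600.0                         # remaining performance obligations, $M
    employees = 708
    series_h, ipo_mid = 89.02, 155.00      # price per share

    return {
        "run_rate_m": q4_2025 * 4,                            # ~$686M annualized
        "growth_3yr_x": rev_2025 / rev_2022,                  # ~20x since 2022
        "backlog_x_revenue": rpo / rev_2025,                  # ~48x 2025 revenue
        "rev_per_employee_k": rev_2025 * 1_000 / employees,   # ~$720K per head
        "h_to_ipo_step_up": ipo_mid / series_h - 1,           # ~74% step-up
    }
```

Run it and the run rate, backlog multiple, revenue per employee, and Series H step-up all tie out to the bullets above within rounding.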

To call out the CoreWeave in the room: the comparison is too obvious to ignore. It’s important to note that CoreWeave works WITH NVIDIA, souping up their chips. Cerebras has TSMC manufacture their chips and then soups them up itself. Same same, but different.

But the rest of the story is kinda eerie.

  • Both companies built infrastructure for AI training, then got supercharged when inference became the bottleneck.

  • Both have customer concentration concerns (Microsoft was 67% of CoreWeave’s revenue in 2025; MBZUAI is 62% of Cerebras’).

  • Both have a giant OpenAI backlog (~$22B for CoreWeave; most of Cerebras's $24.6B).

  • Both went public with no real cloud business yet.

CoreWeave priced at $40 in March 2025, down from an initial $55 target if you recall, which hung a dark cloud over the IPO. The stock opened flat, ran up 300% by June, then gave half of it back. It’s currently trading at roughly 19x current sales after $5.1B of 2025 revenue and bonkers growth. They are doing well. But the market still does not know what to do with this category. And Cerebras just made the category bigger.

How They Make Money

Cerebras has two revenue lines.

Hardware: $358M (70% of revenue)

They sell the Wafer-Scale Engine inside a fully integrated system called the CS-3, and they sell racks of those CS-3s wired together into what they call an AI supercomputer. Also, "CS-3" sounds like a mid-sized Audi with a twin turbo.

Customers buy these and put them in their own data centers. This is the original Cerebras business and it's where most of the revenue still comes from.

Is that masked lumberjack hipster fellow robbing our data center?

  • The good news: hardware grew 69% Y/Y, from $212M to $358M.

    • The S-1 attributes most of that growth to one customer (MBZUAI, worst name ever) buying a lot of on-premises systems.

  • The less good news: hardware revenue is lumpy. It depends on big customers cutting big POs and then waiting for delivery. The S-1 puts it bluntly:

"Each individual sale tends to be large as a proportion of our overall sales, which has impacted our ability to accurately forecast revenue and manage cash flows."

Source: S1 Filing

A miss on one customer is a miss on the quarter.

Cloud and other services: $152M (30% of revenue)

They launched the Cerebras inference cloud in August 2024. Customers can rent compute by the token, by the month, or for dedicated long-term capacity, and they can buy it directly from Cerebras or through partner marketplaces (AWS, Microsoft, IBM, Vercel, OpenRouter, Hugging Face).

  • The good news: cloud grew 94% Y/Y, from $78M to $152M.

    • Subscription revenue inside that line grew even faster.

  • The less good news: cloud gross margin is 30%. Hardware gross margin is 43%.

    • The business they want to grow has worse margins than the business they already have.

Sit with that for a sec, bc it's the inverse of what the IPO narrative craves.

The standard SaaS-pivot story is that you start as a lumpy hardware or services business and migrate to recurring software revenue with higher margins, sticky retention, and a clean multiple.

Cerebras is doing the opposite. The hardware sale is a one-time invoice with reasonable margin. The cloud product is the recurring revenue line, and it's costing them more to deliver than the hardware they're trying to migrate customers away from.
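A quick sanity check on the mix, using the segment figures quoted above: blending the 43% hardware margin on $358M with the 30% cloud margin on $152M lands right at the reported 39% company-wide gross margin.

```python
# Blended gross margin check. Segment revenue and margins are the S-1
# figures as quoted in this post.
def blended_gross_margin():
    hw_rev, hw_gm = 358.0, 0.43        # hardware: $358M at 43% GM
    cloud_rev, cloud_gm = 152.0, 0.30  # cloud: $152M at 30% GM
    gross_profit = hw_rev * hw_gm + cloud_rev * cloud_gm
    return gross_profit / (hw_rev + cloud_rev)   # ~0.39
```

Which also means every point of mix shift toward cloud drags the blended number down, not up.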

Quarterly cloud gross margin in 2025 went: 68% → 26% → 16% → 21%. The Q2 crater is what happens when you stand up a bunch of new cloud capacity faster than you can fill it. The S-1 attributes the gross margin decline to

"higher data center costs related to our cloud inference capacity services."

Source: S1 Filing

Two things to know about where this goes.

  1. Cerebras is locked into the cloud direction by their largest customer. The OpenAI deal is structured as cloud capacity, not hardware sales. Most of the $24.6B backlog converts to cloud revenue, which means the gross margin mix is going to get more cloud-heavy, not less, as the backlog burns down. The same pattern applies to the AWS term sheet.

  2. The cloud margin will probably improve once the capacity they're building gets filled. The 16% Q3 number reflects unfilled data center capacity sitting there earning nothing while still costing money in lease, power, and depreciation. As OpenAI ramps and AWS comes online, utilization goes up and the margin recovers. The bull case is that cloud GM eventually approaches hardware GM in the high 30s or low 40s. The bear case is that running a data center against hyperscalers who do it for a living means structurally lower margins forever.

So terrible now, but maybe not terrible forever.

Either way, the company that's IPOing is not a software company. It's a hardware company with a margin-dilutive cloud bolt-on.

Customer Concentration

Two customers account for 86% of Cerebras's 2025 revenue.

Those customers are MBZUAI and G42, both based in Abu Dhabi. They are also related parties to each other, as defined by Accounting Standards Codification 850. Meaning, there’s some stuff going on here.

G42 is an Abu Dhabi-based technology holding company with subsidiaries spanning energy, finance, cloud, security, and healthcare. MBZUAI is the Mohamed bin Zayed University of Artificial Intelligence, also in Abu Dhabi, also operating in the orbit of the same sovereign-backed AI strategy. The S-1's own description:

"We have established strategic relationships with Group 42 Holding Ltd... and the Mohamed bin Zayed University of Artificial Intelligence ('MBZUAI')... G42 and MBZUAI have acted as our customers, vendors, partners, and/or research collaborators on multiple initiatives in model training, inference, and AI compute infrastructure."

Source: S1 Filing

Customers, vendors, partners, and research collaborators. Roommates?

Here's how the concentration has moved:

  • 2024: G42 = 85% of revenue. MBZUAI = not yet material.

  • 2025: MBZUAI = 62% of revenue. G42 = 24%. Combined: 86%.

The accounts receivable is even more concentrated than the revenue.

  • At year-end 2025, MBZUAI was 77.9% of accounts receivable.

  • At year-end 2024, G42 was 91.0%.

Working capital concentration is a risk. So good thing they are investors!

Oh, then there's how the money moves between them.

G42 prepaid Cerebras $640.3M in 2024 to fund the build-out of the hardware they wanted Cerebras to deliver. That prepayment is what makes Cerebras's 2024 operating cash flow look healthy. Without it, OCF was deeply negative.

G42 also holds warrants. In December 2025, Cerebras issued G42 a warrant to buy 1,857,516 shares of Class N common stock at $0.01 per share. That warrant was fully vested and exercised in January 2026. In April 2026, Cerebras issued G42 another warrant, this time for 1,655,975 shares, also at $0.01 per share, also fully vested and exercised the same month.

So G42 is:

  • a customer that prepaid hundreds of millions of dollars for product,

  • was 85% of revenue one year, is 24% the next, and

  • was issued two warrants worth (at the $155 IPO midpoint) approximately $544 million in equity

  • for an aggregate exercise cost of about $35,000.

Doing the math… so like roughly half a billion in equity for the cost of a slightly used Nissan Altima.
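Here's that Nissan Altima math, spelled out. Share counts and the $0.01 strike are from the S-1; $155 is the same IPO-midpoint assumption used throughout this post.

```python
# G42 warrant value vs. exercise cost. Share counts and strike per the
# S-1; price defaults to the $155 IPO midpoint assumption.
def g42_warrants(price=155.00):
    shares = 1_857_516 + 1_655_975   # Dec 2025 warrant + Apr 2026 warrant
    equity_value = shares * price    # ~$544M at the midpoint
    exercise_cost = shares * 0.01    # ~$35K all-in
    return equity_value, exercise_cost
```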

There's also AWS.

In March 2026, Cerebras signed a binding term sheet with AWS to be the first hyperscaler to deploy Cerebras systems in its own data centers. The term sheet is binding on pricing, exclusivity, minimum capacity, and protections in favor of AWS, but the definitive agreement hasn't been negotiated yet.

AWS also got a warrant: up to 2,696,678 shares of Class N common stock at $100 per share, vesting tied to product purchases beyond the initial lease.

The AWS warrant is structurally different from G42's. The $100 exercise price means AWS pays real money (~$270M if fully exercised) and only vests if they actually buy a lot of compute. That's a more normal commercial arrangement.

The S-1 names the dependency layer cake in its risk factors. They name the four customers who are essentially the whole company:

"A reduction in demand from, or a material adverse development in our relationship with any of our significant customers, including OpenAI, G42, MBZUAI, and AWS, or our failure to meet our obligations under the MRA with OpenAI, would harm our business, financial condition, results of operations, and prospects."

Cerebras also has supplier concentration on the back end. They use one foundry, TSMC, to manufacture every wafer, and the S-1 acknowledges they have no formalized long-term supply commitment with TSMC, who also fabricates wafers for Cerebras's competitors (including NVIDIA, who is many times larger and buys many more wafers). On top of that, the data centers hosting the cloud business are leased, not owned.

So the picture is:

  • TSMC manufactures the chips.

  • OpenAI funds the working capital and accounts for most of the backlog.

  • G42 and MBZUAI account for most of the current revenue and have prepaid hundreds of millions of dollars in advance of future deliveries.

  • AWS will host the cloud business at scale, in their own data centers, with exclusivity provisions.

  • Leased data centers host the rest.

Reading the cap table is like attending a Targaryen wedding. The customers are also the owners. The owners are also the lenders. The lenders are also your sister.

The OpenAI Warrant Deserves Its Own Section

In December 2025, Cerebras signed a Master Relationship Agreement (sounds like a Uranium production deal) with OpenAI. OpenAI committed to purchase 750 megawatts of cloud compute capacity over multiple years, with options to expand to 2 gigawatts. The deal is valued at more than $20 billion and represents most of the $24.6B backlog the IPO pricing is built on.

Alongside the MRA, Cerebras issued OpenAI a warrant to purchase 33,445,026 shares of Class N common stock at an exercise price of $0.00001 per share. At the $155 IPO midpoint, that warrant is worth approximately $5.18 billion in equity. The exercise cost to OpenAI to take possession of all 33 million shares is $334.45. So, like, about what it costs to take my three kids to a character breakfast at the Contemporary Hotel in Disney World.
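Same exercise for the OpenAI warrant, if you want to see the character-breakfast math for yourself (share count and the $0.00001 strike per the S-1; $155 midpoint assumed).

```python
# OpenAI warrant value vs. exercise cost. Share count and strike per the
# S-1; price defaults to the $155 IPO midpoint assumption.
def openai_warrant(price=155.00):
    shares = 33_445_026
    equity_value = shares * price       # ~$5.18B at the midpoint
    exercise_cost = shares * 0.00001    # ~$334
    return equity_value, exercise_cost
```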

OpenAI also advanced Cerebras a $1.0 billion working capital loan. It accrues 6% interest, matures in December 2032, and is repayable in cash, compute capacity, or hardware. Dealer’s choice!

It is secured, and it comes with strings: if the MRA is terminated for any reason other than OpenAI's material uncured breach, OpenAI can direct the bank to freeze the cash and demand immediate repayment of principal plus interest.

The warrant vests in three tranches.

  • A first slug of ~$691M in equity already vested in January 2026 when Cerebras accepted the $1B loan.

  • A second slug of ~$864M vests when Cerebras's market cap clears $40 billion on a 30-day rolling average, meaning the CEO is incentivized to hit a valuation milestone that automatically transfers almost a billion dollars of equity to his largest customer.

  • The remaining ~$3.6B vests as Cerebras delivers compute capacity, with the full amount only vesting if OpenAI exercises every option and scales the deal to 2 gigawatts.

So Cerebras's largest customer is also the customer most economically incentivized to keep buying more.

Success be expensive (and dilutive) AF.

Financials

Source: S1 Filing

  • R&D: $243.3M, +54% Y/Y, 48% of revenue.

    • Almost half of revenue going to engineers.

    • For comparison, NVIDIA spent 12% of revenue on R&D in 2025. They are also very big. So not a great comp (yet).

      • Reminder: Cerebras is a process-node company designing 5nm wafer-scale chips against a competitor with 50x their cash flow.

      • So they have to spend at this level to have a shot.

  • S&M: $70.6M, +237% Y/Y, 14% of revenue.

    • The build-out is for cloud, not hardware.

    • Hardware sells through a few reps closing big deals.

    • They're staffing a SaaS go-to-market on top of a hardware go-to-market.

  • G&A: $31.0M, down 31% Y/Y.

    • G&A went down in a year where revenue was up 76% and headcount was growing.

    • The S-1 attributes it to "lower legal expenses and litigation settlement costs." Hmmmmm

  • Operating loss: ($145.9M), vs. ($101.4M) in 2024.

    • Opex outran gross profit.

    • This is the part of the financial story that gets buried under the GAAP net income headline (which is technically positive, and misleading).

  • Other income/expense: +$390.7M.

    • Includes the $363.3M forward contract gain.

    • This line is what makes 2025 GAAP profitable and has almost nothing to do with the underlying business.

  • Balance sheet: $701.7M of cash on hand at year-end 2025.

    • The S-1 also discloses that OpenAI advanced a $1B working capital loan in January 2026, which sits on top of that cash but can be clawed back if the agreement terminates the wrong way.

      • So, much of the cash they have access to today is tied to the OpenAI relationship.

  • Cash burn:

    • Operating cash flow was -$10M.

    • Investing was -$668M, including $383M of CapEx for cloud capacity.

    • Financing was +$1.0B, almost all preferred stock sales (Series G and H).

      • Cerebras funded 2025 by raising another billion of equity and taking on a billion of OpenAI loan, while burning around $400M on the cloud build.

  • The revolver:

    • $250M facility from Morgan Stanley signed in April 2026, upsizing to $850M post-IPO.

    • Morgan Stanley is also the lead-left underwriter, so your bookrunner is also your revolver lender. Normal in IPO season but worth knowing.

RPO recognition schedule:

  • Next 24 months (through 2027): 15% = ~$3.7B

  • Months 25-48: 43% = ~$10.6B

  • After 2029: 42% = ~$10.3B

That's ~$1.85B/year of recognized revenue over the next 24 months from backlog alone, growing to ~$5.3B/year in years 3-4. If it plays out, 2026-2027 combined revenue from the backlog is more than 3x what they did in 2025. That is the bull case.
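The burn-down math above, spelled out. Percent splits and the $24.6B total are from the S-1; spreading each bucket evenly across its window is my simplification, not theirs.

```python
# RPO recognition schedule, per the S-1 percentage splits. Even spreading
# within each window is an illustrative assumption.
def rpo_schedule(rpo_bn=24.6):
    next_24mo = rpo_bn * 0.15    # 15% through end of 2027
    mid_24mo = rpo_bn * 0.43     # 43% in months 25-48
    after_2029 = rpo_bn * 0.42   # remainder
    return {
        "next_24mo_bn": next_24mo,           # ~$3.7B
        "per_year_near_bn": next_24mo / 2,   # ~$1.85B/yr
        "per_year_mid_bn": mid_24mo / 2,     # ~$5.3B/yr in years 3-4
        "after_2029_bn": after_2029,         # ~$10.3B
    }
```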

The asterisks: it assumes

  • OpenAI takes delivery on schedule,

  • MBZUAI keeps buying,

  • no MRA disputes, and

  • the cloud capacity gets built and powered on time.

Any one of those slips and recognition shifts out out out.

Source: S1 Filing

Potential Red Flags

1. They told the SEC they don't have a great accounting function.

In their own S-1:

"The material weaknesses that we identified relate to (i) inadequate or missing resources who possess an appropriate level of expertise to timely review account reconciliations and identify, select, and apply U.S. generally accepted accounting principles ('GAAP') pertaining to several financial statement areas, including revenue recognition, inventory management and costing, data center assets accounting, and equity administration and (ii) the failure to maintain adequate IT general controls, including ineffective segregation of duties."

Source: S1 Filing

I've spent enough time in audit meetings to know what "ineffective segregation of duties" means in practice. Basically, it means somebody is writing the journal entries and reviewing the journal entries, which is the kind of thing that gets you on the front page of the Wall Street Journal in a bad way. Remember: this is the company about to go public at a $33B implied market cap, with $24.6B in remaining performance obligations under a complex revenue recognition agreement (the OpenAI MRA), telling you in writing that their controls over revenue recognition suck.

2. One foundry to make chips. No long-term supply commitment.

Every Cerebras wafer is manufactured by TSMC, with no formal long-term supply agreement. TSMC also manufactures wafers for NVIDIA, AMD, and most of Cerebras's competitors, all of whom buy a lot more wafers than Cerebras does. If TSMC reduces allocation, raises prices, or prioritizes one of those competitors, Cerebras has no second source. It's not like you can call up Samsung Foundry on Monday and have wafers shipping by Friday.

Wafer-scale adds a wrinkle: Cerebras buys an entire wafer per unit on a specialized manufacturing process built in partnership with TSMC, which is not easily portable to another foundry. Bigger is bigger risk?

3. The UAE concentration is a geopolitical bet, as well as a customer bet.

86% of 2025 revenue came from two related Abu Dhabi entities. That's a customer concentration problem and a political concentration problem layered on top of each other. G42 has drawn scrutiny from the U.S. government over historical ties to China, which is why Microsoft put $1.5B into them in 2024 (basically buying G42 a U.S. passport). The Commerce Department continues to update its export framework for advanced semiconductors in jurisdictions where G42 and MBZUAI operate, and the framework has gotten tighter, not looser, over the last 18 months.

If the U.S.-UAE technology relationship shifts the wrong way, a meaningful portion of Cerebras's revenue base disappears with limited notice. That's not a hypothetical, by the way. They call it out as an explicit risk factor in the S-1.

4. The cloud business is unproven and they're betting the company on it.

Cerebras launched the inference cloud in August 2024. As of the S-1, it is fewer than 18 months old. The OpenAI MRA, the AWS term sheet, and most of the $24.6B backlog are all structured as cloud capacity, not hardware.

Cap Table and the Series H Step-Up

The founders and the board still hold meaningful equity, but Cerebras has been a heavily funded private company for almost a decade and institutional names dominate the cap table.

Top holders (post-IPO, before any over-allotment):

  • Fidelity: 11.0%

  • Benchmark: 9.5% (Eric Vishria on the board)

  • Foundation Capital: 8.3% (Steve Vassallo on the board)

  • Eclipse: 7.3% (Lior Susan on the board)

  • Alpha Wave: 6.5%

  • Andrew Feldman (co-founder, CEO): 5.4%

  • Sean Lie (co-founder, CTO): 2.9%

Feldman and Lie have been a duo since SeaMicro in 2007, which AMD acquired in 2012. They both did time at AMD post-acquisition, then co-founded Cerebras in 2016. So the operating partnership has 18 years of continuity. Feldman is a founder-CEO running his second company, not a hired gun.

Combined founder ownership is 8.3%, which is on the lower end for an IPO-stage company. Cerebras founders got diluted harder than most, because the business needed a lot of capital before the inference moment showed up.

Series H

In January 2026, Cerebras closed a $1.0 billion Series H at $89.02 per share, with Alpha Wave, Benchmark, and Fidelity all participating. Four months later, they're pricing the IPO at a midpoint of $155, which is a 74% step-up from Series H to IPO.

For context, Series G priced at $36.23 per share in September 2025. The company was reportedly thinking of going public at that point but did the round instead. So inside 12 months: $36 → $89 → $155. 4.3x in a year, almost all of it riding on the OpenAI deal signing in December 2025.
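The ladder, in code (round prices per the S-1):

```python
# Step-ups across the last three prints: Series G -> Series H -> IPO midpoint.
def round_step_ups():
    series_g, series_h, ipo_mid = 36.23, 89.02, 155.00   # $/share
    return (
        series_h / series_g,   # G -> H: ~2.5x
        ipo_mid / series_h,    # H -> IPO: ~1.74x (+74%)
        ipo_mid / series_g,    # G -> IPO: ~4.3x inside 12 months
    )
```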

The other thing to flag is that Fidelity put in $700 million at the Series G round and another $100M at Series H. That’s a fat IPO position.

Founder voting control survives the IPO

Cerebras has three share classes. Class A (one vote per share, the public IPO shares), Class B (20 votes per share, held by insiders/founders), and Class N (non-voting, held by OpenAI and G42 via their warrants). Class B will represent 99.2% of voting power immediately after the offering. The new public shareholders own 14% of the economic upside and 0.8% of the voting power. LOL.

Valuation

At the $155 midpoint, Cerebras prices at roughly $33B on $510M of 2025 revenue and ~$2.5-3B of expected 2026 revenue (using the RPO recognition schedule plus the base business). That's an EV/forward revenue multiple of ~10-13x.

Where the public peers trade right now

Cerebras is pricing at a premium to every peer except NVIDIA. The semiconductor peers (AMD) and the neocloud peers (CoreWeave, Nebius) both trade at half the multiple Cerebras is asking for. Only NVIDIA gets a higher multiple, and NVIDIA prints free cash flow at industrial scale, which Cerebras does not.

What's actually defensible:

  • At 6-7x (AMD): ~$16-21B. The fabless semiconductor comp. The chip is good but customer concentration is real and cloud is unproven.

  • At 7-8x (CoreWeave): ~$22-25B. CoreWeave has more revenue, more customers, and more operating history, so if Cerebras reads as a CoreWeave with worse concentration, the multiple compresses below it.

  • At 10x: ~$25-30B. Credit for owning the silicon. No benefit of the doubt on cloud margin.

  • At 13x (midpoint): ~$33B. The market believes wafer-scale creates a durable margin advantage the comps don't have.

  • At 20x (NVIDIA): ~$50B. OpenAI deal narrative wins. Possible in week one. Hard to sustain without NVIDIA-style cash flow.

The midpoint is the high end of what's defensible with the current information. The NVIDIA-style multiple is what the OpenAI deal narrative will be selling. Whether it sticks depends on whether the market reads the OpenAI backlog as "huge customer win" or as "huge customer dependency."
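And the multiple math the whole section rests on. The ~$2.5-3B forward revenue figure is the assumption from the valuation paragraph above, not a company guide.

```python
# Forward revenue multiple implied by the ~$33B midpoint cap against the
# ~$2.5-3B 2026 revenue assumption sketched from the RPO schedule.
def forward_multiple(cap_bn=33.0, fwd_rev_lo_bn=2.5, fwd_rev_hi_bn=3.0):
    # Higher revenue assumption -> lower multiple, and vice versa.
    return cap_bn / fwd_rev_hi_bn, cap_bn / fwd_rev_lo_bn   # ~11x to ~13x
```

Swap in your own forward revenue number and you can re-derive any of the peer-multiple scenarios in the bullets above.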

Misc. Stuff of Note

The CEO has a 2007 guilty plea for circumventing accounting controls.

Andrew Feldman was a defendant in SEC v. Pereira in 2008 related to his time as VP of Corporate Marketing and Corporate Development at Riverstone Networks in 2001-2002, where the SEC alleged he was aware of and aided in sales transactions that were improperly accounted for. He settled with the SEC without admitting wrongdoing, agreed to a permanent injunction against future securities violations, and paid $289,507 plus interest. In a parallel DOJ action, he pled guilty to one count of circumventing accounting controls of an issuer. He was sentenced to three years probation and a $5,000 fine.

These are 18-year-old events from a different company, and Feldman wasn't barred from serving as an officer or director. He's gone on to run SeaMicro (sold to AMD) and now Cerebras. The S-1 discloses all of it in his bio. Standing next to the present-day material weakness disclosure on revenue recognition controls, it's the kind of thing the audit committee is presumably already on top of.

Three classes of stock, the same, except for the parts that matter.

The S-1 says the three share classes "are identical, except with respect to voting and conversion rights."

  • Class A gets 1 vote per share.

  • Class B gets 20 votes per share.

  • Class N gets zero votes per share.

    • Class N is what OpenAI and G42 hold via their warrants, which means OpenAI gets a 15% economic stake with no governance influence.

    • Founders keep control.

    • Public Class A holders get diluted by Class B's voting power and Class N's economic share, while paying full price for the privilege.

$40B = Tendies

The Tranche 2 OpenAI warrant (~$864M in equity to OpenAI) vests when Cerebras's market cap clears $40 billion on a 30-day rolling average. At the $155 midpoint, Cerebras opens at $33B. So the company is one good month of trading away from automatically transferring almost a billion dollars of equity to its largest customer.

Some guy named Craig Hallum is on the cover.

The lead-left underwriter is Morgan Stanley. The bracket is Citi, Barclays, UBS, Mizuho, and TD. Then further down the cover: Needham, Wedbush, Rosenblatt, Academy, Credit Agricole, MUFG, First Citizens, and Craig-Hallum Capital Group LLC.

I smoked pot with Craig Hallum. It was Craig Hallum, and Sloan Kettering, and they were blazin' that shit up everyday.

Disclosures: None of this is investment advice. Do your own homework.

Wishing you an IPO with ample float and no material accounting weaknesses,

CJ
