Epoch AI
@epochai.bsky.social
800 followers 20 following 810 posts
We are a research institute investigating the trajectory of AI for the benefit of society. epoch.ai
epochai.bsky.social
Did we miss a company that should be on one of these charts? Let us know! We do mention a couple more in the full post (ByteDance, Gilead Sciences).
epochai.bsky.social
In any case, we’ll be watching OpenAI’s revenue closely.

This Gradient Update was written by Greg Burnham. You can read the full post here:

epoch.ai/gradient-up...
OpenAI is projecting unprecedented revenue growth
No company has gone from $10B to $100B as fast as OpenAI projects to do.
epoch.ai
epochai.bsky.social
Where would the revenue come from? Probably a mix of ChatGPT subscriptions, plus ads and shopping market share taken from Google, Meta, and Amazon. The longer-term goal of productivity uplift and automation offers a bigger pie, but may be harder to unlock in just three years.
epochai.bsky.social
Can OpenAI do it? One reason for optimism is that their revenue is growing very quickly right now: 3×/year, with no signs of slowing. That leaves some breathing room.
epochai.bsky.social
To hit this target, OpenAI needs to 2× their revenue three years in a row. The most promising comparison is Nvidia, which, starting from a base of $28B, saw revenue growth just over 2× in both 2023 and 2024. No other company-year in the chart above had growth ≥ 2×.
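For reference, a quick back-of-envelope check of that arithmetic, using the approximate figures quoted in this thread (roughly $10B now, $100B projected for 2028); the exact annual multiple required works out a bit above 2×.

```python
# Back-of-envelope check of the growth arithmetic in this thread.
# Revenue figures (~$10B now, ~$100B projected for 2028) are the
# approximate numbers quoted above, not exact company financials.
start_revenue = 10e9      # ~$10B
target_revenue = 100e9    # ~$100B projected
years = 3

required_multiple = (target_revenue / start_revenue) ** (1 / years)
print(f"Implied annual growth multiple: {required_multiple:.2f}x")   # ~2.15x

# Doubling (2x) each year for three years falls slightly short of the target:
print(f"$10B doubled three times: ${start_revenue * 2**years / 1e9:.0f}B")  # $80B
```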
epochai.bsky.social
But OpenAI has also projected revenue of $100B in 2028. We found seven companies that achieved revenue growth from $10B to $100B in under a decade.

None of them did it in six years, let alone three.
epochai.bsky.social
OpenAI’s revenue growth has been extremely impressive: from <$1B to >$10B in only three years. Still, a few other companies have pulled off similar growth.

We found four such US companies in the past fifty years. Of these, only Google went on to top $100B in revenue.
epochai.bsky.social
One way bubbles pop: a technology doesn’t deliver value as quickly as investors bet it will.

In light of that, it’s notable that OpenAI is projecting historically unprecedented revenue growth — from $10B to $100B — over the next three years. 🧵
epochai.bsky.social
Recorded in May 2025.
epochai.bsky.social
A proof only 15 experts understand is less valuable than one any undergraduate can verify using a computer.

Mathematician Jesús De Loera on AI’s potential to democratize mathematical proof and the risks when systems hallucinate with perfect confidence.

Link to video in comments!
epochai.bsky.social
The overall compute spend figure comes from media reports of OpenAI's investor reporting.

We estimated training costs for released models using evidence on GPT-4.5’s training cluster and compute estimates for other models. These likely add up to <$1 billion, compared to a ~$5B R&D total.
epochai.bsky.social
New data insight: How does OpenAI allocate its compute?

OpenAI spent ~$7 billion on compute last year. Most of this went to R&D, meaning all research, experiments, and training.

Only a minority of this R&D compute went to the final training runs of released models.
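As a rough sketch of what those figures imply, the approximate numbers from these posts (~$7B total compute spend, ~$5B on R&D, <$1B on final training runs) give the shares below; all inputs are approximations, so treat the output as indicative only.

```python
# Approximate shares implied by the figures quoted in these posts.
total_compute_spend = 7e9   # ~$7B total compute spend last year
rnd_compute = 5e9           # ~$5B of that on R&D (research, experiments, training)
final_training_runs = 1e9   # <$1B on final training runs of released models (upper bound)

print(f"R&D share of total compute spend: {rnd_compute / total_compute_spend:.0%}")              # ~71%
print(f"Final-run share of R&D compute (upper bound): {final_training_runs / rnd_compute:.0%}")  # <20%
```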
epochai.bsky.social
Previously, eight Tier 4 problems had been solved at least once. All eight were solved again in these high-compute runs. Adding the one new problem solved by GPT-5 Pro brings the total ever solved to nine, or 19% of the benchmark.
epochai.bsky.social
OpenAI, which funded FrontierMath, has access to 28/48 problems and solutions. Epoch holds out the remaining 20 problems and solutions. Of the eight problems solved at least once by GPT-5 Pro, five are in the held-out set.
epochai.bsky.social
One problem solved in both of the GPT-5 Pro runs has not been solved by any other model. The problem author had this to say about it.
epochai.bsky.social
GPT-5 Pro now has an API, so we also evaluated it on our usual scaffold and put the result on our benchmarking hub. Here it solved six problems (13%): the same score as the web app run, but not all the same problems. Combined, GPT-5 Pro’s pass@2 is eight problems (17%).
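For anyone unfamiliar with the tally, here is a minimal sketch of how the pass@2 number combines the two runs: take the union of problems solved in either one. The problem IDs below are placeholders for illustration; only the counts (six per run, eight combined out of 48) come from the post.

```python
# Minimal illustration of the pass@2 tally: union of problems solved
# across two runs. Problem IDs are placeholders, not real FrontierMath items.
web_app_run = {"P01", "P07", "P12", "P23", "P31", "P40"}   # 6 solved via the web app
api_run     = {"P01", "P07", "P12", "P23", "P35", "P44"}   # 6 solved via the API scaffold

combined = web_app_run | api_run   # a problem solved in at least one run counts
total_problems = 48
print(f"pass@2: {len(combined)}/{total_problems} = {len(combined) / total_problems:.0%}")
# -> pass@2: 8/48 = 17%
```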
epochai.bsky.social
As of last week none of these models had an API. Instead, we used a simple prompt in the web apps and graded results manually. The models had web search and code execution tools. Web search is valid for FrontierMath: problems are not public and looking up math papers is allowed.
epochai.bsky.social
FrontierMath Tier 4 consists of 50 research-level math problems developed by professional mathematicians. These problems can take experts weeks to solve. Below is one of the two public samples. We evaluate on the other 48.
epochai.bsky.social
We manually evaluated three compute-intensive model settings on our extremely hard math benchmark. FrontierMath Tier 4: Battle Royale!

GPT-5 Pro set a new record (13%), edging out Gemini 2.5 Deep Think by a single problem (not statistically significant). Grok 4 Heavy lags. 🧵
epochai.bsky.social
Insights and analysis provided by our outstanding data and benchmarking teams, including @robi_rahman, @justjoshinyou13, @luke__emberson, @benmcottier, @james_s48, @venkat_somaaala, @YafahEdelman, @everysum, @js_denain, and @tmkadamcz
epochai.bsky.social
Everything is available under the CC-BY license, so feel free to reuse our data, replicate our analysis, or conduct your own. Just cite us. And if we're missing a model, chip, or cluster, tell us and we'll add it.

Dive in here: epoch.ai/data
Data on the Trajectory of AI
Our public datasets catalog over 3000 machine learning models. Explore data and graphs showing the growth and trajectory of AI from 1950 to today.
epoch.ai