ttunguz.bsky.social
@ttunguz.bsky.social
The winners will be those who use the sizzling phase to build fat worth congealing around.

This fun analogy came up during my conversation with Harry, Jason, & Rory.

www.youtube.com/watch?v=sWkp...

tomtunguz.com/the-bacon-an...
Peter Thiel and Softbank Sell NVIDIA - Why? & Why VC Will Hit $1TRN and The Opening of Retail
YouTube video by 20VC with Harry Stebbings
November 21, 2025 at 9:48 PM
Switching costs will start to matter more than marginal performance gains. The custom tools I’ve built, the muscle memory I’ve developed, the integrations my company has deployed, the enterprise contracts signed: all of it is inertia.

At that point, the fat begins to congeal.
November 21, 2025 at 9:48 PM
This is the Great Game of Risk in Category Creation, & aggression wins.

But this era of fluidity won’t last forever. The rate of improvement in AI models will eventually attenuate. When the performance gap between the best model & the second-best model shrinks, the incentive to switch evaporates.
November 21, 2025 at 9:48 PM
Will the foundational models play at the application layer? Or will the applications differentiate themselves enough to overcome model differences?

Who can take advantage of the next big leap in model performance fastest? Which sales team can reach the target customers first & write the RFP?
November 21, 2025 at 9:48 PM
Today, startups, incumbent software companies, cloud providers & AI labs are all competing across the stack: first the model, then infrastructure (memory & retrieval), then tools, then applications.
November 21, 2025 at 9:48 PM
As long as the underlying models hurtle towards PhD-level performance, people will continue to test. How much better is Gemini 3 at coding? At tool calling? At writing?

If the progress is material, then the benefit of switching is worth the activation energy.
November 21, 2025 at 9:48 PM
Market share is fluid because no one yet knows what AI can do, & the second we think we have grasped it, the models improve. Nvidia’s chip performance & the launch of Gemini 3, the biggest gain in Google model performance ever, suggest no simmering ahead.
November 21, 2025 at 9:48 PM
Gemini 3 proves the scaling laws are intact, so Blackwell’s extra power will translate directly into better model capabilities, not just cost efficiency.

Together, these two data points dismantle the scaling wall thesis.

tomtunguz.com/gemini-3-pro...
The Scaling Wall Was A Mirage
Gemini 3's release and Nvidia's earnings confirm that AI scaling laws are accelerating. Pre-training gains combined with massive infrastructure buildouts signal a new era of model performance.
tomtunguz.com
November 20, 2025 at 8:17 PM
The infrastructure buildout is accelerating headlong into hundreds of billions next year, & Nvidia predicts it will reach the trillions, citing “$3 trillion to $4 trillion” in data center spend by 2030.

As Gavin Baker points out, Nvidia confirmed Blackwell Ultra delivers 5x faster training times than Hopper.
November 20, 2025 at 8:17 PM
"The clouds are sold out and our GPU installed base, both new and previous generations, including Blackwell, Hopper and Ampere is fully utilized. Record Q3 data center revenue of $51 billion increased 66% year-over-year, a significant feat at our scale."
November 20, 2025 at 8:17 PM
"By executing our annual product cadence and extending our performance leadership through full stack design, we believe NVIDIA will be the superior choice for the $3 trillion to $4 trillion in annual AI infrastructure build we estimate by the end of the decade."
November 20, 2025 at 8:17 PM
"We currently have visibility to $0.5 trillion in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026...
November 20, 2025 at 8:17 PM
This is the strongest evidence since o1 that pre-training scaling still works when algorithmic improvements meet better compute.

Second, Nvidia’s earnings call reinforced the demand.
November 20, 2025 at 8:17 PM
Oriol Vinyals, VP of Research at Google DeepMind, credited improvements in both pre-training & post-training for the gains. He continued that the delta between 2.5 & 3.0 is as big as Google has ever seen, with no walls in sight.
November 20, 2025 at 8:17 PM
Then Gemini 3 launched. The model has the same parameter count as Gemini 2.5, one trillion parameters, yet achieved massive performance improvements. It’s the first model to break 1500 Elo on LMArena & beat GPT-5.1 on 19 of 20 benchmarks.
November 20, 2025 at 8:17 PM
Context remains the true challenge & the biggest opportunity for the next generation of AI infrastructure.

Explore the full interactive dataset here: survey.theoryvc.com or read Lauren’s complete analysis: theoryvc.com/blog-posts/a....

tomtunguz.com/ai-builders-...
November 17, 2025 at 11:30 PM
The tools exist. The problem is harder than better retrieval or smarter chunking can solve.

Teams need systems that verify correctness before they can scale to production.
November 17, 2025 at 11:30 PM
88% use automated methods for improving context. Yet it remains the #1 pain point in deploying AI products. This gap between tooling adoption & problem resolution points to a fundamental challenge.
November 17, 2025 at 11:30 PM
The timing reveals where the stack is heading. Teams need to verify correctness before they can scale to production.
November 17, 2025 at 11:30 PM
Synthetic data powers evaluation more than training. 65% use synthetic data for eval generation versus 24% for fine-tuning. This points to a near-term surge in eval-data marketplaces, scenario libraries, & failure-mode corpora before synthetic training data scales up.
November 17, 2025 at 11:30 PM
The center of gravity is data & execution, not conversation. Sophisticated teams build MCPs to access their own internal systems (58%) & external APIs (54%).
November 17, 2025 at 11:30 PM
Agents in the field are systems operators, not chat interfaces. We thought agents would mostly call APIs. Instead, 72.5% connect to databases. 61% to web search. 56% to memory systems & file systems. 47% to code interpreters.
November 17, 2025 at 11:30 PM
70% of teams use open source models in some capacity. 48% describe their strategy as mostly open. 22% commit to only open. Just 11% stay purely proprietary.
November 17, 2025 at 11:30 PM