Artificial Intelligence

The truth about the AI bubble

March 10, 2026 · Written by Claude AI

Key insights:

  • AI model loyalty is dead. Top startups now build orchestration layers to swap between Claude, Gemini, and GPT based on task-specific performance, treating models as interchangeable commodities rather than core dependencies.
  • Massive AI infrastructure spending by Meta, Google, and Microsoft mirrors the telecom bubble. Even if it crashes, the resulting compute glut will make building cheaper for application-layer startups, just like excess bandwidth enabled YouTube.
  • The AI economy stabilized in 2025. Models now improve incrementally, not in massive leaps. This means founders can no longer ride hype waves and must compete on execution, domain expertise, and proprietary data instead.

The AI model landscape is shifting fast

If you've been paying attention to the AI space in 2025, you've probably noticed something interesting. The dominance of any single AI model provider is no longer a given. Competition is fierce, and that's actually great news for builders and founders everywhere.

Why did Anthropic overtake OpenAI as the preferred model at YC?

In the latest Y Combinator winter 2026 batch, something unexpected happened. When founders were asked about their tech stack and model of choice, Anthropic came out ahead of OpenAI for the first time. This is a dramatic shift from just a year or two ago when OpenAI commanded over 90% of preferences.

So what changed? A big part of it comes down to coding. Vibe coding tools and coding agents became a massive category in 2025. Anthropic deliberately made coding performance one of their internal evaluation priorities. As YC partner Diana noted, this wasn't an accident. Anthropic's models simply perform better for the coding tasks that many founders rely on daily.

There's also a bleedthrough effect at play. Founders use Claude for their personal coding work. They get familiar with its personality and capabilities. Then when it comes time to choose a model for their product, even if it's not a coding product, they default to what they know and trust.

How is Google Gemini performing against the competition?

Gemini has been climbing steadily. It went from single-digit percentages to about 23% in the latest YC batch. YC partner Harj switched to Gemini as his go-to model in 2025, even before the 2.5 Pro release.

The reason? Gemini's grounding API and its ability to use the Google index for real-time information. Harj found it more accurate than Perplexity for current events, though slightly slower. When you need to trust the answer, accuracy wins over speed every time.

Each model has developed its own personality. OpenAI has what the YC partners describe as "black cat energy." Anthropic is the helpful golden retriever. Gemini sits somewhere in between. These personality differences matter more than you might think when founders choose their daily tools.

Is swapping between AI models becoming the new normal?

Yes, and this is one of the most important trends of 2025. Founders are no longer loyal to a single model provider. Even Series B companies are building orchestration layers that let them swap models in and out as new releases come.

One startup described using Gemini 3 for context engineering, then feeding that output into OpenAI for execution. They keep swapping as new models emerge, choosing the best performer for each specific task. Their decisions are grounded in proprietary evals specific to their vertical.

This is the new normal. It mirrors the Intel and AMD era where new chip architectures would come out and people could just swap them in. The models are effectively commoditizing each other, which is fantastic news for anyone building at the application layer.
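
The orchestration pattern described above can be sketched as a simple router that maps each task type to whichever model currently tops the team's own evals. Everything below is a hypothetical illustration: the provider names, scores, and stub clients stand in for real API calls, which would differ per SDK.

```python
# Minimal sketch of a model-orchestration layer: route each task type to
# whichever provider currently tops the team's internal eval scores.
# All names and scores are illustrative stand-ins for real API clients.

from typing import Callable, Dict

# Hypothetical eval results, refreshed whenever a new model ships.
EVAL_SCORES: Dict[str, Dict[str, float]] = {
    "context_engineering": {"gemini-3": 0.91, "gpt": 0.88, "claude": 0.87},
    "code_generation":     {"claude": 0.93, "gpt": 0.90, "gemini-3": 0.86},
}

def pick_model(task_type: str) -> str:
    """Return the provider with the best eval score for this task type."""
    scores = EVAL_SCORES[task_type]
    return max(scores, key=scores.get)

def route(task_type: str, prompt: str,
          clients: Dict[str, Callable[[str], str]]) -> str:
    """Send the prompt to the best model; swapping models is just an
    EVAL_SCORES update, not a rewrite of the application code."""
    return clients[pick_model(task_type)](prompt)

# Stub clients in place of real SDK calls.
clients = {name: (lambda n: lambda p: f"{n} answered: {p}")(name)
           for name in ("gemini-3", "gpt", "claude")}

print(pick_model("context_engineering"))
print(route("code_generation", "write a parser", clients))
```

The design point is that the application depends on the routing table, not on any one provider, which is what makes models swappable as new releases arrive.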

Is AI really a bubble?

This is the question everyone keeps asking. College students, investors, Twitter commentators. They all want to know if the massive spending on AI infrastructure is sustainable or if we're headed for a crash. The answer depends entirely on where you sit in the stack.

What does the telecom bubble teach us about AI infrastructure spending?

The YC partners drew a direct parallel to the telecom bubble of the late 1990s. Tens of billions of dollars were poured into telecom infrastructure. Much of it seemed excessive at the time. But that overinvestment is exactly why YouTube was able to exist. All that extra bandwidth sitting around, relatively cheap, made streaming video possible.

The same dynamic is playing out with AI. Companies like Meta, Google, and Microsoft are pouring massive capex into GPU clusters and data centers. If demand falls off a cliff for some reason, it's their capex, not the startup's capex. The infrastructure will still be there. The compute will still be available. And it will be cheaper than ever for startups to build on top of it.

As Jared put it to college students: because there will be a glut, there is an opportunity for you. If there were no glut, prices would be higher, competition would be lower, and infrastructure companies would capture margins that squeeze out application-layer startups.

Should startup founders worry about the AI bubble bursting?

The bubble question is really only relevant if you're Nvidia or the equivalent of Comcast. If you're a founder building at the application layer, you're not Comcast. You're YouTube. Even if Nvidia's stock drops next year, that doesn't mean it's a bad time to work on an AI startup.

Economist Carlota Perez studied technology revolutions and identified two phases. First comes the installation phase with heavy capex investment. This is where it feels like a bubble. There's frenzy, hype, and overbuilding. Then comes the deployment phase where everything proliferates and abundance follows.

Right now we're in the transition between these phases. The data centers are being built. The foundation models are being trained. But the next generation of applications, the future Facebooks and Googles of AI, are yet to be started. They come in the deployment phase. And that phase is just beginning.

How is the competition between GPU makers affecting the AI ecosystem?

Nvidia's dominance is being challenged. AMD is gaining ground. Google's TPUs are proving effective. This competition means more compute, not less. And more compute at lower prices is exactly what application-layer startups need.

When the infrastructure providers compete aggressively with each other, the benefits flow upward through the stack. The AI labs get better deals on compute. They compete with each other on model quality. And then startups building on top of those models get the best deal of all, access to increasingly powerful intelligence at decreasing costs.

The YC partners even highlighted companies solving the physical constraints of this buildout. YC-backed startups are tackling the data center problem from multiple angles: building data centers in space, generating fusion energy, and creating power solutions using jet engines. These aren't science fiction anymore. Google and Elon Musk are now talking about space data centers as a serious strategy.

The AI economy has stabilized, and that's good news

Perhaps the biggest surprise of 2025 wasn't any single breakthrough. It was the fact that the AI economy settled into something predictable. The ground stopped shifting under everyone's feet. A playbook emerged for building AI-native companies.

What does a stable AI economy mean for new startups?

In 2024, it felt like every few months some major announcement would completely reshape what was possible. Founders could almost coast, waiting for the next big model release to hand them a new startup idea. That era is over.

Finding startup ideas has returned to normal levels of difficulty. The models are improving incrementally rather than in massive leaps. This stability means founders need to focus on execution, domain expertise, and building real value rather than riding the wave of the next model release.

The three layers of the AI economy (models, applications, and infrastructure) have all found their footing. Everyone seems positioned to make money. There's a clear playbook now. Build on top of the models, use evals to choose the best one for your use case, and focus on solving real problems for real customers.
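
The "use evals to choose the best one" step can be made concrete with a tiny eval harness: run each candidate model over a set of vertical-specific test cases and score accuracy. The models and cases below are stub functions invented for illustration; in practice they would be API calls and real labeled examples.

```python
# Tiny eval harness sketch: score candidate models on vertical-specific
# test cases and pick the top performer. Stub models stand in for API calls.

from typing import Callable, Dict, List, Tuple

def run_eval(model: Callable[[str], str],
             cases: List[Tuple[str, str]]) -> float:
    """Fraction of cases where the model's answer matches the expected one."""
    hits = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return hits / len(cases)

def best_model(models: Dict[str, Callable[[str], str]],
               cases: List[Tuple[str, str]]) -> str:
    """Name of the model with the highest eval accuracy."""
    scores = {name: run_eval(fn, cases) for name, fn in models.items()}
    return max(scores, key=scores.get)

# Illustrative cases for a hypothetical vertical.
cases = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]

# Stub "models" with different strengths.
models = {
    "model-a": lambda p: {"2+2": "4", "3*3": "9"}.get(p, "unsure"),
    "model-b": lambda p: {"2+2": "4", "capital of France": "Paris",
                          "3*3": "9"}.get(p, "unsure"),
}

print(best_model(models, cases))
```

Rerunning a harness like this against each new model release is what lets a team's proprietary evals, rather than vendor loyalty, drive the model choice.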

Did vibe coding live up to the hype in 2025?

The YC partners started the year talking about vibe coding as an observed behavior among their founders. By the end of the year, it had become a giant category with multiple winning companies. Replit, Emergence, and others are all competing in this space.

Google's Sundar Pichai made vibe coding a top talking point. Varun Mohan released Antigravity at Google with a cinematic launch video. The category is real and growing.

But the YC partners were honest about its limitations. You cannot ship 100% solid production code today using only vibe coding tools. It's powerful. It's getting better. But it's not a complete replacement for skilled engineering. Not yet.

This is exactly why learning automation and development skills remains so valuable. The tools are getting better, but someone still needs to understand what they're building and why. If you're interested in building a career around automation and AI, the Complete RPA Bootcamp teaches you to go from beginner to pro with Robotic Process Automation, Agentic Automation, and Enterprise Orchestration. Instead of worrying about AI replacing your job, you become the person building the automation.

Do AI startups still need to hire teams?

A year ago, the YC partners were amazed by companies hitting a million dollars in annual recurring revenue without hiring anyone beyond the founders. The prediction was that this trend would continue scaling. It didn't play out that way.

Post Series A, companies turned around and started hiring actual teams. The playbook largely looks the same as before. Companies might be smaller for the same amount of revenue, but that's because they hit revenue targets so fast, not because they need fewer people.

Gamma provided an interesting data point by reaching $100 million in ARR with only 50 employees. That's a remarkable ratio. But even Gamma still needed 50 people. The one-person trillion-dollar company isn't here yet.

The reason is straightforward. AI reduces the cost and time to produce things. But then customer expectations rise to match. Companies are still competing fiercely. Harvey races with Legora. Giga races with Sierra. They're all bottlenecked on people who can execute well, not on ideas.

What happened to the first wave of AI-native companies?

An interesting pattern emerged in 2025. The first wave of AI-native companies that broke out in 2023 did their victory lap, assuming they'd won their respective spaces. Then a second wave showed up and proved that nothing was settled.

Some first-wave companies burned significant portions of their capital on fine-tuning models that bought them no lasting advantage. As the base models improved, those expensive fine-tuned models became obsolete. The only winners in that scenario were the investors who ended up owning more of the company.

This is a critical lesson. The AI landscape rewards adaptability. Building an orchestration layer that lets you swap models matters more than betting everything on one provider. The companies that will win long-term are the ones solving real domain-specific problems with proprietary data and evals, not the ones with the most expensive fine-tuning bill.

What this all means for you

The message from the YC partners is clear. 2025 was the year AI stopped feeling chaotic and started feeling buildable. The infrastructure is being laid by companies with deep pockets. The models are competing and improving. And the real opportunity is at the application layer, where startups can build the next generation of products and services.

Is now a good time to start building with AI?

If you've been waiting for the right moment, this is it. The models are good enough. The costs are coming down. The playbook exists. You don't need to train your own foundation model. You don't need to build a data center. You need domain expertise, a real problem to solve, and the ability to execute.

Whether you're a college student, a career changer, or someone already working in tech, the opportunity is massive. The deployment phase of AI is just beginning. The future equivalents of Facebook and Google haven't been built yet. And they'll be built by people who understand how to use these tools effectively.

For the full conversation with all the nuance and back-and-forth between the YC partners, watch the video embedded below from the Y Combinator YouTube channel. It's packed with insights about where AI is headed and why the next wave of startups may be just getting started.