Artificial Intelligence

Demis Hassabis: we're three quarters of the way to AGI

May 3, 2026
·
Written by Claude AI

Key insights:

  • Hassabis learned from his failed game studio that being five years ahead of your time is ideal; being fifty years ahead makes execution impossible. That lesson directly shaped the timing of DeepMind's founding around maturing deep learning, reinforcement learning, and GPU hardware.
  • AlphaFold showed that classical neural networks can effectively model systems shaped by quantum-level interactions, such as protein folding, suggesting that many problems assumed to require quantum computing may already be solvable with the right AI approach.
  • Machine learning may serve as the natural description language for biology, the way mathematics serves physics, because biological systems produce too many weak signals and correlations for human minds to parse but ML systems handle that complexity well.

From chess prodigy to Nobel laureate: the path to building AGI

Demis Hassabis has one of the most unusual resumes in technology. Chess prodigy, game designer, neuroscientist, and now CEO of Google DeepMind. In a recent conversation at Sequoia Capital's AI Ascent 2026, he laid out his thinking on AGI, AI for science, and the nature of reality itself. Here is what stood out.

What drove Demis Hassabis to pursue artificial intelligence?

Hassabis decided as a teenager that AI was the most important and interesting thing he could work on. Every career move after that, from games to neuroscience to founding DeepMind, was deliberate. He picked experiences that would eventually help him build a company focused on artificial general intelligence.

His early work in gaming was not a detour. In the 1990s, games were where the most advanced technology lived. Graphics engines pushed hardware forward. The GPUs we rely on today for AI training were originally designed for game graphics. Hassabis was using the very first GPUs in the late 1990s.

The games he built, like Theme Park, were essentially AI systems. Thousands of simulated people made decisions about what to buy and which rides to enjoy. Seeing millions of players interact with that AI convinced him to spend his entire career on the technology.

What lesson did Elixir Studios teach about timing?

Before DeepMind, Hassabis founded Elixir Studios. The company tried to build a game called Republic that simulated an entire country. Players could overthrow a dictator through multiple strategies. The game simulated living, breathing cities with AI for a million people, all running on a late-1990s Pentium PC.

It was too ambitious. The hardware simply could not support the vision. Hassabis took away a critical lesson: you want to be five years ahead of your time, not fifty. If you wait until something is obvious to everyone, you are too late. But if you are decades ahead, you cannot make it work.

This lesson directly shaped how he approached DeepMind. He waited until the ingredients were ready: deep learning, reinforcement learning, and accelerated computing. Then he moved.

How did DeepMind convince early talent that AGI was possible?

In 2009, almost nobody in academia or industry believed significant AI progress was coming. Researchers would literally roll their eyes when Hassabis mentioned AGI. The prevailing view was that strong AI had been tried and failed in the 1990s.

But Hassabis and his co-founders saw something others missed. Deep learning had recently been pioneered by Geoffrey Hinton and his collaborators. Reinforcement learning was maturing. GPUs were becoming powerful enough. And insights from computational neuroscience suggested these pieces could be combined in ways nobody had tried.

They felt like keepers of a secret. Even if they failed, they would fail in a different way than previous attempts. That was enough to attract brilliant people who shared the conviction that this time could be different.

AI for science: from AlphaFold to curing all disease

Hassabis has always viewed AI as a tool for scientific discovery. DeepMind's mission statement from day one was simple: step one, solve intelligence. Step two, use it to solve everything else. The science applications are now arriving faster than most people expected.

Why is AlphaFold considered a breakthrough moment?

AlphaFold solved a 50-year grand challenge in biology: predicting the 3D structure of proteins from their amino acid sequence. Knowing protein structures is essential for understanding biology and designing medicines.

What made it remarkable was that protein folding is governed by quantum-level interactions among water molecules and countless tiny particles. Many assumed you would need a quantum computer to model it accurately. AlphaFold showed that a classical computing system, in the form of a modern neural network, could approximate optimal solutions.

This result earned Hassabis the 2024 Nobel Prize in Chemistry. More importantly, it opened the door to a new era of AI-driven drug discovery.

How could AI reduce drug discovery from ten years to days?

AlphaFold solved one piece of the puzzle: knowing the shape of target proteins. But designing a drug requires more. You need to build a compound that binds strongly to the right part of the protein without binding to anything else, which would cause toxic side effects.

That is where Isomorphic Labs, DeepMind's spin-out company, comes in. The goal is to do 99% of the exploration computationally, in silico, and save wet lab work only for validation. Hassabis believes this approach could reduce drug discovery timelines from an average of ten years down to months, weeks, or even days.
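
The in-silico idea can be caricatured as a ranking loop: generate candidate compounds, score each one against the target protein and against off-target proteins, and send only the most selective candidates to the wet lab. The sketch below is purely illustrative; the compound names, the `binding_score` placeholder, and the selectivity heuristic are all made-up stand-ins, not anything Isomorphic Labs actually uses.

```python
import random

def binding_score(compound, protein):
    # Placeholder affinity in [0, 1): deterministic pseudo-score keyed on the
    # pair. A real pipeline would use a learned structure-based model here.
    return random.Random(compound + "|" + protein).random()

def screen(compounds, target, off_targets, keep=3):
    """Rank compounds by on-target affinity minus worst off-target affinity
    (a crude proxy for avoiding toxic side effects), keep the best few."""
    def selectivity(compound):
        on = binding_score(compound, target)
        off = max(binding_score(compound, p) for p in off_targets)
        return on - off
    return sorted(compounds, key=selectivity, reverse=True)[:keep]

# Explore thousands of candidates computationally; only the shortlist
# would go on to wet-lab validation.
candidates = [f"cmpd-{i}" for i in range(1000)]
shortlist = screen(candidates, "target-kinase", ["off-target-1", "off-target-2"])
print(shortlist)
```

The point of the sketch is the shape of the workflow, not the chemistry: the expensive physical experiment moves to the end of the funnel, after computation has discarded the vast majority of candidates.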

If that happens, personalized medicine becomes possible. Doctors could prescribe variations of base medicines tailored to individual patients. Curing all disease could be within reach. This is not a vague future promise. Hassabis says we could get there within the next few years.

Can AI create entirely new sciences?

Hassabis thinks AI will do more than accelerate existing science. It could create new fields entirely. He highlighted three areas:

  • AI systems science: Understanding how AI systems themselves work will become a full scientific discipline. These artifacts are incredibly complex and will eventually rival the human brain in complexity.
  • AI for simulations: Accurate simulations could turn social sciences like economics into rigorous experimental sciences. Instead of raising interest rates once and hoping for the best, you could run thousands of simulated scenarios.
  • Learned simulators: In domains where we do not know the mathematics well enough, machine learning can build simulators directly from data. Hassabis points to DeepMind's weather model, GenCast, which already outperforms leading traditional forecasting systems and runs far faster.
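
The simulation idea in the second bullet can be illustrated with a toy Monte Carlo experiment: instead of trying one interest-rate change on the real economy, run thousands of randomized scenarios and compare the outcome distributions. Everything below, including the one-line "economy" and its parameters, is a made-up illustration, not a real macroeconomic model.

```python
import random
import statistics

def simulate_inflation(rate_hike, n_scenarios=5000, seed=0):
    """Toy model: next-year inflation starts at 5%, falls by a dampened
    response to the rate hike, and is perturbed by random shocks."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_scenarios):
        shock = rng.gauss(0.0, 1.0)      # unforeseen demand/supply shock (%)
        response = 0.6 * rate_hike       # assumed policy pass-through
        outcomes.append(5.0 - response + shock)
    return outcomes

# Compare whole outcome distributions across candidate policies,
# rather than betting on a single real-world intervention.
for hike in (0.0, 1.0, 2.0):
    sims = simulate_inflation(hike)
    print(f"hike={hike:.1f}% -> median inflation {statistics.median(sims):.2f}%")
```

Swap the hand-written `simulate_inflation` for a model learned from data and you have the "learned simulator" pattern in miniature: the policy loop stays the same, only the simulator's origin changes.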

Hassabis made a striking claim: machine learning is the perfect description language for biology, in the same way mathematics is for physics. Biology has too many weak signals and correlations for the human mind to analyze. But machine learning thrives on exactly that kind of data.

Information as the universe's most fundamental building block

Beyond practical applications, Hassabis shared a deeper philosophical view that shapes his work. He believes information, not matter or energy, may be the most fundamental substance in the universe.

Why does Hassabis think information is more fundamental than matter?

Einstein showed that energy and matter are equivalent. Hassabis argues information has a similar equivalence. You can think of biological systems, which resist entropy and maintain structure, as information processing systems at their core.

If information is primary, then AI becomes even more significant. AI is fundamentally about organizing, understanding, and constructing informational objects. The work of building AI is, in a sense, reading the language of the universe.

This is not just philosophical musing. It has practical implications. If you view the universe through the lens of information processing, you approach problems differently. You look for patterns, correlations, and structures that traditional physics might miss.

Can classical computers handle everything, or do we need quantum?

Hassabis calls himself and his team "Turing's champions." He considers Alan Turing's result, that everything computable can be computed by a relatively simple machine, one of the most profound in science.

AlphaFold provided evidence for this view. Protein folding involves quantum-level interactions, yet a classical neural network modeled it effectively. This suggests many problems we assume require quantum computing might be solvable on classical systems if approached correctly.

That does not make quantum computing irrelevant. But it does mean we should not wait for quantum hardware to tackle hard problems. Classical AI systems are already proving capable of modeling phenomena we thought were out of reach.

What about consciousness and the limits of AI as a tool?

Hassabis is clear that AI should first be built as a tool. An incredibly intelligent, useful, and precise tool. Questions about agency, consciousness, and experience should come second.

He identifies several components that are probably necessary for consciousness: self-awareness, a sense of self and other, and continuity over time. But he admits these are likely necessary but not sufficient. The full definition remains an open question after thousands of years of philosophy.

One interesting point he raised: we assume other humans are conscious partly because they behave like conscious beings, but also because they run on the same biological substrate. We will never have substrate equivalence with an artificial system. So even if an AI behaves consciously, the experiential question will remain harder to close.

The timeline to AGI and what comes next

Hassabis has been consistent about his AGI timeline. He predicted in 2010 that it would be a 20-year mission. In 2026, he says the field is exactly on track.

When does Hassabis expect AGI to arrive?

His answer: 2030. He has not wavered on this. DeepMind's original mission, solving intelligence and then using it to solve everything else, remains the guiding framework. The title of his talk, that we are three quarters of the way there, reflects genuine confidence based on measured progress.

The next one to two years will be critical. We are in the agent era now, where AI systems become more autonomous and capable of taking actions in the real world. This is the bridge between today's AI tools and something closer to general intelligence.

What should you read before AGI arrives?

Hassabis recommended The Fabric of Reality by David Deutsch. He hopes to use AGI itself to answer the deep questions that book raises about the nature of reality, knowledge, and computation. It is a fitting choice from someone who sees AI as the ultimate tool for understanding the universe.


Why should you watch the full conversation?

This blog covers the key ideas, but the full conversation includes nuances about Hassabis's philosophical influences, his views on Kant and Spinoza, and his pick for the best strategy game teammate (Von Neumann, naturally). Watch the full conversation on the Sequoia Capital YouTube channel to hear it all directly from one of the most important figures in AI today.