
Demis Hassabis has been thinking about artificial general intelligence longer than almost anyone in the field. As the co-founder of DeepMind and now head of Google DeepMind, he has a unique perspective on how close we actually are. In a recent conversation with Y Combinator's Garry Tan, he laid out exactly what he thinks is still missing from today's AI systems.
Hassabis is confident that the core components we have today (large-scale pre-training, RLHF, and chain-of-thought reasoning) will be part of the final AGI architecture. He doesn't see a world where we look back in a few years and call this a dead end; the progress is too significant for that.
But he's equally clear that something is still missing. Specifically, he points to continual learning, long-term reasoning, and memory as unsolved problems. These aren't minor gaps. They're fundamental capabilities that AGI will require.
His estimate? There's about a 50/50 chance that one or two big new ideas are still needed beyond what we already know. It might be that existing techniques scale up with incremental innovation. Or it might not. Google DeepMind is working on both paths simultaneously.
Hassabis did his PhD on how the hippocampus works, specifically how the brain integrates new knowledge into existing knowledge during sleep. The brain does this remarkably well through REM sleep, replaying important episodes so you can learn from them.
Current AI systems handle memory with what Hassabis bluntly calls "duct tape." We shove everything into the context window and hope for the best. Even with million-token context windows, this approach is unsatisfying. The context window is essentially working memory. Humans have about seven digits of working memory. AI has millions of tokens. But we're trying to store everything in there, including things that aren't important and things that are wrong.
There's also a retrieval cost problem. Even if you can store everything, finding the right piece of information for the specific decision you need to make right now is non-trivial. And if you're processing live video, a million tokens only covers about 20 minutes. That's not enough for an AI assistant that needs to understand your life over weeks or months.
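The alternative to the "duct tape" approach is an explicit episodic store that retrieves only what's relevant instead of packing everything into the context window. A minimal sketch, assuming a toy bag-of-words similarity in place of real embeddings; `EpisodicStore` and its methods are invented for illustration, not any real system's API:

```python
import math
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a stand-in for real embeddings."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

class EpisodicStore:
    """Store every episode, but surface only the few most relevant ones."""
    def __init__(self) -> None:
        self.episodes: list[str] = []

    def add(self, text: str) -> None:
        self.episodes.append(text)

    def recall(self, query: str, k: int = 2) -> list[str]:
        # Retrieve by relevance instead of stuffing everything into context.
        ranked = sorted(self.episodes, key=lambda e: similarity(query, e), reverse=True)
        return ranked[:k]

store = EpisodicStore()
store.add("User prefers meetings scheduled after 10am")
store.add("User's dog is named Biscuit")
store.add("Project deadline moved to Friday")
print(store.recall("when should meetings be scheduled", k=1))
```

The design point is the retrieval step: the store can grow for months, but only the top-k relevant episodes ever reach the model's working memory.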
DeepMind's very first Atari program, DQN, used experience replay borrowed from neuroscience. That was back in 2013. Hassabis believes there's still enormous room for innovation in how AI systems handle memory.
This is one of the most practical questions for anyone building with AI today. Current models are stateless. They don't learn from the specific context you put them in. Hassabis sees this as one of the key things holding back agents from completing full tasks.
Right now, agents are useful for aspects of tasks. You can patch them together and do impressive things. But they don't adapt well to context. The missing piece is the ability to learn about the specific environment they're deployed in. Until we crack continual learning, we won't have agents you can truly fire and forget.
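A weak form of this adaptation can be approximated today by persisting environment-specific lessons between otherwise-stateless runs. A hedged sketch (file-based note-taking, not true weight-level continual learning; the `ContextNotes` class and the deployment example are invented):

```python
import json
import os
import tempfile

class ContextNotes:
    """Persist environment-specific lessons across otherwise-stateless runs."""
    def __init__(self, path: str):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.notes: dict[str, str] = json.load(f)
        else:
            self.notes = {}

    def learn(self, key: str, lesson: str) -> None:
        self.notes[key] = lesson
        with open(self.path, "w") as f:
            json.dump(self.notes, f)

# Session 1: the agent is corrected once about this environment...
path = os.path.join(tempfile.mkdtemp(), "notes.json")
ContextNotes(path).learn("deploy", "staging first, never straight to prod")

# Session 2: a fresh instance starts with the lesson already in place.
print(ContextNotes(path).notes["deploy"])
```

This only stores text the agent can re-read; the unsolved part is having the system genuinely update its own behavior from that experience.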
Despite impressive benchmarks, today's AI systems still make errors that a smart undergraduate wouldn't. Hassabis has a fascinating perspective on why this happens and what it means for the future of AI agents.
Hassabis describes current thinking paradigms as "fairly simplistic" and "fairly brute force." He sees huge scope for improvement, particularly in monitoring chain of thought and interjecting midway through a reasoning process.
He shared a telling example. He likes to play chess against Gemini. All leading foundation models are surprisingly poor at games, which makes them useful for studying reasoning. When he looks at the thinking traces, he can see the model consider a move, realize it's a blunder, fail to find anything better, and then play the blunder anyway. That simply shouldn't happen in a precise reasoning system.
This is why we get what he calls "jagged intelligence." A model can solve gold medal problems at the International Mathematical Olympiad but still make elementary math errors when you pose the question in a certain way. There's something about introspection, about a system monitoring its own thought process, that feels fundamentally missing.
On AI agents, Hassabis is firmly in the "just getting started" camp. His reasoning is straightforward: to get to AGI, you need an active system that can go out and solve problems. Agents are that path.
But he's also honest about where we are. He sees people setting off dozens of agents for 40 hours, and he's not sure the output justifies that level of input yet. We're still in the experimentation phase. The technology is only now getting good enough to add real value rather than being a nice demonstration.
His litmus test is revealing. We haven't seen a vibe-coded AAA game top the app store charts. We haven't seen a kid use AI tools to make a hit game that sells 10 million copies. These things should be possible given the effort going in, yet something is still missing, whether in the process or the tools.
He expects this to change in the next 6 to 12 months. The progression should be: first, people in rooms like Y Combinator operating at 1000x productivity. Then companies building bestselling apps and games using these tools. Then more of that gets automated.
This is where Hassabis gets philosophical. He uses AlphaGo's famous Move 37 as a reference point. That move was creative and useful. But can a system invent Go? That's what he really wants.
Imagine giving a system a high-level description: "A game you can learn the rules of in 5 minutes, but it takes many lifetimes to master. It's beautiful aesthetically. You can play it in a few hours." And getting back something as elegant as Go. Today's systems can't do that.
He acknowledges the answer might not require new technology at all. It might be that current systems are capable of this with a brilliant enough creative person using them. Someone who experiments with the tools all day and night, who combines deep creativity with mastery of the tools. That combination might already be enough. But we haven't seen the proof yet.
Beyond agents and AGI timelines, Hassabis shared insights on where AI will have the biggest scientific impact and what founders should focus on today.
Hassabis spun out Isomorphic Labs from DeepMind after AlphaFold 2 to tackle the broader drug discovery process. AlphaFold solved protein structure prediction, but that's just one piece. Isomorphic is working on the adjacent biochemistry and chemistry needed to design compounds with the right properties.
The ultimate goal is a virtual cell, a full working simulation of a cell that you can perturb and get outputs close enough to experimental results to be useful. Hassabis estimates we're about 10 years away from that. DeepMind is starting with a virtual cell nucleus because it's relatively self-contained.
The key challenge is data. If we could image a live cell at nanometer resolution without killing it, we could convert the problem into a vision problem, which AI knows how to solve. But no current imaging technique can do that. So the solution will either come from hardware breakthroughs or from building better learned simulators of dynamical systems.
Hassabis described a clear pattern from all the "Alpha" projects. The ideal problem has three characteristics: a massive combinatorial search space, a clear objective function to optimize against, and either plenty of data or an accurate simulator to generate it.
When these conditions are met, today's methods can find the needle in the haystack. Hassabis thinks about drug discovery the same way. There is a compound out there that would treat a given disease without side effects. As long as the laws of physics allow it, the only question is how to find it efficiently.
For founders working in biotech, materials science, or other scientific domains, this framework is incredibly useful. If your problem fits this pattern, AI methods are likely to make a massive difference.
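When those conditions hold, even a mundane search can find the needle. A toy illustration (not any Alpha system): a combinatorial space of 2^40 candidate "compounds" encoded as bit strings, a cheap objective standing in for the simulator, and a hill climber that finds the optimum without enumerating the haystack.

```python
import random

N = 40                                     # 2**40 candidates: far too many to enumerate
TARGET = [(i * 7) % 2 for i in range(N)]   # stand-in for "the right compound"

def objective(candidate: list[int]) -> int:
    """Simulator stand-in: how many positions match the (unknown) target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def hill_climb(steps: int = 5000, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        i = rng.randrange(N)
        neighbor = current.copy()
        neighbor[i] ^= 1                   # perturb one position
        if objective(neighbor) >= objective(current):
            current = neighbor             # greedy: keep any non-worsening move
    return current

best = hill_climb()
print(objective(best))                     # reaches the optimum in ~5000 evaluations
```

Each of the three conditions maps to a line of code: the search space is the bit string, the objective is `objective`, and the cheap evaluation plays the role of data or simulator. Remove any one and the search collapses.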
On whether AI can make a genuinely novel scientific discovery on its own, Hassabis thinks we're close but not there yet. Google DeepMind has systems like AI co-scientist and AlphaEvolve that go beyond what base Gemini can do. But he hasn't seen a genuine, major discovery yet.
The gap is related to the creativity problem. True scientific discovery isn't pattern matching because there's no pattern to match to. It's not even extrapolation. It's some kind of analogical reasoning that current systems don't seem to have, or at least we're not using them in the right way to access it.
He frames the challenge beautifully. Solving the Riemann hypothesis would be amazing. But coming up with a new set of millennium prize problems that top mathematicians regard as deep and worthy of a lifetime of study? That's another level harder entirely.
His "Einstein test" is a practical way to measure progress: train a system with the knowledge available in 1901 and see if it can produce what Einstein did in 1905, including special relativity. Once a system can do that, we're on the verge of truly novel invention.
Hassabis had pointed advice for anyone building companies today, especially those applying AI to science or deep technology.
Hassabis recommends combining where AI is going with some other deep technology area. That sweet spot, whether it's materials, medicine, or other hard areas of science, is where the most defensible companies will be built.
Interdisciplinary teams are key, especially if the work involves the world of atoms. There's no shortcut to that, at least in the foreseeable future. These areas are relatively safe from getting swarmed by the next foundation model update. If you're looking for defensibility, deep tech combined with AI expertise is one of the strongest positions.
Ideally, you're an expert in both machine learning and the domain you're applying it to. Or you can create a founding team with that combined expertise. Hassabis believes there's huge impact and huge value to be built in these intersections.
This might be the most practical piece of advice from the entire conversation. Hassabis puts his AGI timeline at around 2030. If you start a deep tech journey today, that's typically a 10-year journey, which means AGI could appear right in the middle of it.
That's not necessarily bad, but you have to account for it. Ask yourself: will what you're building still be valuable if AGI arrives halfway through your journey? Does your plan get stronger as models improve, or does the next foundation model update erase it?
He envisions a future where general purpose models like Gemini or Claude use specialized systems like AlphaFold as tools. Putting all protein knowledge into a general model wouldn't make sense. It would hurt language skills. Better to have excellent general purpose tool-using models that can train and deploy specialized tools as separate systems.
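A hedged sketch of that architecture: the general model acts as a router, delegating specialized queries to dedicated tools rather than absorbing their knowledge into its own weights. The tool names and the keyword dispatch rule are invented for illustration; real systems would use learned tool selection.

```python
from typing import Callable

# Hypothetical specialized tool -- a stand-in for a system like AlphaFold.
def fold_protein(prompt: str) -> str:
    return f"structure({prompt})"

# Stand-in for the general-purpose model's own answer.
def general_answer(prompt: str) -> str:
    return f"answer({prompt})"

TOOLS: dict[str, Callable[[str], str]] = {
    "protein": fold_protein,   # route structure questions to the specialist
}

def respond(prompt: str) -> str:
    """Toy router: keyword dispatch standing in for learned tool selection."""
    for keyword, tool in TOOLS.items():
        if keyword in prompt.lower():
            return tool(prompt)
    return general_answer(prompt)

print(respond("What is the protein structure of MKTAYIA?"))
print(respond("Summarize this article"))
```

The separation is the point: the specialist can be retrained or swapped out without touching the general model, and the general model's language ability is never diluted by domain data.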
This has implications for what you build today, whether it's physical products, factories, finance systems, or scientific tools. Build something that would still be useful if AGI arrives halfway through your journey.
On why founders should go after hard problems, his answer is simple but powerful. Going after hard, deep problems is no more difficult than going after shallow, simple ones. They're just differently difficult: different things are hard about each path.
Life is short. You only have so much time and energy. You might as well put your life force into something that will really make a difference, something that wouldn't have happened if you hadn't been there to push it.
He also emphasized working on things you're genuinely passionate about. He would have worked on AI no matter what happened. He decided from a young age it was the most consequential and most interesting thing he could think of. That conviction carried him through years when investors said AI didn't work and academia considered it a niche dead end.
Hassabis's insights carry direct implications for anyone working in automation and AI-driven systems. The convergence of agents, reasoning improvements, and specialized AI tools is creating opportunities that didn't exist even six months ago.
The agent revolution Hassabis describes is directly relevant to anyone in the automation space. As AI agents get better at continual learning and reasoning, the line between traditional robotic process automation and AI-powered agents will blur. The people who understand both worlds, automation architecture and AI capabilities, will be in the strongest position.
If you're interested in positioning yourself at this intersection, building skills in RPA, agentic automation, and enterprise orchestration is a smart move. The Complete RPA Bootcamp takes you from beginner to pro across all three areas. Instead of letting AI and automation replace you, you become the one building it. It's a practical path to a future-proof career in exactly the space Hassabis says is "just getting started."
The full conversation between Demis Hassabis and Garry Tan covers even more ground than what's captured here, including details on open models, multimodal AI, and the economics of inference. Watch the complete discussion in the embedded video below from the Y Combinator YouTube channel to hear Hassabis explain these ideas in his own words. It's one of the most insightful conversations about where AI is actually headed that you'll find anywhere.