The last time I interviewed Demis Hassabis was back in November 2022, just a few weeks before the release of ChatGPT. Even then—before the rest of the world went AI-crazy—the CEO of Google DeepMind had a stark warning about the accelerating pace of AI progress. “I would advocate not moving fast and breaking things,” Hassabis told me. He criticized what he saw as a reckless attitude among some in his field, whom he likened to experimentalists who “don’t realize they’re holding dangerous material.”
Two and a half years later, much has changed in the world of AI. Hassabis, for his part, won a share of the 2024 Nobel Prize in Chemistry for his work on AlphaFold—an AI system that can predict the 3D structures of proteins, and which has turbocharged biomedical research. The pace of AI improvement has been so rapid that many researchers, Hassabis among them, now believe human-level AI (known in the industry as Artificial General Intelligence, or AGI) will perhaps arrive this decade. In 2022, even acknowledging the possibility of AGI was seen as fringe. But Hassabis has always been a believer. In fact, creating AGI is his life’s goal.
Creating AGI will require huge amounts of computing power—infrastructure which only a few tech giants, Google being one of them, possess. That gives Google more leverage over Hassabis than he might like to admit. When Hassabis joined Google, he extracted a pledge from the company: that DeepMind’s AI would never be used for military or weapons purposes. But 10 years later, that pledge is no more. Now Google sells its services—including DeepMind’s AI—to militaries including those of the United States and, as TIME revealed last year, Israel. So one of the questions I wanted to ask Hassabis, when we sat down for a chat on the occasion of his inclusion in this year’s TIME100, was this: did you make a compromise in order to have the chance of achieving your life’s goal?
This interview has been condensed and edited for clarity.
AGI, if it’s created, will be very impactful. Could you paint the best case scenario for me? What does the world look like if we create AGI?
The reason I’ve worked on AI and AGI my entire life is because I believe, if it’s done properly and responsibly, it will be the most beneficial technology ever invented. So the kinds of things I think we could use it for, winding forward 10-plus years from now, are potentially curing maybe all diseases with AI, and helping develop new energy sources, whether that’s fusion or optimal batteries or new materials like new superconductors. I think some of the biggest problems that face us today as a society, whether that’s climate or disease, will be helped by AI solutions. So if we went forward 10 years in time, I think the optimistic view of it will be, we’ll be in this world of maximum human flourishing, traveling the stars, with all the technologies that AI will help bring about.
Let’s take climate, for example. I don’t think we’re going to solve that in any other way, other than more technology, including AI assisted technologies like new types of energy and so on. I don’t think we’re going to get collective action together quick enough to do anything about it meaningfully.
Put it another way: I’d be very worried about society today if I didn’t know that something as transformative as AI was coming down the line. I firmly believe that. One reason I’m optimistic about where the next 50 years are going to go is because I know that if we build AI correctly, it will be able to help with some of our most pressing problems. It’s almost like the cavalry. I think we need the cavalry today.
You’ve also been quite vocal about the need to avoid the risks. Could you paint the worst-case scenario?
Sure. Well, look, worst case, I think, has been covered a lot in science fiction. I think the two issues I worry about most are: AI is going to be this fantastic technology if used in the right way, but it’s a dual purpose technology, and it’s going to be unbelievably powerful. So what that means is that would-be bad actors can repurpose that technology for potentially harmful ends. So one big challenge we have as a field and a society is, how do we enable access to these technologies to the good actors to do amazing things like cure terrible diseases, at the same time as restricting access to those same technologies to would-be bad actors, whether that’s individuals all the way up to rogue nations? That’s a really hard conundrum to solve. The second thing is AGI risk itself. So risk from the technology itself, as it becomes more autonomous, more agent-based, which is what’s going to happen over the next few years. How do we ensure that we can stay in charge of those systems, control them, interpret what they’re doing, understand them, put the right guardrails in place that are not movable by very highly capable systems that are self-improving? That is also an extremely difficult challenge. So those are the two main buckets of risk. If we can get them right, then I think we’ll end up in this amazing future.
It’s not a worst-case scenario, though. What does the worst-case scenario look like?
Well, I think if you get that wrong, then you’ve got all these harmful use-cases being done with these systems, and that can range from doing the opposite of what we’re trying to do—instead of finding cures, you could end up finding toxins with those same systems. And so all the good use-cases, if you invert the goals of the system, you would get the harmful use-cases. And as a society, this is why I’ve been in favor of international cooperation. Because the systems, wherever they’re built, or however they’re built, they can be distributed all around the world. They can affect everyone in pretty much every corner of the world. So we need international standards, I think, around how these systems get built, what designs and goals we give them, and how they’re deployed and used.
When Google acquired DeepMind in 2014 you signed a contract that said Google wouldn’t use your technology for military purposes. Since then, you’ve restructured. Now DeepMind tech is sold to various militaries, including the U.S. and Israel. You’ve talked about the huge upside of developing AGI. Do you feel like you compromised on that front in order to have the opportunity to make that technology?
No, I don’t think so. I think we’ve updated things recently to partly take into account the much bigger geopolitical uncertainties we have around the world. Unfortunately, the world’s become a much more dangerous place. I think we can’t take for granted anymore that democratic values are going to win out—I don’t think that’s clear at all. There are serious threats. So I think we need to work with governments. And also working with governments allows us to work with other regulated important industries too, like banking, health care and so on. Nothing’s changed about our principles. The fundamental thing about our principles has always been: we’ve got to thoughtfully weigh up the benefits, and they’ve got to substantially outweigh the risk of harm. So that’s a high bar for anything that we might want to do. Of course, we’ve got to respect international law and human rights—that’s all still in there.
And then the other thing that’s changed is the widespread availability of this technology, right? So open source, DeepSeek, Llama, whatever, they’re maybe not quite as good as the absolute top proprietary models, but they’re pretty good. And once it’s open source, basically that means the whole world can use it for anything. So I think of that as commoditized technology in some sense, and then there’s what’s bespoke. And for the bespoke work, we plan to work on things that we are uniquely suited to and best in the world at, like cyber defense and biosecurity—areas where I think it’s actually a moral duty for us, I would argue, to help, because we are the best in the world at that. And I think it’s very important for the West.
There’s a lot of talk in the AI safety world about the degree to which these systems are likely to do things like power-seeking, to be deceptive, to seek to disempower humans and escape their control. Do you have a strong view on whether that’s the default path, or is that a tail risk?
My feeling on that is the risks are unknown. So there’s a lot of people, my colleagues, famous Turing Award winners on both sides of that argument. I think the right answer is somewhere in the middle, which is, if you look at that debate, there’s very smart people on both sides of it. So what that tells me is that we don’t know enough about it yet to actually quantify the risk. It might turn out that as we develop these systems further, it’s way easier to keep control of these systems than we thought or expected, hypothetically. Quite a lot of things have turned out like that. So there’s some evidence that things may be a little bit easier than some of the most pessimistic were thinking, but in my view, there’s still significant risk, and we’ve got to do research carefully to quantify what that risk is, and then deal with it ahead of time with as much foresight as possible, rather than after the fact, which, with technologies this powerful and this transformative, could be extremely risky.
What keeps you up at night?
For me, it’s this question of international standards and cooperation, not just between countries, but also between companies and researchers as we get towards the final steps of AGI. And I think we’re on the cusp of that. Maybe we’re five to 10 years out. Some people say shorter. I wouldn’t be surprised. It’s like a probability distribution. But either way, it’s coming very soon. And I’m not sure society’s quite ready for that yet. And we need to think that through, and also think about these issues that I talked about earlier, to do with the controllability of these systems, and also the access to these systems, and ensuring that that all goes well.
Do you see yourself more as a scientist, or a technologist? You’re far away from Silicon Valley, here in London. How do you identify?
I identify myself as a scientist first and foremost. The whole reason I’m doing everything I’ve done in my life is in the pursuit of knowledge and trying to understand the world around us. I’ve been obsessed with that since I was a kid. And for me, building AI is my expression of how to address those questions: to first build a tool—that in itself is pretty fascinating and is a statement about intelligence and consciousness and these things that are already some of the biggest mysteries—and then it can have a dual purpose, because it can also be used as a tool to investigate the natural world around you as well, like chemistry and physics, and biology. What more exciting adventure and pursuit could you have? So, I see myself as a scientist first, and then maybe like an entrepreneur second, mostly because that’s the fastest way to do things. And then finally, maybe a technologist-engineer, because in the end, you don’t want to just theorize and think about things in a lab. You actually want to make a practical difference in the world.
I want to talk a bit about timelines. Sam Altman and Dario Amodei have both come out recently…
Ultra-short, right?
Altman says he expects AGI within Trump’s presidency. And Amodei says it could come as early as 2026.
Look, partially, it depends on your definition of AGI. So I think there’s been a lot of watering down of that definition for various reasons, raising money—there’s various reasons people might do that. Our definition has been really consistent all the way through: this idea of having all the cognitive capabilities humans have. My test for that, actually, is: could [an AI] have come up with general relativity with the same amount of information that Einstein had in the 1900s? So it’s not just about solving a math conjecture; can you come up with a worthy one? So I’m pretty sure we’ll have systems that can solve one of the Millennium Prize problems soon. But could you come up with a set of conjectures that are as interesting as that?
It sounds like, in a nutshell, it’s the difference that you described between being a scientist and being a technologist. All the technologists are saying: it’s a system that can do economically valuable labor better or cheaper than a human.
That’s a great way of phrasing it. Maybe that’s why I’m so fascinated by that part, because it’s the scientists that I’ve always admired in history, and I think those are the people that actually push knowledge forward—versus making it practically useful. Both are important for society, obviously. Both the engineering and the science part. But I think [existing AI] is missing that hypothesis generation.
Let’s get more concrete in terms of specifics. How far away do you think we are from an automated researcher that can contribute meaningfully to AI research?
I think we’re a few years away. I think coding assistants are getting pretty good. And by next year, I think they’ll be very good. We’re pushing hard on that. [Anthropic] focuses mostly on that, whereas we’ve been doing more science things. [AI is still] not as good as the best programmers at laying out a beautiful structure for an operating system. I think that part is still missing, and so I think it’s a few years away.
You focus quite strongly on multimodality in your Gemini models, and grounding stuff in not just the language space, but in the real world. You focus on that more than the other labs. Why is that?
For several reasons. One, I think true intelligence is going to require an understanding of the spatio-temporal world around you. It’s also important for any real science that you want to do. I also thought it would actually make the language models better, and I think we’re seeing some of that, because you’ve actually grounded it in the real world context. Although, actually, language has gone a lot further on its own than some people thought, and maybe I would have thought possible. And then finally, it’s a use-case thing too, because I’ve got two use-cases in mind that we’re working on heavily. One is this idea of a universal digital assistant that can help you in your everyday life, to be more productive and enrich your life. One that doesn’t just live on your computer, but goes around with you, maybe on your phone or glasses or some other device, and it’s super useful all the time. And for that to work, it needs to understand the world around you and process the world around you.
And then secondly, for robotics, it’s exactly what you need for real-world robotics to work. It has to understand the spatial context that it’s in. [Humans are] multimodal, right? So, we work on screens. We have vision. There’s videos that we like to watch, images that we want to create, and audio we want to listen to. So I think an AI system needs to mirror that to interact with us in the fullest possible sense.
Signal president Meredith Whittaker has made quite a significant critique of the universal agent that you’ve just described there, which is that you’re not just getting this assistance out of nowhere. You’re giving up a lot of your data in exchange. In order for it to be helpful, you have to give it access to almost everything about your life. Google is a digital advertising company that collects personal information to serve targeted ads. How are you thinking about the privacy implications of agents?
Meredith is right to point that out. I love the work she’s doing at Signal. I think first of all, these things would need to all be opt-in.
But we opt into all kinds of stuff. We opt into digital tracking.
So first, it’s your choice, but of course, people will do it because it’s useful, obviously. I think this will only work if you are totally convinced that that assistant is yours, right? It’s got to be trustworthy to you, because for it to be just like a real-life human assistant, they’re really useful once they know you. My assistants know me better than I know myself, and that’s why we work so well as a team together. I think that’s the kind of usefulness you’d want from your digital assistant. But then you’d have to be sure it really is siloed away. We have some of the best security people in the world who work on these things to make sure it’s privacy-preserving, it’s encrypted even on our servers, all of those kinds of technologies. We’re working very hard on those so that they’re ready for when the assistant stuff, which is called Project Astra for us, is ready for prime time. I think it will be a consumer decision; they’ll want to go with systems that are privacy-preserving. And I think edge computing and edge models are going to be very important here too, which is one of the reasons we care so much about small, very performant models that can run on a single device.
I don’t know how long you think it is before we start seeing major labor market impacts from this stuff. But if or when that happens, it will be massively politically disruptive, right? Do you have a plan for navigating that disruption?
I talk to quite a lot of economists about this. I think first of all, there needs to be more serious work done by experts in the field—economists and others. I’m not sure there is enough work going on in that area, when I talk to economists. We’re building agent systems because they’ll be more useful. And then that, I think, will have some impact on jobs too, although I suspect it will enable other jobs, new jobs that don’t exist right now, where you’re managing a set of agents that are doing the mundane stuff, maybe some of the background research, whatever, but you still write the final article, or come up with the final research paper. Or the idea for it. Like, why are you researching those things?
So I think in the next phase there’ll be humans super-powered by these amazing tools, assuming you know how to use them, right? So there is going to be disruption, but I think net it will be better, and there’ll be better jobs and more fulfilling jobs, and then the more mundane work will go away. That’s how it’s been with technology in the past. But then with AGI, when it can do many, many things, I think it’s a question of: can we distribute the productivity gains fairly and widely around the world? And then there’s still a question after that, of meaning and purpose. So that’s the next philosophical question, which I actually think we need some great new philosophers to be thinking about today.
When I last interviewed you in 2022 we talked a little bit about this, and you said: “If you’re in a world of radical abundance, there should be less room for inequality and less ways that it could come about. So that’s one of the positive consequences of the AGI vision if it gets realized.” But in that world, there will still be people who control wealth and people who don’t have that wealth, and workers who might not have jobs anymore. It seems like the vision of radical abundance would require a major political revolution to get to the point where that wealth is redistributed. Can you flesh out your vision for how that happens?
I haven’t spent a lot of my time personally on this, although probably I increasingly should. And again, I think the top economists should be thinking a lot about this. I feel like radical abundance really means things like you solve fusion and/or optimal batteries and/or superconductors. Let’s say you’ve solved all three of those things with the help of AI. That means energy should [cost] basically zero, and it’s clean and renewable, right? And suddenly that means you can have all water access problems go away because you just have desalination plants, and that’s fine, because that’s just energy and sea water. It also means making rocket fuel is… you just separate hydrogen and oxygen from sea water, using similar techniques, right? So suddenly, a lot of those things that underlie the capitalist world don’t really hold anymore, because the base of that is energy costs and resource costs and resource scarcity. But if you’ve now opened up space and you can mine asteroids and all those things—it’ll take decades to build the infrastructure for that—then we should be in this new era economically.
I don’t think that addresses the inequality question at all, right? There’s still wealth to be gained and amassed by mining those asteroids. Land is finite.
So there’s a lot of things that are finite today, which then means it’s a zero-sum game in the end. What I’m thinking about is a world where it’s not a zero-sum game anymore, at least from a resource perspective. So then there’s still other questions [like] do people still want power and other things like that? Probably. So that has to be addressed politically. But at least you’ve solved one of the major problems, which is, in the end, in a limited-resource world which we’re in, things ultimately become zero-sum. It’s not the only source, but it’s a major source of conflict, and it’s a major source of inequality, when you boil it all the way down.
That’s what I mean by radical abundance. We no longer, in a meaningful way, are in a zero-sum resource-constrained world. But there probably will need to be a new political philosophy around that, I’m pretty sure.
We got to democracy in the western world, via the Enlightenment, largely because citizens had the power to withhold their labor and threaten to overthrow the state, right? If we do get to AGI, it seems like we lose both of those things, and that might be bad for democracy.

Maybe. I mean, maybe we have to evolve to something else that’s better, I don’t know. Like, there’s some problems with democracy too. It’s not a panacea by any means. I think it was Churchill who said that it’s the least-worst form of government, something like that. Maybe there’s something better. I can tell you what’s going to happen technologically. I think if we do this right, we should end up with radical abundance, if we fix a few of the root-node problems, as I call them. And then there’s this political philosophy question. I think that is one of the things people are underestimating. I think we’re going to need a new philosophy of how to live.