
The Nobel Prize in Chemistry 2024 was awarded, one half to David Baker ‘for computational protein design’ and the other half jointly to Demis Hassabis and John Jumper ‘for protein structure prediction’. The Nobel Prize webpage says, “In 2020, Demis Hassabis and John Jumper presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all known proteins. AlphaFold2 has been widely used in many areas, including research into pharmaceuticals and environmental technology.”
A fireside chat on 'Transforming Discovery: AI's Role in Scientific Progress' was held on 17 February 2026 at the Indian Institute of Science (IISc) between Sir Demis Hassabis, Co-Founder and CEO of Google DeepMind, and Govindan Rangarajan, Director, IISc, moderated by Varun Mayya. A transcript of the chat, including questions from the audience, has been compiled here with minimal editing.

Varun Mayya: We've seen everything that's going on with AlphaFold. We've heard about the work going on there. But how exactly does it translate to India? What I mean by that is, India is known for low-cost generic drugs. We are now getting the GLP-1s to India. What is the exact process from you starting out with that tool all the way to it becoming a drug that people can use in India?
Sir Demis Hassabis: What we tried to do with AlphaFold in the beginning was crack an amazingly hard scientific problem but also one that would have many downstream benefits, especially in things like drug discovery. So, with AlphaFold, but also with some of our other scientific work like AlphaGenome, we are developing tools that, I think, can accelerate drug discovery and also help with disease understanding, including things like rare genetic diseases and so on. So, I think all of that will impact India but also the whole world in the next few years. We actually already collaborate with a lot of contract research organisations in India, through Isomorphic Labs, which we spun out to build on the AlphaFold work. I am also very excited about how general AI systems like Gemini will help with providing healthcare information to everyone in the world. Again, I am very excited about the impact it has here in India too.
Varun Mayya: It’s a very high-quality problem to pick up right? – which is, “Hey, how can we solve protein folding?”
This is a question to both of you; a follow-up question related to this. I think both of you have been in science for a very long time. How does one build scientific taste? How do you know what kind of problems to go after and what kind of problems are not worth going after?
Prof Govindan Rangarajan: You are asking about how to build scientific taste with AI systems?

Varun Mayya: In general. How do you, as a person, build scientific taste?
Prof Govindan Rangarajan: I think, scientific taste… usually as a grad student, you learn it from your PhD mentor. I think that is when you first encounter it. But if you talk about developing scientific taste in AI systems, I think that is a very interesting and very hard problem.
There have been some interesting attempts like the Ramanujan machine, where they tried to replicate the intuition that Ramanujan had. On the other hand, if you use standard things like reinforcement learning from human feedback, it tends to average things out and you tend to revert to the mean. I don’t think you get interesting things from there.
What I think is, maybe you build a custom LLM which is then mentored by a master scientist. The LLM acts like an apprentice to the scientist, and you give constant feedback. You need somebody with a lot of commitment and time to do it. That may be an interesting way, because that is the way we learnt from our mentors. And maybe, who knows, you may have future generations of AI who trace their lineage to human masters, and you may have schools, you know, like the Demis Hassabis School of AI Systems or the Terence Tao School of AI Systems, which think in a way akin to how those masters think about which problems are important and which are not. I think if you do it on the average, you are not going to get much; it has to be much more personalised in some sense. But Demis may have a different take on that.

Sir Demis Hassabis: I think the question of taste… you can break it down into intuition and creativity. It has aspects of both of those things. It is probably the hardest thing in science, and I think it is probably the hardest thing for machines to be able to mimic. I think that is good; I think that is what separates the great scientists from the good scientists, human scientists.
I would say, you know, every one of you here is, of course, great technically, but then to really discover something new, ask the right question, and formulate your own hypothesis requires good taste, and I think that comes partly from graduate studies and learning from professors, the great professors that you have here. Certainly, that's important; I learnt a lot of that during my PhD at UCL with my professor, Eleanor Maguire.
But I think you can only really develop great taste through doing as well and not just passively learning it. I think you have to do it and develop it. And it’s a little bit mysterious altogether, is what I would say. So, we’ll have to see if machines are able to develop it or learn it somehow. But I think they are going to need to do active experimentation in a way that we all do through grad school to really understand what it means to do science at the frontier, at the cutting edge. And so, I think it remains to be seen; maybe we’ll understand better what taste is at the end of all of that.
Varun Mayya: Amazing! So, you are saying that if we run a lot of experiments, you’ll eventually develop a sense for what experiments are worth spending your time on. That’s amazing! You know, I have a question from my wife. She texted me on WhatsApp in the morning and she said, “You should ask Demis this question,” which is “We’ve seen AlphaFold. What is in the future of medical AI? What is something we can look forward to?”
Sir Demis Hassabis: Well, look! AlphaFold, I think, was just the beginning. Of course, protein folding was a 50-year grand challenge in science, and understanding the structure of proteins is incredibly useful in understanding disease and eventually developing drugs. But it's only one small component of the drug discovery process. It's an important component, but it's a small component. So, we are trying to develop many other technologies adjacent to AlphaFold at Isomorphic, mostly in the biochemistry and chemistry areas, to develop the right compounds that will bind to the right part of the protein but also have other properties we care about, like the toxicity and absorption properties of these compounds, to make sure that in the human body, they do the right things and don't have any side effects and so on.
In some ways, those things are more complicated than the protein structure. But we have a lot of belief that this is possible because of the success of AlphaFold, which was thought to be an almost impossible challenge, and we were able to do it. So, I do think that these methods can scale to these very, very difficult problems. And, you know, we have some very promising results at Isomorphic, developing these technologies further, and I hope that eventually, we'll be able to bring down the drug discovery timeline. You know, it takes on average 10 years to come up with a new drug; bring that down by a factor of 10 to a matter of months, maybe even possibly weeks. I think that could be possible. I mean, it sounds like science fiction today, but then so was finding the structures of all 200 million proteins known to science. We've managed to fold and put up predictions for all of them now, and that would have seemed impossible 10 years ago. So, I think the same kind of thing will happen over the next decade with drug design.
Varun Mayya: I think this is a fascinating use case of AI – the person sitting on stage right now, whose work is going to be used by so many people who are sick and need those drugs over time. So, thank you so much for all your contributions. Demis, I actually want to switch tracks here. I want to talk about something totally different, which is a personal passion of mine, which is gaming. We are working on a game. We wanted to make a world-class game, and we wanted to do it bootstrapped. You've been a game developer. You've worn many hats, and gamedev was one of the first ones you wore. You worked at Bullfrog back then. The minute Genie came out and I saw it, I spent like five seconds looking at the screen and I was like, "Wait, I need to use this!" Then, I used it and I was like, "Wait! Okay! At least it can't tell a story yet! How long do I have left? What date should I release before…?" What's the next three to four years of game development going to look like?
Sir Demis Hassabis: I saw your game; great-looking game, by the way! Thanks for sending me the video of it. I love game design and game development, and that's how I started my professional career and also my journey into AI. When I was a teenager, as you said, I was working for Bullfrog, which at the time was probably Europe's premier development house. We did some really creative games – simulation games that AI was a core part of, like Theme Park. That's when I decided, at about 16 years old, that AI was going to be my career – when I saw how much enjoyment people got from interacting with this game AI, and the potential of that.
I think it's come full circle now: games used to be the cutting edge of where technology was developed. Graphics and AI and also hardware like GPUs were, of course, invented for games, and now we use them for AI development. Now, maybe AI has got good enough that it can help with game development, like you are saying. I think it's going to help with many things, for example creating assets, graphics, and 3D models. I think the technologies are pretty good now. Probably in the next year or two, it will be pretty amazing; from just concept art, it could probably create the 3D asset. What I am most excited about is the new genres of games that might be possible now that we have AI; for example, big, massive multiplayer online games that are populated with game characters and NPCs that are actually smart and can advance the story line and things like that.
I think they will also be very useful tools for bug testing and auto-balancing games. But then, you mentioned Genie… Genie 3 is our world model; so, what you are able to do… for those who are not familiar… we just released a kind of beta version of it recently where you can type in a prompt and you get a playable world. You can only play it for one minute and then, it is sort of like a dream, and then it disappears because it can only stay coherent for a minute. But I think over the next four to five years, we will be able to extend that time. But as you say, that doesn't necessarily… it's like an interactive movie and it is fascinating to try it, but it doesn't make for a fun game yet. That's still going to require game design, game mechanics, all of the amazing things the game industry has built. So, it may just facilitate faster prototyping and faster iteration of ideas, and then hopefully, maybe, there will be a new golden era of game development like it was when I was in games in the early '90s, when you could have small teams experimenting with new, really creative ideas because it was fast enough to prototype and cheap enough to build those games and you could test out quite experimental ideas. Hopefully, these tools will allow us to do this again in the games industry.
Varun Mayya: Very cool! I have a question for both of you. And this is a question about a difference between the average person that I have spent time with and both of you, which is that both of you seem very cross-functional: one day you'll be playing chess, the other day you'll be making games, the next day you'll do life sciences, the next day you are in AI.
[speaking to Prof Rangarajan] You've done a lot of the same, right? You've done a little bit of chaos theory, and you've gone and said, "Hey! Let's do satellite-based courses!" I think the word for this is 'polymath', and I think it's very fascinating being a polymath because the range of conversations you can have with those people is so wide. How does one truly be a polymath? It's a tough question. The answer might just be, "Hey, you are born with it!" But, you know, I am going to shoot my shot at it anyway.

Prof Govindan Rangarajan: Well, maybe others may have a different phrase for it, you know, ‘jack of all trades and master of none!’ But still, I think it’s just the basic curiosity; you are curious about so many different disciplines; there are so many fascinating things to do that you just get into different areas. I think it’s that basic curiosity that drives all this.
Sir Demis Hassabis: I've thought a lot about this, and I think, for me at least, I've always had an insatiable curiosity for as long as I can remember. Even in my games career, that happened. I started playing chess for chess teams, but then I realised that there are many other cool games out there, like go and poker, and really interesting things beyond chess. Chess players just stay with chess and that's all they play. And so, even in games, I could feel myself drawn to the many interesting things in the world, as the Professor says.
But there is another thing too, which is that I think a lot of the best inventions in the modern era will come at the intersection of two or more subjects. You can think of DeepMind – when we started, it was a kind of combination of neuroscience, engineering, and machine learning, a sort of intersection of all of that and now, you look at Isomorphic – it is an intersection of machine learning, chemistry, and biology. I love those areas and I think a lot of the fastest progress till now… I would encourage you to become experts in two or more areas and then find the connections between them but also maybe the analogies between them. There are a lot of interesting analogies when you look at things from a first principles point of view.
Then, the other thing too is that I am just drawn to my favourite people from the past; my heroes are really the polymaths, like you said – Da Vinci or Aristotle – who, I feel, didn't really see the boundaries between, not just the sciences, but art and science and philosophy. I like that approach, and I feel these are all ways of finding out about the world, just using different techniques.
So, in the end, if you are curious about how the universe works, you should be curious about it from all these different viewpoints. And I suppose, for me, building AI as this sort of ultimate tool for science and discovery has kind of given me the excuse to learn about a lot of other subject areas which I love doing because we can apply AI to those areas.
Varun Mayya: Professor, do you think we are making a mistake in science in India by having too many siloed ways of learning? Because I feel like sometimes when I speak to people they say, “Hey! I am a Mechanical Engineer and there’s no way I’d be interested in any other type of engineering!” You feel that’s a mistake?
Prof Govindan Rangarajan: Yeah! I think it's a mistake. I think probably the original mistake was made when we abandoned universities and started… I mean, we are also an example of that… specialised research institutions. You know, we lost that cross talk between different disciplines. We have become so siloed that law is different, management is different, medicine is different – everything. We are trying to remedy that by bringing medicine back here and things like that, but I think it's a serious issue that India faces, and it's going to become a bigger issue with AI coming in, when you really need this intersection of disciplines. So, it's going to be a problem.
Sir Demis Hassabis: I hope more of you become multidisciplinary; maybe I can give a couple of pieces of advice or tips on how to do that. There are a couple of things you need. One reason it's hard to be multidisciplinary is, of course, that one has to be a world-leading expert in at least one domain. This is also why siloing has happened in departments – you have to have that; otherwise, you can't contribute at the frontier of discovery. But then, what I at least have done, and I think everyone can do, is develop techniques to quickly learn other subject areas, maybe to a grad level.
How do you transfer your own learning? Of course, it’s what we are trying to do with AI systems. You can also do that with your own mind. Find those connection points, understand it from what you know from first principles so that you can quickly apply it very fast to a new area or new domain at least to a sufficient level of understanding so you can combine it with your expert area.
I think the other reason I've seen, in university systems, that people don't do this more is that it takes a little bit of humility – or maybe confidence and humility together – to become a beginner in some other area when you are already an expert in another, let's say machine learning, and to say, "Oh! I don't know that much about biology, and I'm going to be willing to learn from the experts, start again, and put the effort in to do that learning!" I would encourage everyone to do that; it's really worthwhile. But I think that sometimes, the academic system doesn't reward that side of doing things.
Varun Mayya: Fantastic! I have a question on general intelligence. I am sure everyone has this question, right? For a very long time, even before the entire AI wave – I grew up reading Asimov – I said, "One day, we're going to have AGI." But as I got older and I saw Gemini and a bunch of models come out, I said, "This feels like AGI!" But then, the goalposts moved; I said, "No, no, no! It has to do this!" So, I made a joke out of this, and my Twitter username is 'waiting for AGI' because it's the kind of internal joke I have with myself – nobody else gets it. But a question to you, Demis: what's the capability where you'd see it and go, "That's AGI!"?
Sir Demis Hassabis: My definition of AGI has never changed. I've always defined it, since I started working on this 20 to 30 years ago, as a system that can exhibit all the cognitive capabilities that humans can. Now, why is that important? First of all, because the brain is the only existence proof that we have, that we know of, maybe in the Universe, of a general intelligence. That's also partly why I studied neuroscience – because I wanted to understand the only data point we have that this is possible and understand it better. And so, that's the definition I use; it's quite a high bar because it means that if you wanted to test a system against it, the system would have to be capable of all the things humans can do with this brain architecture, which is incredibly flexible.
It's clear that today's systems, although they are impressive and improving, don't do a lot of those things. True creativity, continual learning, long-term planning – they are not good at those things. Another thing that is missing is general consistency across the board in capabilities. Of course, in some circumstances, they can get gold medals on International Maths Olympiad questions, but they can still fall over on relatively simple maths problems if you pose them in a certain way. That shouldn't happen with true general intelligence; it shouldn't be a jagged intelligence like that.
So, there are still quite a lot of things missing. I think the test I would be looking for is maybe training an AI system with a knowledge cutoff of, say, 1911 and then seeing if it could come up with general relativity like Einstein did in 1915. That, I think, would be a true test of whether we have a full AGI system, and I think we are still a few years away from that. I think it's going to be possible eventually, but it's clear today's systems couldn't do that.
Varun Mayya: Professor, do you think about AGI?
Prof Govindan Rangarajan: Well, I am just a consumer of AI right now, not an expert. So, I’ll defer to Demis on this question. In the way it’s going, probably, it will happen sometime. But I think it is enough of a useful tool right now that everybody should use it. We need not worry about AGI. That’s what I tell all the students and faculty – that it’s a good enough tool to use right now and accelerate your research.

Varun Mayya: Interesting! Demis, how do you balance both? How do you balance the commercial pressure of "This is Google, we have to make money" with "This is DeepMind, we have to do research"? How do you balance the two when there are short-term pressures and long-term pressures? I'd just like to know how you think about this.
Sir Demis Hassabis: There are these competing pressures. The answer is we just do both to the maximum. That's one advantage we have with our size; we can explore both to the limit. We have a large research team; I think we have the broadest and deepest research bench of any organisation in the world. But we are also like the engine room of Google at DeepMind these days. We have to support that too. That's what, in the end, brings in the revenue and the funds to do more research. We have to get that balance right, and roughly half my team works on those kinds of immediate priorities and support for them, and that's very exciting too because building foundation models like Gemini is on the shortest path to AGI, in my opinion. But then we have half the team working on the next frontier, and it is my job as leader of the organisation to protect the blue-sky research and make sure it has room to flourish and deliver, maybe on an 18-month or two-year timescale or more, and to make sure that we are not overly focused on the near term.
The short answer is we need both, and I think we've got that balance pretty right over the last decade – producing new innovations but also plugging them into the latest products so billions of people around the world can benefit from them.
Varun Mayya: Professor, do you think about the same problems when it comes to funding for science in India? How do you balance between what you want to do versus what there are grants and funding for? Is that a big challenge?
Prof Govindan Rangarajan: It is always a bit of a challenge because, as you know, India does not have infinite resources. So, it has to prioritise areas which are priorities for the country. So, there are these national missions – the AI mission, the semiconductor mission, the quantum mission, and things like that – where there are deliverables that you are supposed to deliver on. That, of course, conflicts with the general attitude academia has where, as Demis said, you just do blue-sky research. But I think, on the whole, we have been able to balance the two. There are enough avenues for getting funding for basic research, and even with applied research, I think very interesting open research problems can come out of pursuing it. So, I think one can balance the two. Funding in general needs to increase in India because we are at 0.7 percent of GDP; I think we should go up to at least two percent. That would be much more comfortable. And given our aspirations, I think that would be warranted too.
Varun Mayya: I have a worry, and this is a personal worry, which is that 200+ billion dollars of India's exports is IT services. I read this post recently which said that it's just XYZ amount of tokens. It's a worry because we have software engineers who are good, but not great – we do have some great ones, but some of them end up going abroad. The ones that stay behind are now competing against models that are getting better at software. Do you have advice for people in software right now who are working on these projects and seeing AI rapidly improve?
Sir Demis Hassabis: Look, I think a lot of areas are going to get disrupted and changed. With change, there are challenges but also opportunities. So, what I would recommend to every engineer today, wherever they are, is to lean into these AI tools and get incredibly good at using them. I think there is a lot of untapped potential there for the youth of today wherever they are, and what one engineer can do will probably be 10x what it is today. I think new startups are going to happen that couldn't be done before, and so, in some ways, that equalises the playing field because everyone around the world has access to pretty much the same tools. So, it's about everybody figuring out how to best integrate them into their workflows, and then we'll see what different new industries or new services come out of that. But I think there will be some new higher-level versions of the things that we do today.
Varun Mayya: So, you're saying that there might be a higher-level version of software where you just prompt the thing? But you know, for a lot of engineers in India, it doesn't feel like the craft anymore because you just feel like you are writing in English.
Sir Demis Hassabis: Well, maybe there'll be a different sort of craft. But first of all, we are not there yet. Secondly, when I was starting off in games, we used to write in assembly language, but by the time I was writing Theme Park, we had moved to C and C++, and of course now we have Python and all these even higher-level languages. So, one could view this as a continual abstraction that is happening. I think that actually broadens access to creativity, letting more people try out and build their ideas. So, maybe it's a slightly different skill set that's needed, but going back to this question of taste, I think that's increasingly going to become the valuable differentiator.
Varun Mayya: I have one last question to both of you, which is, what is something from your field or what you do every day that is very exciting to you right now but the world hasn’t heard of yet and that you can potentially reveal without violating NDAs or whatever, that we can look forward to a few years from now, from both your fields?
Prof Govindan Rangarajan: From the limited knowledge that I have, I think what is going to surprise people is the progress in Math. If you look at the general public, Math is always thought of as something inaccessible and populated by geniuses, which of course it is not. I think when people see that AI is making this tremendous progress in Math, they are going to be surprised that such a difficult subject – what's thought to be a difficult field – can be cracked open by AI. Because Math is based on axioms and definitions, and your predictions can be proved either right or wrong, it may be much more accessible to AI than other fields.
Sir Demis Hassabis: I think for me, what I am looking forward to is AI in the physical world. I think robotics is going to come of age in the next two to three years; there are a lot of things still to be solved, in my opinion, but I think we're getting to the point where there will be some breakout moment. I am also thinking of AI understanding the physical world; we tried very hard to do that with Gemini as a multimodal model – probably the best in the world at that – so that you could have an assistant, maybe on your glasses or on your phone, that comes with you and understands the world and the context around you. Obviously, we are seeing self-driving cars and such things about to become a reality around the world. And then, I am excited about things like automated labs that may speed up scientific discovery, not just in theory but also in the practical realm. I think that's all going to come in the next five years or so.
Varun Mayya: Very cool! I’ve seen a glimpse of that in the movie Transcendence, where you just give AI a couple of hands and then it keeps improving itself.
Thank you so much for your time. I want to open the floor to the audience. I’m sure we have some questions from the audience. We will take two or three and then we’ll run it through the spam filter which is me and then take it to the people on stage.

Audience person 1: Mine is not a very technical question, but you mentioned studying neuroscience as the only data point that you have for general intelligence. We have all this talk about moving to neuromorphic designs and everything, but why do we want to move towards an intelligence that is similar to ours? Not only does it lead to a loss of capital for a large percentage of people, it also leads to a loss of identity, I feel. And there are lots of things that we are bad at which AI is already doing – when you have a needle-in-a-haystack kind of situation, you can ask it to do literature reviews or pool large amounts of data together to find something specific. Why not make an alternate intelligence that works in synchrony with ours instead of trying to replace human intelligence?
Sir Demis Hassabis: It's not about replacing human intelligence. As I explained earlier, the thing about the human brain is that it's the only thing that… you can think of it as a Turing machine… an approximate Turing machine, if you want to think about it mathematically. We have to understand what true generality is. Turing showed that with the Turing machine, and as for our brains – I think most people would accept that the brain is some kind of approximate Turing machine. So, if you are interested in general intelligence that can be applied across the board, then it has to have roughly that set of capabilities. At least, that's the only set that we know of. Other animals are not general enough; for example, they don't have big enough prefrontal cortices and so on. So, it's not really about replacing humans; it's about understanding what general intelligence is.
I think the tools… as to why the industry is doing that: it's because we find that these general tools can transfer to the specialised domains. So, it's probably going to be more efficient to develop a general system that can be used in these more specialised domains than to develop hundreds of specialised systems. That's why you are seeing the economic pressure to do that. So, there are two different things. One is the scientific question, which is very valid – what is a general system and how would you answer that question – and the other is a more economic question.

Audience person 2: How do we use AI to deepen first principles thinking without removing the struggle that brings real understanding? Do you have a framework or steps that would make it easy for us?
Sir Demis Hassabis: I think it's down to the individual, you know? It's like the Internet or computers – you can use them in ways that degrade your thinking, but you can also use them in ways… you know, we were talking earlier about becoming a polymath… well, today, with YouTube and all the information on the Internet, it's a dream for someone who wants to learn something very quickly, up to say undergrad level; it's all there – the best lectures in the world, all of that. So, that's one way you can use this technology. Obviously, with AI, if you use it in a lazy way, it will make your thinking worse – critical thinking and so on. But that's down to you as individuals; no one can help you with that. The technology is sitting there, neutral. You need to be smart enough to use these technologies in ways that enhance your thinking rather than make it worse.
Audience person 2: I get the point you are making, but as you said, we have so many resources for learning. So, we might end up just listing down resources rather than actually learning from them. What would be a better mindset for narrowing down the resources and getting into actual learning?
Sir Demis Hassabis: The number one thing that you should do when you’re in school is work out how you learn – learning to learn. And I’m surprised that that is not done more in schools and so on. Figure out how you learn best. There isn’t going to be one answer for everyone. You need to think that through – how you work best, what environment, what modes you work best in, and then double down on that.
Audience person 2: Did something work for you, like the art of how to learn?
Sir Demis Hassabis: There’s a little bit, yes. But it’s probably not possible to explain it in one minute, you know? It’s many things; it’s sort of about developing the mind. For me, actually, it was games that I trained my mind on – multiple games that exercised different parts of the thinking process – and getting really capable at that. It’s similar to the way we developed AI in the early days of DeepMind, using games as a proving ground for testing your own ideas.

Audience person 3: I have a question around memory. Amongst all the neurological aspects, I find memory to be very intriguing – the way the hippocampus works and how we try to model episodic memory, semantic memory, long-term, short-term. Sometimes I get glimpses of what happened in my childhood. It’s not about weighted averaging, right? Some glimpses just remain, and, for example, if you hear a keyword, it strikes something for you and probably something else for someone else… How are foundation models trying to handle this problem? How are we planning to handle it? Because that side is very systematic – something you can interpret. But as far as I have seen, memory is a very abstract concept; it’s very difficult to understand how the brain resolves it. What’s your take on it?
Sir Demis Hassabis: I agree with you. It’s one of the most interesting things; that’s why I’ve studied memory as well, and the hippocampus, as you probably know, and imagination – partly because machines in those days were very bad at those things, and to some extent still are. So, we’re kind of, I would say, badly approximating the hippocampus at the moment with the context window. Really, the context window is more like working memory. We only have working memories of seven items plus or minus two, or something like that, but of course a computer can have – like Gemini – a million-token context window. The problem is that’s still not as good as episodic memory, because it’s kind of brute force, right? You’re remembering everything, when in fact most tokens are irrelevant and you want to remember only the important things, which is the way human memory works. We remember the emotional things better than the neutral things – both positive and negative. Maybe that’s one of the functions of emotion. We don’t need to remember everything we’ve seen today; we’ll just remember some of the key moments that might be useful for learning, for future use, or for imagining and simulating new scenarios.
So, I think even in the realm of AI and machines, where we can have millions – maybe tens of millions one day, or billions – of memory units, you still pay a cost for searching that memory, right? Think of our video models, or our Project Astra, which is designed to work on glasses: it can record maybe 20 minutes of video, which is about a million tokens. First of all, that’s not a lot of time; secondly, to then find something in that is quite expensive, because you have to look through everything. So, I think, ironically, one of the things we may be missing is forgetting – or, to put it in computer science language, garbage collection – so that you actually compress what you’re remembering, and maybe consolidate it as well, for the neuroscientists in the audience, so that you’re efficient in the things you are remembering and have to search through.
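[Editor’s note: the value-weighted forgetting Hassabis describes – judging a memory’s usefulness at write time and garbage-collecting the rest – can be illustrated with a toy sketch. All names here are hypothetical, not any DeepMind API.]

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Memory:
    importance: float                      # comparison uses importance only
    content: str = field(compare=False)

class EpisodicStore:
    """A toy importance-weighted memory buffer: keeps only the most
    'important' memories, garbage-collecting the rest at write time."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap: list[Memory] = []      # min-heap: least important on top

    def write(self, content: str, importance: float) -> None:
        # The value judgement happens when the memory is written.
        heapq.heappush(self._heap, Memory(importance, content))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)      # "forget" the least important entry

    def recall(self) -> list[str]:
        # Return surviving memories, most important first.
        return [m.content for m in sorted(self._heap, reverse=True)]

store = EpisodicStore(capacity=3)
for content, score in [
    ("neutral hallway", 0.1),
    ("won the match", 0.9),
    ("routine commute", 0.2),
    ("met an old friend", 0.8),
    ("lost my keys", 0.6),
]:
    store.write(content, score)

print(store.recall())
# → ['won the match', 'met an old friend', 'lost my keys']
```

Only the emotionally salient events survive; the neutral ones are discarded rather than searched through later, which is the efficiency argument made above.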
Varun Mayya: I have a follow-up to that. In high school biology, I read that the amygdala and the hippocampus fire together, right? So, like you said, when you are very emotional, you tend to remember things. Is there an amygdala equivalent for LLMs?
Sir Demis Hassabis: Not at the moment, but maybe there should be. I don’t think you’d want it to be emotional, or amygdala-like in the way ours is. But it could make some kind of value judgement at the point of writing the memory – a calculation of how useful that memory would be for future learning or future behaviour. That would probably be pretty useful, and it’s something we are researching.
Varun Mayya: Amazing! Thank you so much, everybody, for joining, and of course, congratulations, and thank you for being on stage and giving us your time.

