Podcast: The Coming Wave with author Michael Bhaskar and Schroders’ Peter Harrison

“Vital”, “terrifying”, and “unmissable”. The Coming Wave is being hailed as one of the most significant books ever written about AI. Schroders’ CEO, Peter Harrison, met with one of the authors to discuss the transformative power of AI, and the challenges we’ll have trying to contain it.

19/12/2023
FT book of the year

You can listen to the podcast by clicking the play button above. You can watch most recordings of the podcast on the Schroders YouTube channel.

You can also subscribe, download, rate and review the Investor Download via Podbean, Apple Podcasts, Spotify, Google and other podcast players. New shows are available every Thursday from 5pm UK time.

You can read the full transcript below:

David Brett: Welcome to the Investor Download, the podcast about the themes driving markets and the economy, now and in the future. Welcome to a very special edition of the Investor Download, where your host today is Schroders' CEO, Peter Harrison, and the topic is all things AI. Join Peter as he talks to Michael Bhaskar, one of the authors of The Coming Wave, a 2023 bestseller which critics have described as vital, terrifying and unmissable. Michael visited Schroders' London office to record the conversation, which was joined by Nils Rode, Charlotte Wood and Alex Tedder, each of whom, in their different roles at Schroders, is passionate about AI and brings their own angle to the topic. The Coming Wave was one of six books shortlisted for the 2023 Financial Times and Schroders Business Book of the Year. Its other author, AI entrepreneur Mustafa Suleyman, is co-founder of DeepMind and Inflection AI. In the book, Suleyman and Bhaskar warn about the dangers of innovations such as generative AI, synthetic biology and quantum computing, and assess how, and if, these dangers could be contained. Enjoy the show.

Peter Harrison: I'm delighted to be joined today by Michael Bhaskar, co-author of this fabulous book, The Coming Wave. It couldn't be timelier in terms of understanding both AI and synthetic biology. I'm also joined by several colleagues: Nils Rode, CIO of our Schroders Capital business, Charlotte Wood, head of our innovation group, and Alex Tedder, head of global and thematic equities. So I'm looking forward to a fantastic conversation. And perhaps, Michael, if I could start the questioning: ChatGPT, I think, was the fastest-adopted consumer product in history. When you started playing with GPT-4, were you surprised by just how good it was?

Michael Bhaskar: Yes, I was. It's easy to forget, not only with GPT-4 but with GPT-3.5, GPT-3, all of the large language models that are out there, that people set benchmarks and tests for them that are really hard. Before GPT-4 came out, people were saying things like: oh, you could never get a large language model to exhibit spatial reasoning, or whatever it is, or to work out some sort of complex reasoning task. And then, bam, GPT-4 comes out and it does it. And that's just one example; there are countless examples of people saying, oh, but this is the thing that GPT-4 won't be able to do, and it does it. In many ways, this is the story of AI over the last decade or so: people set these tough bars for it to clear, it clears them, we're stunned for about five minutes, and then we say, well, but it can't do this, and sort of forget that it has done incredible things. So, I was surprised by GPT-4. If I push myself and play with it and other systems, it still stuns me. But it's also just interesting how quickly we adapt, how quickly we get used to something that two years ago was a technological impossibility. There's probably a bit of a lesson there.

Peter Harrison: And do you want to spend a moment speculating on what it means for workforces and for society? Because that's the obvious first question. How disruptive is this going to be, and over what time frame?

Michael Bhaskar: To me, there's no doubt it is enormously disruptive.

What we're doing is taking one of the core attributes of humanity, the basic thing of our species, intelligence, and commoditising it. We're making it almost infinite in plenitude. Everyone is going to have an extraordinary amount of capability to do things, to know things, to achieve things, a huge amount of support. And that's just one of the biggest transitions ever. I think the way to think about this is: as the CEO of a large company, you've got access to a lot of things that most people don't have, a legal team, an HR team, advisors, all of which enables you to do a job that has a huge global scope. But right now, only a handful of CEOs and world leaders have access to this. Now imagine everyone has access to something like that. That is a very different world to the one we live in today. And the reason it's disruptive is that we have no idea what that means for the economy, for security, for creativity. Almost any area you think about can be supercharged by the ability for people to have a kind of team on their side that is helping, but then also by something that we think is coming very quickly, artificial capable intelligence, where it can do things for you, just as teams in companies can achieve goals for the company. What if everyone has that, and their AI agent can go and do things? I think that's broadly an incredibly exciting thing. We live at a time where economic growth is hard to come by, where health outcomes aren't really improving and it costs billions of dollars to develop a new drug. There are all kinds of challenges that we know our societies are facing, and I think everyone can broadly sense that we need something to break us out of this local equilibrium that we've hit. This is probably it. But at the same time, it means that all kinds of goals stand a better chance of being achieved, and there are people who have bad goals, people whose goals run at cross purposes to, let's say, the mainstream of society. What I think it does mean, though, is disruption. The status quo, one way or another, is probably going to change drastically, on a timescale of anywhere from the next two to five years out to 20 years. It's certainly not more than that. And it's worth saying: there's always a lot of hype around new technologies, and there's a lot of hype around AI right now. Some of this is going to be that. But equally, there's a real danger in dismissing all of this as hype. I think there really is a very significant shift with these tools.

Nils Rode: Michael, what I find remarkable is the pace of progress. ChatGPT was launched a bit more than a year ago, and look at what has changed since then: which additional features have been added, how the performance has increased. And it's not only about text, but videos as well, photos, music, everything. If you extrapolate that development, and that of other tools, what does it mean for education? What should people learn today that will help them ten years from now, with this technology coming?

Michael Bhaskar: Well, education is a really interesting one, because there's a sense in education now, as in almost every vertical, that this is the main thing they need to grapple with. The other day I went to a conference about AI in education, and everyone was saying: how are we going to change the curriculum to address AI? How are we going to build this into the curriculum?

And actually, the point I made is that the whole thing with AI is that the curriculum goes out of the window. It holds out the promise of hyper-customisation of education. Every child, or rather, as it will probably be, every human being, because the idea that education is just a phase in life is almost certainly finishing: we're all going to be educated all the time. Already, probably all of us have used AI just to learn about things, and it's helped us understand something new. So the idea of a curriculum, of just sitting there and learning one set of things, is probably going to go, sooner or later. We're going to have far more tailored education; it's going to be far more about what works for a given individual. I think there'll be some things that we'll agree, as a society, just have to happen. And we'll still definitely need schools, because what's actually clear, and again this is something we see with AI, is that it shows us what's valuable. What you learn about at school maybe isn't necessarily the Romans or photosynthesis; what you actually learn is how to sit down and be quiet for ten minutes, how to negotiate with your friends in the playground, how to become a human being in the broader sense. All of that aspect of education, the fact that you're just there in a peer group, is surely going to become even more important. But what's definitely going to change is that everything you need to learn is probably going to be mediated via an AI that is hyper-tailored to you, that essentially enables you to become the best version of yourself, and that has a very powerful ability to draw that out. So I'm actually really excited about education. I think ultimately this is going to be one of the most fruitful and interesting areas. It's going to be education right from primary school up until the very end of life. People are going to be learning away, and they're going to be helped on this journey by AI.

Charlotte Wood: I think that's really interesting, because education is one of the industries, and I guess schools are some of the organisations, that need to think about what they bring to the table in a world of AI where expertise is getting increasingly democratised. What is the role of a financial advisor, say, or a lawyer, in a world where people can potentially access that sort of information much more easily themselves? It flips the value proposition of a lot of industries on its head, and that's what we need to be thinking about as well going forward.

Michael Bhaskar: Well, yeah. This is where it comes back to this idea of artificial capable intelligence. Just to talk about that: it's something that Mustafa and I discussed a lot when writing the book, and it's an important idea. At the bottom of this whole revolution that we're in is machine learning, which is the form of AI that is dominant. And above that, you've got the basic artificial intelligence systems that we're seeing today. By basic, I mean GPT-4 and that sort of level: they're frontier models, but there's still a huge limit to what they can do. Then, typically, people have gone from there right up to AGI, artificial general intelligence, and beyond that to superintelligence. AGI is when you have an AI that can do the full range of tasks that a human can do, to the same standard that a human would do them. Superintelligence is far beyond that, and a lot of people believe that once you get to AGI, it can build an AI system better than a human can, which can build a better one in turn, and so on, and you have an intelligence explosion up to a superintelligence. There's a lot of discussion of that, and there's a lot of discussion of AI as it is now. But what's the bridge? The bridge is artificial capable intelligence. That's where we have these agents that are going to do things. And I would genuinely think that two of the first uses will be as lawyers and as financial advisors, because, as we all know, these things cost money, and it's just the kind of thing that an artificial intelligence would be able to do very effectively. There's no reason why you can't have an AI system that has ingested every single law and precedent in the world and has a profound and immediate understanding, in a way that even the most effective law firm could never hope to match. Of course, the difficulty is that lawyers and financial advisors are all pretty good at maintaining their position in society. You might be able to have a system that gives you very good legal advice; whether that will be something that you'd ever trust is unknown. But I think you must start from the assumption that for professional services like that, the first port of call will always be an AI, and that much of what professional organisations do will be driven by AI, but that there's still going to be a role for the human, just as schools are not going to go away. I don't think law firms are going to go away in the near term, nor any kind of financial institution. There's just going to be a huge change in what a lot of people are doing on a day-to-day basis. Longer term, however, once things have worked through, once we've got past, let's say, the regulatory turmoil and reached the full potential of the technology, I just don't know what the future looks like. Everyone always feels that either everything is going to be completely different or the status quo can hold on.
What I actually think is that there'll be change after change after change: things will look a bit the same, a bit different, but at some point the sheer power of the technology just means that the models that exist today, and the sorts of jobs that we have today at the high professional end, are going to totally transform.

Michael Bhaskar: I mean, I'd be interested to hear what you see yourselves doing in 20 years' time. I would genuinely be interested in how you envisage that.

Alex Tedder: Well, I was going to ask you, because I thought at the Bletchley Park summit it was Elon Musk who stole the show at the end, when he said that a world without work is, in his view, a reality. I don't know if you agree with that, but based on what you've said so far, it does suggest that you believe most tasks can ultimately be eliminated by artificial intelligence. Is that right?

Michael Bhaskar: Well, yes, a lot of tasks. And this is where it gets interesting, because tasks are not jobs or roles. A role transcends the tasks that make it up, and that's a distinction that has given a lot of comfort to people, because tasks are far more automatable than roles; roles take the sum of a human to really work. But I think it's really just wishful thinking to believe there's something ineffable about a role that you cannot ultimately recreate. Essentially, tasks are tasks in an organisation, and the number of roles that really consist of something slightly nebulous but nonetheless essential is always going to be quite small: maybe 10 percent, maybe 5 percent, maybe 1 percent of an organisation's staff have these kinds of roles that can never be replaced. Even if it's 10 percent, that's still a shocking social situation, where you might see 90 percent of roles in a lot of organisations slowly disappearing over, let's call it, a 20-year time frame. That's not enough time for our societal system to adapt. So it is just an immense challenge.

Alex Tedder: You think it could be that quick?

Michael Bhaskar: I think 20 years, certainly. I mean, the idea that in 20 years' time we're not going to have systems that can do most of what human beings can do: I would find that strange.

Peter Harrison: Given what's happened in the last year, 20 years seems like an awfully long time. A Sam Altman quote from a podcast he did recently was "a long and beautiful exponential curve of growing prosperity", which is probably quite an optimistic take on the world we've got ahead of us, because I think your point on social dislocation is quite an important part of that.

Michael Bhaskar: But equally, it's important to say that AI could lead to a phenomenal increase in economic growth, even if, from here, global growth slows. This is something that we researched in the book. If we had something like a third of the amount of economic growth over the next 50 years as in the last 50 years, the world is still going to be something like a 300 trillion-dollar economy in 50 years' time. So even with drastically slowed economic growth, we're going to be a far richer planet than we are now, and that creates huge opportunities. We're not talking about going to ten percent growth, as some people are, which would see us reach an enormous economy very quickly; just a relatively modest productivity increase, which certainly seems plausible, means AI could still be delivering trillions and trillions of dollars extra into the global economy every year. That money would presumably be going somewhere, and so it's a question for governments how it is distributed and where it sits.
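As a rough sanity check on that figure (our illustration, not the book's workings): take world GDP as roughly $100 trillion today and compound a modest real growth rate of about 2.2% a year over 50 years. Both inputs are assumptions for the sketch.

```python
# Back-of-envelope check of the "$300 trillion economy in 50 years" claim.
# Both inputs are illustrative assumptions: world GDP of ~$100tn today,
# and modest real growth of ~2.2% a year sustained for 50 years.
gdp_today_tn = 100      # world GDP, trillions of dollars (assumed)
growth_rate = 0.022     # annual real growth rate (assumed)
years = 50

gdp_future_tn = gdp_today_tn * (1 + growth_rate) ** years
print(f"World GDP in {years} years: ~${gdp_future_tn:.0f}tn")
# Prints: World GDP in 50 years: ~$297tn, roughly the $300tn mentioned above
```

Small changes to the assumed rate move the endpoint a lot, which is the nature of compounding; the point is simply that a fairly modest growth rate is enough to roughly triple the size of the world economy over 50 years.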

Peter Harrison: And that, I think, is the fascinating question, because can the demands of a capitalist system cope with this change, given where that wealth will be concentrated? Clearly it will potentially accrue to a relatively small number of people: the owners, architects and deployers of these AIs. I don't know how far you get with that. And I think we should bring in the book's great work on synthetic biology here, because the two are colliding, and we would add in things like blockchain and tokenisation, which are also happening at the same time. The danger is this gets very concentrated very quickly.

Michael Bhaskar: It's perhaps worth going back and thinking about this idea in the book of a coming wave of technology. AI is the centrepiece of this wave, but it is far broader. I've always thought that just talking about AI, as is commonly the case in the world at the minute, with everyone focusing on it, is missing a huge amount of what's going on. The advances that are happening in biology and biotech right now are staggering as well, often driven by machine learning. The complexity of biology as a field is just so mind-boggling: it's far beyond a human brain to conceptualise something like the immune system, it's just so intricate, but it's perfect for machine learning and AI-driven approaches. And so we're getting this incredibly fine-grained understanding of biology, but now also the ability to use that to program and build biology. It's an extraordinary thought that life itself is becoming a kind of malleable platform for us to build on and to change. And the implications are staggering: for what it means as human beings that we could now start to edit our genomes and create new kinds of human, or plug ourselves directly into computers; for healthcare and everything around it; and for the material world generally, because we've got a new source of stuff and a new way of interacting with the world through biotech. The other one that I would really highlight is robotics. A lot of people in AI have long believed that robotics is going to be another critical element in this, because if you think about human beings, our intelligence is embodied intelligence. We're not brains in vats; our intelligence isn't just driven by a sort of processing engine, it's driven by emotions, by us as beings in the world and our relationship with the physical world. And a lot of people have always thought that a missing link for AI will be when it has something similar. It won't be a direct analogy, but there'll be some relationship with this kind of physical instantiation. What we're seeing now is that machine learning is solving some of the hard problems in robotics, just as it's solving hard problems in biology. For decades, a robot might be able to help build a car, but ask it to pick up an egg off the floor, which a one-year-old could do, and it would just crush the egg: an incredibly hard problem of feedback and dexterity. It's machine learning-driven approaches that are managing to solve this. And so now robotics is falling in price very fast, not as fast as a Moore's law exponential process, but still fast enough to be significant, even as its capabilities are starting to approach the 1950s dream of humanoid helpers. I think it was Vinod Khosla, the investor, who said the other day that he thinks in 20 to 30 years there'll be one billion humanoid robots in the world. It almost sounds like science fiction to talk about that, doesn't it? But imagine you were somebody out in the world in, I don't know, 1890, and there are a couple of motor cars in Germany, these very funny things with three wheels.
And someone said to you: in 50 years' time, there are going to be a billion of these. You might have thought that was crazy. But of course, the world we live in is that world, even if it took slightly more than 50 years. And the capability of the economy is so much greater now. It's not absurd to me that there'll be a billion humanoid robots.

Charlotte Wood: Why would they be humanoid, rather than shapes specialised for whatever they're doing?

Michael Bhaskar: That's a very good question. And the answer is because the entire world we've built, and the economy we've built, is completely adapted to the humanoid form. For a robot that goes into a warehouse, you can create specialised machines, because you can create a specialised environment, and then it works; there you will have non-humanoid robots. But for so many tasks and things, it makes sense to have a humanoid robot, simply because that's the world we've built. And that's why people think the humanoid breakthrough is so important. There are tons of robots out there already, but they look like arms, or mobile platforms, or whatever; the point is for robots to become a general-purpose help. And talk about disruption: if there are a billion humanoid robots, then immediately the whole notion of war, of peace, of an economically productive factory, everything in this world, has changed from the one that we knew.

Nils Rode: In your book you write about democratisation, about these new technologies leading to the empowerment of the individual and how that changes the power balance, but also about elements of centralisation. So it's a bit of both. How will the world change if all this technology becomes available to basically everyone?

Michael Bhaskar: This is one of the big themes of the book, and it's essentially a contradiction; we always wanted to be open about this. The future looks contradictory because you have these different trends working at the same time, and the world contains lots of different narratives that all slot together. So, on the one hand, this coming wave of technology could lead to, in fact will lead to, immense centralisation. We're already seeing that the most powerful AI models exist in a handful of organisations, and the actual ability to build a new frontier model is not that widely distributed at all. There are maybe five to ten organisations that can produce a world-leading AI model from scratch, and that's it; as the capabilities get more advanced, that number may go down. But what we're also seeing is that within months, sometimes weeks, other people are recreating those models and making them open source, and then everyone effectively has them. I don't think we've ever been in a technological environment like that. We've had that kind of extraordinary concentration of the frontier in nation states, let's say with nuclear technology. But what we didn't then have was all of it being open sourced and spread around the world within months.

Peter Harrison: And that brings us to an important question on containment and the risks, because we're going to have many large language models which are potentially not properly tested, or not properly contained. One of the parts of the book which I thought was important, but clearly quite challenging to write, was on what forms of containment are possible: how do we get alignment, how do we manage this thing going forward? It seems like a step change. The book talks about how hard it was to contain nuclear weapons and how many near misses we've had along the way. And that's the best we've done; there are many examples of us doing a lot worse.

Michael Bhaskar: Actually, until January of this year, the book's title was Containment Is Not Possible. We were trying to say that containing all of this technology is just such a challenge. For one thing, in history the number of technologies that have ever been contained, as you say, is minimal. The dominant pattern of human existence has been technologies diffusing around the world uncontrollably, and even where they haven't, the nuclear case being the most obvious, we've not really done a great job. We're all still here, so that's good, but actually there's a lot to be quite frightened about. And there's no way AI is ever going to be contained as nuclear was, because it's just a very different thing: it's ultimately software, and it can travel around the world instantaneously. The incentives behind building it are so huge as well. If it's not going to be the US, it's going to be the Chinese; and if they back off, because it frightens the Chinese as much as it does anyone, somebody else is going to come in. There are just too many things going on. So this idea of containing it is extremely hard. It's a challenge truly without precedent, in that we have this almighty prize that is probably within reach, but it has all kinds of unpredictable effects that could extend to the catastrophic, and we have to find ways of keeping it boxed in. Saying containment is not possible is a provocation; it's trying to highlight how difficult this is going to be. But it also must be possible: for us to have any kind of optimism as a species, we must make it work. We have done things in the past that should give us some cause for optimism here, whether it's the Montreal Protocol or the fact that there are a limited number of nuclear weapons.

Peter Harrison: We haven't even got a common code of human ethics as a starting point, nor do we agree on seemingly simple things like sustainability. So the idea that suddenly we're going to come up with a prepacked set of guardrails feels a long way off.

Michael Bhaskar: That we're not going to get. But what will hopefully happen is that we'll keep nudging progress forward on a number of axes. People will be making technical improvements, people will be creating new audit mechanisms, and regulation will keep moving forward in an intelligent way. Even a few weeks ago, the US and China opened a track to negotiate just on AI, which is a very positive sign. But a single declaration that these are human values, this is what we need to protect, this is what we need to put into the system to align it: that's just not going to happen. And we're also not going to have a situation where we say: right, globally, we're going to pause AI development, and that's it for synthetic biology. That's not going to happen either. The fact is, containment is like a puzzle with all these different pieces, and hopefully there are going to be enough people pushing forward different elements as we go that it adds up to something meaningful.

Peter Harrison: Are you an optimist or a pessimist?

Michael Bhaskar: I truly vacillate. I don't speak for my co-author, but I think we'd both vacillate between being very positive and being somewhat less so. On balance, I would say I'm more optimistic than not, but with serious caveats around that. To go back to an earlier point, the challenges our society faces are grave, and if we didn't have this wave coming, I would worry about where we're going to end up. Without it, we have problems.

Peter Harrison: And for my colleagues, it's a straw poll: are you optimistic or pessimistic?

Nils Rode: Michael, you're both optimistic and pessimistic in the book, and I see it changing from one chapter to the next. How do you see the opportunity to contain the technology with itself, that is, to use AI and synthetic biology to contain themselves? You give some examples: a virus attack can become much more intelligent and morph, but so can the defences; or there are deepfakes, but you can have digital fingerprints or zero-knowledge proofs to check whether something is authentic. So, can technology be one part of the solution?

Michael Bhaskar: It's critical, and in two ways. One is that the more advanced we get, the more you can create defensive technologies: for every new offensive capability that someone might build, you can create something that stops it. One big worry about AI, for example, is that it could help create and manufacture new kinds of viruses; but then you might be able to find ways of spotting that and eliminating it, creating vaccines incredibly quickly, and so on. So you can create defensive measures. But you can also start building in safeguards. One thing people are looking at now, for example, is provably safe AI, where the safety features are not just bolted on, but mathematically encoded into the system from the start.
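To make the "digital fingerprint" example Nils raises concrete, here is a minimal sketch of the idea, assuming (our simplification, not the book's) that a publisher releases a cryptographic hash of its content over a trusted channel; real provenance schemes additionally sign that hash with the publisher's private key.

```python
# Toy illustration of content fingerprinting for authenticity checks.
# A publisher records a SHA-256 hash of the media bytes at release time;
# anyone can later verify that the copy they received still matches it.
import hashlib
import hmac

def fingerprint(content: bytes) -> str:
    """Return the SHA-256 fingerprint of the content as a hex string."""
    return hashlib.sha256(content).hexdigest()

def is_authentic(content: bytes, published_fp: str) -> bool:
    """Check received bytes against the fingerprint the publisher released."""
    # compare_digest performs a constant-time comparison of the two digests.
    return hmac.compare_digest(fingerprint(content), published_fp)

original = b"podcast audio bytes..."
published_fp = fingerprint(original)  # distributed alongside the content

assert is_authentic(original, published_fp)                       # untouched copy passes
assert not is_authentic(b"deep-faked audio bytes", published_fp)  # any edit fails
```

Any single-bit change to the content produces a completely different hash, which is what makes the fingerprint useful as a cheap, scalable defence against tampering and deepfakes.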

Alex Tedder: Sorry, can I ask you something, going back to your point about ethics, Peter, which I think is a super interesting question? Could you, in theory, code ethics into AI? For example, "thou shalt not kill", if you put it that way. Could you code it in such a way that implicit in AI, as it evolves, is the assumption that you should not kill humans?

Michael Bhaskar: Well, that's still essentially the Isaac Asimov point, which he came up with in the 1950s or earlier: the three laws of robotics, which are exactly this sort of thing. What his stories show is how quickly, when you start encoding those kinds of things in, you run into problems and contradictions. So it's quite dangerous to put in these explicit instructions. You could do that, but there are cleverer ways of doing it. People like Stuart Russell, the famous Berkeley computer science professor, are looking at ways for systems to learn on their own what is right. Then, just as a system learns what token to output, it learns not to do certain things, and in that sense it is hardwired into the system. But yes, this is the great challenge of AI safety.

Peter Harrison: You don't want all the alignment to be done by Ivy League preppy boys who all grew up the same way. The world could become a lot more nuanced because we'll all have our own AI, but you want it to be nuanced for everybody, not just for an elite few.

Michael Bhaskar: But you still must find your baseline. One thing that we discussed but didn't really go into in the book is that if you're a terrorist, you don't wake up thinking: I'm a bad guy, I'm a terrorist. You wake up thinking: I'm fighting on the side of right, and what I'm doing is ethical. Everyone believes that what they're doing is ethical. If people can code their own ethics into AI, that is a problem.

Peter Harrison: This is the problem with alignment.

Michael Bhaskar: Whose ethics are right? This is the challenge of alignment, and it's a human problem as much as a technical problem.

Peter Harrison: And it brings down the cost of war massively.

Michael Bhaskar: Oh, exactly. The costs, the risks, the triggers. What I think is the greatest risk on the war side is that the costs go down, but so does the potential for misunderstanding. If you have lethal autonomous weapons systems with autonomous strike capability, you will never quite be able to unpick what the trigger for an event was. Imagine entire national arsenals that are effectively controlled by AIs. We don't know whether, on the one hand, that is going to eliminate war, because they just decide who's going to win; there are various science fiction novels premised on that, where they just run all the calculations and say, well, the Americans have got it. Or does it create opaque decisions that are never fully transparent to anyone, where suddenly the missiles are out of the silos? We don't know. It's so dangerous.

Peter Harrison: Nils, you invest in a lot of these businesses; you know the sector well from an investment perspective. Just talk to me about how you see the investment opportunity evolving, and I want to conclude with whether you're an optimist or a pessimist.

Nils Rode: I tend to be an optimist, and one analogy I would like to use, which you also refer to in your book, is the time when nearly everybody was working in agriculture, and then automation came. What happened, of course, is that people did other things, and some things that were not valued at the time suddenly were: suddenly you could be a scientist, which was not possible before. So the optimist in me wants to think that, yes, there will be huge change, and it will be faster than ever, because those earlier changes took centuries or decades, and here we are talking about the next 10 or 20 years. The speed will be the biggest challenge, in my view. But the optimist in me wants to say that, yes, there will be huge changes in tasks and roles and how they are done, but some things will be valued up, like a kindergarten teacher or a nurse, which really should be valued much more today. I see the opportunity that, with these technological changes, those types of roles that already exist will be valued up, and that other roles, where people do things that aren't even a job today and aren't paid for, may become jobs that people are paid for. That's what the optimist in me wants to see. What do you think about that?

Michael Bhaskar: Yeah, I broadly agree. It's just that there are going to be lots of changes coming at pace, and any society struggles to adapt to huge changes. So it's going to be the speed of it. And I completely agree that roles that are currently not valued highly enough could become more valued, and that new roles can be created. But getting the equation of it all to work out as a net positive is going to require immensely skilful governance. We're going to need incredibly competent governments everywhere around the world, acting in an extremely enlightened, informed way. That's a big ask.

Charlotte Wood: Do you think the right people are aware of the level of disruption and risk, and also opportunity, in this? I think it's great that AI safety comes up more and more, and clearly there's more and more research going into it, although not at the level it probably needs to be. But do you think in, for example, political and business circles, there's that appreciation yet?

Michael Bhaskar: We were watching all of this as we were writing the book, watching the story gather pace, and things were moving very fast on the tech side. One of the things that has surprised me the most since we started writing the book, in a very positive way, is how AI has rocketed up the agenda. If you'd asked me two years ago whether President Biden would be signing executive orders on AI, I wouldn't have believed it. I wouldn't have thought it was possible that we'd see such a policy response: that suddenly we'd have governments and tech companies from all around the world at one summit, and that this would really be on the agenda of world leaders. It's something every government is actually talking about. You've got the EU really pushing things through; then suddenly you've got the French and German governments maybe putting the brakes on, but that just shows how engaged in AI they now are. So I think one of the most extraordinary stories is not just that the technology has moved fast; it's that the government policy and business response from everyone has been far bigger than I would have thought. That is a cause for optimism.

Peter Harrison: Alex, pessimist or optimist?

Alex Tedder: Oh, optimistic, Peter. For public markets, and for public policy, one must be, but I definitely am. It's so exciting. I loved the book because it is really balanced. You talk about pessimism aversion in the book; that was a concept that I really do think is important. My inclination is obviously to focus on the positives, the innovation, the dynamism, but we need to think about containment a lot. Still, you've got to come away thinking that we will find a solution to this. I had a huge argument with my family at the weekend about climate change, because they said: wouldn't AI ultimately conclude that the worst thing for the planet is human beings, and therefore eliminate human beings? My view was that it will never go that far, because we'll find a way to resolve the climate crisis, the system will recognise that, and the circularity will be positive. It was an interesting discussion, and obviously I lost it; the family stuck together. There are puts and takes, but you've got to be optimistic about the opportunity set, I think, ultimately.

Michael Bhaskar: If we built an AI system that looks at the climate crisis from that perspective, then that is a huge failure of design. If we had a kind of runaway AI, would it really care about that? It's probably off doing other things; at that point, one would hope it has gone off to conduct some vast cosmic experiments, and humanity is a detail. It's far more likely, I think, that in climate terms AI is a huge positive, because it's clear that trying to adapt our economy and lifestyle at the speed that is desirable, or even necessary, is such a tall order unless we have new techniques, ideas and means of implementing things coming in. It's just such an almighty challenge. We need fusion power, for example, but it's this incredibly intricate technological puzzle. So if we can start having tools that help deliver that, bam, we might have solved the energy problem, and we need to accelerate that. In almost every way, AI makes me feel more positive about how we're going to fare environmentally. Even if a lot of it is still nebulous, and it's not quite clear what it will do, I just have this sense that we need things that are going to move us beyond where we are now. The fact is we can't really envisage all the solutions to these questions, and that's where AI will help.

Nils Rode: Michael, can I ask you a question? The coming wave that you describe, would that be the last wave of technological progress ever? Because, if we think about the next 20 or 30 years, there could be this point of ever-increasing artificial intelligence, superintelligence, the singularity, where progress and innovation become instantaneous and everything happens in one day. Everything will be invented in one day. So is this the last wave?

Michael Bhaskar: Very good question. And the truth is we just don't know. It's always interesting to me that the term people use for superintelligence is the singularity, which brings to mind a black hole, and the whole point of a black hole is that beyond the event horizon you have no information. I think it's similar with really super-advanced AI: once we get to that point, it's just pure speculation. In the book, we're quite careful not to go to that point; we always stay this side of superintelligence. However, we do slightly posit that the next wave after this one would probably be some kind of nanotechnology, where it's not just biology but all atomic-level matter that becomes a platform for creation. Something to do with that level of manipulation of physical matter would be the wave beyond the wave. But it's such a spectacular thought.

Nils Rode: That takes more time, though. It can't be done in a day.

Michael Bhaskar: You kind of need this wave to get to a place where that is even conceivable. Again, it's so speculative that we don't know. My hunch would be that, if this wave is arriving as we think is extremely probable, then the world is essentially changed, and rules that have seemed like almost innate rules of history are probably going out of the window. And so perhaps to think in terms of these technological waves that have defined human existence for thousands of years, since the very beginning in fact, is no longer appropriate.

Peter Harrison: Charlotte, optimist or pessimist?

Charlotte Wood: I think on an existential level I'm an optimist. I think we'll find a way to make it work, because I think we must, and I genuinely believe that we will. On the pessimistic side, there will be casualties along the way. When you think about the things written about in the book, of bad actors using AI to enact negative events at a very big scale, I do think that is likely, and I don't know how we'll avoid it. I hope we can avoid it to the greatest degree possible, but bad things happen in the world today, and AI means they can happen at a much bigger scale.

Peter Harrison: But the social challenges we've got in the world today are going to be, for me, the thing that comes to the fore. Michael, can I conclude with a slightly more parochial question? DeepMind was a UK company, a fantastic one, based just down the road, before it was acquired. The UK is ranked third in AI, but behind the two huge superpowers of the world. Do you think it could have been different? Had DeepMind been able to raise the money in the UK to grow itself properly, could this have been much more of a UK-driven business?

Michael Bhaskar: If I'm commenting on DeepMind, it's really just as an observer. But I think the answer is probably not. I don't think there was a route whereby the UK alone would have hosted one of the world's leading AI labs, just because the talent, the compute, everything it requires, needs enormous amounts of capital, and most of that exists in the US. So I think it's a nice thought, but it's not necessarily a surprise that until recently France, Germany, the UK, even Japan and South Korea, none of these countries has had one of the leading labs. It seems implausible to me that it would have happened. A great shame, because I think it would be brilliant for Britain if we did have a homegrown champion, but DeepMind is still based in London, so I think it's a huge positive for Britain anyway. All the other big AI labs are opening here, and there are still homegrown companies coming through. We should be encouraging that, doing everything we can to make sure that more companies start and that there's scale-up funding available. I don't think we should necessarily expect to have an OpenAI founded here, but I think we should have companies that are competitive with the next layer down and beyond.

Peter Harrison: Thank you for a fantastic conversation, touching on both the timeliness of the book's publication and many of the thoughts inside it. Lots of food for thought. We're so grateful to you for coming in today.

Michael Bhaskar: Thank you very much for having me. Cheers.

David Brett: Well, that was the show. We very much hope you enjoyed it. If you want to find out more, please head to schroders.com/insights. We're endeavouring to record as many of these shows as possible in the studio on video; if you want to watch them in their full, unabridged versions, go to the Schroders YouTube channel. If you want to get in touch with us, it's schroderspodcast@schroders.com. And remember, you can listen, subscribe and review the Investor Download wherever you get your podcasts. New shows drop every Thursday at 5pm UK time. But above all, keep safe and go well. Cheers.

Disclaimer: The value of investments and the income from them may go down as well as up. Investors may not get back the amounts originally invested. Past performance is not a guide to future performance.

Information is not an offer, solicitation, or recommendation of any funds, services, or product or to adopt any investment strategy.
