The Systems Will Change Before the Mindsets Do: Professor Richard Susskind on AI, Law, and the Limits of Tradition

Summary
In Episode 12 of the AAAi Podcast, Bridget McCormack and Zach Abramowitz spoke with Professor Richard Susskind about his new book How to Think About AI: A Guide for the Perplexed. The discussion ranged from judicial transparency to the future of legal education and provided a rare, clear-eyed perspective in a field often clouded by hype and hesitation.
Susskind, a longtime commentator on the intersection of AI and law, emphasized that the greatest challenges to progress are not technical but rather professional, cultural, and institutional.
Key Takeaways:
1. The legal system doesn’t just need AI—it needs to be rebuilt with it.
Most of what’s called innovation in the legal industry is just automation: layering tech onto inefficient, outdated processes. There’s a critical difference between substituting tasks with technology and redesigning systems from the ground up. Susskind is clear: the most impactful gains from AI in the legal sector will not be about productivity but about creating entirely new systems that prevent disputes before they happen.
2. AI is not replacing lawyers—it’s replacing the assumption that lawyers are the only solution.
Susskind resists the “will AI replace judges?” conversation. Instead, he reframes the question: “will governments or markets create AI-based alternatives that offer faster, cheaper, more trusted outcomes than courts?” Many legal professionals still believe their roles are immune to disruption, especially in areas like dispute resolution, where human judgment is considered irreplaceable. However, access to legal services is already out of reach for many, suggesting an urgent need for alternatives.
3. The real issue isn’t AI—it’s legal education.
One of the sharpest critiques in the episode came not about technology, but about law schools. Many institutions continue to prepare students for a version of legal practice that no longer aligns with emerging realities. Future legal professionals will need skills in systems design, risk analysis, and data science—areas that are largely absent from current legal education. Without this shift, law schools risk producing graduates unprepared for a rapidly evolving profession.
Final Thought: The main obstacle to AI in law is the belief that law is exempt from change.
Lawyers are trained to be cautious, which can come at the expense of creativity and adaptation. Meanwhile, most dispute resolution today happens outside the traditional legal system—and often without lawyers. AI presents an opportunity to rebuild public trust in justice, but only if legal professionals are willing to reimagine how justice is delivered.
How to Think About AI: A Guide for the Perplexed is available now on Amazon.
Watch Episode 12
Transcript
Below is the full transcript of Episode 12 of The AAAi Podcast.
Bridget McCormack: Very fortunate today to be joined by Professor Richard Susskind, author of a new book, which we're going to be discussing, but I think a lot of people probably know him.
Zach Abramowitz: From The End of Lawyers?, and also from your work on the future of courts. Professor, we're thrilled to have you with us.
Richard Susskind: It's great to be here. Thank you very much, Bridget. Zach, nice to see you.
Bridget McCormack: Nice to see you as well. I've been a super fan for a long time. Richard, before I took over as the CEO at the AAA, I was the Chief Justice of the Michigan Supreme Court through the pandemic, and I was regularly pushing Richard Susskind material out to judges all over the country.
Richard Susskind: that's a frightening concept.
Bridget McCormack: Well, it’s the right thing to do. We're really excited to talk about your new book, which Zach and I both had on pre-order and finally got access to this weekend. And thank you for writing it. Thank you for being here. I'm going to start with what I hope is an easy question. Who are the perplexed? Your subtitle is A Guide for the Perplexed. Who did you have in mind? Was it the public, policymakers, regulators, lawyers, judges? Who is it?
Richard Susskind: Well, the subtitle actually is taken from the title of a book by the 12th-century rabbi Maimonides. I've always liked his book, The Guide for the Perplexed, and I always thought that was a good title. So, slightly childishly, I thought it'd be a useful subtitle. But the reality is, I think we're all pretty perplexed by AI, and the more one delves into it, I think, in a sense, the more hesitant one becomes about the possibilities and the probabilities.
And so, unlike most of my work, which has generally been for lawyers and for professionals, this is intended for a general audience. I think the public debate just now is very much being driven by technologists and tech entrepreneurs, and I'd hoped to make a contribution that was rather different. I wanted to say: put to one side the detailed technicalities. Let's think about the implications, the social, the ethical, the legal, the economic implications of this technology, and hopefully encourage everyone to start discussing this defining technology of our time, not from the point of view of the technology, but from the point of view of impact.
Zach Abramowitz: Professor, you've obviously been writing.
Richard Susskind: You can call me Richard, only my wife calls me a professor.
Zach Abramowitz: You've been writing about this topic since before legal technology was a thing, really before AI was in the public consciousness. Would you say that your view, especially after interacting with generative AI, has changed since you wrote those earlier books?
Or do you look at this development and say, Yes, this is what I was telling you about all along. This was sort of the moment. How did it resonate with you?
Richard Susskind: The history is that I wrote my PhD at Oxford on AI and law from ‘83 to ‘86, and then developed, or co-developed, what I think is the world's first commercial AI system for lawyers, which was a rule-based expert system. So a very different kind of technology, where, essentially, to get systems to perform at a high level in law, you put together a huge decision tree. The system we developed had over 2 million paths through it, just to give you an idea of the complexity. This was the idea, really, of taking complex areas of law and reducing them to a form of code. And we could see then that there was the possibility of systems offering legal counsel, legal advice, legal guidance, but we didn't really, in the eighties or indeed in the nineties, conceive of anything like the power and the flexibility of generative AI. And so it's all happened, I think, far more quickly than I and others had anticipated.
The coming of ChatGPT at the end of November 2022 was a defining moment, I think, for humanity. I don't think we can overstate it. Suddenly here was a system that was doing much that we'd all been dreaming of for many years: drafting documents, summarizing documents, creating visualizations of documents, analyzing documents, answering legal questions, offering legal guidance, undertaking legal research, producing legal timelines, and, I know, doing so often in error. But it gave us a glimpse of the future, and I think it therefore accelerated many people's vision, including mine, of how technology might impact the law and lawyers, and more importantly, how it might offer access to justice and change the way that most people could interact with the law and the justice system.
So the answer is yes and no. To some extent we could predict, in broad terms, that we would have systems that would help offer legal guidance to people who are not experts, but the speed at which the technology descended upon us has, I think, left most of us gasping.
Zach Abramowitz: And I want to pick up very quickly on something that you said, because you said it was a major moment for humanity.
So, I know that Bridget has said something similar to an audience recently, and I've talked about this a little bit. I know there are many lawyers who want to nitpick ChatGPT or nitpick large language models and focus on hallucinations. But I think that if you frame it and understand that this was a major moment for humanity, it really shifts how you think about and interact with the tools, and your expectations along the way.
Richard Susskind: That's right. People who focus on the shortcomings in the short term, and I understand why people do it, are suffering from what I call, in my new book, technological myopia. That's to say, judging or predicting the long-term potential of these technologies in terms of today's faults and shortcomings.
But it seems to me, if you look at the massive effort and massive investment being made, the chances of the market not overcoming many of these problems are minimal. It seems to me very clear that these systems are the worst they'll ever be from now on, and they're likely to improve and improve. I think there are two different ways of looking at AI: the short-term perspective and the long-term perspective.
In the short term, people are asking: how could this be a useful tool for me today? So if you're a lawyer and you want a productivity or efficiency tool, you're right to say, well, hang on a second, it makes mistakes, it doesn't always get it right, how good is this? And so there's a whole set of questions one can ask about its short-term impact. In the long run, if you're thinking strategically, you have to look beyond these, in my view, temporary shortcomings and ask what the impact will be when these systems are performing at the level of humans and doing so very reliably. And I think that will change the entire landscape for legal, court, and justice services.
Bridget McCormack: I do too. But I think what you call "not us" thinking is very comfortable for lawyers, and maybe especially for judges and arbitrators, I would have to say. Where do you see that "not us" thinking most stubbornly in the dispute resolution ecosystem in particular? Where do you see evidence of it?
Richard Susskind: Let me explain the concept first of all: "not us" thinking. My son Daniel and I, when we wrote our book The Future of the Professions in 2015, looked at eight different professions, and we came to see that every profession saw far greater scope for AI in professions other than their own. But this was evidently incoherent. We all do our special pleading. We all think we are, in our own way, beyond this technology: oh yes, I can see how it might apply in other professions or in other areas of law, but what I do could never be prejudiced by, or affected by, or threatened by technology, because I'm creative, or I'm empathetic, or I exercise my judgment in a way a machine never could.
And, of course, judges also engage in this kind of "not us" thinking: ours is a fundamental public service; we are operating, often, at the hardest end of cases; it requires our refined knowledge, insight, expertise, our sense of judgment and experience; and none of that will ever be replicated by machines. I don't find the discussion "will AI replace judges?" a terribly fruitful one, because I think we should be asking a different question. I think we should be asking whether or not the state or the market will introduce an entirely new way of resolving disputes based on AI rather than on human judgment. And I think it's a very interesting question, the extent to which states want to bring these systems into the portfolio of offerings that they provide to the public, or whether we're going to just leave it to the market. But if we're saying that machines can't judge in the way humans do, I think that's entirely right.
Neurophysiologically and neuropsychologically, these machines are not set up like humans. But can they deliver the outcomes? Can they provide the determinations? Can they build the confidence in their users that a judge and the courts can? I think the answer to that is yes. So I see them, to some extent, in the medium term at least, as complementary. I think we get involved in unhelpful debates when we start comparing humans and machines, because the key point in all of this is that these machines do not attain their high levels of performance by replicating human reasoning processes. These machines have their own distinctive capabilities, based on huge amounts of data, remarkable algorithms, and phenomenal processing power. The outcomes they generate are at a high level, but they're not generated in the way humans generate them. And some people find this entirely intolerable. They would say, hang on a second, if we don't know how these machines work, or if they're not transparent, then how on earth can we rely upon them? But, and I'm interested to hear your view, Bridget, it seems to me that judges themselves are not entirely transparent. Judges themselves don't know the reasoning processes, in, as I say, neuropsychological and neurophysiological terms, that lead them to their conclusions. Judges often deliver what it's safer to say are ostensible reasons for their decisions rather than necessarily their actual reasons.
So, we shouldn't be expecting more from our machines than we're already getting from our humans. What I try to do in my books is poke and prod around these various ideas, because I think people come with very fixed views: machines can't do the things that judges do; judges shouldn't be replaced by machines; and so forth. And I say, well, hang on a second. For example, I distinguish between poorly met legal need and unmet legal need. Unmet legal need is where the legal and justice systems are usually beyond the affordability of most people. We have inaccessible legal and court services, and for people who are suffering from that lack of access to justice, an AI-based solution is way better than the nothing they have just now. I'm not saying that anything is better than nothing, but we're seeing the emergence of systems that soon will certainly be better than nothing. Compare this to poorly met legal need: perhaps that's access to legal advice or a court-based service that's regarded as too slow or too costly or disproportionate.
Then there's perhaps a more informed argument, a more intricate argument, to be had, balancing, as I try to do in my book Online Courts and the Future of Justice, the benefits of an online method of resolution against the benefits of a human method of resolution. But what I'm discouraging throughout my book, when I say how to think about AI, is being too dogmatic in any way about any of this.
We have to keep an open mind.
Bridget McCormack: Yes. As somebody who reviewed the decisions of about a thousand judicial officers for 10 years, I can tell you they are not perfect. And I certainly couldn't explain why they came to some conclusions from one day to the next. And obviously we all know the literature on, you know, what judges do before elections and what they do before and after lunch, and how their decision-making changes based on what's at stake in their own personal lives, or just whether they're hungry. So I couldn't agree with you more. I mean, you're preaching to the choir here. And I often say to legal audiences, you know, as opposed to what? In Michigan, we adjudicated between three and four million cases a year, and most of those people had to navigate civil legal disputes without lawyers.
And it's a game changer for those folks. And that's just about everyone. So, you certainly don't have to convince me. I can give you lots of examples, I think, for your next book if you want. We can set up another one.
Richard Susskind: Thank you. There's a related point I wanted to bring out, and I think this is a difficult challenge for many of us: we as professionals, whether judges or lawyers, often believe there is intrinsic worth in how we work today, rather than thinking that what we do, or how we do it, is simply of, as philosophers say, instrumental importance.
That's to say it's important because of the consequences it brings. And I often say that people don't want doctors, they want health.
Bridget McCormack: Yes.
Richard Susskind: We confuse how we deliver the outcome with the value itself.
Bridget McCormack: We're building AI-native dispute resolution processes at the AAA. People often say, well, who's going to want that? And when I talk to lawyers, they're really convinced that nobody will want a dispute resolution process that's immediate, inexpensive, and efficient. When I talk to business owners, they ask me how quickly I can finish building it.
Richard Susskind: Precisely that. Yes. I'm not too critical of lawyers in this, because you devote so many years of your career to learning to work in a particular way, to polishing that. And I do think many lawyers believe that the way they handle issues of fact and law is absolutely in the interest of clients and so forth. But what we are seeing is just an alternative way of looking at and thinking about dispute resolution, and AI causes us to think very differently, because in many ways AI is all about outcome. And there are interesting questions about whether you can get a just outcome from a system that has no sense of justice. I'm not sure our children and grandchildren will answer that question the same way as we might.
Zach Abramowitz: I'm curious, because you talked about the "not us" phenomenon, and I am sometimes quick to point out where others are guilty of it. But recently it was pointed out to me, I was writing my newsletter and someone said to me, I see you haven't used AI at all in putting this newsletter together. And I said, well, for the newsletter, I really want it to be deeply human and connect. And they said to me, well, aren't you guilty of the same thing? So I have to ask you: did you feel conflicted, writing this book, over how much to use AI, given "not us" thinking, applying it to you as an author? Obviously you didn't write your first books with AI, but there may have been a part of you that felt conflicted. What was that experience like for you?
Richard Susskind: I say in the book, right at the outset, that none of it was written by AI, for better or for worse, and I really didn't use AI intentionally. There's a little bit of me, perhaps quite a big bit of me, that feels it's cheating as an author to use the technology. I'm quite upfront about that, and that's probably my romantic vision, my irrational side, because the truth is I could have taken the whole draft and dropped it into the hopper and asked it to tidy it up a bit, and it probably would have improved it. But I didn't feel inclined to do that, and I felt strongly inclined to tell readers I hadn't done that. Can I conceive, in the future, of writing books with the strong help of deep research, an AI system that could be acting as a team of researchers for me, world-class proofreading, world-class editing, sharpening my language, reducing duplications and so forth? I can completely imagine that; I just didn't feel quite ready for it. And I suppose I'm reflecting a very similar attitude to the one I'm critical of in lawyers when they say they're not ready to use this technology. But I think there's more at stake when you are offering services such as public dispute resolution than there is when you're writing books.
I can indulge myself, and readers can take it or leave it. But when we're talking about something so absolutely socially vital as the public resolution of disputes, we should be using all possible tools and techniques to make it more affordable and to maximize its quality.
Bridget McCormack: I'm going to go back to the lawyers. I agree with you that it's understandable given how they were trained. But I'm a little worried that we're still not training them for what I think is going to be a radically different future in how we do justice delivery. Maybe it's better in the UK; I hope it is. In the United States, I taught a class on generative AI at Penn Law School to 12 LL.M. students and three other students, and nobody else in the law school had any access to any legal AI tools or any other AI tools. And I think that's true across the board. In the United States, there are very few law schools that are thinking about whether they need to completely change their curriculum. What do you say to lawyers who want to future-proof themselves, lawyers who are interested in trying to figure out how to stay relevant to a market that's going to change dramatically?
Richard Susskind: I think the concept of future-proofing is now, I'm afraid, incoherent, because the future is so uncertain. In my book, as you'll see, even within the AI community there are six different hypotheses about where AI might end up. So there's no way one can future-proof across these six hypotheses, these six possible scenarios for the future. I think we are in a time when we simply cannot predict how the future is going to unfold, and being flexible and agile and willing to learn and retrain is going to be fundamental. But I can't offer anyone, neither businesses nor young people, the comfort of future-proofing. On education, there are exceptions, but our law schools here are no better than those in the US and around the world.
I think they're still generating 20th-century lawyers, and I ask the question: what are we training young lawyers to become? We really are equipping people for a 1990s version of legal practice. If you imagine that the current generative AI systems get to a level where they become entirely reliable, and we're comfortable that the data has been cleansed and improved, and I think that's entirely foreseeable given the scale of investment (I believe by 2030 we'll have that), that alone will fundamentally change the need for junior lawyers, and for senior lawyers as well. And I think we're shying away from this. Our law schools continue to insist that the core curriculum is the one that should be taught. In fact, how they teach and what they teach hasn't really changed since I was at law school in the late seventies and early eighties. My single biggest worry about the sustainability of the legal profession is the way in which most law schools are ignoring this vital technology.
And so when people say to me about their 16-year-old, oh, she's just going to be a lawyer like her mother, I say, well, I'll tell you one thing: by the time she emerges 10 years hence, the world is going to be a very different place, and even if she's broadly in the law, it'll be nothing like her mother's practice. We are simply blinkered to this future. We're hanging onto these old ways of working. That's why I say in my book Tomorrow's Lawyers that the future is not Grisham, it's not Suits, it's not Rumpole; it's something very different, and it's hard to see that AI won't be at its heart. The fundamental option, I think, available to people going into law and all the professions is either to be of the mindset where you think: I can compete with these emerging systems. I know these systems are becoming increasingly capable. I know there are trillions of dollars being invested in them. I know we're seeing breakthroughs every six to 12 months. I know we're seeing some of the most talented people on earth going into AI. I know we've got thousands of startups trying to do to law what Amazon did to bookselling. But I think I can still be a traditional lawyer. Choosing, essentially, to compete with these AI systems is to say, I think I can outperform them, and I think that's a daft career plan, and a daft business strategy too, in the long run. The alternative to competing with these systems is building these systems. That's to say: I as a lawyer am still a content provider, but I'm no longer delivering the content in a one-to-one consultative advisory service.
I'm involved in building the systems and ensuring the content is there, the systems that will replace our old ways of working. So in the short term, I see the impact of AI as essentially turbocharging lawyers. In the long run, I see the impact of AI as actually empowering non-lawyers, organizations and individuals, to do the legal work for themselves.
But these AI systems they use, I don't think, will be general-purpose AI systems. They will be legal-specific. And the building of these systems, and the maintaining of these systems, will be, for some time at least, the job of tomorrow's legal professionals. And we are simply not training our young lawyers to be these new legal professionals.
We're not training them in data science or knowledge engineering or system design or process analysis or design thinking or risk management, all the various skills that one needs to build these new systems. So we're off the pace, to tell you the truth. Just to make one point: a lot of lawyers react to AI by saying, what about courts? I think it's interesting: when there's a breakthrough in medicine, we don't say, what does this mean for doctors? In law, I'm afraid, there's a remarkable orientation towards ourselves. You know, I'm a lawyer myself; I do it myself; I can't help it. But AI in law is not AI for lawyers. It isn't about us. This is about making the most fundamental social institution, in my view, more accessible, more affordable, more pervasive. We live in a world where, in some ways, so many people are alienated from the law and can't use the law as the resource that they deserve.
We live in a world where the rule of law is prejudiced, at least in part, by its inaccessibility. And so we have the promise here of solving the global access-to-justice problem, of radically deepening the rule of law. Now, this may be at the expense of traditional lawyering, but I'm more concerned about law and justice than I am about preserving an old model of work.
Bridget McCormack: I feel like you're looking into my soul. The access-to-justice crisis in America is fundamentally a reason why people are losing trust in institutions. Imagine if we told people they had to pay to use the highway. Or, yes, you can go to public school, but only if you can afford to pay for a special person to go sign you up and take you there and, you know, communicate with the principal. Or, sure, you can have electricity, but only if you can afford the private electricity sherpa to get it for you. It's outrageous. And at a certain point, when 92% of Americans can't access any help with their legal problems, they might stop caring about the rule of law, right? If the rules don't really account for them, they're not going to care about them. So, you know, there's nothing more important to me than breaking through in this way. So I don't care that much about lawyers either. No, I do. I love lawyers.
I love you guys, listeners, but I want us to be thinking about...
Richard Susskind: Some of your best friends are lawyers. Yes.
Bridget McCormack: Yes, same. Yes. But I do think that we have bigger problems than figuring out our, you know, future paychecks. Right?
Richard Susskind: Yes. In one of my books, I can't remember which one it was, but I know it was a deeply unpopular remark, I said the law is no more there to provide a living for lawyers than ill health is there to provide a living for doctors. It's not the purpose of law to keep lawyers in a living. If we want to continue as legal professionals, we have to fit into the emerging, new, and better ways of making the law and justice successful. This is nothing personal about lawyers; it absolutely isn't. And I think we can see more clearly when we look at other areas. We wouldn't say that the reason we shouldn't find a cure for cancer is that oncologists would no longer have a job. That would be patently absurd. But a similar kind of argument, I'm afraid, in the United States is run under the heading of the unauthorized practice of law and everything that surrounds it. There's a sense in which lawyers are surviving and thriving not by bringing value that others don't, but because the drawbridge has been pulled up, and I think that's intolerable.
Bridget McCormack: It is interesting. Even with AI, you do see the difference, though. You already see, across the medical profession, randomized controlled trials using AI to determine when it's better than doctors, right? And when it's better when doctors use it, or even when it's better on its own. Lawyers didn't do randomized controlled trials even before we had this new technology, but you certainly see lawyers being more successful at resisting trying to learn about the ways in which it could impact our profession. It's stunning. I have a slide I use that has a surgical suite from 1890 and a modern surgical suite, and the Iron County Courthouse in the Upper Peninsula of Michigan from 1890 and the same courthouse today. It's one of my favorite slides. Lawyers have been able to resist any disruption for a very long time. But it seems like this time the jig might be up, right? Is this it? Is it time for lawyers to figure out what's next?
Richard Susskind: I think that'll be the thirties. I do think people are overstating how much will happen over the next two or three years, because I think the focal point of this market interest in AI will not be the kind of disruption and transformation that you and I are talking about here. I think the focus of the next two or three years will be law firms saying: how can we become more productive, more efficient, more profitable by using this remarkable technology? That's the short-term thinking about AI. The longer-term thinking, which I don't think will click in until the thirties, will be, as I say, less about lawyers and more about the law and access.
So it's not an imminent challenge. I keep on saying this to lawyers: this is not urgent, but it's deeply important, because I think the legal profession has got a few years to think through where it sits in this emerging landscape. And as I say, a lot of the short-term claims, often made by people who are quite new to the field of AI and law, run along the lines of: we've got 18 months and then the legal profession will be finished. That seems to me to confuse the advance of technology with the adoption of technology. The advance of technology undoubtedly is following this well-discussed exponential curve, but the adoption of technology is a far more jagged affair and is less predictable. And we can see a whole bundle of reasons why, even if the technology is available, it wouldn't immediately be used. But I find it hard to imagine, once we're into the thirties, that the kind of vision I think we share wouldn't be realized.
Zach Abramowitz: Yes. And I think that it's going to be very hard for lawyers to imagine that future without starting to experiment with the tools. It's one of the arguments I make for adoption: it could be that a specific tool like ChatGPT doesn't help you with some specific workflow today, but by not using these tools, you're not going to force your brain to imagine what comes next. And I think that's so critical. You talk in your book about radical, structural change rather than layering AI on top. Can you speak to that difference and why it's so critical to understand?
Richard Susskind: I think this is a fundamental issue, and I distinguish in the book between automation, innovation, and elimination. Automation is what most people think of when they think of technology: we take some kind of inefficient task or process or activity and computerize it or optimize it. And the first 65 years of legal technology have been a story of automation. We've grafted technology onto our existing legal procedures, processes, and working practices, and hopefully made them more efficient at the same time. I use innovation as an alternative to this, and we've seen it in so many other sectors; I use the word in a very specific sense, to mean the use of technology to allow us to do things that previously were not possible. And that, for me, is the great excitement of technology: not simply computerizing what we already do, but trying to use the technology to offer access, for example, where access historically has not been possible. And elimination is the idea, and it's relevant for disputes, that the technology in one way or another eliminates the need for the professional service in the first place. So this is my move towards dispute avoidance rather than dispute resolution: the fence at the top of the cliff rather than the ambulance at the bottom.
And if we get the tools right, we can fundamentally change the number of disputes and the stage at which they arise. So I talk about dispute avoidance and dispute containment. We want to find ways, it seems to me, of preventing disputes from escalating and preventing them from arising in the first place. So the role of AI is not simply one of what economists call task substitution, where you look at all the tasks we're doing and say: oh gosh, a machine could do that human task; take out the human, plug in the machine, and away we go. That's the way, I'm afraid, many economists and management consultants look at AI, and interestingly, a lot of the quite impactful and influential studies of the impact of AI are purely based on task substitution and automation. The far more profound effect of AI in the long run will be in innovating. The example I give in the book, in surgery, is non-invasive therapy. Automation is robotic surgery, where the robot does some of the work of the surgeon. Non-invasive therapy is providing the patient what they want, but in a way that's far less painful, far less invasive, far more convenient, far lower cost. So it's not automating an existing way of doing surgery; it's fundamentally changing the way the outcome is generated. That's the collective challenge for us in law.
Can we deliver the legal outcomes that people want by using different, AI-enabled techniques? That's the mindset of many of the disruptive startups you meet. They're not saying: how can we graft technology onto our old ways of working? They're saying: how can we do what it is that people want of law? How can we deliver that in fundamentally different ways? And of course, the preventative health analogy in surgery works in law too: how can we, as I say, put that fence at the top of the cliff? So it's another example of what I mean by another way of thinking about AI. Even some of the greatest technologists and the greatest economists are still talking about task substitution, and it's a weird way of looking at this remarkable technology, because it seems to me entirely evident that at some stage it will be delivering outcomes using its strengths and its distinctive capabilities rather than replicating what humans do.
Zach Abramowitz: So the name of the book is How to Think About AI: A Guide for the Perplexed. It's available on Amazon; we have a link in the show notes. Highly recommended. And Richard, thank you so much for joining us. It was such a pleasure.
Richard Susskind: No, the pleasure is mine. Thanks very much, Bridget and Zach. Great to see you.
Bridget McCormack: Wonderful to see you. Thank you very much.