Episode 5
The Legal Landscape of Autonomous Vehicles: Insights from Lucy McCormick
Will Charlesworth welcomes Lucy McCormick, a commercial barrister with expertise in advanced driver assistance systems and automated vehicles, to explore the evolving intersection of law and artificial intelligence.
The conversation delves into the implications of the Automated Vehicles Act 2024, which introduces new legal concepts such as the “user in charge” and the “authorised self-driving entity,” addressing the regulatory landscape for self-driving cars.
Lucy discusses the importance of clarifying marketing terminology to prevent public confusion over vehicle capabilities.
The episode also highlights ethical dilemmas surrounding AI decision-making, especially in critical scenarios, and the challenge of ensuring accountability while balancing safety and explainability.
Listeners will gain insights into the future of product liability concerning AI and the pressing need for a comprehensive legal framework to keep pace with technological advancements.
Takeaways:
- The podcast emphasises that the information provided does not constitute legal advice, highlighting the importance of consulting a professional for specific legal issues.
- Lucy McCormick discusses her journey into the intersection of AI and law, noting her early interest in automated vehicles.
- The Automated Vehicles Act 2024 introduces new legal concepts, such as 'user in charge' and 'no user in charge' vehicles.
- Generative AI is being cautiously adopted in legal practices, with varying acceptance among junior and senior lawyers.
- A significant challenge in AI regulation is ensuring accountability while maintaining the technology's effectiveness and safety.
- The marketing regulations for automated vehicles aim to clarify the distinction between assistance systems and fully autonomous driving capabilities.
Companies mentioned in this episode:
- Henderson Chambers
- DeepMind
Transcript
The information provided in this podcast is for general information purposes only and does not constitute legal advice.
Although Will Charlesworth is a qualified lawyer, the content of this discussion is intended to provide general insights into legal topics and should not be relied upon as specific legal advice applicable to your situation. It also reflects Will's personal opinions. No solicitor-client relationship is established by your listening to or interacting with this podcast.
Lucy McCormick:There's another one where a programmer was seeking to program a robot vacuum to drive around without bumping into things, and so they set up an internal reward system to encourage speed and to discourage hitting the bumper sensors. So the vacuum learned to drive backwards because it didn't have any bumpers on the back.
So the bumper sensors weren't hit and it was just zooming around backwards, just wildly hitting everything.
Voiceover:You're listening to WithAI FM.
Will Charlesworth:Hello and welcome to the Law with AI podcast. I'm your host Will Charlesworth. I'm a solicitor specialising in intellectual property and reputation management with a keen interest in AI.
I'm also a member of the All-Party Parliamentary Group on Artificial Intelligence. This podcast is about breaking down and understanding how artificial intelligence is changing the world of law, policy and ethics.
Every couple of weeks I'll be looking at important topics such as how AI is impacting on established areas of legal practice, how it's challenging the law itself on issues such as privacy and intellectual property rights, how it's raising new ethical concerns, and how it's reshaping the regulatory landscape.
To help me in this task, I have been having some candid and engaging conversations with some fascinating guests, including fellow lawyers, technologists and policymakers, to gain a real insight into what it means not just for the legal profession, but for the commercial landscape and society as a whole. As always, this podcast is for general information purposes only and does not constitute legal advice from myself or any of my guests.
It's also the personal opinions of myself and my guests. So whether you're a lawyer or just someone curious about how AI and the law mix, you're in the right place.
So let's jump in, keep you informed and ahead of the game. So today, I have the pleasure of being joined by Lucy McCormick. Lucy is a commercial barrister at Henderson Chambers.
She undertakes a variety of product liability and property damage work and is particularly known for her expertise in relation to advanced driver assistance systems, ADAS for those in the know, and automated vehicles. She's an author and a contributor to various legal textbooks, including The Law and Autonomous Vehicles and a practitioner text on artificial intelligence.
So, after much fanfare, it is a privilege to welcome Lucy as we dive into AI. Thank you for coming on the podcast. Lucy, how are you today?
Lucy McCormick:Yeah, I'm very well. Thank you so much for having me.
Will Charlesworth:Thank you. Well, thank you very much for joining with such an impressive background and knowledge.
I mean, I've known you for quite a while, and the area of law in which you practise is absolutely fascinating to me.
But I suppose if we could just take a step back from where you are right now: how did you first become interested in the intersection of law and emerging technologies such as AI, and what specifically drew you to automated vehicles as an area of focus?
Lucy McCormick:I first became interested in automated vehicles about a decade ago. So, originally, I just wrote a single article, and I set up a Twitter account. And that was the early days of Twitter, when having a well-positioned Twitter account somehow actually did make a difference.
And it sort of ballooned from there. So eventually, I was invited onto Radio 4's Law in Action, and I was commissioned to write a legal textbook, and here I am.
And of course, there's now, you know, a decade later, quite a lot of work in the area.
Will Charlesworth:I can imagine, yes. I mean, going back to the start many years ago, did you foresee things evolving as quickly as they have done?
Were there ever any moments where you were thinking, this is not particularly good, this is not going anywhere, or not going as fast as I thought it would go?
Lucy McCormick:A couple of moments stand out. One was the Pyrrho and Brown decisions back in 2016. And these were an absolute watershed moment in litigation because that was when the High Court approved for the very first time the use of technology-assisted review as part of the disclosure process.
And I'm sure your listeners will be familiar with TAR, but for those who aren't, TAR is an electronic tool which combines lawyers' subject matter expertise with a type of artificial intelligence to predict the likely relevance of documents to a particular case. And disclosure is bread and butter for junior lawyers everywhere.
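A minimal sketch of the loop Lucy describes: lawyers label a small seed set, a model learns from those labels, and the remaining documents are ranked by predicted relevance for human review. The bag-of-words perceptron and the toy documents below are invented purely for illustration; real TAR products use far more sophisticated models.

```python
from collections import defaultdict

def tokens(doc: str) -> list[str]:
    return doc.lower().split()

def train(seed: list[tuple[str, bool]], epochs: int = 10) -> dict[str, float]:
    # Learn per-word weights from the lawyer-labelled seed set.
    w: dict[str, float] = defaultdict(float)
    for _ in range(epochs):
        for doc, relevant in seed:
            predicted = sum(w[t] for t in tokens(doc)) > 0
            if predicted != relevant:          # perceptron update on mistakes
                for t in tokens(doc):
                    w[t] += 1.0 if relevant else -1.0
    return w

def rank(w: dict[str, float], corpus: list[str]) -> list[str]:
    # Highest predicted relevance first, so reviewers see likely hits early.
    return sorted(corpus, key=lambda d: -sum(w.get(t, 0.0) for t in tokens(d)))

seed = [
    ("breach of warranty in supply contract", True),
    ("indemnity clause dispute over warranty", True),
    ("canteen menu for tuesday", False),
    ("office social committee minutes", False),
]
corpus = [
    "tuesday canteen rota",
    "email re warranty claim and indemnity",
    "draft supply contract with warranty schedule",
]
weights = train(seed)
for doc in rank(weights, corpus):
    print(doc)
```

Run on the toy corpus, the contract-related documents surface first: the human supplies the judgement about relevance, and the model scales that judgement across the disclosure pile.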
And for me, it was a bit of a wake-up call to make sure that my practice was sufficiently specialist and that I was not going to be replaceable. So that was a big thing for me. The other one is just the rise of Generative AI, which I started using cautiously a couple of years ago.
For example, the other day I needed to know about the rules that applied before the Civil Procedure Rules came in. I was called to the bar after that, so I needed to know what happened before the CPR: the old Rules of the Supreme Court. And they're very difficult to Google, because that is not the same as the rules that happen to apply to the Supreme Court. And so you end up with the Rules of the Supreme Court and the rules of the Supreme Court, and it's the sort of thing that's supremely difficult to Google.
So, in desperation, I resorted to Generative AI, and it gave me an absolutely pitch-perfect response.
It signposted me neatly to a blog by a judicial assistant at the Supreme Court, specifically engaging with exactly what I needed because of this frequent confusion. So, I suppose I really mostly use it as a fancy search engine and sometimes to help me unpack expert reports.
But I do think there's a real role for it, and there ought to be in everybody's practice, used cautiously and sensibly.
Will Charlesworth:Yes, and certainly 'cautiously' is the operative word.
We're aware of previous cases that do tend to get a bit of press, where, you know, it invents cases and, as they say, hallucinates them.
Lucy McCormick:Yeah, that's lovely, isn't it?
Will Charlesworth:Yes, exactly. I imagine it's on some sort of very, very trippy route to hallucination.
Something I've found is that, obviously, the more you use it, the better and more accurate it is, and you learn how you like things presented as well, because with my initial use of it, what it was producing just wasn't in my tone of voice particularly.
And I think you're right that you have to approach it with some caution, but it can be particularly useful. And how do you find the adoption of AI tools such as generative AI amongst your colleagues, both more senior and more junior, insofar as you know?
Is it split, as we might suspect, with younger members more willing to adopt, or have the more senior people grasped the nettle?
Lucy McCormick:So there's a silk at my set, Henderson Chambers, Patrick Green KC, who's an absolute advocate for using generative AI in law and he runs internal seminars making sure that everybody knows how to best exploit it safely. And so he's an absolute leading light on that. So it is coming from more senior people in our set, interestingly.
But I think, like many things like this, it's quite a spiky profile, because, obviously, a barristers' chambers often superficially looks a bit like a company or a firm, but we're not; we're just a selection of self-employed individuals who happen to share a building. So there's real variation in adoption at the Bar.
Will Charlesworth:Yeah, I can certainly imagine that.
And in terms of the courts as well, what's been your experience? Do you have any insight into how the courts are approaching this?
Lucy McCormick:Well, I should say that although I find legal tech really interesting, it's not a real specialism for me. I'm more into embodied AI like robots and cars.
But one thing I did see, which was quite interesting, is that the German courts are starting to use generative AI to assist them with court decisions in certain very high-volume, very repetitive cases.
And there's a sort of program on that where they're doing it for, I think, flight compensation type cases, those ones where your flight's delayed. So they're running a pilot on that, which I understand is live now.
And it's something I find particularly interesting because I've been involved in the Dieselgate car software litigation for eight years.
And apparently, one of the reasons for the impetus towards the German courts starting to look at generative AI is the fact that, because German courts don't have a group action system, they're having to deal with all of these individual cases about car emissions on a one-by-one basis. And so they're getting absolutely deluged by that, and it's been a real problem for them.
So that's why they've had this motivation to get ahead on generative AI helping them with court decisions. And I think perhaps there's less impetus here because we've got a functioning group action, class action system.
Will Charlesworth:Yes, interesting how the German courts are looking to adapt that to assist with the administrative element of it, or at least with processing claims. In terms of the Automated Vehicles Act 2024, I'm not going to ask you for a complete rundown on it. I'm not going to put you on the spot like that.
But I understand that the regulatory framework for, say, self-driving cars has been slowly and incrementally moving on, so perhaps you are able to give us an idea as to where we are: how far have we come, and where are we at the moment?
It's one of the things that's oft-quoted, particularly as Elon Musk has received a lot more publicity in recent weeks: you promised us self-driving cars, you promised us the Jetsons, and we're not there yet.
There have been promises made, particularly in the US, and I won't ask you to comment on that jurisdiction necessarily, but there are promises of self-driving cars, and we're still not there yet. So where are we, and how far have we come?
Lucy McCormick:So I think, as you said, the key word is incrementally. And that's been a real hallmark of this field in this jurisdiction.
So the way the government has approached it in this jurisdiction is to look at what's the most urgent hole that needs patching, try to patch that first, and then move on to the next most urgent hole and try and patch that. And so where it went first was insurance. It wanted to get to a position where, if you're a pedestrian and you're run over, it doesn't matter whether a human or the vehicle was driving: you can claim against the insurer either way. That was dealt with in the Automated and Electric Vehicles Act as long ago as 2018, and the next big piece is the Automated Vehicles Act 2024.
But what the AV Act does is it creates a number of new, really quite exciting legal concepts. And just to outline a few of them, there's the idea of the user in charge. In short, this is because you can't call someone a driver anymore, can you?
So a user in charge is a person who's in a position to control the vehicle but isn't doing so because a user-in-charge feature is engaged.
And then it also creates the concept of the authorised self-driving entity or ADSE, which is an entity responsible for the way the vehicle drives and for meeting other regulatory obligations.
And then there's the concept of the no user in charge or NUIC vehicle, one that can drive itself for an entire journey and doesn't require an individual to be capable of taking control. And then there's a corresponding idea of the licensed NUIC operator, which is the entity responsible for solving problems arising in a no-user-in-charge journey.
And in terms of the general framework of the AV Act and what it does: part one is the regulatory framework for authorisation and use. Parts two and three cover criminal liability, policing and investigation.
Part four, arguably my personal favourite, deals with the marketing of automated vehicles. Part five, automated passenger services. And part six is pretty dull; it's just adaptation of existing regimes.
But stepping back, I think the key feature of the Act is that only vehicles which satisfy what is called the self-driving test are going to be authorised under the framework.
There was a lot of debate about what that means, but to meet the test, vehicles are required to be capable of driving safely and legally.
And some of this, the can has been kicked down the road, because the Act envisages the Secretary of State preparing a statement of the principles to apply in assessing whether a car can be driven safely and legally.
But the idea is that the principles will be framed to achieve a level of safety equivalent to, or higher than, that of careful and competent human drivers. That wording has been extremely controversial, and there have been interesting debates in Hansard about it.
And what the government has stressed is that the standard of a careful and competent human driver is, in practice, a much higher standard than that of the average driver. So they've said no, no, no, you've got it all wrong. That isn't a low standard, that's actually quite a high standard.
And they've said they're open to raising the standard as we go on. And, of course, there are the fairly eye-catching provisions about liability.
So I mean, in terms of liability, the really eye-catching provision, as I said, is section 47, and that provides that an individual doesn't commit an offence arising from the way the vehicle is driven if the individual is the user in charge of the vehicle at the time of the act that would constitute the offence.
So it effectively gives the person in the vehicle who isn't the driver immunity from anything criminal if they're not really driving it at the time, if they're merely a user in charge. And I think that works well.
I have some quibbles about how the Automated Vehicles Act deals with so-called transition demands.
So transition demands are those liminal points where the vehicle says to you, I've come across something I can't deal with; I'd like you, the human, to take back control. And they're always difficult; they're always the most controversial bit.
And I do think there are some oddities in how the Act deals with it, but that's probably a bit too technical to make good listening.
Will Charlesworth:Not at all. I mean, in terms of liability, and self-driving vehicles making real-time and very difficult decisions,
it goes into ethical discussions as well, in terms of split-second decisions made during emergencies and things like that. And that's possibly one of the go-to points of discussion when people raise this because, you know.
Lucy McCormick:The good old trolley problem.
Will Charlesworth:The trolley problem, yeah, exactly. I've seen it too many times; it always pops up in my social media feeds.
Perhaps it's trying to give me some guidance on ethical decisions. But how does the Act seek to approach that?
Or do you foresee there could be issues as people become more reliant on the vehicles and think less when using them?
Lucy McCormick:I think it's all about outputs, because AVs and, indeed, a lot of AI technology are programmed by machine learning.
So nobody is sitting down and actively writing a line of code saying: if you're driving along and there's a baby on one side of the road and a granny on the other side of the road, kill the granny if you have to make a choice. Nobody writes that into the code.
So the way it works, because the machine is effectively programming itself, is that you have to run the vehicle through lots and lots of scenarios, and then you have to check whether you like the outputs. And that's really what it's anticipated the safety case process coming out of the Automated Vehicles Act will be about.
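As a rough illustration of that output-checking idea, here is a minimal sketch: a black-box braking policy is run through thousands of randomised cut-in scenarios and judged purely on whether the outcome breaches a safety rule. The toy policy, the simple dynamics and the pass threshold are all invented for illustration, not drawn from any actual safety case.

```python
import random

# Toy stand-in for a learned driving policy: given the gap to the car
# ahead (metres), decide whether to brake. In reality this would be a
# machine-learnt model whose internals we never inspect directly.
def policy(gap_m: float) -> str:
    return "brake" if gap_m < 20.0 else "cruise"

def run_scenario(initial_gap_m: float, closing_speed_ms: float) -> bool:
    """Simulate one cut-in scenario at 10 Hz for 5 seconds, asking only
    one question of the black box: did the output breach a safety rule?"""
    gap, v = initial_gap_m, closing_speed_ms
    for _ in range(50):
        if policy(gap) == "brake":
            v = max(0.0, v - 0.6)   # braking sheds 6 m/s^2
        gap -= v * 0.1              # 0.1 s per tick
        if gap <= 0.0:
            return False            # collision: an output we reject
    return True

def evaluate(n: int = 10_000, required_pass_rate: float = 0.999) -> None:
    random.seed(0)
    passes = sum(
        run_scenario(random.uniform(10.0, 60.0), random.uniform(2.0, 12.0))
        for _ in range(n)
    )
    rate = passes / n
    verdict = "acceptable" if rate >= required_pass_rate else "needs more work"
    print(f"pass rate {rate:.4f} -> {verdict}")

evaluate()
```

Nothing in the loop ever asks why the policy chose an action; it only asks whether the resulting behaviour is acceptable, which is the essence of judging a self-programming system by its outputs.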
So I'm not overly troubled by that core, classic ethical issue. I do think there's a very neglected ethical issue about AI, which is explainability.
It's become quite fashionable to say that AI must be able to explain why it's made a decision, that there must be explainability by design, and that in a car context, if it's run into someone, it's got to be able to play back to you its internal logic as to why it did that.
And that, on the surface, sounds absolutely sensible, but I've talked to a number of engineers now, and they say that if you require an AI to be explainable to humans, you're effectively trimming its wings, because it then can't make those really intuitive leaps of judgement, which are what make AI so much more effective than other techniques. And so expecting it to be able to explain what it's done using human logic makes it actively less good at being safe.
And I think that issue comes up most starkly in automated vehicles, because inevitably they'll kill someone sooner or later; all technology does.
But you can have one that's less likely to kill someone, but on the rare occasion it does, it will not be able to explain to you why; or you can have one that will be able to explain to you why, but it'll kill more people. And I think that's a really fascinating ethical dilemma, because on a pure logic basis, you ought to go for the one that kills fewer people.
But is that going to get public acceptability? I suspect not, no.
Will Charlesworth:I suspect so, yes, because you're right: accountability is core to all of this and to how we've come so far. And you want assurances, if you're giving over control to something, that there is some sort of ability to explain why it did what it did.
Lucy McCormick:Yeah, it's a really difficult one.
Will Charlesworth:So where do you think it's possibly going to land? Do you suspect that, inevitably, because governments and policymakers are held accountable, they are going to have to push that accountability onto the tech?
Lucy McCormick:I think accountability is going to stick. I mean, it's odd. People accept that people die on the roads. Obviously, in any individual case it's tragic, but we accept it.
And in fact, probably the most dangerous thing any of us ever do is go on the roads. So there is public acceptance of people having road accidents.
Public acceptance of a robot inexplicably killing people sometimes is not going to happen, even if overall fewer people get killed. So I think that's the direction we'll come down on. But people don't often appreciate that tension, and I think we need to try and be honest about the fact that there is that tension.
And it plays into a lot of other issues about AI and the fact that it just isn't intrinsically very human in how it thinks.
I mean, even saying it 'thinks' is of course a shorthand, in the same way that saying it 'hallucinates' is. I was very influenced by an example from medical AI; I do some work in relation to medical AI as well.
There was a deep learning model that was being trained to detect pneumonia in chest X-rays. And it did it really effectively, but they then worked backwards and worked out, belatedly, how it was doing it.
It was working out which X-ray machine had taken the picture.
It was basically doing the equivalent of working out which printer had been used, because that was in itself predictive of whether the image contained signs of pneumonia: certain X-ray machines and certain hospital sites tended to be used for sicker patients. So it was basically cheating the test, but it was doing what you wanted it to do.
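A deliberately simplified sketch of that failure mode, with synthetic data standing in for the X-ray study: a 'which scanner' flag is made to correlate with the diagnosis, and a tiny logistic regression learns to lean on it rather than on the noisier clinical signal. All numbers, probabilities and feature names are invented.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the chest X-ray story: 'portable' flags which
# machine took the image (a confound, since the portable scanner tends
# to be wheeled out for sicker patients); 'opacity' is the genuine but
# noisier clinical signal.
def make_example():
    pneumonia = random.random() < 0.3
    portable = 1.0 if random.random() < (0.9 if pneumonia else 0.1) else 0.0
    opacity = (1.0 if pneumonia else 0.0) + random.gauss(0.0, 0.8)
    return (portable, opacity), (1.0 if pneumonia else 0.0)

data = [make_example() for _ in range(5000)]

# Minimal logistic regression trained by batch gradient descent.
w0 = w1 = b = 0.0
for _ in range(300):
    g0 = g1 = gb = 0.0
    for (x0, x1), y in data:
        p = 1.0 / (1.0 + math.exp(-(w0 * x0 + w1 * x1 + b)))
        err = p - y
        g0 += err * x0
        g1 += err * x1
        gb += err
    n = len(data)
    w0 -= 0.5 * g0 / n
    w1 -= 0.5 * g1 / n
    b -= 0.5 * gb / n

# In this synthetic setup the scanner flag typically ends up with the
# larger weight: the model 'works', but for the wrong reason, just like
# the printer analogy above.
print(f"weight on scanner type:    {w0:+.2f}")
print(f"weight on clinical signal: {w1:+.2f}")
```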
And I think this is one of my big quasi-professional and non-professional interests: what I think of as monkey's paw incidents with AI. It gives you your wish, but you've got to be really careful about delineating what you wish for.
Like there's another one where a programmer was seeking to program a robot vacuum to drive around without bumping into things, and so they set up an internal reward system to encourage speed and to discourage hitting the bumper sensors.
So the vacuum learned to drive backwards: it didn't have any bumpers on the back, so the bumper sensors weren't hit, and it was just zooming around backwards, wildly hitting everything.
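The vacuum story is a tidy example of reward misspecification: the optimiser maximises the reward as written, not as intended. A minimal sketch of that gap; the reward weights and run statistics are invented for illustration:

```python
# Two behaviours for the vacuum. Collisions happen either way, but only
# forward collisions touch the front-mounted bumper sensors, so the
# proxy reward simply cannot see the backwards crashes.

def proxy_reward(distance_m: float, bumper_hits: int) -> float:
    return distance_m - 10.0 * bumper_hits   # what the programmer wrote

def intended_reward(distance_m: float, collisions: int) -> float:
    return distance_m - 10.0 * collisions    # what the programmer meant

runs = {
    "forward":  {"distance_m": 30.0, "collisions": 5, "bumper_hits": 5},
    "backward": {"distance_m": 30.0, "collisions": 5, "bumper_hits": 0},
}

for name, r in runs.items():
    print(f"{name:8s} proxy: {proxy_reward(r['distance_m'], r['bumper_hits']):6.1f}"
          f"  intended: {intended_reward(r['distance_m'], r['collisions']):6.1f}")

# 'backward' wins on the proxy (30.0 vs -20.0) while being exactly as
# bad as 'forward' on the intended objective: the optimiser exploits
# the gap between the measure and the goal.
```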
There's a great list of these sorts of things maintained by a research scientist called Victoria Krakovna, who I think works at DeepMind; she collects examples of these unintended consequences of AI programming. There are lots of really good ones.
Will Charlesworth:So we need to be aware that the AI is thinking very differently from us, which is something that perhaps people aren't necessarily factoring in, certainly at the outset anyway.
I suspect it's often assumed, because it's easier to write about in the press as well, that it's just a cleverer version of what we have now, or of what we're trying to achieve now. So it's just somebody that's a little bit brighter than you, a little bit quicker than you.
But in fact, as you say, the way it fundamentally processes inputs and then produces outputs can be entirely different. And there's the oft-quoted black box nature of AI: we understand that it breaks things down into tokens, etc.,
but we're not quite sure how it's processing them.
But as long as we like the output, as long as it produces something that is a relatively well-reasoned piece of work or something that's created, then we're quite happy with that.
Lucy McCormick:Yeah, I think you're right about the black box.
There's checking the outputs are okay, and then checking the inputs are okay, because obviously there's this huge issue with bias in the inputs, and you've got to check enormously carefully that you're not basically feeding it existing human biases.
There was a hospital project to work out who got extra resources directed to them, and I can't remember the exact details of it, but effectively it ended up reflecting existing inequalities in the US towards black patients in terms of how it applied itself, because it triggered intervention when a patient had had a certain amount of money spent on them, but black patients ended up having to be much sicker to have that amount of money spent on them.
So the whole system ended up baking in existing biases. And obviously even image sets and data sets can bake in existing biases.
And if you ask a generative AI to create a picture of a warrior or a CEO, it naturally tends to produce a man. And so they've had to build in offsets, saying, well, for every third image, just randomly make the person a woman.
And, you know, it's really tough.
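The 'every third image' fix Lucy mentions amounts to post-hoc prompt augmentation. A minimal sketch of the idea; the trigger words, the one-in-three rate and the injected descriptor are all invented for illustration, not any vendor's actual implementation:

```python
import random

# Crude diversity offset: when a prompt describes a person generically,
# sometimes inject a descriptor before it reaches the image model.
TRIGGERS = {"person", "ceo", "warrior", "doctor", "engineer"}

def augment(prompt: str, rate: float = 1 / 3) -> str:
    words = {w.strip(".,").lower() for w in prompt.split()}
    if words & TRIGGERS and random.random() < rate:
        return prompt + ", depicted as a woman"
    return prompt

random.seed(1)
for _ in range(6):
    print(augment("a portrait of a CEO in an office"))

# Roughly one in three prompts gains the descriptor. Note the bias is
# being patched at the prompt layer, not removed from the model itself,
# which is why Lucy calls the whole area really tough.
```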
Will Charlesworth:Yes, it's incredibly difficult. And, yeah, there's the question of the biases you introduce to try and offset, and of whose biases are being introduced as well.
It's a very tricky area, and something I suspect we'll be grappling with for a long time. One of the areas of the Act I think you mentioned earlier was around the marketing of vehicles.
So what's particularly interesting in that area? What can we expect there? Because as a driver of an EV, not an AV, but an EV,
I have a lot to say about estimated versus actual mileage and range, which I'm never going to understand. But around AVs, what's happening with the marketing side of that?
Lucy McCormick:The marketing provisions stem from the fact that there's long been confusion in the population's mind about what's really a self-driving vehicle and what isn't, what's just an advanced driver assistance vehicle. And I must say, in some respects, Tesla hasn't helped with this, calling its system Autopilot.
And they used to, until quite recently, have some material on their website which said that their vehicles were 'full self-driving ready', and then a little footnote saying that what this means is that it's got all the hardware to do self-driving, but it isn't fully self-driving yet.
And things like this have very much muddied people's expectations about what the technology can and can't do.
And you get these examples of people thinking, well, you know, I've got my Tesla, I'm on an empty road, I'm going to climb into the backseat and have a snooze. And so the government has identified, absolutely correctly, that there is this real problem with clarity in people's minds.
So what it's going to do is make it illegal to use certain controlled terms, to present a vehicle as autonomous or self-driving or driverless or anything like that, unless it has been specifically assessed as meeting the self-driving test under the Automated Vehicles Act. I mean, it's one of those things that's just absolutely obvious; it obviously should exist.
You shouldn't be able to confuse the public between something that, sure, is very smart, is a great assistance system, but where you're fundamentally in charge of the car, versus no, when it's in an automated mode, you have no legal responsibility for how it is conducting the driving manoeuvre, because it is driving and you are perfectly permitted to watch telly on the screen. By the way, one thing that really brings home how far this has all come is that they changed the Highway Code last year
to reflect that when you're in an automated mode, you are going to be allowed to watch things on the screen in the vehicle, but, interestingly, not use your phone.
I think the reason for the distinction is that you can program the system to cut out the screens for a transition demand, but with something freestanding like a phone, you can't. But yes, I think the marketing provisions are absolutely essential, really important.
They're consulting on the exact terms at the moment, so it's not come into practice yet.
I do think that is one thing we can take from the law of automated vehicles and potentially apply to the AI industry more generally, because there is this huge confusion there too.
There's the idea of artificial general intelligence, artificial intelligence that thinks like a human and can apply itself to anything, and we just don't really have that yet. We've still got specific AIs that can do small things very well. And obviously generative AI is fundamentally a prediction engine:
it looks like it's thinking, but it's not really thinking. So I think it'll be interesting to see if this approach is a success in the automated vehicle space.
Of course, there is already the Consumer Protection from Unfair Trading Regulations. Those provisions have just been replaced by another piece of legislation, but they've been kept the same.
And it's quite a useful all-purpose piece of legislation, because it means that even if something's literally true but misleading, there's potentially a cause of action there.
So it's not that you couldn't do it under the existing rules, but I do think some sort of legislation more widely about being careful about how you market AI would be useful.
Will Charlesworth:And of course the most creative people are marketing people. So it will have to be very carefully considered because people will find a way around it or they will try and find a way around it.
And I suspect that there will probably be mis-selling claims, and inevitably, I suppose, no matter what you do, there will still be people pushing it to the limit. It's a bit like, I have a Tesla, and I'm very excited, but obviously very realistic as to what it can and can't do, particularly when it's not in California, for example.
And so when you go through some of the menus, there are whole menus that are just completely blanked out, because within your territory this level of automation, even a slightly enhanced level of automation, is just not allowed whatsoever.
So it'll be interesting to see how we manage people's expectations, and that's where the law should step in,
because the investment in such a vehicle is going to be considerable, with car prices and things increasing; it's not something you can afford to get wrong, I suspect. So what's going to be next in terms of AI, robotics and automated vehicles?
So what's particularly on the horizon from your point of view? What's exciting or what's to be looked forward to?
Lucy McCormick:I'm not sure exciting so much as inevitable, but we need to see the civil piece of the picture. We've had insurance, we've had criminal and regulatory. Come on, bring us the civil bit.
And rather than being AV-specific, I think that's probably going to be a wider product liability regime review, like they've done in Europe. So our existing product liability regime is a very poor fit. And you were saying earlier about how expensive cars are.
The existing Consumer Protection Act, which is our product liability statute, doesn't encompass liability for damage to the product itself. So if your car damages itself, you get absolutely nothing under that regime.
And it doesn't encompass liability for damage to property not ordinarily intended for private use, so it's not going to work for commercial vehicles. And it only applies to claims in the first 10 years of the product lifespan,
but the average age of cars on our roads is about 12 years. So there are a number of reasons why our existing product liability regime doesn't work for AVs, and really doesn't work for AI generally.
There's still a question mark about whether the Consumer Protection Act applies to pure software, and obviously that's a big issue, including for AVs, because some of them have over-the-air updates.
But I think that's going to be a wider piece of work, not an AV specific piece of work. There were some noises being made by the Law Commission about getting on that quite quickly, but those noises seem to have died down.
So I think we're all just waiting for the next stage in the story on that.
Will Charlesworth:Yeah, we shall sit by and watch with definite interest. Thank you very much for those insights, Lucy.
It's been really interesting to see into this area, which I can see will get more and more prominence and more and more focus, I suspect, as it evolves and develops.
Before we wrap up, if people want to contact you and follow your updates, because certainly I've seen on LinkedIn that you're invited to speak at some very interesting events, how do they do that? How do they contact you, and whereabouts are you on social media?
Lucy McCormick:Well, the good old @lawofdriverless is still going on Twitter, but I must admit Twitter isn't what it was, and it's a very Wild West place now, so I'm not vigorously updating it. So I think probably the best place is to contact my clerks at Henderson Chambers.
Will Charlesworth:Absolutely fantastic. And we will include a link in the description of this episode as well, so people can go straight to that.
So as we come to the end of this episode of the Law with AI podcast, I want to extend my deepest thanks to Lucy for joining us and for sharing her expertise on the intersection of AI, AVs, robotics and technology in general. So thank you very much, Lucy.
Lucy McCormick:Thank you very much, Will. It's been a real pleasure.
Will Charlesworth:No problem at all. And thank you, everybody, for tuning in. And don't forget to like and subscribe if you haven't already, and I shall catch you in the next episode.