But what if the likes of Microsoft and Google do manage to overcome the technical challenges to improving generative AI even further? We’ve heard about the varied and impressive use cases for AI across a broad range of fields, but these examples all involve complementing the work of humans, not replacing them. What if this changes in the future? Gillian Hadfield is a law professor and economist at the University of Toronto…
Gillian – We don’t know if we’ll get there, but we need to think about the possibility that we could have artificial agents participating in our world with us, a bit like an alien species. It could be quite wonderful, because these could be new members of our society that bring us lots of intelligence and ways of solving big problems. Where I get worried is when I think about: do we really understand what it would mean to have new actors, new agents, new entities, new species among us? Have we thought carefully about how we make sure that they play well with us?
Chris – You’ve got a policy forum piece that you’ve just put together in ‘Science,’ one of the world’s leading science publications. What were the points that you brought up there, and why did you pick those things?
Gillian – One is to really emphasise how quickly the technology is advancing, and the trajectory we might be on towards really quite fundamental change. My contribution in particular is to draw attention to the fact that we really haven’t thought about how to adapt all the complex structure that we bring to making sure that our human societies are, by and large, safe, cooperative, happy places. Of course, there’s no sense in which they are completely. A key point of the piece from my perspective is to say: we need to raise the urgency with which we are thinking about how we would, in fact, integrate, like I said, this alien species into our world.
Chris – You are alluding to a lack of urgency on something that you think is a high priority. Usually people deprioritise things they don’t think are going to happen. Do you think, really, there’s a lot of hope and hype out there, but the people who really know, know different, and that’s why they’re not really pushing the envelope on this?
Gillian – I don’t think that’s the case. We have people who are at the heart of the technology who are raising the concern about how rapidly things are going and the potential for it to achieve levels of transformation that would change our world. People disagree about that, but the disagreement is not between the people who know what’s happening and the people who don’t. I think the lack of urgency is actually coming from the general public, as well as our regulators, politicians and governors, etc., not really having a good idea of why this would be such a challenge to how we do things. We take for granted a lot of the basic, invisible ways in which we make sure that people behave themselves, and those wouldn’t be in place for this alien species. So I don’t think the lack of urgency is because the people who know best know that it’s not going to happen.
Chris – Whose problem do you think it is? Do you think it is the problem of the companies? Because a lot of this technology is in the hands of commercial entities, profit-making organisations that are multinationals. Do you think it’s down to them to sort this out, or do you think it’s a policymaker thing, though of course with that come differences of geography, culture, etc.?
Gillian – I think the fact that this technology is being built almost exclusively inside private technology companies is precisely one of the reasons we who are not inside those technology companies need to be paying a lot of attention. I think we are potentially watching the decision making, the power to decide what our lives look like, shift from our councils and our towns and our governments and our communities into technology companies. I think that’s a bad thing. So, I definitely think that this is a job for all of us who are not technologists to really understand what’s happening and to be paying enough attention to say, hey, wait a second, we should be in charge of how this new world evolves.
Chris – Is that your wishlist? There need to be some rules. At the moment it’s the wild west, anyone seems to be able to do anything, and that’s your concern?
Gillian – We need to be putting in place the mechanisms for us to even understand what’s going on. A proposal that I’ve made with some colleagues is to say: we need basic visibility from governments into what’s being built. We don’t have that right now, so we should have national registries that give governments the right to know what’s being built, what we know about what it can do, and what we know about the safety measures. We should have that in place quite urgently. I think we should be recognising, for example, that the billions of dollars being poured into development right now are driving towards creating increasingly autonomous AI agents. Agents that can go out and book you airline tickets and make reservations for you, and maybe then start designing products and contracting to have them built and sold, to really start participating in our society and our economy.
But we don’t have any of the rules that we have around ordinary humans being able to participate, like work authorisation or an address, some way of finding out who somebody is and tracking that down. An AI agent should have a legal identity that allows us to say: that one is not allowed to operate anymore, or this one has to pay for the damages that were created and, oh, by the way, that means it needs to have assets behind it, a bank account basically. There’s that kind of infrastructure that I think we need to rapidly be putting in place.