
Episode 307 – Rights for Machines? AI’s Ethical Future with James Boyle


Can you imagine granting personhood to AI entities? Well, some of us couldn’t imagine granting personhood to corporations. And yet… look how that panned out.

In this episode, Steve talks with Duke law professor James Boyle about his new book, The Line: AI and the Future of Personhood.

James explains the development of his interest in the topic; it began with the idea of empathy.

(Then) moved to the idea of AI as the analogy to corporate personhood.  And then the final thing – and maybe the most interesting one to me – is how encounters with AI would change our conceptions of ourselves. Human beings have always tried to set ourselves as different from non-human animals, different from the natural universe.    

Sentences no longer imply sentience. And since language is one of the reasons we set human beings up as bigger and better and superior to other things, what will that do to our sense of ourselves? And what would happen if instead of being a chatbot, it was actually an entity that more plausibly possessed consciousness?

Steve and James discuss the ways in which science fiction affects our thinking about these things, using Blade Runner and Star Trek to look at the ethical dilemmas we might face. As AI becomes increasingly integrated into our society, we will need to consider the full range of implications.

James Boyle is the William Neil Reynolds Professor of Law at  Duke Law School, founder of the Center for the Study of the Public Domain, and former chair of Creative Commons. He is the author of The Public Domain: Enclosing the Commons of the Mind, and Shamans, Software, and Spleens. He is co-author of two comic books, and the winner of the Electronic Frontier Foundation’s Pioneer Award for his work on digital civil liberties.

Steve Grumbine

00:00:43.285 – 00:01:45.751

All right, this is Steve with Macro N Cheese. Today’s guest is James Boyle.

James Boyle is the William Neil Reynolds Professor of Law at Duke Law School, founder of the Center for the Study of the Public Domain, and former chair of Creative Commons.

He is the author of The Public Domain and Shamans, Software, and Spleens, the co-author of two comic books, and the winner of the Electronic Frontier Foundation’s Pioneer Award [2010] for his work on digital civil liberties. I’m really excited about this, quite frankly, because we’ve been talking about AI [artificial intelligence] a lot lately.

We’ve taken it from different angles, but today is kind of a snapshot of personhood and the future of AI and are we going to be willing to grant personhood and grant rights to artificial intelligence? This is a subject that fascinates me. I am not the expert, but I do believe we have one of them on the show here.

James, welcome so much to Macro N Cheese.

James Boyle

00:01:45.903 – 00:01:47.239

Thanks so much, Steve.

Steve Grumbine

00:01:47.407 – 00:01:53.479

Tell us about your new book. Give us kind of the introduction, if you will; give us the CliffsNotes.

James Boyle

00:01:53.647 – 00:04:25.655

Well, the book is called The Line: AI and the Future of Personhood.

And it’s a book about how encounters with increasingly plausibly intelligent artificial entities will change our perceptions of personhood, will change our perceptions of those entities, and will change our perceptions of ourselves.

The book started with me, or rather the project started many years ago, back in 2010, with me writing a speculative article about what would happen if we had artificial intelligences that were plausibly claiming to have consciousness of a kind that would require us morally and perhaps even legally to give them some kind of personhood.

And the next stage was thinking about the possibility that personhood would come to AI not because we had some moral kinship with them, or because we saw that under the silicon carapace there was a moral sense that we had to recognize as being fully our equal, but rather that we would give personhood to AI in the same way that we do to corporations: for reasons of convenience, for administrative expediency.

And then the more difficult question is, even once you have corporate personhood, which, after all, just means that corporate entities, including unions, for that matter, can make deals, what about constitutional protections, civil liberties protections, which in the US have been extended to corporations, even though they don’t plausibly seem to be part of “we the people” for whom the Constitution was created? So it started with the idea of empathy, moved to the idea of AI as the analogy to corporate personhood.

And then the final thing, and maybe the most interesting one to me, is how encounters with AI would change our conceptions of ourselves. Human beings have always tried to set ourselves as different from non-human animals, different from the natural universe.

That’s the line that the title refers to.

What will it do to us when we look at a chatbot or have a conversation with it and realize this is an entity which has no sense of understanding, that isn’t in any way meaningfully comprehending the material, and yet it is producing this fluent and plausible language? Sentences no longer imply sentience.

And since language is one of the reasons we set human beings up as bigger and better and superior to other things, what will that do to our sense of ourselves? And what would happen if instead of being a chatbot, it was actually an entity that more plausibly possessed consciousness?

So that’s what the book’s about.

Steve Grumbine

00:04:26.115 – 00:04:40.571

Fascinating.

You know, I remember, and it’s funny because the first page of your book speaks of the guy from Google, Blake Lemoine, who kind of thought that his AI was sentient. And I believe he was fired immediately for saying this publicly.

James Boyle

00:04:40.763 – 00:04:41.841

Yes, indeed.

Steve Grumbine

00:04:42.003 – 00:04:49.053

Let’s talk about that for a minute. Paint this picture for the people that maybe haven’t heard of this, and tell us what we can learn from it.

James Boyle

00:04:49.229 – 00:06:23.555

Well, Mr. Lemoine is remarkable. As I say in the book, he wrote a letter to the Washington Post in 2022 saying that he thought the computer system he worked on was sentient. And you know, the Washington Post is used to getting letters from lots of crackpots, people with tinfoil hats, and people who think there are politicians running sex rings in the basements of pizzerias that have no basements. And so, you know, he could have been written off as one of those, but he wasn’t some simple delusional person.

This was a Google engineer who was actually trained to have “conversations” with Google’s chatbot LaMDA (Language Model for Dialogue Applications) at the time, to see if it could be gamed to produce harmful or offensive speech.

And instead he started having these conversations with it that made him think, wrongly, as I say in the book, but made him think that this was actually something conscious. He would read it koans and then ask it to comment. And its comments seemed wistful and bittersweet, as if it was searching for enlightenment too.

And so that’s the thing that fascinated me, because I’ve been thinking about this issue for many years, since way before that. Mr. Lemoine was wrong, but he’s only the first in what will be a long series of people whose encounters with increasingly intelligent or increasingly convincing AIs will make them think: is this a person? Am I acting rightly towards it?

And so while he may have jumped the gun, which he clearly did (chatbots are not conscious), I think that he’s the harbinger of something that’s coming, and something that will actually quite profoundly change our conception of ourselves as well as of the entities we’re dealing with.

Steve Grumbine

00:06:24.135 – 00:06:54.693

As I think about this, you know, I watched a show called The Expanse and I’ve watched a lot of these kind of futuristic space worlds, if you will, where they kind of, I guess, try to envision what it might look like in a future maybe far away, maybe not so far away.

But one of the ones that jumps out is one that you’ve used in the past, and that is Blade Runner, the sci-fi movie. That was really a phenomenal movie. I loved it big time. How does this relate to that? How can we roll into Blade Runner?

James Boyle

00:06:54.829 – 00:10:01.165

So Blade Runner and the book it’s based on, Do Androids Dream of Electric Sheep? by Philip K. Dick, are two of the most remarkable artistic musings on the idea of empathy. I’m sure your listeners, at least some of them, will have seen the original Blade Runner.

You may remember the Voight-Kampff [a polygraph-like machine used by blade runners], which is this test to figure out whether or not the person being interviewed is really a human being or whether they’re a replicant, basically androids: artificially, biologically created superhuman beings. And the test involves giving the suspect a series of hypos [hypotheticals] involving non-human animals.

Tortoises, butterflies, someone who gives the interviewee a calfskin wallet. What would you do in each of these cases?

And the responses are, well, I’d send them to the police, or I’d call the psychiatrist, or, you know, I’d have my kid looked at to see if he was somehow deluded, because he has insufficient empathy for these non-human animals.

And the irony which Dick sets up, and which Blade Runner continues, is of course that we’re testing these non-human entities, the replicants, by asking them how much they empathize with non-human animals, and then deciding that if they don’t empathize as much as we humans think they should, then we need to have no empathy for them and can, in fact, kill them.

And so this test, which is supposedly about empathy, is actually about our ability radically to constrict our moral circle, kick people outside of it and say, you’re weird, you’re different, you’re other. And so we need feel no sympathy for you.

And so the thing is that, yeah, it’s a test of empathy, but whose empathy? It seems like it’s a test of our own empathy as human beings, and we’re failing. At least that’s the message I take from the movie.

So what fascinated me in the book was how easy it was in the movie to trigger different images that we have. Priming. You know, this is the sense of psychological priming, where an image primes you to have one reaction.

There’ll be a moment where the replicant seems to be a beautiful woman. It’s like, oh, my God, did I just, you know, voice a crush on a sex doll?

Moments when it appears to be a frightened child, an animal sniffing at its lover, you know, like two animals reunited, a killing machine, a beautiful ballerina. And the images flash by, you know, just for a second.

And immediately you can feel yourself having the reaction that that particular priming, the ballet dancer, the beautiful woman, the killer robot, produces. And you can feel your sort of moral assessment of the situation completely change depending on what image has been put in front of you.

And I say that it’s kind of the moral stroboscope.

You know, it’s designed to induce a kind of moral seizure in us to make us think, wait, wow, are my moral intuitions so malleable, so easily manipulated? And, you know, how do I actually come to sort of pull back from this situation and figure out what the right response is?

And to be honest, I think that fiction has been one of our most productive ways of exploring that. And science fiction, obviously, in particular.

Steve Grumbine

00:10:01.665 – 00:10:10.103

You know, I have children, small children. And there’s a new movie that’s just come out called The Wild Robot. I don’t know if you’ve had a chance to see this yet,

James Boyle

00:10:10.119 – 00:10:10.719

I have not.

Steve Grumbine

00:10:10.847 – 00:12:09.799

but it’s fantastic. So this robot is sent to Earth to gather data for some alien robot race, I guess, outside of our normal ecosystem.

And this robot ends up developing empathy, develops empathy for all the wild animals.

And the power brokers, if you will, on the spaceship want that robot back so that they can download all of its information, reprogram it for its next assignment. Well, this robot says, no, I want to live. I like my life here. I like these animals. I don’t want them to die. I don’t want bad things to happen.

And the mothership comes back and puts some tractor beam on this robot. It’s a cartoon, it’s an animated show. But it’s really deep thinking, you know, there’s a lot going on here.

It’s very, very sad in so many places, because you watch various things dying, which is not really the norm for kids’ shows, you know what I mean? They’re showing you the full life cycle.

And as the machine gets sent back up, somehow or another they didn’t wipe the memory banks completely. There was just enough in there that it remembered everything that it had learned while on Earth, and somehow or another it reconstitutes its awakening.

And some of the birds and other animals came up to the spaceship to free this robot, and the robot falls out, and they win, of course, and the robot becomes part and parcel of their community and so forth.

But it was very much a children’s version of teaching empathy for non-human beings, if you will, in terms of AI, in terms of robots, in terms of kind of what you’re talking about within Blade Runner, but for kids. I don’t know if it’s a good analogy, but it sure felt like one.

I just saw it the other night and I was like, wow, this is going to play well into this convo.

James Boyle

00:12:09.967 – 00:12:14.239

It sounds like exactly the themes from my book. I’ll have to check it out. Thank you.

Steve Grumbine

00:12:14.327 – 00:14:05.555

My wife looked at me and she goes, oh my God, this is so depressing. And my kid was, like, crying because of the sad parts, but he loved it and he was glued to it the whole time.

I think it does say something, you know. I remember watching Blade Runner and feeling that real kind of, I don’t want her to die. You know? What do you mean? Right? And I did feel empathy. I did feel that kind of, I don’t know how to define it, but I don’t want it to die.

You know what I mean? And I don’t know what that says about me one way or the other, but, you know, it definitely resonated with me. How do you think that plays out?

Today, we talked a little offline, and obviously we have Citizens United, which has provided personhood to corporate interests, to corporations. Now compare providing personhood to robotics with AI-based, you know, I hate to say sentience, but for lack of a better term, sentience. I mean, what about today? What should people be looking at that can help them start making some of this futuristic cosplay, if you will?

How do you think you could help people tie together their experience today in preparing for some of these kinds of thought exercises? Because this is a real existential type of thought exercise.

I don’t know that you go into this, but I can tell you I come from a certain era where people were not afraid to dabble in mycelium and the fungi of the earth and had, you know, their own kind of existential liberation, if you will.

And seeing that kind of alternative universe, I imagine there’s a lot of cross pollination in terms of leaving one’s current reality and considering a future sense of what life might be like in that space.

James Boyle

00:14:06.495 – 00:19:46.555

One way that’s interesting for me to think about it is this: I think people automatically, when they confront something new, say, what does this mean for my tribe, for my group, for my political viewpoint, for my ideology, for the positions I’ve taken on the world? And so take someone who, you know, thinks of themselves as progressive, I’ll use that vague term.

On the one hand, you could think that that person would be leading the charge if (it is an if, as I point out in the book, but it’s a possibility, and one that seems more likely) we get increasingly capable AI, not in the generative AI chatbot mode, but AI which comes closer and closer to the kinds of aspects of human thought that we think make us special. You would think that the progressive would be there leading the charge, because this is the next stop in the civil rights movement.

You know, these are my silicon brothers and sisters, right? You know, we are the group that has fought for previous expansions of rights. I mean, we’ve been very good at denying personhood to members of our own species.

And the expansion of moral recognition is something that we, most people at least see as an unqualified positive. Isn’t this just the next stop on that railroad?

And I think that, depending on how the issue appears to us, that might well be the way that it plays out: that people would be presented with a story where, let’s say, they are interacting with an ever more capable AI-enabled robot that is, let’s say, looking after their grandmother as her comfort unit, which I think is a very likely future, regardless of whether or not these robots are conscious. And then they start having conversations with it. They start thinking, whoa, am I acting rightly towards this being?

You know, can I treat this as just the robotic help? Doesn’t it seem to have some deeper moral sense? Doesn’t that pull on me to maybe recognize it, to be more egalitarian towards it?

So that’s one way I think things could play out. But then suppose that instead of doing that, I started with Citizens United, as we did, and talked about the history of corporate personhood.

And I’ll say often in sort of progressive, leftist, radical circles, people are kind of loosey goosey in the way that they talk about corporate personhood.

I actually think that it’s probably a very good idea for us to enable particular entities, ones which take particular risks at particular moments, because not all plans that we have will pay off, to come together in a corporate form, whether it’s a union or a corporation, and try to achieve a particular goal. And the fact that we allow them to sue and be sued seems like actually a pretty efficient way of doing something.

And something that enables some benign innovation, risk taking, that kind of stuff. The next question, of course, is: what about political rights? And that’s the next stage.

And what happened in the US was that you had the 13th, 14th, and 15th Amendments passed after the Civil War, some of the amendments I’m personally most fond of in the Constitution, which offered equal protection to formerly enslaved African Americans.

And what we saw instead was that the immediate beneficiaries of those equal protection guarantees, as I lay out in the corporate personhood chapter, were not formerly enslaved African Americans. They were corporations. Black people brought very few suits under the Equal Protection Clause, and they lost most of them.

The majority were brought by corporate entities. So if I tell you that story, you’re going, oh my God, they’re doing it again.

This is another Trojan horse snuck inside the walls of the legal system to give another immortal superhuman entity with no morals an ability to claim the legal protections that were hard fought by individuals for individuals. And that’s absolutely anathema to me.

And so if you think of yourself as having those two people inside you, the person who feels moral empathy, the person who is aware that in the past we collectively as a species have done terrible things, where we foreclosed our empathy to groups and said, you don’t matter, and that those are among the most evil periods in our history, our collective history, then you could be going, oh, wow, absolutely: rights for robots.

And if you started on the other track that I described there, the one that follows what happened with corporate personhood, you might see this as an enormous cynical campaign that was just here to screw us one more time with another super legal entity.

What I’m trying to do is to get there before this fight begins, and everyone’s got their toes dug into the sand, going, this is my line, damn it. This is what my tribe believes, you know?

I’m not going to even think about this seriously, because if I do, then it challenges my beliefs on all kinds of other things, from, you know, fetal personhood to corporate personhood to animal rights, what have you. And look, guys, we’re not there yet. You know, these aren’t, in fact, conscious entities yet.

Maybe now’s the time to have the conversation about this kind of stuff so that we could think about, for example, if we are going to have corporate forms specifically designed for these entities, what should they look like?

If we’re going to have a test that actually says, hey, you know what, you graduated, we actually have to admit that you’ve got enough human-like qualities that morally we have to treat you, if not as a human, then as a person. Well, what would those tests be?

And so I want to sort of preempt that, get ahead of the doomers who are saying they’ll kill us and the optimists who think they’ll bring us into utopia, and say, let’s have a serious moral discussion about what this might do to us as a species. And so that’s what the book’s trying to do. Whether or not it does it, that’s up to your listeners to decide.

Steve Grumbine

00:19:47.135 – 00:20:12.409

Yeah. So this brings something very important to mind, okay? We are in a very, very odd time in US history right now.

Some of the debates that are taking up the airwaves sometimes feel a little crazy that, like, why are we even talking about this? But one of the things that jumps out is the fear of immigrants. Okay. Othering people.

James Boyle

00:20:12.497 – 00:20:12.865

Yes.

Steve Grumbine

00:20:12.945 – 00:23:18.185

And calling them illegals. And anytime I hear someone say illegals, no matter how colloquial it is, it makes me lose my mind. I can’t stand it. No one’s illegal.

Everybody is a human being. But then you start thinking about it. It’s like, well, what are some of the conversations as to why people are afraid of immigrants?

And you’ve got the cartoon flavor that is put on the television during political ads where everybody that’s an immigrant is going to murder your wife when you’re not looking and they’re going to rape your daughters and, you know, all this horrible, let’s just be honest, fascist scapegoating, right? But then you flash forward to the other factor there, and it’s like, there’s a cultural norm.

The more people you allow into the country, or any country for that matter, who are different, who have a different cultural perspective, the more the existing culture is challenged: challenged for hegemony, challenged as the dominant culture, challenged on, you know, what do we value, and so forth. And rightly or wrongly, that is a real debate that is happening.

Now, I happen to come down on that much differently: more open borders, let’s use our nation’s currency-issuing power to make everyone whole. There’s no need to pit people against each other.

But if you flash forward, think about the Supreme Court, even. When Donald Trump appointed a bunch of federal judges, he was able to affect the rulings, the laws of this land, even out of power; the long arm of his appointments and so forth really had an impact. And we just saw Roe overturned. Well, just as easily, and this is in your other wheelhouse, the law, Biden could have reformed the court, he could have stacked the court. He could have altered the outcomes there by filling the court up with more judges that shared his persuasion, whatever that is. As I think about robots and AI, you know, robots don’t just happen autonomously. Robots are created.

And as I think about these different entities, I mean, maybe someday they find a way to self-create, I don’t know, but maybe they do now and I just don’t know it, through some microbiology or micro-technical stuff. But ultimately they have to be created.

So what’s to stop a faction from creating a million robots with sentient abilities and empathy and so forth that have some form of central programming? Should we give them voting rights? Should we give them personhood rights, et cetera? What would prevent them from being stacked like the SCOTUS?

And again, this is all pie in the sky, because right now we’re just talking theory.

But from those conversations you’re advocating for, I think that there’s something to be said there because again, robots don’t happen spontaneously, they’re created. So how would you affect the balance of that?

And you know, just off the top of your head, obviously this is not something that I’ve given a lot of thought. It just came to me right now. But your thoughts?

James Boyle

00:23:19.085 – 00:25:16.935

Well, I think that one of the issues you raise here is that we would have a fundamentally different status vis-à-vis robots, autonomous AI, what have you, than we have in other moral debates. We didn’t create the non-human animals.

You know, we weren’t the ones who made chimpanzees as they are, or the cetaceans [whales] and the great apes in general, and so forth; those non-human animals that have really quite advanced mental capacities. But in this case, we will be, we are, designing artificially created entities. And that brings up a lot of issues. What should be, what are going to be, the limits on that?

There’s the “let’s make sure they don’t destroy us” issue, for one thing. In AI circles, this is the idea of alignment: that we can ensure that the interests of the entity that we create align with our own, and that it is, in that sense, obedient to us. But of course, there are other issues raised here. Say there is a line that we come to decide on. It’s like, okay, this is the line.

If you’re above this line, then we really have to give you at least a limited form of personhood. We’re perfectly capable of designing entities that fall just short of that line or that go over it. So what are the ethics there?

Is it unethical for us to say, okay, well, we don’t want sentient robots because we don’t want to face the moral duties that that might put on us? If you think of the Declaration of Independence, it says all men are endowed by their creator with certain unalienable rights.

They say unalienable, not inalienable. And of course, in this case, we will be their creator.

And so does that mean we say, well, that’s great, because, you know, we created you, so we don’t have to give you any rights? Will we ever be able to recognize real moral autonomy in something that we’re conscious we made a deliberate decision to create?

So I think that those issues definitely are worth thinking about.

Intermission

00:25:18.755 – 00:25:38.975

You are listening to Macro N Cheese, a podcast by Real Progressives. We are a 501(c)(3) nonprofit organization. All donations are tax deductible.

Please consider becoming a monthly donor on Patreon, Substack or our website, realprogressives.org. Now back to the podcast.

James Boyle

00:25:41.635 – 00:27:59.985

I think that our experiences right now show that we can be terrified by others, as you described in the discussion of immigrants, and that we can also feel great empathy towards them.

And I think that right now our attitude towards AI is a little of both. What’s going to happen in the future is fundamentally uncertain because we have no idea how the technology is going to develop.

For example, a lot of it is going to develop in places outside of the United States. Obviously it already is.

And the ideas of the people who are creating artificial entities in another country may be entirely different from ours, and their goals and morals too. Even if we could agree among ourselves, they might not agree with us.

So I do think that there are definitely issues there. And I think they’re ones that are quite profound.

For myself, just thinking about all of these things, for example, thinking about the interests of non human animals, doing the research for this book actually changed my ideas about some of it. I came to believe that we actually do need to give a greater status to the great apes and possibly the cetaceans.

Not full personhood, not, you know, that the chimpanzee can walk into the voting booth and vote tomorrow, but that we need to treat them as something more than merely animals or pets or objects. And actually, that moral case has been very convincingly made. So for me, thinking about this issue that seems to a lot of people sci-fi and unrealistic, the issue of how we will come to deal with AI, as well as how we should come to deal with AI, actually made me reassess some of my moral commitments elsewhere.

And I found it kind of useful because it gave me a vantage point. And so I guess I’m slightly idealistic in that regard.

I do love the tradition of the humanities that asks us to look back at history, to look at literature, to look at speculative fiction, to look at moral philosophy, and then to say, okay, once I take all this on board, are all my views the same? Because if they are, then I’m really not thinking.

I’m just processing everything into whatever coherent ideology I had before these things came on board.

For me, this book was part of that process of exploration, that process of self-criticism, and that process of, I would say, anxiety about where we might be going.

Steve Grumbine

00:28:00.525 – 00:29:32.029

You know, it speaks to another thing. And I want to kind of touch on Star Trek, because I love Star Trek.

I’ve watched it forever, and I’ve seen some of the humanoid androids that are part of this, from Data on through. I mean, there’s always the dilemma. And you can see real, genuine love from the people towards them. They treat them as equals to some degree, right?

There is a sense of equality with Data, for example. But you brought up chimpanzees and, you know, animals of today, right?

And I think of things like killer whales, orcas, and they’re brilliant animals, they’re extremely smart, and yet their livelihood is based on hunting down and killing other animals to eat, to survive. And I mean, we as humans, we hunt, we do all sorts of stuff like that.

Yet on the other hand, if we shoot another human being, we call that murder. Whales eat each other sometimes. I mean, they attack different forms of life.

Like, you know, a sperm whale might get attacked by an orca or, you know, a shark. So obviously we want to protect, I mean, desperately want to protect our ecosystem from a survival standpoint. How do you look at that in terms of, like, organic entities today?

Non-human, organic, biological entities that are created by procreation of their own species. How would you view that? And then I want to pivot to Data.

James Boyle

00:29:32.197 – 00:31:54.135

Okay, excellent.

So, as you know, human beings throughout history have attempted to justify the special moral position that they have, or think they have, above non-human animals and above mere things. And sometimes it’s done through some holy text, pick your favorite holy text: that we’ve been given the earth in dominion.

And sometimes it’s done because we are supposed to have capacities that they lack. So we reroot it in language. Aristotle says language allows us to reason about what’s expedient. I’m hungry. There’s some grapes up there.

There’s a log that I can roll over and stand on. I can reach the grapes. But also to reason about morality. Wait, those are Steve’s grapes. And he’s hungry too.

And if I take them, he’s going to stay hungry. And maybe that’s wrong and I shouldn’t do it. And so Aristotle goes, that’s what puts us above other non human animals.

And I have to say we’ve obviously made some improvements to the philosophy in the 2300 years since he wrote that, but not that many. The basic idea is still there.

And I do think that one reason to believe that there is a difference between us and non human animals is that I don’t think there are any lions having debates about the ethics of eating meat around the Thanksgiving table.

At Thanksgiving, there’ll be lots of vegans and vegetarians sitting down with their families, having to explain yet again that they don’t think that this is right. That’s just not a conversation that one can imagine in the world of non-human animals.

So I think we do have some reason for thinking that we do have capacities that perhaps mark us out as using moral reasoning in a way that is really relatively unique on this planet. The difficulty is, of course, that that’s why we say if you kill another human being, it’s murder.

But if the lion eats an antelope, it’s like, that’s just David Attenborough telling you a story, right? And so we draw that line. And I think it’s a perfectly good one.

But I think that it also shows us that if those same reasoning capacities, in particular the moral one, start being evidenced by artificially created entities, we’re really going to be in a cleft stick.

Because then the very thing that we said entitled us to dominion over all of them, the non human animals, is suddenly something that we share with or potentially share with some other entity. So absolutely, I think those issues are real. But you said you wanted to turn to Data from Star Trek.

Steve Grumbine

00:31:54.255 – 00:33:28.015

Yes. One of the things that I found fascinating, and there are several episodes throughout that kind of touched on this: what happens when a human being’s right to self-defense is passed on to an android, a non-biologically created entity, in this space?

I mean, there were certain fail-safes built in, certain, you know, protocols that were hardwired to prevent XYZ. But what happens to, you know, anyone’s right to self-defense? I think we all at some level recognize that it is a human right. We say human right, right? A human right to self-defense.

If you’re an oppressed community being dominated by a colonizer and you decide that your freedom is worth more than their right to colonize you, you might find the roots of war. You know, people fighting back.

I mean, I imagine slaves, and I never would condemn one of them for going in and strangling their master, so to speak, because no man should be a master of another in that way. And that’s a judgment I believe is fundamental.

And yet at the same time, though, to your point earlier, it’s like, well, hey, we don’t believe in slavery, and yet here we have these autonomous entities that we’re treating as slaves. What happens to their right to self-defense? I mean, is that pushing the boundaries too much? I mean, what are we talking about?

Do they have a right to exist?

James Boyle

00:33:28.375 – 00:35:39.625

Right. You bring us back. You mentioned Star Trek, but it also brings us back to Blade Runner.

As you may remember, in the penultimate scene, Roy goes back to the Tyrell Corporation to meet Mr. Tyrell himself, the person who created the replicants. He says it’s not easy to meet one’s maker. And it’s, of course, a play on words.

You know, are you talking about meeting one’s maker, like meeting God, or dying? Or are you talking about meeting one’s maker, the person who actually created you?

And then Roy passionately kisses and then kills Tyrell in an unforgettable and really kind of searing scene.

And I think that the film really makes you think about whether or not Roy was wrong to do that, or at least maybe both parties were wrong in what they did. I’m kind of still upset about the way that Roy seems to have treated J.F. Sebastian, who was really nice and just made toys. And I don’t think J.F. Sebastian survived that. So I was like, I’m kind of down on Roy for that reason. But it does raise the problem.

I mean, one of the things about human beings is we make moral decisions poorly at the best of times, but very poorly when we have been told we must be afraid. And if someone can tell you you need to be afraid, they’re coming for you, they’re going to kill you, then our moral reasoning all but seizes up.

I mean, it’s interesting. The word robot comes to us from the Czech, from robota [forced labor]. It was coined in the 1920s in a play by Karel Čapek, R.U.R.: Rossum’s Universal Robots. And immediately, as he coins the word that would become the English word robot, he also invents the movement for robot rights, imagining both people passionately protesting against robot rights and the robots attempting to take over and to kill those who oppose them.

So we literally have been thinking about the issue of robot rights and robot self-defense as long as we have been thinking about robots.

You can’t really have one without the other. And I just think the fact that the very word was born in a play that contemplated the possibility of a robot uprising is just quite wonderful.

Steve Grumbine

00:35:40.125 – 00:35:44.173

The movie... what? Space Odyssey 2001.

James Boyle

00:35:44.189 – 00:35:45.093

2001: A Space Odyssey.

Steve Grumbine

00:35:45.189 – 00:35:45.677

Yes.

James Boyle

00:35:45.781 – 00:35:46.813

HAL. The computer.

Steve Grumbine

00:35:46.869 – 00:36:46.959

Yeah, yes, you’ve got HAL. And then take it a step further. I mean, HAL’s made some decisions. I’m going to live, you’re going to die kind of thing. And we’ve got Alien.

I remember the movie Prometheus. It was really, really good. Michael Fassbender was in it.

He was also in Alien: Covenant, where he was an android, and they sent him to do things that didn’t require an oxygen-breathing individual. He could go where they could not. But he had been twisted as well.

He was there basically as, you know, a bit of a villain and also a bit of a hero at times. There are so many people taking a bite of this apple through these kinds of fictional stories.

But they’re really good at creating kind of the conversation, if you’re looking to have the conversation. They create that kind of analytical framework to give you pause to say, Hmm…

James Boyle

00:36:46.967 – 00:37:04.095

I think that’s exactly right. And one of the things I address in the book is some of my colleagues said, you know, look, you’re supposed to be a serious academic.

Why are you writing about science fiction? Why didn’t you just do the moral philosophy? Why didn’t you just come up with the legal test? And so that issue, I think, is a fundamental one.

Steve Grumbine

00:37:04.595 – 00:38:06.973

Back to Mr. Data and Spock, right? So, Spock being an alien and, you know, a non... you know, humanoid. I mean, I don’t even know how you describe Spock, right?

Vulcan, obviously. And then Data, who is definitely an AI, you know, sentient being.

I mean, at some level Data goes way out of his way to say he doesn’t have feelings or oh, I’m not programmed for that.

But I remember all the times of him being on the holodeck, you know, I remember all the different times of him crying and feeling things and trying to figure out what is this thing I’m feeling and so forth. Help me understand the relationship there.

Because these two were both deeply integrated and highly vital to the success of the different starship commanders in, you know, Star Trek and all the other spin-offs. These two are key figures that should cause all of us to ask some pretty important questions. And I think that falls right into the work you’re doing.

James Boyle

00:38:07.119 – 00:43:25.575

Yeah, I mean, Star Trek was great at, and I think very honest about, taking the best of science fiction treatments and basically transferring them onto the Enterprise. So there are themes that many other sci-fi writers explored.

For Star Trek, they were all grist to the writers’ mill, and I think that was acknowledged, and everybody thought that that was a fair deal. And they are there, probing two different versions of the line.

One is that the thing that gives us rights, that makes us special, is the fact that we belong to the human species, that we have human DNA. And Mr. Spock is in fact half Vulcan, half human. And so some of his DNA isn’t human.

And yet obviously, if you said, well, therefore, you know, we could kill him for food or send him to the salt mines to labor for us for free, then most people would find that a morally repulsive conclusion.

So what the writers are doing there is they’re changing one of the variables, the DNA, and making you see that it’s not just DNA that actually animates your moral thinking. And as for Data, well, he’s not even a biologically living being; he’s actually robotic, or silicon-based. And so there it’s showing something else.

Now it’s not the DNA issue. Now it’s like, this isn’t even something that is like you in being an actually biologically based human being.

And again, we wouldn’t say, oh well, that means that frees us of all moral obligations.

Moral philosophers, particularly ones who’ve been thinking about the non human animals, have argued that to say that humans get rights because we’re human is speciesist.

That’s as bad as being a racist or a sexist, saying that I deserve special rights because I’m white or I’m a guy or whatever other spurious tribal affiliation I wanted to base my self-conception on. And they say that saying humans should have rights just because they’re human is just as bad. And instead they argue, no, we need to look at capacities.

It’s not the fact that you and I are having the conversation now and we share a biological kinship that gives us moral status.

It’s the fact that we have a series of abilities from empathy to morality, to language, to intuition, to humor, to the ability to form community and even make jokes. And that those things are the things that entitle us to the moral status.

Maybe because we make free moral choices ourselves, and we are willing to be moral subjects, moral patients as well as moral actors, we should just turn around and give moral recognition to any other entity, regardless of its form, that has those same capacities. And there, I kind of give two cheers for that point of view, right?

So as to the point that if something else turned up that was radically different from us but had all the capacities we think are morally salient: sure, I agree we would be duty-bound to offer moral and perhaps even legal recognition, regardless of whether the person shared our DNA, regardless of whether the entity was in fact biologically based at all. So: cheer, cheer. The downside, the thing that I don’t like, is that it attempts to abstract away from our common humanity.

And I think that the fight during the 20th century for an idea of universal human rights based merely on our common humanity, not our race, not our religion, not our sex, not our gender, just the fact that we’re human. That was a great moral leap forward. That’s one of the great achievements of the human race, in my view.

And the thing that makes me unwilling to say, oh, it’s unimportant whether you’re a human, that’s a morally irrelevant fact, it’s as bad as being a racist, is this: if we root moral recognition solely in mental capacities, what does that say about the person in a coma? What does that say about the anencephalic child? They don’t have any reasoning abilities, at least right now.

Does that mean we have no moral obligations to them? I would frankly think that claim was literally inhuman.

And so I think that we do share a rightful kinship in this idea of human rights for all humans, regardless of race, sex, yada, yada, but also regardless of intellectual capacities. Because the movement for universal human rights was partly the fight against eugenics, and in particular, Nazi eugenics.

And so if, being so influenced by what happened in the animal rights debate, we say, oh, it’s just primitive and irrational to think that there’s anything special about being human and to give rights to you just because you’re a human, then I think I want to get off the bus there.

I want to say, no, I think we should give moral recognition to every member of our species, regardless of their mental capacities.

But I also think that we have a moral duty to take seriously moral claims coming from entities who might in the future be very unlike us, and who we would have to confront and go, wow, you know, you don’t look like me at all. You don’t sound like me at all. But do I nevertheless have a moral duty towards you? And I think that is a conversation that is not just a good thing to have; it’s going to happen. And so I guess I wrote this book to get the conversation started a little early.

Steve Grumbine

00:43:25.895 – 00:43:42.915

One of the things that brought us together was the fact that, you know, we’ve had Cory Doctorow on here before, and there’s a great synergy between both of your worldviews and some of the writings you’ve done. And God, have you ever seen anybody more prolific than Cory in terms of the writing he’s produced?

James Boyle

00:43:43.255 – 00:48:36.545

I mean, I have some doubts about him being an android myself. He is, he’s one of the most wonderful people I’ve ever met.

And if your listeners haven’t gone out and bought his books, they should go out immediately and do so, both his fiction and his nonfiction. He’s also just a really frustratingly nice person, too.

You know, you want someone that prolific to maybe have some other personality flaws, but he’s just a really good guy. So, yeah, Cory. I’ve definitely wondered about Cory.

I mean, possible android, no doubt, but I think, you know, Cory quite rightly would be someone who, looking at this current political environment, would go: what I see is a lot of AI hype. I see people, particularly the people heading companies developing AI, making claims that are absolutely implausible.

I see them doing it largely as an attack on workers because they want to replace those workers with inferior substitutes, whether it’s the scriptwriters or whether it’s the radiologists, and that you should see this as a fundamental existential struggle in which these new technologies are being harnessed and being deployed in ways that will be profoundly bad for working people. And that’s what we ought to be focusing on. And I think Cory’s absolutely right. I think that that is an incredibly deep concern.

I do agree with him about the hype.

So for him, I think this book is kind of like, well, Jamie, why are you doing all this philosophizing about the idea that these entities might one day deserve rights? Shouldn’t you be focusing on, you know, the far more real struggles that actual human beings are having right now? And my answer to that is I don’t find that I need to pick one or the other.

And in fact, I personally, I don’t know about you, but I find that when one gets morally engaged, or engaged in serious moral reflection, it’s not like you have limited bandwidth, so that you’re like, okay, wow, now I’m sympathizing with non-human animals. Sorry, granny, there’s no more disc space for you.

I’m going to have to stop thinking about you. I don’t find that that’s the way my brain works. So I think all the concerns that Cory raises are actually very real ones.

I just think that they’re not the only concerns.

And I think that we very much should worry and agitate to make sure that whatever form this technology takes, it’s one that isn’t driven solely by the desire to disempower working people. And there’s lots of precedent for that. I was talking to someone recently who’s doing a study of the history of capitalism.

And one of the things that he was talking about was the Arkwright water frame, which was a way of spinning thread that was being developed at the same time as the spinning jenny.

And the thing is that the spinning jennies and the other technologies, which were superior, were still hefty enough that they really needed a lot of upper body strength, and thus tended to need a male workforce at the time. But the Arkwright water frame could be worked by women and even children. It produced crappier thread, a worse thread count.

You wouldn’t see it on Wirecutter as your recommendation for sheets, but it had a great advantage in that this wasn’t a group of people who were unionized; this was not a group of people who were organized. And so the thinking was: if we can manage to use the technology to push production into a less politically organized group, then hell yeah, let’s do that.

And so I think that that’s the kind of concern that Cory has, and to be honest, it’s one that I share. That’s just not what this book is about. This book is about the moral issues. And I personally don’t think that one has to choose between thinking about those two things.

And I guess I’ve been around long enough that I’ve seen confident claims that this is a distraction from the real fight turn out not to age very well.

I remember during the Campaign for Nuclear Disarmament (I marched in lots of rallies in Britain campaigning for nuclear disarmament), people who talked about climate change, which back then we called global warming, would be shouted down. It’s like, that’s ludicrous. Why are you worrying about the weather? You know, we could all be dying in a nuclear war.

Or when people started talking about the rights of non-human animals: my God, why are you talking about dogs and cats when there are people, etc., etc.? You know the argument, right? It’s a familiar one.

You may not be worried about this thing because there are more important things to be worried about.

And I just personally have seen people be wrong often enough that I now have a little bit less confidence in my ability to pick winners, in terms of what’s going to look, in the future, like it was a really good allocation of my moral energies.

Steve Grumbine

00:48:37.365 – 00:50:00.705

Let me just tack this on. During this past election, I mean, for the last year, our organization fought to bring about awareness of the slaughter of the Palestinian people.

Watching estimates of 40,000 children killed in Palestine and Gaza and, you know, bringing that up. And I’m not going to mention the name, because I’d probably get in trouble at Thanksgiving, but individuals that are Democrats said:

Why are you worried about that? Our family’s here in this country, what are you worried about that for? I need to know that I’m not going to lose more rights for my daughter. So I don’t really care.

And hearing that was almost like nails down a chalkboard to me. That line of reasoning did not work for me at all.

And I’m hearing you as you’re going through this. And I’m saying to myself, you know what? There is room to walk and chew gum. There is room to do that. So I really value that.

Listen, folks, the book we’re talking about is The Line: Artificial Intelligence and the Future of Personhood with my guest, James Boyle. James, I’d like to thank you first of all for joining me today.

I know we’re running up against time, but I’m going to give you one last offer to make your final point. Is there anything that we didn’t cover today that you would like our listeners to take out of this, aside from buying your wonderful book?

James Boyle

00:50:00.825 – 00:50:57.455

Well, funnily enough, I’m going to argue against interest and tell your listeners, who may have much better things to do with their money, that if they don’t want to buy the book and they’re willing to read it electronically, they don’t have to pay: I wanted this book to be under a Creative Commons license, to be open access, so that anyone in the world would be able to download it and read it for free. Because I think the moral warrant for access to knowledge is not a wallet, it’s a pulse.

And that’s something that’s guided me all through my time as being a scholar. It may seem sort of an abstract or pompous way of putting it, but I believe it very deeply.

And so basically, everything I write, everything I create, whether it’s books like this or comic books about the history of musical borrowing, they’re all under Creative Commons licenses and people can download them for free. So if you can buy it, please do so.

MIT was kind enough to let me use a Creative Commons license, but if you don’t have the dough, then just download it and the book’s on me. I hope you enjoy it.

Steve Grumbine

00:50:58.035 – 00:51:09.967

Fantastic. Thank you so much, James. And it was nice to get to know you prior to the interview here a little bit, and I hope we can have you back on.

There’s so much more I know that you bring to the table that I think our listeners would really enjoy.

James Boyle

00:51:10.071 – 00:51:13.919

Well, thank you, Steve. I very much enjoyed chatting to you, and I hope you have a wonderful day.

Steve Grumbine

00:51:14.087 – 00:52:39.625

You as well. All right, folks, my name is Steve Grumbine with my guest, James Boyle.

Macro N Cheese is a podcast that’s part of the Real Progressives nonprofit organization. We are a 501(c)(3). We survive on your donations. Not the friend next to you, not the other guy: your donations.

So please don’t get hit with bystander syndrome. We need your help. And as we’re closing out the year of 2024, just know that all your donations are indeed tax deductible.

Also remember, Tuesday nights at 8:00 PM Eastern Standard Time we have Macro N Chill, where each week we do a small video presentation of the actual podcast.

And you can come meet with us, build community, talk about the subject matter, raise your concerns, talk about what you feel is important, and we look forward to having you join us for that. And again, on behalf of my guest, James Boyle, and Macro N Cheese, my name’s Steve Grumbine, and we are out of here.

 

Extras links are included in the transcript.
