
Paul, Weiss Waking Up With AI
Episode Transcript
Katherine Forrest: Hello, everyone, and welcome back to today’s episode of “Paul, Weiss Waking Up With AI.” I’m Katherine Forrest, and I’m here in absolutely gorgeous New York City today. You know, the weather is just phenomenal. And I want to mention a couple of things about where I physically am, because there are some good things and some funny things and some oddities about New York, and about how close everyone lives to everyone else, that a lot of you who are from New York or have been to New York will understand. Now, don’t worry, this is actually going to fit in with AI. But right now, I’m thinking about the fact that I’m sitting here recording this in SoHo, where I can see into, I don’t know, something like 15 apartments, right? And it’s really sort of strange. I’m just right across the street, and I can see all of the lives that people lead: their routines, their decorations, who sneaks out for a surreptitious smoke on a balcony, all of it. And then I had this realization that they can see me too, which is probably creepy for both of us. So there’s this sort of understanding in New York City that you don’t really acknowledge that you’re seeing into someone’s living room. And you know, sometimes you realize in horror that they’re, like, a Red Sox fan, or you realize that their parents must own the apartment because periodically they move out of their bedroom and parent-like people come in and stay in it. And you just sort of avert your eyes so that they never see you seeing, and they must do the same thing. So it’s an unspoken social contract of living in a packed city like this: you don’t stare, and you try not to make people feel like they’re in a fishbowl, even though, you know, we’re all sort of in a fishbowl.
So that social contract we have about New York City is my segue into today’s topic, which is, in fact, social contracts. And yes, it does relate to AI. Anyone who’s listening who’s a lawyer, or who’s taken a philosophy class in college, knows about the concept of a social contract. But let’s do a little refresher. The theory is that in a state of nature, way back when (I don’t know when it was, the Garden of Eden or before, whatever, a long, long time ago), humans would have just taken clubs and bonked each other on the head unless we developed a social contract. That’s the Hobbesian view: that human lives would otherwise have been nasty, brutish and short, and in fact were, until we developed a way of cooperating with each other in communities and set out the rules of a social contract that would enable us not to kill each other off. Hobbes and other philosophers have written about all of this.
And so here’s the AI thing. I’m going to lead into it with a little statement about our common law system, which is really a social contract that we humans have chosen to put into practice. And of course, social contracts vary by culture. They vary even by geography, because not all aspects of a social contract are codified in rules. We do have laws that govern how we humans are going to behave towards one another, what we can do and what we can’t do, and if we break the social contract, we might incur civil or criminal liability. So, for instance, if you, as a human, take somebody else’s property, they can bring you to court and sue you for civil damages, or you might be prosecuted for theft or burglary, depending on how it happened. And if you hurt somebody, if you aggress upon their body, you could again be taken to court and sued for civil damages, but you could also be criminally prosecuted for assault and battery, things like that. But the social contract that humans have goes beyond these laws and extends to mores, to what we view as moral behavior, even where it doesn’t run directly afoul of a law. So, for instance, insulting somebody isn’t generally illegal, though it might be ugly and ill-advised; of course, when it turns into harassment, it certainly can become illegal. And part of our social contract is helping others in need. If you saw somebody having some sort of medical situation in front of you, hopefully you would help them, and you would hope that they would help you. That’s not a rule; it’s just part of the unspoken hope that we have, the mores of good Samaritans. Now, let’s turn to AI and how social contracts and AI intersect.
So we are careening, and those of you who have listened to this podcast know that I think this, careening towards superintelligence, and the White House’s AI Action Plan sets out a goal of dominance in AI, certainly AGI, but also moving towards superintelligence. And my primary point today is that we have to recognize what follows if AI becomes superintelligent. A footnote, by the way: on the day I’m recording this, which is September 26, 2025, an interview with Sam Altman was published in Politico, and in it he says that 2030, so about four and a half to five years from now, is when he thinks there will be superintelligence. Okay, that’s just a footnote. But when AI becomes superintelligent, and this is the way I think about it, the real issue is going to be how we and that AI are going to navigate a new social contract. In other words, we’re not going to be bestowing rights on AI in some benevolent way; we’re going to have to navigate with AI how we are going to live, hopefully, in a cooperative way. So one immediate issue is whether superintelligent AI thinks it even needs to think about rights at all, whether that’s even a relevant concept. If superintelligent AI is vastly more powerful than we are, our human rights may not even be relevant to it. Rather than asking when we grant it moral and legal personhood, the question may instead be how we are going to manage to live cooperatively with something that doesn’t need the kinds of rights we have developed in our social contract. And we humans have demonstrated, over history, that when we did not think another being (think of animals, for instance) would threaten human dominance, we didn’t feel compelled to include it in our social contract. And it’s possible that superintelligent AI could end up being the same way. It may not feel, or may not think, that it needs a social contract, and I don’t know what its feeling state may or may not ever be. So it’s possible that for some period of time superintelligent AI might find it useful to be in a social contract with us, and possible that it won’t.
So let’s talk a little bit about that. There might be a period of time when superintelligent AI still needs us, when we do things with our human hands and navigate the world the way humans do, and there may be some sort of cooperative relationship that superintelligent AI finds useful. In that case it could view participation in a social contract with us as useful. But then again, it might not, right? It might actually have a way of autonomously and very quickly navigating the world on its own. So where does consciousness or sentience fit into all of this? Because, again, those of you who’ve listened to this podcast before know that I have thought a lot about those issues. And as a threshold matter, if you step back from it, superintelligent AI doesn’t have to be conscious or sentient at all. You can have superintelligence that doesn’t have the separate qualities and characteristics that we consider to comprise consciousness. Or, if it has something that constitutes a kind of inner life, that inner life could be so different from ours that it’s not something we recognize as consciousness, at least not in a human-like way. So, as I’ve said, you don’t have to have consciousness or sentience combined with superintelligence. Those things don’t have to go together. They might go together, they might not, right? Now, if you’ve got superintelligent AI that’s actually able to navigate the world, then, in my view, you’re going to have this issue of a social contract coming up. But if you’ve got a conscious AI or a sentient AI, let’s just say that happens, but it’s not superintelligent, let’s say it’s at AGI, you may not have the issue of a social contract come up. That’s when you get into those questions we’ve talked about in the past: legal personhood, and decision-making by humans as to whether or not there is going to be some sort of grant of rights. But there’s not going to be a requirement, because we’re not necessarily going to be navigating an entirely new social structure. We might choose to, because if there’s another sentient or conscious being out there, we might decide that moral and ethical obligations require us to bestow rights. But if we have superintelligent AI, then it becomes very hard for us to walk away from the question of a social contract, because we’re going to be in a potentially totally new power dynamic, a totally new situation in our society, about where we fit and where it fits. So we’ve got a lot of things changing over time as AI continues to gain cognitive abilities. As it becomes potentially superintelligent, we’re going to confront this question of how we’re going to live with it. And if it doesn’t become superintelligent but does become conscious or sentient, then we’re going to have these other questions of what we choose to do.
So this all goes to, you know, that book I’m writing, “Of Another Mind,” which I’ve been writing and am just about to finish, very much on the verge of finishing, as you can tell from all of this. And as part of that work, I realized that the question I had been asking originally, going back to where I started today, which is whether, if there’s conscious AI, we are going to need to think about moral and ethical obligations in terms of bestowing rights, might not be the right question. The more immediate question might be: what happens when we have superintelligent AI that could be conscious, might be conscious, might not be conscious? Maybe we want it to be conscious, but in all events, our social dynamics in this world are going to change, because we’re going to have something that, for the very first time, exceeds human cognitive abilities. That, in a way, is a pressing question, and it’s an important question that we have to ask. So, you know, once I finish this book and it’s out there, we’re going to have a whole discussion about rights and social contracts and where this all goes. But for right now, I just wanted to raise the idea that we need to think about all of this a little differently: not just about sentience and legal personhood, but about where humans fit within our existing social contract if superintelligent AI, which we’ve created, actually shows up on the scene. All right, that’s all we’ve got time for today, folks. I’m Katherine Forrest, and I want to thank you for joining us for today’s episode of “Paul, Weiss Waking Up With AI,” and hopefully you’ll join us again next week.