Podcasts
Paul, Weiss Waking Up With AI
Superintelligence Status Report
In this episode, Katherine Forrest and Scott Caravello examine OpenAI's latest paper suggesting proactive policy measures to help society navigate the economic and social changes that advanced AI may bring. They unpack the paper's key recommendations and consider broader industry perspectives on managing the transition ahead.
For the sources referenced in this episode, please see the links below:
OpenAI: Industrial Policy for the Intelligence Age: Ideas to Keep People First
Dario Amodei: The Adolescence of Technology
Episode Transcript
Katherine Forrest: Hello everyone and welcome to today's edition of Paul Weiss Waking Up With AI. I'm Katherine Forrest.
Scott Caravello: And I'm Scott Caravello.
Katherine Forrest: Scott, I really feel like we're doing a sort of interruption of our regularly scheduled program to take on a topic that just came out today. It's a hot topic.
Scott Caravello: It sure is. And I know you're excited about it, so excited, clearly, that you're willing to skip our usual back-and-forth conversation about wherever I am in the world, who's getting married, will the bride make it on time and what's hanging on the wall behind one of us—all of it! And I am in New York. So, I'm just putting that out, laying that down for the listeners.
Katherine Forrest: All I can say is I know from a number of our listeners that they feel like your life has become a centerpiece of what brings them to this podcast. What do you have to say, Scott? Give me something. What do you got?
Scott Caravello: Oh my gosh, you're really, really putting me on the spot. I was sworn into the bar in SDNY this morning. So, that's really the only exciting thing that I've got going on for me today. I guess I am a little confused, Katherine, why you didn't want to take, you know, 90 minutes out of your day to come down and sponsor me, say a couple of nice words about how great it is working together.
Katherine Forrest: You didn't even ask me.
Scott Caravello: Oh, yeah, that’s true.
Katherine Forrest: That's true. And, let me also just say that I am appalled, a little bit, that you waited so long… like, it's not like you're some child.
Scott Caravello: Uh… yeah, no comment on that one.
Katherine Forrest: All right, all right, but we're going to come back to all that. And it does remind me, when you talk about what's hanging on the wall behind us, that we are exploring a possibility some of our audience members suggested, which is that we go to a combined audio-video format and actually let people see what you and I see, which is this video screen where I can see you and you can see me, for better or worse. And then people can have eyes on the backgrounds. They can see the WB Post, you know, hanging on the wall behind me.
Scott Caravello: But also our serious faces as we talk about serious topics. And by the way, if we're going to be on camera, I do think this means that maybe the firm should start providing a separate wardrobe stipend. At least for me, at least for me. You've got the cool, I don't know if that's like a Bavarian jacket going on, that's very, very cool. But I need to be putting my best foot forward as we, you know, bravely embrace a potential combined video-audio format.
Katherine Forrest: Okay, one shirt.
Scott Caravello: One shirt… one shirt?
Katherine Forrest: That's all you get, one shirt.
Scott Caravello: One shirt… every week.
Katherine Forrest: I don't think—you don't need anything more than one shirt because nobody can see below the shirt, okay? One shirt, one shirt. We'll get you one. It'll have some swag on it. It'll say, “Paul Weiss Waking Up With AI.”
Scott Caravello: Oh my gosh.
Katherine Forrest: But I do take all of this very seriously, which is why now we're going to go back to our interruption of our regularly scheduled programming, which was going to be JEPA, the new architecture. We were going to talk about that, but we're not talking about that today because we're interrupting it to talk about a release that OpenAI just made that I read about in the Wall Street Journal and I read about it this morning on the subway. I'm like one of the few people on the subway who actually has a newspaper, and people look horrified by the fact that I actually carry a paper, a real newspaper in the subway, and God forbid I have to actually turn the page because you would think that I was taking up more space than anybody, like, has ever taken up on a subway before. People look at me like I'm doing something really strange and I'm just turning the page of the paper! But anyway, so I'm reading the Wall Street Journal today, minding my own business on the subway, and I read about an OpenAI release of a piece that's called, “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” And it is that that I want to talk about. I ran into the office, I printed it out, and I've been waving it around ever since.
Scott Caravello: And for our listeners who want to get a copy, it is both on the open web, but it's also going to be linked in the description of the podcast. And, so, let's go ahead and start talking about it right now.
Katherine Forrest: Right, so I want to first give an overall headline, which is why this is such a big deal and why I've been waving this thing around, because this OpenAI paper is about preparing our society for the coming of superintelligence. It's a really big deal.
Scott Caravello: You know, I think that's your book there, isn't it? Of Another Mind?
Katherine Forrest: Oh, yeah, acting like you don't know… acting like I don't talk about it all the time. Exactly, Scott! And, you want to talk about how it's open for pre-order on Amazon?
Scott Caravello: I sure would love to.
Katherine Forrest: Have you ordered a copy yet?
Scott Caravello: Um, well, I was actually going to see if maybe there was, uh, you know…
Katherine Forrest: No, you haven't. No, you haven't. You haven't. Okay, forget it. All right. Okay. Anyway, it's a topic that I am intensely interested in. It is in fact the topic of my book, Of Another Mind: superintelligence. And this paper that OpenAI has put out is really about looking at the fact that high-capability models are now reaching a point where the paper is encouraging us, as a society, to take preparations seriously in this incredibly important moment, to prepare our jobs, our government, our very humanness, for this transformation that we're about to experience.
Scott Caravello: Yeah, it really is that significant. We are essentially having a major model developer tell us explicitly that superintelligence is not that far away and that it's time to start getting ready.
Katherine Forrest: Right, so let's talk about what they both predict and recommend.
Scott Caravello: Let's do it.
Katherine Forrest: So, the paper starts out talking about how AI has progressed rapidly and that it's heading towards superintelligence. So, this is really very significant because we've had a number of academic papers in the past that have mentioned this, that have talked about the capabilities that are heading towards AGI, but here we have OpenAI talking about the time being now, right now, to begin our preparations, or undertake preparations, in a multifaceted way because superintelligence is soon to be upon us. And they have a line that I personally agree with, and that I say to audiences fairly frequently, which is no one really knows what this superintelligence is going to look like. But it will be systems that outperform the smartest humans. But they add something to it. They say it'll outperform the smartest humans even when humans are working with AI.
Scott Caravello: Yeah, and you know, that's a bit of a new flavor on the definition, where superintelligence equals AI that exceeds humans even when those humans are working with AI.
Katherine Forrest: Right, so what we have is superintelligence actually being something that is more than just human intelligence. It's more than human intelligence plus AI intelligence. And that's something really new. Five years ago, we would have called AI that met or exceeded human intelligence AGI. Now we think of AGI as needing to be at, or above, the level of a PhD-level human. So the definition of superintelligence is a moving target: as we push the bar for AGI higher, the definition of superintelligence moves with it.
Scott Caravello: Yeah, and with that definition, OpenAI states that it's important to prepare so that we can navigate this transition through a democratic process, one that gives people the power to shape the AI future that they want.
Katherine Forrest: Right, right. And there's a big focus right up front in the paper on the uncertainty around superintelligence and the need for humans to prepare for really a range of outcomes. The paper discusses some of the potential issues that superintelligence might bring, including some of its promises: lowering the cost of essential goods by creating those goods more efficiently and therefore more cheaply, opening up new forms of work and speeding up medical and scientific breakthroughs.
Scott Caravello: So, huge potential. And they also mention that AI may be able to take over some of the most boring work that humans do, reducing the time that some projects take from months to minutes. And that could reshape how organizations are run and could have big impacts on the workforce.
Katherine Forrest: The workforce impacts are spread throughout the entire paper. So on page two, OpenAI states that we should be “clear-eyed,” a phrase they use a couple of times, about the risks and challenges that superintelligence will bring with regard to jobs, and that entire industries might be disrupted.
Scott Caravello: Though elsewhere, OpenAI does mention that in past technological transformations, humans have been able to weather the storm. But I do think that needs to be qualified by what, you know, the more pessimistic folks would say about this transition, which is that this time is different: the transformation will have such an impact on such a wide scale that there just won't be the same kind of opportunity for individuals to retool into new jobs. To take one historical example, the Luddites, those English textile workers who were put out of work in the Industrial Revolution, many of them fell on hard times for much of their lives when technology overtook them, while others, though I think mostly outside their industry, seized the new opportunities that were created. But the pessimists, and I keep calling them that even though I don't mean to imply they're unduly pessimistic, would say that that sort of compensation effect, those new roles, just isn't going to come around this time. And, so, I just want to level set about that. And it's not like OpenAI is saying they disagree that this time will be different. In fact, I think that position underpins the document in the first place, because they're expressing the view that our society will need new systems and new solutions to successfully weather such a disruptive transformation, which, again, I think we're going to get into more as we go along in this episode.
Katherine Forrest: You know, that's absolutely right. And another risk they highlight is that there are bad actors out there who can misuse this technology, the same technology that can be used for such good. Bad actors can misuse superintelligent AI, and there can be misaligned systems that evade human control. Governments and institutions might deploy superintelligent AI in ways that undermine democratic values. And wealth could become even more concentrated. So I really want to applaud OpenAI for putting out what is truly a balanced paper, a paper that lays out the many advantages of superintelligence but does not try to shy away from some of the risks.
Scott Caravello: And, you know, all of it does need to be said, which they have, and you know, your book says all that, and you know, I'll actually just take this moment to clarify, Katherine, that even though I haven't pre-ordered the book, I do have the file of the pre-print that I have read through. So, I'm actually ahead of the game.
Katherine Forrest: You haven't read the whole thing.
Scott Caravello: I have!
Katherine Forrest: You have?
Scott Caravello: I have.
Katherine Forrest: Oh. All right, okay. Well, then you get a pass. You don't have to buy the book. You don't have to buy the book if you've read the book.
Scott Caravello: Well, I want to get it signed. So, I do have to buy it. I do have to buy the book. Okay, anyway, sorry.
Katherine Forrest: All right, clarifying that. Yeah, he was hoping to get something more than just one shirt, that's all! Okay, in this paper that OpenAI has put out, they also have a series of really interesting recommendations, and, you know, I just imagine the team of people they probably had working on this. These recommendations try to come up with a sort of counterbalance to some of the risks that they highlight. And, again, I applaud them for the completeness of this. They discuss finding ways, as a society, to share the prosperity that superintelligence might bring more broadly, by, for instance, lowering the costs of education and health care.
Scott Caravello: And by, you know, mitigating AI risks through the building of new institutions, technical safeguards, and governance frameworks.
Katherine Forrest: Right, and they put forward a case for what they call a new industrial policy and that's really what the title of the paper is all about.
Scott Caravello: Right, exactly. And, also, this gets into what I was previewing: the fact that they discuss the need for some proactive political choices by people, similar to what was done in the New Deal or in the Progressive Era, and that history has shown that democratic institutions can respond to transformative change. But OpenAI says that we should not wait until tomorrow to start discussing the policies we'll need. We need to start discussing them now, today.
Katherine Forrest: Right, and, so, in this paper they have a list of suggestions that's really pretty ambitious. And they talk about the government using, for instance, its toolbox to align public and private interests in terms of what superintelligence can do.
Scott Caravello: And that part I'm not totally sure what they're getting at.
Katherine Forrest: Okay, all right, I agree. I'm not sure what that means either, what it means to have the toolbox align public and private interests. But anyway, it sounds good. And, so, they also talk about funding research, making AI accessible to everyone, at least to a certain extent, creating market-shaping tools and adopting targeted regulation.
Scott Caravello: And they also talk about building an open economy and put forward a number of suggestions for financial support.
Katherine Forrest: Right, because as the paper also acknowledges, “without thoughtful policies, AI could widen inequality by compounding the advantages for those already positioned to capture the upside.”
Scott Caravello: And, as part of the antidote for this, they suggest giving workers a voice in the AI transition to make work better and safer by eliminating unsafe and repetitive jobs, but also allowing for more AI entrepreneurs, modernizing the tax base, and creating a public wealth fund. And, so, then I actually want to quickly plug some of Dario Amodei's writings, specifically his January essay, The Adolescence of Technology. Because he and Anthropic have been out in front of these issues too, if we can sort of move away from the OpenAI perspective for a minute. There is some overlap, like with the suggestion that Dario has that a macroeconomic problem this large will require government intervention, including through changes to tax policy. But there was also an interesting point that I wanted to flag, which is the recommendation that companies think about how to take care of their employees. If businesses are seeing meaningful increases in value because of the effects of AI, maybe it would “be feasible,” in his view, “to pay human employees…even [long] after they are no longer providing economic value in the traditional sense.” And there's not a lot of detail on that. He goes on to say that Anthropic is considering a range of possible pathways for its own employees and that they'll share those pathways in the future. But it's creative thinking. And, you know, I think creative thinking is going to be warranted here.
Katherine Forrest: And, building off of that, and going back to what you were talking about earlier, Scott, with how this transformation might be different, these are really interesting solutions that are adding to the debate. A lot of what's been out there about managing the impact of AI disruption on the workforce has focused on the supply side: retraining workers and making sure folks can be competitive in an AI-driven economy. But that only gets you so far if there's a wholesale transformation of society and the demand for employees, even retrained employees, just isn't there. So there's a lot to think about. And, separately, one of the things I find interesting are the statements about human-centered work that will still be around. OpenAI mentions, as some non-exclusive ‘for instances,’ child care, elder care and community services, you know, human-centered occupations.
Scott Caravello: Exactly. So, then the last thing I want to mention in the piece is in section two of the paper and it's about building a “resilient society.” OpenAI says again, we should be clear-eyed that the new risks of AI “won't be isolated or suitable for addressing one at a time” because “AI will reshape how work is performed, how decisions are made, how organizations operate and how states interact.”
Katherine Forrest: Right, and they say that while some developers have spent a lot of time on upstream or technical safeguards for their models, these social disruptions are coming. And we, humans, play a big role in figuring out how we, as a society, are going to adjust.
Scott Caravello: Yeah, so I really recommend folks read this paper and undoubtedly we're going to see a lot of commentary about it.
Katherine Forrest: Absolutely. So, I know I've actually brought a lot of energy to this podcast. This paper, I really find to be a moment for us all to just pause and think about what it means to have all of this put out there for us, to begin a national dialogue, one which I'm really in favor of. But that's all we've got time for today. I'm Katherine Forrest.
Scott Caravello: And I'm Scott Caravello. Don't forget to like and subscribe.