
Paul, Weiss Waking Up With AI

Hot Week in AI Safety Developments

This week on “Paul, Weiss Waking Up With AI,” Katherine Forrest discusses the latest rapid developments in AI, including Anthropic’s landmark copyright settlement, emerging threats from AI-enabled cybercrime and new insights into model alignment and safety from joint research by Anthropic and OpenAI.


Episode Transcript

Katherine Forrest: Well hello, everyone, and welcome to today’s episode of “Paul, Weiss Waking Up With AI.” I’m Katherine Forrest, and I am flying solo today. Anna will be back next week, and we’re always so excited to have Anna because she just makes things more interesting. But anyway, you can’t tell from just listening, but I am recording this episode from Maine. And as you’ve heard me say for the last several episodes, I’ve been spending the summer up in Maine at my place and commuting back and forth to New York. But this is the last time I’ll be recording from Maine for the summer because I’m heading back south. And so that’s the news on my end.

Now let’s turn to AI, which is really where we want to be spending our time, not with my, you know, sadness and despair about leaving Maine. But in any event, I wanted to talk today about something different from what Anna and I were going to talk about. We were going to talk about AI workflows, which is a scintillating topic. It’s actually a very important topic, but it’s one that she knows an awful lot about, and so I really want her to be part of that. So we’ll do that together when she gets back. Instead, I want to talk about a couple of really important developments that demonstrate the velocity of change happening in the AI area. I’ll start with some Anthropic issues, and then I’ll turn to a joint OpenAI and Anthropic issue. Now, by the way, the fact that these issues relate to Anthropic is just the timing of things. The first issue we’re going to talk about may or may not apply to other model developers—that’s all up to them—but the other issues we’re going to discuss are issues for model developers far and wide.

So the first issue that we’re going to discuss—and that was a lot of intro to the first issue—is that Anthropic settled a huge case that was brought against them for copyright infringement by a group of book authors. The authors had brought this case in the Northern District of California. And as a refresher for you folks, there are copyright infringement cases pending against many model developers right now, and for case-management purposes they have largely been consolidated in the Northern District of California and the Southern District of New York. So Anthropic is by no means alone in being the subject of a suit, but they just settled one. And that’s an important development because it’s really the first settlement of this kind. It’s called the Bartz suit, B-A-R-T-Z. There are a couple of really interesting things about it, but as I describe it, you’ll realize that this case doesn’t necessarily set the precedent for other cases.

So a couple of weeks ago, to set the stage, Judge Alsup, A-L-S-U-P, the judge presiding over that case, gave Anthropic a mixed ruling that was in part very favorable to Anthropic. He held that works that had been copied and were being used for AI training fulfilled a transformative purpose, and that this use could be considered fair use. But he also held that works that had been copied but had not been used for AI training, and that were being kept in a digital repository, were not subject to the fair use defense.