
Paul, Weiss Waking Up With AI
Legislative Developments: Spotlight on California’s SB 53
This week on “Paul, Weiss Waking Up With AI,” Katherine Forrest and Scott Caravello dissect major AI legislative developments related to California’s SB 53, as well as New York’s RAISE Act and a newly introduced federal AI bill.
Episode Transcript
Katherine Forrest: Hey, hello everyone, and welcome back to today’s episode of “Paul, Weiss Waking Up With AI.” I’m Katherine Forrest, and I am very excited about the fact that today I have a special guest. I have Scott Caravello. Scott, say hello.
Scott Caravello: Hey, Katherine, how are you? And hello to everyone listening in. It’s great to join you all today.
Katherine Forrest: All right, so Scott is part of the Artificial Intelligence Group here at Paul, Weiss that I run, and so Scott is going to be chit-chatting with us today. And maybe on a few other days, Scott—only if the tryout works out, you know, otherwise forget it. You're, like, off with your head—I'm making that right-across-the-neck sort of signal.
Scott Caravello: No, no pressure, no pressure.
Katherine Forrest: No pressure. All right. So, but we have to do a few initial questions because—and if they don’t come out, by the way, if they don’t come out right, then it’s like, it’s done even before we start. But like, what’s your favorite baseball team?
Scott Caravello: So it’s a tough time of year. My Mets have just completed a spectacular collapse, one for the ages. So all I can really say is that, even though the season’s ongoing, only 123 days until pitchers and catchers report to spring training, and we do it all over again.
Katherine Forrest: Okay, that's great. The only problem you were going to have is if you were a Jays fan, because then that was going to be a big issue—my team, the Yankees, you know, they had a sad occurrence. And so, you know, if you were a Jays fan, that was going to be it, it was going to be the end. It'd be a very short podcast, you know, less than a dog-walk length, as I used to call it. But anyway, tell our listeners how you got into AI law. How does somebody like you get into AI law?
Scott Caravello: Yeah, absolutely. So, to be honest, it was totally by chance. I happened to work on a matter considering the overlay of lending laws onto AI, and I saw, you know, just how fascinating the overlay of legacy statutes onto emerging technology could be. So I was just all in after that, and I think that the last few years have really borne out what a great decision that was.
Katherine Forrest: All right. So Scott, how much technology do you have to know to do this AI law work, from your perspective? I know from my perspective what I feel is adequate and what I need to know, which is to sort of stay up on basically all of the tech developments as they're happening. But how much do you think you have to know?
Scott Caravello: I think that’s exactly right. It’s a matter of keeping on top of where the technology is headed, having a deep familiarity with the fundamental principles of AI—not just where AI is today, but where AI is heading in the next six months to a year, to the extent we can say that with certainty. And that sort of allows us to keep our practice dynamic and serve our clients whatever needs pop up.
Katherine Forrest: Yeah, yeah, yeah, no, I totally agree. Although, if you can predict what's going to be happening in AI over the next year, you'll make a lot of money. Of course, you know, you have to clear all trades with our risk group. You can't just trade—you know, you can't just buy stock at a law firm. Okay, so let's get into our topic today, which is, you know, we're going to cover a couple of legislative developments. And why don't you introduce them both to us, and then we'll start down that road.
Scott Caravello: Absolutely. So, chief among them is last week's enactment of California's SB 53, which is really a first-of-its-kind law in the United States, and it focuses on frontier AI development. And then, even more recently than that, a new federal AI bill was introduced in the Senate, sponsored by Senators Durbin and Hawley, which would set a framework that treats AI systems as products and subjects them to a product liability regime, which is a really fascinating development. And I'm sure we could do a very, very deep dive into how product liability may overlay on AI. But I'll take a step back and get into the California law.
Katherine Forrest: All right. So let's just take a minute first and describe a couple of the events that led us here. And, you know, we covered the Big Beautiful Bill a while back. That was the bill that originally included a moratorium on various kinds of state AI legislation—it was going to last for 10 years, I think it was, then got cut down to five years, and then finally got eliminated altogether. And after that, or while that was sort of all in play, SB 53 started to wind its way through the California process, and then it gained traction, and now it's actually been signed into law, right? So I think the fact that it gained traction on the heels of the moratorium's defeat isn't a huge surprise. I think that we expected to see some state laws actually come into play as a result of that, although we'll talk about the AI Action Plan that came out in July in just a minute.
Scott Caravello: Yeah, absolutely. I completely agree. The state landscape was complicated by the release of the Action Plan, which discouraged states from enacting laws that the federal government might view as interfering with innovation. But perhaps outside of Colorado, which has delayed implementation of its own signature law, we really haven't seen a major chilling effect on state activity.
Katherine Forrest: Yeah, it's interesting. I was actually thinking that there would be more of a chilling effect as a result of the AI Action Plan in July, because the federal government tied federal funding to whether or not tech companies found certain state laws to be chilling innovation, and sort of suggested that, you know, innovators could review state laws. And if they felt like there was too much inhibition on their development efforts by a particular state regulation, they could bring it to someone's attention, although the process for that wasn't clear. So SB 53 sort of gained momentum after that, but it was really an outgrowth of SB 1047—which we talked about in prior episodes—a bill that Governor Newsom vetoed back in 2024. And SB 1047, which obviously never became law, would have imposed really far-reaching requirements on the most advanced AI models, the ones that we call frontier models. Among other things, it would have required an off switch. But the governor vetoed it after it had gone through the full California legislative process. So SB 53 really came out of that.
Scott Caravello: Yeah, that is exactly right, Katherine. So SB 53 removes some of the requirements that were viewed as more burdensome by industry and by commentators, particularly that kill switch that you mentioned a bit ago. Others, for example, were pre-launch testing and third-party audits, which aren't featured in SB 53 even though they would have been part of SB 1047 as vetoed. And so really what we're left with is a much more transparency-focused bill, rather than one focused on testing and compliance with affirmative governance and risk management obligations.
Katherine Forrest: You know, I think that the word "transparency" is exactly the right word for this, because SB 53 is really a series of transparency requirements. For example, the developer has to define and assess thresholds to evaluate when a particular model poses a "catastrophic risk." And those catastrophic risks refer to the most serious harms, which I think are defined as when a model contributes to the death or serious injury of 50 or more people or causes more than $1 billion in damages, stemming from a few specified scenarios. And so these transparency requirements really grow out of this concept of frontier models posing particular dangers. And so I think that—if there's one takeaway from today's podcast about SB 53, it's transparency.
Scott Caravello: Yeah, there are significant transparency requirements. There's an obligation to disclose safety incidents. There's an obligation to publish a framework that applies to the developer's AI development activities generally. And then there are also model-specific transparency reports that need to be published before or at the time of a new model's deployment.
Katherine Forrest: You know, what is interesting about these transparency requirements is that they really are targeted at the largest of the frontier model developers. And you might ask yourself, who does that include? And it's actually both clear and not clear. So, as we saw with the EU AI Act, sometimes there's a desire and an attempt to define frontier models—the riskiest, highest-risk models—according to the amount of compute, the floating-point operations, the FLOPs, that go into training a model. And we see with the EU AI Act that it's 10²⁵ or 10²⁶—I can't remember which—and the Biden-era White House executive order used the other one. And in any event, FLOPs used to be the way that things were defined. And here, you know, it's also defined in terms of FLOPs: 10²⁶. But in addition to that, there's also a gross revenue test for the companies who actually make these models that's part of the analysis of who has to confront these transparency requirements. The large developers are those with gross revenues in excess of $500 million in the preceding calendar year, which I actually find very interesting, because many model developers spend a lot on R&D—they might have gross revenues, but they may not have a lot of gross revenues, depending upon their monetization model and things like that. So it's interesting. It's also interesting because Grok-5, which is, as we all know, relatively new—they don't report the FLOPs, the compute used to train their models, but some are actually suggesting that it was 10²⁷ or even higher. So I find this whole concept of where the FLOPs are today interesting, because I think that the models are actually already meeting that threshold potentially. We don't quite know. But I also find this revenue threshold really pretty interesting.
Scott Caravello: And it's interesting that we're talking about the FLOP count, because the report that informed SB 53 had discussed the potential complications with setting a strict technical threshold to scope in coverage under a law. And so SB 53 does build in some flexibility by allowing for that threshold to be updated periodically. So we'll have to see if it gets revised upward or if anything else changes. And the one other thing I would just mention really quickly, Katherine: even though the majority of the law's obligations are focused on those large developers, frontier developers who do not meet that $500 million gross revenue threshold are themselves subject to transparency reporting requirements for their frontier models.
Katherine Forrest: And, you know, there's a whistleblower protection now in SB 53, which is interesting also. And that was actually one of the carryover pieces from the SB 1047 that was vetoed. And while a lot of SB 1047 didn't make it into SB 53—if you can keep those numbers straight—the whistleblower protections did. So just for those of you out there in the compliance area, and if you're in a compliance area with a frontier model, you know, you ought to pay attention to the FLOP requirements, make sure you understand the threshold requirements, and study up on the whistleblower pieces. But let's go on to the new federal legislation that you just mentioned was being introduced.
Scott Caravello: Yeah, absolutely. So as I mentioned at the start of the podcast, a new bill was introduced in the U.S. Senate that would designate AI systems as products and subject them to a product liability regime. And so it's very early days for that one, and we're waiting for a signal on whether it'll gain traction. But I am sure that if it does, you will be hearing about it again on this podcast.
Katherine Forrest: Yeah, right now it’s just Senator Durbin and some folks with Senator Durbin. Am I right about that, Scott?
Scott Caravello: I think Senator Hawley is also a co-sponsor on the bill.
Katherine Forrest: Okay, so Senator Durbin, Senator Hawley. And so it's really at the early stages, and it's going to be a little bit in tension with the AI Action Plan. So we'll watch this one and see whether or not anything really comes of it. But then we have the New York State development, the RAISE Act. And we've talked about this a little bit in the past, but, you know, when we're talking about frontier AI, we want to actually go back and remind people about the New York Responsible AI Safety and Education Act, which is called the RAISE Act. And, you know, we did talk about it in a prior episode but weren't as focused on the frontier aspects. But, you know, at about the same time as SB 53 was sort of winding its way through the legislative process, the RAISE Act was winding its way, too. And it has, you know, various kinds of safety plans relating to frontier models, as well as incident reporting and whistleblower protections. So it's passed both of the New York chambers and is awaiting Governor Hochul's signature or veto. So it's interesting times for frontier models, but there are some differences between SB 53 and the RAISE Act that are really worth just sort of pausing on. They both require large developers to publish certain pieces of an AI framework—a safety and security protocol—and they both require reporting safety incidents to various authorities and forgoing retaliation against whistleblowers. But only SB 53 requires pre-deployment transparency reports relating to specific models. Now, the RAISE Act has just a more limited reach, at least the way that it appears right now. We'll see as more information comes out about it. But, you know, there are various definitional issues that are going to have to be sort of worked out. And I don't know whether or not they're going to end up being sort of the same or not. But we've now got both California and New York transparency requirements for frontier models. Keep an eye on it. You know, these are some relatively big developments for model developers who are out there dealing with, you know, high-capability models. And I think, Scott, I think that's where we're going to have to stop it.
Scott Caravello: Well, thank you so much for having me. This was a blast.
Katherine Forrest: Well, we're probably going to have you again, you know, because nobody wants to listen to me forever. And—oh, I have to give the audience an update, though, which is that I have finished the book. I have finished the book that I've been talking to everybody about all the time, and that is co-authored with Amy Zimmerman. So the send button is going to be pushed just as soon as the bibliography is finished, which is going to be, like, you know, tonight or tomorrow. And then you'll be able to interview me, Scott, on a future episode about the book.
Scott Caravello: I look forward to it, and obviously a congratulations is very much due, and I cannot wait to get a copy.
Katherine Forrest: Anyway, it’s done. Okay, folks, that’s it. I hope you enjoyed a little bit of a variation on our theme today, and we’ll be talking to you next week. I’m Katherine Forrest.
Scott Caravello: And I’m Scott Caravello.