
Paul, Weiss Waking Up With AI
Checking in on Recent AI Regulatory Efforts
In this week’s episode of “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel break down New York’s RAISE Act, exploring its approach to regulating powerful frontier AI models, how it stacks up against California’s vetoed SB 1047 and what it means for the future of AI oversight in the U.S.
Episode Transcript
Katherine Forrest: Hello, everyone, and welcome to another episode of “Paul, Weiss Waking Up With AI.” I’m Katherine Forrest.
Anna Gressel: And I am Anna Gressel.
Katherine Forrest: And, Anna, before we start, because we always start with like a little bit of extra something, I just want to tell you it’s a million degrees here. I am in Maine and we are having a heat wave beyond heat waves. So up in Maine, we are at 90 degrees and it is 10 o’clock in the morning.
Anna Gressel: I haven’t checked the weather here. Oh, 92, but it’s supposed to get up to 100 in New York today, so…
Katherine Forrest: Yeah, yeah. In the meantime, we’ll talk about AI and hope that all of the AI that’s out there is in nice, cool warehouses where all of the servers are being cooled down and kept nice and temperature controlled. And switching from that to some other developments, we thought we would talk today about one of the developments in the legislative area, which is New York’s recent regulatory regime that seems to be, you know, potentially about to be enacted. And it’s called the New York Responsible AI Safety and Education Act, also called the RAISE Act. And today we’re going to break down what this law means once it’s enacted, if it’s enacted, how it compares to California’s recently vetoed bill, and also what might be next on the AI regulatory horizon.
Anna Gressel: Yep. Let’s start with the state of play because I think, you know, getting to the bottom line is always a good thing. So the RAISE Act in New York just passed both chambers of our legislature. So that’s pretty big. And now it’s awaiting Governor Kathy Hochul’s signature. And if signed, it’s going to be the first law of its kind in the nation, aiming to regulate the training and development of some of the most powerful AI systems. And as our listeners know, we often call these frontier AI models. We’ve discussed these on different episodes in the past, and frontier models are really, you know, the most advanced models in terms of size and capability. Although, of course, there’s debate about how to measure what is really at that frontier.
Katherine Forrest: Right. And we’ve talked about frontier models in the past as really powerful models that are often called dual-use models because they’ve got both civilian uses and potential military uses or weapon-type uses. So frontier models end up with, I think, a different set of regulations, often because they’ve got these technical capabilities that are so sort of extraordinary. Now, frontier models are nothing particularly new, because we know from prior episodes that frontier models have been developing and are out there among the most highly capable models. But there are a lot of complexities when you’re dealing with regulating frontier models, from the text of the rules to how you actually define the threshold of what becomes a frontier model, whether there’s validity in the threshold that’s been chosen, and whether the thresholds actually maintain some semblance of relevance over time or need to change. And so, what we’ve got here with RAISE is an attempt to try and create a regulatory regime around these frontier models with some thresholds thrown in.
Anna Gressel: Okay, so, you know, with all of that said, let’s dive into how the RAISE Act actually does define a frontier model. That’s really a critical question. So, the RAISE Act takes kind of a multi-pronged approach to this. It refers to frontier models in terms of both their technical capabilities and the amount of compute used in the final training run, as well as the cost spent on such training, so whether it exceeds $100 million. But the definition also accounts for models created through what’s called distillation. That’s a really important term; folks following some of the DeepSeek debate will have heard it. And distillation is the process of training a model, at least in part, on the outputs of another model. So you’re kind of leveraging one trained model to train another model, and you might imagine that would take less compute. So this act really does take into account distillation as a part of the model training process. And it accounts for distillation that itself, so the training from distillation, costs at least $5 million. And the point of the definition is to really make sure that the act stays up to date and accommodates the latest technological efficiencies. And also, I think it’s trying to avoid targeting startups or academic researchers. And so it really specifically focuses its requirements only on large developers. So that’s kind of some of the definitional pieces up front. Katherine, let’s talk about what the RAISE Act would actually require.
Katherine Forrest: Right. It’s got several key requirements for those large AI developers. And first of all, you can imagine that there’s going to be something of a debate about what counts as a large AI developer, but it’s defined in the act. And before deploying a frontier model, which is also defined, the companies are going to have to implement robust safety and security protocols. And the protocols are going to need to address risks like the potential to create, or not really themselves create but assist in the creation of, biological weapons, or assisting malicious users in creating cyberattacks, giving them instructions for cyberattacks, engaging in cyberattacks themselves or other forms of catastrophic harm. Now, one thing I want to say is that it’s not the case that today large developers are abdicating their responsibilities for these kinds of safety tests; they’re already engaging in really significant testing of their largest and most highly capable models, actually, all models, from the earlier models all the way to the most highly capable ones. So this act, and I don’t know exactly how the language is going to come out at the end of the day, whether it’ll stick with what it is now or be adjusted in some way, but it ought to sync up with what a lot of large companies are already doing. That would be the best outcome.
Anna Gressel: And I think it’s important to note that some of the requirements of the act, they’re different, but they have parallels in the EU AI Act, for example, and in particular in the general purpose AI code of practice, which is not yet final and has kind of been percolating for quite some time, and the timeline has been pushed back. But those include things like transparency and disclosure requirements. So the RAISE Act specifically would require large developers to publish their safety and security protocols with appropriate redactions for trade secrets and national security. And the developers would also have to report any serious safety incidents, like unauthorized access to a model or evidence that a model is behaving dangerously. And we can kind of compare and look at the differences with the EU AI Act, but it’s one of those trend lines that we see in Europe. And now New York is kind of picking that up and putting it in state legislation, which is, you know, very much out of the kind of Brussels effect privacy playbook we’ve seen in the past.
Katherine Forrest: Right. And the point that you just raised, where there has to be reporting of serious safety incidents, does get picked up in another part of the act that is different from what’s happening already. Under the RAISE Act, the New York AG can bring civil penalties against companies that fail to comply, and there could be fines in the millions of dollars, up to $10 million for an initial violation and $30 million for subsequent ones. So truly millions of dollars, but bear in mind, this is all geared towards the large developers. And the law does allow courts to pierce the corporate veil if companies are trying to evade enforcement through creative structuring. But when we say piercing the corporate veil, it remains to be seen whether that would do anything more than impose an overlay of the common law piercing doctrine that already exists, or whether it would make it easier to reach affiliated companies, for instance, or even separate companies that potentially still have an alliance with one another. So there’s a lot to be seen in terms of how this is actually going to play itself out.
Anna Gressel: Yeah, and I think that’s true because Governor Hochul has until the end of the year to sign or veto the legislation, and so there’s still a possibility of further amendments. And as you can imagine, this would be a very, very significant development because it would be the first big, you know, frontier AI safety law in the U.S. after SB 1047 was vetoed in California, and it would provide regulators and plaintiffs with new hooks to, you know, bring investigations or enforcement actions, etc., etc. So, we are really looking at a very important and significant piece of legislation that I’m sure is going to continue to be the subject of a lot of debate prior to a final decision by Governor Hochul to sign or not sign it.
Katherine Forrest: Right. And the RAISE Act is coming at a time when, as we’ve talked about in prior episodes, we’ve got states that are grappling with how to regulate AI. Certain things have passed, certain things have gotten vetoed. And the Biden executive order has been pulled, and there’s now what they call (it’s not our phrase, it’s the phrase that gets used in the media and actually by the White House) the “Big Beautiful Bill,” the One Big Beautiful Bill, which is the federal budget bill. And put into the federal budget bill is, of course, as we’ve talked about in prior episodes, that 10-year moratorium on enforcement of state laws relating to AI, except for very few carve-outs. And let me just pause on that, Anna, for a second, because I’ve heard people talking about that provision in the budget bill as preemption. And it’s not preemption; it’s a moratorium. And I think I mentioned this during that last episode, but if I didn’t, let me just sort of say it now, and if I did, then at least it can bear repeating. Preemption needs to be done through the legislative process, where the legislature actually passes a law that preempts, usually, specific laws. The moratorium would be something a little bit different. It’s just a provision that prevents enforcement of certain laws within states, but the laws are actually still technically on the books. And so it’s a different way of trying to achieve the same end, which is having the states stand down on legislation while the federal government then clears the playing field, really, for its policy priorities.
Anna Gressel: Right, I mean, I think, if we take a step back and look at the potential moratorium, it’s not just motivated by the kind of issues we’ve seen in the privacy space around a patchwork of regulations, but really by this fear that overregulating AI, or regulating particularly certain kinds of AI like frontier models, may slow the U.S. down in what is increasingly feeling like a real race to develop the most sophisticated models and capabilities, particularly in light of other countries that have very robust development environments. So, you know, there are a lot of constituents here who want to make sure that the U.S. kind of stays at the forefront. And I think we saw that to a certain extent in Governor Newsom’s veto of SB 1047 in California. He really cited concerns that the bill’s standards were too broad and the obligations too onerous, and that it would really stifle innovation and chill the kind of environment around model development that we’ve seen expand and grow in California. So, you know, I think there’ll be an interesting question about whether we see that same kind of debate in New York, which is a little bit of a different political environment.
Katherine Forrest: You know, SB 1047 just sort of rolls off your tongue.
Anna Gressel: Yeah, well, we’ve talked about it so many times.
Katherine Forrest: Senate Bill 1047, that’s that California bill. But, you know, that bill would have required frequent third-party audits, extensive oversight, even for models that might not pose the highest risks. And there were provisions that were removed from the New York RAISE Act after New York got industry feedback. And so, in some respects, the New York RAISE Act is considered to be a little less onerous, or somewhat less onerous, than the vetoed California SB 1047.
Anna Gressel: Yeah, and there are differences in other places in the bill. So for example, New York’s act sets a higher bar for liability than the California bill. And that’s expressed really in terms of this definition of critical harm, which the RAISE Act defines as either the death or serious injury of at least 100 people, or at least $1 billion in damages that are potentially caused or enabled by the AI system. And that standard of liability, like setting that at $1 billion in damages, is pretty significantly higher than the California bill, which would have been at $500 million. So we’re seeing kind of a doubling there.
Katherine Forrest: Right, and just to sort of tie up the California piece, and we did talk about this during that prior episode, but just to remind people: one of the things Newsom did to tie up the California SB 1047 veto was convene a group of leaders in the AI space to try and come up with what he called a science-based approach to AI regulation. And they’re individuals from Stanford and UC Berkeley and the Carnegie Endowment for International Peace. So that’s happening. So here, what we have is state regulation that’s still happening. It has not stopped happening. And we’ve got ongoing efforts: whatever is going to come out of that process that Governor Newsom has now put in place in California, and the New York RAISE Act, if Governor Hochul signs it. And then we’ve also got this competing, if you will, federal budget bill that would have the 10-year moratorium. So, it all remains to be seen, you know, we’ll see. We’ll see how the RAISE Act does, and looking ahead, I think we can expect that California and New York are going to continue to shape the national conversation around all of this. And we’ll have a lot to talk about, I think, if this moratorium does go through in the budget bill. So we’ll be watching these developments very, very closely. And, I think, next week we’ll be talking a little bit about the new Anthropic decision that just came out on fair use and trying to unpack that a little bit. Very important decision, the second decision on fair use. Of course, we had the first decision on fair use with the Ross case, but that was not generative AI. And then on June 23rd, just yesterday from the date of this recording, we had this Anthropic decision on fair use come down. So we’ll talk about those in our next episode. But that’s all we’ve got time for today. I’m Katherine Forrest. And Anna, if she were here, she would say she’s Anna Gressel, and we’re both signing off. Thanks.