
Paul, Weiss Waking Up With AI
The State of U.S. AI Policy
This week on “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel discuss the Senate’s removal of a proposed AI moratorium from the “One Big Beautiful Bill Act,” and examine new state-level AI legislation in Colorado, Texas, New York, California and other states.
Episode Transcript
Katherine Forrest: Hey, everyone, welcome to today's episode of “Paul, Weiss Waking Up With AI.” I'm Katherine Forrest.
Anna Gressel: I'm Anna Gressel. If you tuned in last week, you know we dove deep into New York's RAISE Act and the really exciting, interesting, kind of hectic, evolving landscape of state AI regulations, especially around frontier models, which is a concept we've talked about a lot on the podcast. So I know we promised to talk about the big copyright decisions that have been coming out lately, but I think we're going to pivot today because we have some policy news out of Washington that I think is too big to ignore and really is deserving of a podcast episode.
Katherine Forrest: Yeah, it really is. I mean, so deserving that we even skipped our little funny intro, like where I talk about like, where's Anna? Is she in Dubai?
Anna Gressel: It's because we're both at home right now.
Katherine Forrest: Right. But no, we've mentioned in the recent past the “Big Beautiful Bill,” as it's called in Washington, the budget bill, and the moratorium that was part of it. But this past week, well, we're recording this on the 3rd of July, so in the week before the recording date, the Senate actually voted to take that portion of the budget bill, the AI moratorium, out. And the budget bill has now gone on to the next steps. So that's a game changer, and we thought it was really important to level set where we are now that the moratorium is, for now, gone.
Anna Gressel: Yeah, I mean, it is a really important signal about where things are headed in Washington and kind of the temperature in Washington around AI regulation, notwithstanding, in some respects, kind of a step back on certain issues that we've seen at the federal level. And so, you know, the fact that Congress is saying, or at least the Senate is saying, “hey, we're not really willing to go along with this moratorium,” is really important because there's so much activity at the state level, as we'll cover today. There's so much going on, and really there was this looming question about whether Congress would pause state action or state enforcement with the moratorium, and that is now off the table.
Katherine Forrest: Right, and there's still some chatter about the concept of federal preemption, another sort of flip side of the moratorium. Not really a flip side. It's the same thing, but it does it in a different way. It actually could preempt state laws versus putting a hold on enforcement. But I think it's going to be frankly kind of hard to get any kind of preemption through if they can't get the moratorium through. And so let's talk a little bit about what that would have done, and then we'll start down the road of where we are. The moratorium, as we had talked about before, would have prevented the enforcement of regulations, and I'm going to give a little quote here on some of the language, if those [regulations] were “limiting, restricting or otherwise regulating AI models, AI systems or automated decision systems entered into interstate commerce,” and it [the moratorium] would have done so for a full decade.
So that's the quoted language. When the bill moved through Congress, the moratorium was significantly narrowed, and a proposed compromise that the Senate considered for a while would have lasted only five years. That compromise only survived a few days, but it went from 10 years to five years, and some additional restrictions were included. The final compromise, before the whole thing was pulled, would have limited the applicability of the moratorium to states looking to access certain new broadband funding. There was a federal funding program called the Broadband Equity, Access, and Deployment program, known as the BEAD program, and it had $500 million in a pool. The compromise would have further carved out exceptions for state laws or regulations addressing unfair or deceptive acts and practices, child online safety, child sexual abuse material, and rights of publicity, meaning the protection of a person's name, image and likeness. And that also did not go forward.
Anna Gressel: Yeah, I mean, these were real efforts. The exclusions were real efforts at carving out issues that we've seen at the forefront of the AI policy discussion and debate, things like online safety and child protection. And so, you know, in a way, it's unsurprising that there was this kind of narrowing in an effort to, you know (you can't see my finger if you're not watching this, but Katherine can), draw a line around some issues.
Katherine Forrest: Although your finger was making a—it was a circle. And so I just want to be clear that you were not drawing a line, okay? We'll do our geometry class some other time.
Anna Gressel: Katherine does not like my demonstrative.
Katherine Forrest: Right, right. Let it be known.
Anna Gressel: Yeah, so there were all of these 11th hour negotiations and an attempt to narrow scope in an effort to get folks on board in the Senate on this. And I think what ended up happening, which many of our listeners will know, is that on July 1st, the Senate ultimately voted overwhelmingly—and that's like really overwhelmingly, 99 to one—to remove the moratorium from the budget reconciliation bill. And there were some interesting politics behind this, including Senator Blackburn, who initially supported a limited moratorium with those kinds of carve-outs, and then ultimately advocated for its complete removal. So, you know, these were really discussions and debates around issues like children's safety, but also the need to address emerging AI risks, which is a real area of concern for a lot of states. That's my dog chiming in.
Katherine Forrest: Just let the dog bark, you know, he's worried about the emerging risks. What can we say, right? And so the question for your dog, as well as our listeners, is what happens next? What do we do? And where does this go now when we're thinking about what kind of legal advice to give to a variety of companies that are engaged as deployers or users of AI, developers of AI, and developers of frontier models? And so while this bill is now going to head to the House, it's so unlikely that anything will be added back in, right? So we can sort of count the moratorium out for now.
And what we've really got, and we've seen this in other areas of the law, is a period, for the moment at least, and probably a significant period of time, because, you know, this could go on for a year, a couple of years, where the states are going to be able to do their own regulation. And companies are going to have to navigate an environment with a diverse set of state-level requirements, each of which may have a particular focus in a particular area, many of which are going to share some similarities. And we'll talk about a few of these in just a moment. But how companies navigate that is going to be, I think, really interesting to see, and I'm sure there will be some interesting cases arising out of all of this.
Anna Gressel: Definitely. And so I think we're going to take our listeners through some of the key AI-related state legislation that we're seeing on the ground. But I want to caveat that, of course, this is not going to be a comprehensive discussion, including because a lot of state regulation that is relevant to AI is actually not AI regulation per se. It really regulates use cases. And as AI goes into different areas and can do different things and has different capabilities, we're always looking at the application of existing state laws to AI. But we're going to pick out a few of the bigger, AI-specific bills. And some of those have passed, as folks know, at the state level. So I think, why don't we start with Colorado, Katherine? Why don't I turn it over to you to talk a little bit about what's going on in Colorado.
Katherine Forrest: Yeah, well, let's talk about Colorado. We have so much detail on Colorado, and I think I'm going to truncate a little bit because otherwise we'll just sort of bore people to tears, not because these things aren't fascinating, but because we're going to go through a number of them. And by the time we get to Utah, people will be asleep. So the Colorado AI Act was enacted as part of the state's consumer protection laws, and it will become effective on February 1st, 2026. It applies to “high-risk AI systems,” which are further defined in terms of their use in consequential decisions. So these are all going to be terms of art, right? What's a high-risk AI system? What's a consequential decision? And there's a parenthetical after that: education, employment, lending, housing, insurance, those can all be areas for consequential decisions. The Colorado Act imposes duties on developers and deployers to use “reasonable care” to protect consumers, to prevent any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of, again, high-risk AI systems. And even though the statute is primarily focused on regulating high-risk AI system use cases, it also imposes a general disclosure requirement for all AI systems that interact with consumers. And developers have to provide documentation. The statute does provide an affirmative defense for those who discover and cure violations through their own internal review. And there's enforcement through the AG's office, and there will be, and are, and have been some amendments to restrict the scope or delay the enforcement. But that's where we are with Colorado. So it's a pretty broad act for these consequential decisions.
Anna Gressel: Great. And I'm going to talk a little bit about Texas, which is actually kind of a fascinating story. It really looked like Colorado when it was originally proposed. I mean, we used to talk about it as kind of a Colorado part two, and that has changed pretty significantly over time. A lot of the Colorado-like requirements have actually dropped out of the draft. So for folks who read the Texas bill earlier on, it's worth taking a look at the updated version because of those differences. The Texas law, which is called the Texas Responsible Artificial Intelligence Governance Act, or, I guess it's pronounced, TRAIGA, will become effective in January 2026, and you can think of it as really kind of an AI misuse law. It's really focused on harm from AI, and it prohibits the development or distribution of AI systems that aim to incite criminal activity or harm to oneself or others; AI systems that would aim to infringe on constitutional rights or be used for unlawful discrimination against protected classes; and AI aimed at creating sexually explicit content or child pornography, or illegal deepfakes impersonating children, for example. So it covers a lot of the misuse scenarios that are talked about with AI, the ones we sometimes focus on in our thinking about AI. But interestingly, it also has a third-party safe harbor: basically, there's a safe harbor if the violation is caused by a third party's misuse. So it's trying to target the actual actor engaged in the misuse, and it applies to developers, deployers and government entities. There is no private right of action here, so it's enforced by the AG. So this is definitely worth a look if you're interested in these kinds of misuse scenarios and attempts to regulate that misuse without punishing a party for a third-party actor's misuse, and so it kind of draws that line.
Katherine Forrest: And so let's just jump to New York. Of course, we have the RAISE Act that we talked about in our last episode. It's a very broad AI act by New York. For those of you who are familiar with a lot of the New York laws, you'll know that there are a variety of other algorithmic bias and biometric laws that are on the books. All of those, because the moratorium is now out, continue to be in full force and effect. And for the new laws that are being proposed in the employment area and other areas, there will be no prohibition on those being enforced if they are enacted.
Anna Gressel: So I think beyond the ones that we've mentioned, it certainly bears mentioning that California has had a huge number of proposals and passed legislation on AI. They have laws that are actually going to come into effect very soon related to disclosures around training data and all different kinds of things. So California we'd put into its own bucket, because it has been the most active state on the legislative front, and we certainly can't cover everything that's happening in California. But I want to take a few other kind of thematic bites at this from other states. So we have states that are trying to regulate interactions with chatbots, for example, and Utah does that. It regulates the interactions that people are having with chatbots in a variety of different contexts covered by consumer protection laws. And that actually has been revised to create some additional clarity; there was some confusion around the way it was earlier drafted. So that's really helpful. But we're also seeing a number of laws being proposed at the state level about the regulation of AI used in mental health interactions, including in therapy. And so the regulation of kind of therapy chatbots is one really interesting new theme that we're seeing come up across a number of states.
We've also, for a long time now, seen states really try to tackle the use of AI in employment decisions. And so, you know, New York has that, California has that, Illinois has that, and sometimes those are focused on discrimination. Sometimes they're more focused on the use of specific tools like AI video interviews. So, you know, the whole AI and employment nexus is a big place where states have been active for a long time. And then finally, I mean, this conversation would definitely not be complete without mentioning the right of publicity overlay. States typically regulate the right of publicity through their own legislation or their own common law, so it's really a 50-state framework. And certain states have been active in trying to update their laws to deal with AI. Really at the forefront of this is Tennessee with the ELVIS Act. And the ELVIS Act is super interesting. Yeah, I mean, we could do a whole day on right of publicity and the ELVIS Act and how it attempts to...
Katherine Forrest: I feel like I've listened to a whole day on the ELVIS Act from you at various times, if I put all the little bits and pieces of the ELVIS Act together.
Anna Gressel: Definitely. So, you know, there's just so much happening, so many different themes relevant to different kinds of companies. Not everything's going to be relevant to everyone, but it is a very interesting, very complex landscape right now.
Katherine Forrest: You know, one thing that I think is interesting, just before we sort of tie this all up, is the fact that you're seeing disclosure rules in a lot of places, where there has to be disclosure that the thing that you're talking to is actually a chatbot, not a human. And just pause on that, because what it's suggesting is really pretty phenomenal. And I'm not using phenomenal in either a positive or a negative sense. I'm using it in the sense that it is a big thing that we're at a point where you can't always tell if what you're talking to is a human or a non-human. And so there needs to be disclosure because of the particular risks that are involved in dealing with a non-human, or at least the right to know whether you are dealing with a human or a non-human. So anyway, the upshot of all of this is that we are entering a fragmented time when there's going to be a lot of different rules all over the country to deal with, and we're going to be helping companies navigate that. But it's going to be a challenge. I mean, part of it is going to go to the lowest common denominator, trying to find commonalities between different regulatory regimes. But then there are going to be these outliers with particular requirements. And so you're going to have to really be aware and track all kinds of things.
Anna Gressel: Yeah, and one of the things that we talk a lot about with companies is how to do that tracking. I mean, in a way, it's both completely overwhelming, and yet there are these lanes of travel. And so you want to be able to say, you know, we've seen this theme. Is this new bill different from or similar to what we've just seen? What's the incremental risk for us, or what's the incremental compliance burden for us? A lot of companies are going to have to really undertake that exercise as the number of bills increases and the pace picks up. I mean, we've seen all kinds of states get interested in this because they see it as a major consumer protection issue, whether in regulated industries or across the board. So it's an interesting time. I would certainly say it's not a moment when we would call the U.S. less regulated than Europe, because there is so much state-level activity. It's just differently regulated.
Katherine Forrest: Right, right, right. So we've gone from a soft-touch country to something much more similar to the EU, because of the state regulation. It's soft touch at the federal level, but it's not at the state level. But that's all we've got time for today. And maybe even more than we had time for, Anna, if our audience has stayed with us. I hope you have. And I'm Katherine Forrest.
Anna Gressel: And I am Anna Gressel. Let us know what you're liking, and if you want us to cover new topics, and thanks for listening.