Podcasts
Paul, Weiss Waking Up With AI
The Race to Regulate AI in America
In this episode, Katherine Forrest and Scott Caravello reflect on recent developments in AI regulation from the past month, walking through the White House's newly released policy framework and Senator Marsha Blackburn's proposed AI bill. They consider the nuances of each, including comparing where the two approaches align—and also diverge—on issues like preemption, copyright, and child protection.
For the sources referenced in this episode, please see the links below:
The White House: A National Policy Framework for Artificial Intelligence
U.S. Senate – Office of Senator Marsha Blackburn: The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act
Episode Speakers
Episode Transcript
Katherine Forrest: Welcome back to Paul, Weiss Waking Up with AI. I'm Katherine Forrest.
Scott Caravello: And I'm Scott Caravello.
Katherine Forrest: And we're coming to you today from both New Orleans, right Scott, and New York.
Scott Caravello: That's right, I'm back. I am back in New Orleans.
Katherine Forrest: Alright, so the bachelor party turned into… it's gonna be a wedding now, right?
Scott Caravello: Exactly, exactly. So, I am on best man duties this weekend, but if I get lucky, maybe I can find some time between the events and go scarf down a po'boy or two. We'll see, we'll see.
Katherine Forrest: All right, we just got to get them to the venue on time, you know? Do you know I actually went to a wedding once where there was a huge delay? And it was such a huge delay that we were worried that we'd had, you know, a bride in flight. But anyway, I'm still to this day not quite sure what happened. It was some sort of dress malfunction, but there was great consternation as we waited for the bride.
Scott Caravello: That is crazy and you know, not gonna happen this weekend. Yeah, we're sending good vibes out to the universe. That is not on deck this weekend.
Katherine Forrest: Right, right. And we also have to tell our audience that actually you and I were both in California last week for a series of meetings and your proselytizing about Waymos got me into a Waymo—several times!
Scott Caravello: And it lived up to the expectations every single time. It was so easy, so great. I cannot wait for them to come to New York City.
Katherine Forrest: Right, and as we got into the car and we had some of our colleagues with us, it would say, “hello Scott.” And you would say, “I never will tire of that.” There you go, there you go. But anyway, the Waymo was fantastic and it was a terrific experience. This is in Los Angeles, and it navigated everything incredibly well. So, you know, I'm back in New York, Scott's doing whatever it is he's doing in New Orleans as we've said. And we're going to talk today about something that happened while we were away. And so, there was a big week, a really big week last week for AI regulation. And on March 26th, the Trump administration released its national policy framework for AI. And I've got it right here. Our audience can't see it. Those members of our audience who want to have the things that we mention actually hyperlinked to the links for the podcast—we're getting there! We're technically getting there, but I'm actually holding this up right now so Scott can see it, even though you can't. And it's only four pages long, but it's this sort of national policy framework that the Trump administration released. It provides a whole series of things in these four short pages. But they basically are a direct follow-up to the December 2025 and even the July 2025 executive orders.
Scott Caravello: Yeah, so seeing this document released by the White House was not much of a surprise at all, but I do also just want to say, Katherine, I don't want to over-promise and under-deliver, but it is my hope that by the time this episode is out, we will be able to link it in to the episode description. So we'll see.
Katherine Forrest: Okay, that's on you.
Scott Caravello: Yeah, that's on me. That's me, I'm putting it out there. OK.
Katherine Forrest: Okay, all right. Anybody who's disappointed, you're gonna personally send them a mug, right?
Scott Caravello: Exactly, exactly… you know how to reach me.
Katherine Forrest: Do you know I sent a mug, I wonder if this person is listening right now, I sent a mug all the way to Australia for one of our listeners.
Scott Caravello: You did, yeah!
Katherine Forrest: He was such a committed listener, I sent it off to Australia. All right. Anyway, but let's get into this national policy framework for AI and start by saying that two days before this framework was issued on March 26th, Senator Marsha Blackburn of Tennessee had released a, you know, pretty wide-reaching bill to regulate AI that was in some ways really misaligned with the White House framework. And we'll talk a little bit about that. So we're going to start with the actual framework and then give you a little bit of the Blackburn bill as we go along. Sound good?
Scott Caravello: Sounds great.
Katherine Forrest: OK. So, the main thrust of the national AI framework is that the White House, as it had sort of indicated both in July and December, is seeing a proliferation of this patchwork of state laws governing AI. You know, we've got California, Colorado, New York, and there are over 35 states with various kinds of algorithmic laws as well, all over the place. And the White House is seeing those as an impediment to innovation and an impediment to the administration's stated goal that America has to win the AI race specifically against China. So, what this national policy framework does is it actually now uses the word preemption. And I'm gonna read from section 7, which states, “the federal government must establish a federal AI policy framework to protect American rights, support innovation, and prevent a fragmented patchwork of state regulations that would hinder our national competitiveness, while respecting federalism and state rights.” And then the first bullet underneath that is, “Congress should preempt state AI laws,” and then it goes on, “that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not 50 discordant ones.” So, preemption, and I actually am very interested in that because I've been thinking about preemption as a possibility here, in terms of the state and federal differences in AI policy, for some time.
Scott Caravello: Yeah, and, you know, going back to the executive order from December, that was a key feature there. And so these legislative recommendations are just one component of the strategy that the White House put forward in December. And our listeners might recall from the episode we did on that executive order, other ones included, for example, a DOJ task force that would challenge state AI laws on the grounds that they violated the Dormant Commerce Clause, which is a constitutional principle that prevents states from unreasonably burdening or discriminating against interstate commerce. And then it actually also called on the Federal Communications Commission to determine whether it should adopt a federal reporting standard for AI models that would have preempted conflicting state laws. So, that, presumably, would have impacted laws like California's Transparency and Frontier Artificial Intelligence Act and New York's RAISE Act, both of which require public disclosures of information about certain AI models.
Katherine Forrest: Right. So, you know, the executive order called for legislative recommendations to be issued by the White House that would preempt state laws viewed as conflicting with the policy of the executive order, which is, you know, basically AI dominance by the United States. And that's how we then get to the framework—so, those initial statements in the July and December executive orders lead us to the framework that we're talking about today. But the White House made clear that the framework shouldn't impact what they're calling, as I read it, otherwise lawful state AI laws relating to certain other things. Now, among the things that the White House wants to make sure the framework doesn't impact is a whole section on child safety protections. And there's a whole series of pieces and I'm going to actually read you one. So section 1 of the National Policy Framework for Artificial Intelligence talks about AI services and platforms taking measures to protect children while empowering parents to control their children's digital environment and upbringing, and then has one, two, three, four, five, six, seven sub bullets underneath that. So, there are provisions for the place, or the role, that state law can play.
Scott Caravello: Right. And so then, you know, looking at the framework broadly, it's broken into a series of pillars. And the first one is on that exact point that you had just raised, Katherine, other areas that it discusses, related to child safety, are requiring AI platforms and services that are likely to be accessed by minors to implement features that reduce the risk of sexual exploitation of minors. But in line with the executive order, which talks about, you know, respecting or rather just not preempting state level laws regarding child protection, it says that state level prohibitions on child sexual abuse material, or CSAM, should not be preempted even if that CSAM is generated by AI.
Katherine Forrest: Right, right. And one of the pieces of this pillar that I find really interesting is the focus on liability standards. So, the White House, in this national framework, urges Congress to avoid vague liability standards that could lead to sort of endless litigation and the over-removal of lawful content. Now there aren't any details about how that's supposed to be accomplished, but it's very interesting that the White House is seeking to have the laws, if and when they're designed and then eventually perhaps passed, find and then implement such a balance.
Scott Caravello: Yeah, absolutely. But if we can just jump around for a minute and just go actually to the last pillar in the framework, that is what's squarely aimed at the preemption issues that we were talking about, right? The White House is clear that it believes AI development is inherently interstate and that the fragmented patchwork of state laws will hinder national competitiveness in the race to achieve global AI dominance. So, Congress is urged to preempt state AI laws that impose undue burdens like those you mentioned before, while preserving state authority over traditional police powers, zoning laws, like those determining the placement of AI infrastructure—so, you know, think data centers—and laws governing a state's own use of AI. So, take that alongside all the other mechanisms that the White House suggested in the December executive order that I mentioned, and it's a belt and suspenders approach to getting rid of that patchwork, especially since, you know, who knows if court challenges to the constitutionality of state AI laws will be successful.
Katherine Forrest: Right. And so let's also back up for a second and look at the second pillar in this national policy framework. And that second pillar, and of course, as we know, I've got it right in front of me, is entitled, “Safeguarding and Strengthening American Communities.” And the heading is, “AI development, including data infrastructure buildout, should strengthen American communities and small businesses through economic growth and energy dominance, while ensuring communities are protected from harmful impacts.” And this piece of it is actually tied to the intersection of AI growth, the need for its energy, energy dominance, and community protection, especially around all of the infrastructure, the data centers that have been, and are being, built. And it endorses the administration's rate payer protection pledge, which is an idea that residential electricity customers shouldn't have to subsidize AI data centers. And at the same time, it calls for streamlined permitting so that AI developers can build infrastructure quickly. So it's trying to, again, balance a protection for the American who's paying a higher electricity bill potentially, with the need for additional buildout of infrastructure capacity.
Scott Caravello: Yeah, and, you know, we've talked about this theory before that data centers are responsible for increasing electricity rates for consumers because of all the power needs and all the power that they're drawing. And I think, as we've also said, Katherine, there hasn't really been great evidence put forward on that point. But it's helpful to just situate the issue and mention that it's so prominent in this framework because it is a concern that is not going away.
Katherine Forrest: Right. We know that facilitating data center construction has been a big AI related priority for the administration going back to the Stargate announcement in January 2025 when OpenAI, SoftBank, Oracle, and others created a JV to invest up to $500 billion in AI infrastructure. And then the White House released, as part of its July executive order, a whole series of I'll say pronouncements and recommendations to remove barriers to data center construction and to streamline various kinds of environmental reviews. So, you know, there's, again, a real desire to balance the consumer impact with the desire for sufficient infrastructure development.
Scott Caravello: But, how about that Pillar 3, Katherine?
Katherine Forrest: Uh, now, I gotta read pillar 3… you know me, I gotta read pillar 3! So, pillar 3 is a big one. It's a big one. It is one that a lot of people are focused on because of all of the litigation that's going on right now around the intellectual property rights and the assertion of various kinds of intellectual property rights relating to potential AI, you know, model training, outputs and things like that. So it's entitled, “Respecting Intellectual Property Rights and Supporting Creators.”
Scott Caravello: Do you want to take that third pillar, then?
Katherine Forrest: Yeah, yeah. Let me just, let me go through this because there's actually an interesting statement at the beginning of section 3 of this national policy framework and then various things that are embedded within it. So, at the top of section 3, it says, “American creators, publishers, and innovators should be protected from AI-generated outputs that infringe their protected content, without undermining lawful innovation and free expression.” So, first of all, it's interesting that this sort of piece right at the beginning mentions output because what you see in the bullets underneath is not focused on output as much as that statement would suggest. That statement almost suggests that this entire section would be about outputs, but it's not. The section is really, again, a weighing in by the White House on how Congress should approach IP related issues. And it also suggests that, well it doesn't even suggest, it actually states, that the administration “believes that training of AI models on copyrighted material does not violate copyright laws,” that it “acknowledges arguments to the contrary exist and therefore supports allowing the courts to resolve the issue. Similarly, Congress should not take any actions that would impact the judiciary's resolution of whether training on copyrighted material constitutes fair use.” So, that first bullet is all about training and it's saying, hey, we, the White House, believe that training ought to be fair use. We're not going to stand in the way of the courts, but we're telling you courts—and, by the way, the White House has now, the Trump administration, both the first and the second Trump administration have, appointed a lot of judges. They are going to allow the courts to resolve the issue, but they're also instructing Congress—not to jump in and pass laws that would, for instance, change the Copyright Act in a way that went one way or the other. So it's pretty interesting.
And there's a lot of ongoing litigation that brings us back to this pillar about both the output and the input balance and what should be done. And we've really got a whole series, a slew of cases across the country that are fighting this out. And let me mention another one of these sub points in Section 3, which is number two, “Congress should consider enabling licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers, without incurring antitrust liability.” So, that's actually harkening back to the old ASCAP/BMI consent decrees where, you know, there's a collective rights scheme for music, and it's just suggesting, because the White House of course can't do this, Congress would have to do this, that that might be something that should be considered. But it also then goes on to say, “any such legislation should not address when or whether such licensing is required.” Again, “any such legislation however should not address when or whether such licensing is required.” So, you know, we've got a very interesting set of things going on here in this section 3.
Scott Caravello: Right, so then also part of pillar 3, the framework is recommending that Congress create a federal protection against unauthorized AI-generated uses of an individual's voice, likeness, or other identifiable attribute, which is, you know, commonly referred to as a digital replica, while also preserving parody, news, and First Amendment protections. And so then, Katherine, if it works, so that we can move on to the Blackburn bill, maybe I can just do a very quick lightning round of the other pillars and recommendations.
Katherine Forrest: Go for it!
Scott Caravello: Great, thanks. But those include a recommendation that Congress should stop the federal government from coercing AI platforms to censor or manipulate content based on political or ideological views (Section 4). Another, that Congress should create regulatory sandboxes to allow for innovation without risking liability. And that Congress should also rely on existing regulatory authorities for specific sectors, rather than creating some sort of new AI super regulator (Section 5). And then finally, the framework leans heavily into workforce issues. The White House is recommending using non-regulatory tools to integrate AI training into education and job programs, among other things (Section 6).
Katherine Forrest: Right. So, you know, it's four pages, but it's really packed full and people will want to, again, watch this space, you know, search for what the White House is doing periodically on AI. There's some very interesting things. And I think that we'll see some additional work coming out in the next couple of months on putting some meat around these bones that are suggested. But that brings us to Senator Blackburn's bill, which was the potential congressional action that actually predated, remember, this national policy framework by just a couple of days. Now, because it predated this framework, it also predated the request, and the White House can only request, that Congress stay away from legislating in this area except for the consideration of the collective licensing. So, we've got Senator Blackburn's bill out there, but then we have an ex post request by the White House to stay away from some of the very areas that Blackburn's bill touches on.
Scott Caravello: Yeah, so, and we don't have time to unpack the entire bill, but I think we can cover some of the highlights there, and so one, for example, is the extensive set of child protection provisions, which includes the imposition of a duty of care for AI chatbot developers and platform safeguards for minors. And, you know, those are in a lot of senses closely mirroring the framework's emphasis on protecting minors. And at the same time, it preserves state enforcement authority for generally applicable child protection laws, which is consistent with the framework's language about respecting state regulation on the subject.
Katherine Forrest: Right. Blackburn's bill also contains a rate payer protection regime, as we've seen the national policy framework does as well. And that relates to the data centers. And the bill would require large AI infrastructure operators to bear the costs of generating new electricity and grid upgrades. And this directly reflects the similar concern about residential electricity prices. It just comes at it in a different way. But there's a commonality there, one of the few commonalities perhaps, between Blackburn's bill and the White House national policy framework.
Scott Caravello: Well, directly to your point, Katherine, about just how much it diverges, do you want to talk a bit about the rather heavy-handed copyright provisions that are directly contrasting with the policy framework approach?
Katherine Forrest: Yeah, so what Blackburn's bill does is it takes a clear stand that would declare that the unauthorized use of copyrighted works for training an AI model is not fair use. And unlike the framework, which is requesting that Congress not wade into the fair use debate and instead allow the courts to settle it, Blackburn's bill seeks to have Congress essentially amend the fair use section of the Copyright Act, which is section 107. And that amendment would say that use of copyrighted works for AI training is not a fair use. So, it would take away the fair use defense. And it also seems to be taking away a fair use defense for AI outputs as well.
Scott Caravello: Which is really, really fascinating stuff, but you know, to your point, Katherine, this bill sort of fits in a weird posture with the framework coming right afterwards, and so we really shouldn't overstate the significance of this proposal. It's not clear whether it will get traction including with respect to these copyright provisions. So, we will wait and see but obviously we will keep listeners posted if the bill advances and in what form it does so.
Katherine Forrest: Right, and that, Scott, is a great note to end on. I hope that you're able to perform your groomsman duties appropriately and help your friend get to the venue on time and all of that. You don't have a runaway bride and, uh, right? You don't want to do that… you don't want to be at the wedding where that happens.
Scott Caravello: I'm so happy this is going to come out after the wedding when—
Katherine Forrest: Right! You can report on it, you can report back, all right?
Scott Caravello: Exactly, yeah, yeah, yeah.
Katherine Forrest: All right, that's all we have time for today, folks. I'm Katherine Forrest.
Scott Caravello: And I'm Scott Caravello. Don't forget to like and subscribe.