
Paul, Weiss Waking Up With AI

The Embodied AI Trifecta

On this week’s episode of “Paul, Weiss Waking Up With AI,” Katherine Forrest examines how AI generalization, world models and physical embodiment converge, highlighting 2025 advances in humanoid robotics and what they mean for AGI, common sense and real-world autonomy.

Stream here or subscribe on your preferred podcast app:

Episode Transcript

Katherine Forrest: Hello, everyone, and welcome to today’s episode of “Paul, Weiss Waking Up With AI.” I’m Katherine Forrest, and I am here with you from sunny New York. And I am about to go to an incredibly rainy place in the South to hole up in a hotel room with the exclusive purpose of finishing this book, where if I don’t press send pretty soon, I don’t know what happens, but it’s not good things. Anyway, we talked a little bit about this last week when I was discussing superintelligence and AI, and how a superintelligent AI might actually require humans to engage in sort of a negotiation or renegotiation of our social contract. But I want to talk to you about another set of thoughts that come out of the same book with which I’m currently obsessed, because I get up every morning at 5 a.m. and I work on it, and then I do my regular day job, and then I come back and work on it some more. And I’ll remind you that the book is called “Of Another Mind,” so that when it comes out you can look for it on Amazon and all the places that you get books. I’m co-authoring it with Amy Zimmerman.

But in this obsession, okay, I have this sort of trifecta that I want to talk about. And I want to talk about it because when I was putting some pieces of information into the book, I ran across some things that are actually extraordinary relating to humanoid robotics, which we’ve talked about in at least one previous episode. So I want to go back to that for a second, update everybody a little bit on what’s happening, and get into this trifecta of issues that comes together with AI embodiment.

It combines three concepts. The first is the ability of AI to obtain intelligence and to actually learn to generalize—and we’ll talk about that in a moment—to take intelligent pieces of data and extend them to a broader and more widespread area. The second, which is something we’ve also talked about before, is world models. A world model is considered to be the way in which a “mind,” a human mind, for instance, or an AI mind, actually organizes thoughts or concepts about the environment that it’s in, whether it be a narrow world or a broader world. So that’s the second part, which is organizing information into a world model. The third part is AI embodiment, which is the physical manifestation of an AI into a world, into an actual world, our world, for instance.

So let’s put these three things together, which is obtaining intelligence, a world model that organizes that intelligence and then a physical embodiment, and we’ll describe how they come together. And let’s baseline everybody on why this trifecta of things comes together at all. There are a number of AI thought leaders and engineers, and some of those are actually the same people, who believe that embodiment and world models are necessary for AI to actually achieve artificial general intelligence, let alone superintelligence.
You know, Yann LeCun from Meta is famously one of those. And of course, he is a brilliant sort of force of nature. But let’s dig into this trifecta of concepts because I think it’s a little bit complicated. And I think that we’ll find, I know we’ll find, based upon the research that I’ve done for this book, that we’re already getting to certain places people never thought we would reach in terms of world models and obtaining knowledge without embodiment.