Podcasts

Paul, Weiss Waking Up With AI

Synthetic Identity, Part 2: Deepfake Detection

In this episode, Katherine Forrest and Scott Caravello return to their discussion of interactive deepfakes, highlighting state-of-the-art detection tools and techniques. From research frameworks such as the “GOTCHA” challenge and “active probing” via corneal reflections to commercial tools that can be integrated into popular video conferencing apps, they debrief the evolving tech landscape for spotting deepfakes.

Episode Transcript

Katherine Forrest: Welcome back to Paul, Weiss Waking Up with AI. I'm Katherine Forrest.

Scott Caravello: And I'm Scott Caravello.

Katherine Forrest: And Scott, I have to confess to you that we are, well, I don't even need—I don't need to confess it to you because you know about this. You're here in real time with me. You're not an interactive deepfake. But we're recording this thing after lunch. And this raises a variety of interesting issues during both the holiday season and the New Year's season, which is that food comes in really strange colors. I mean, have you noticed that? Like in the holiday season, like food gets—the colors change.

Scott Caravello: Oh, especially in the cafeteria here. I think they do a good job of getting with the season.

Katherine Forrest: Yeah, you know, green shows up in strange places. I'm not talking about salad. I'm talking about things that aren't normally green. But anyway, so today—believe you me, this is gonna actually sync up with what we're talking about. But today I had a red velvet whoopie pie. And for any of our audience listeners who've not had a red velvet whoopie pie, I have to say you're missing one of the great, you know, delicacies in life. You may think it's caviar, but I'm telling you, it's a red velvet whoopie pie. And, you know, so this is a holiday specialty. You don't see red velvet whoopie pies most, you know, times of year. But I had one today, and so if I'm a little slower on the uptake, I just want to tell you that that's the cause. So, you know, there it is.

Scott Caravello: All that said, two weeks ago, we promised to follow up on our conversation on interactive deepfakes.

Katherine Forrest: Right, we had—as the audience may recall, for those of you who follow along on a regular basis—we had the December 11 executive order from the White House on AI, and we really wanted to do an episode on that, you know, pretty much right away since it impacts a lot of people's planning for their companies in the coming months and year. So we are now going to go back and do this episode today on how to try and determine whether or not you have an interactive deepfake on your hands. And we talked about the interactive deepfakes and how they're kind of alarming, and that they are deepfakes that can respond and adjust in real time. They're not just static videos where it's, you know, me saying something I never said or some public figure saying something that they never said. You can think of it as looking like real people who are on potentially even a Teams call or a Zoom call or a Webex call. They look like they are the people that you know and trust, but in fact they might be deepfakes. And so this has happened. It's not the stuff of science fiction. And it places us in a world where we end up with a shift that has been called “zero trust evidence.” So that's what we're going to do. We're going to sort of talk about the second part of that today.