
Paul, Weiss Waking Up With AI

Responsible AI: The Need to Avoid Misuse

In this week’s episode, Katherine Forrest and Anna Gressel delve into the pressing issues surrounding responsible AI and the potential for its misuse, including the darker sides of the technology.


Episode Transcript

Katherine Forrest: Good morning, everyone, and welcome to today’s episode of “Waking Up With AI,” a Paul, Weiss podcast. I’m Katherine Forrest.

Anna Gressel: And I’m Anna Gressel.

Katherine Forrest: And Anna, actually, while I say “waking up with AI,” we’re in different time zones, and you’ve been awake for some time.

Anna Gressel: That is true. I'm actually in Rome this week. It's a great city with a long history and it's super interesting to think about where we are today with AI against that kind of ancient historical backdrop. And in particular, the Vatican City is sitting right here in Rome across the river from me right now. And the Pope has added to the voices calling for attention to AI ethics in partnership with several major tech companies.

Katherine Forrest: So, you know, there are so many calls and papers and committees and task forces for AI ethics and AI safety. So many governmental agencies and all kinds of NGOs have made such calls that we really hope the word is getting out. And in that vein, Anna, let's pick up where we left off in prior episodes and look at some of the terms that have really framed the discussion around AI. Today, I thought we could talk about responsible AI and its flip side, which is misuse of AI. They're really two sides of the same coin. Because we've mentioned in the past that all of these papers and task forces and calls for AI ethics and safety use a similar set of words: transparent, responsible, accurate and fair, robust, human-centered, accountable, all of those in the context of AI. And I thought that since we've already dealt a little bit with some of the transparency issues, we could move on to responsibility.