Podcasts
Paul, Weiss Waking Up With AI
AI and Financial Institutions: Emerging Trends in Regulation and Compliance
In this episode, Katherine Forrest and Scott Caravello unpack how regulators are thinking about AI in the financial sector. Joined by Paul, Weiss colleagues Roberto Gonzalez and Sam Kleiner of the firm’s Economic Sanctions and Anti-Money Laundering (“AML”) practice group, they explore the Financial Stability Oversight Council’s initiatives on AI, the Office of the Comptroller of the Currency’s model risk management guidance, FinCEN’s position on AI tools for AML compliance, and how financial institutions are utilizing AI in their compliance programs. This is both the first episode of the new year and the first in the podcast’s history to feature guests—kicking off 2026 with fresh perspectives on AI for financial institutions and key regulatory trends that banks should keep in mind.
Episode Speakers
Episode Transcript
Katherine Forrest: Well, hello, everyone, and happy new year! We are now in 2026, Scott.
Scott Caravello: It is pretty unbelievable, and the AI developments just keep rolling. I know we have some guests today, so I think I'm going to take a back seat and let them talk all about our exciting topic.
Katherine Forrest: Absolutely. So this is the Paul, Weiss “Waking Up with AI” podcast, and we're really pleased to have our two guests with us. We've got an important and incredibly timely discussion on some developments at the end of 2025 and during 2025 on how regulators view the use of AI, with a focus on compliance under the Bank Secrecy Act. And so today, and I'm gonna let them introduce themselves a little bit, we've got Roberto Gonzalez and Sam Kleiner, both from Paul, Weiss, who advise financial institutions on a variety of complex regulatory issues, including on these AI issues. So let's turn to you, Roberto. Thank you for being here!
Roberto Gonzalez: Thanks so much, Katherine. Really excited to kick off the new year with this discussion.
Katherine Forrest: And Sam, thanks for coming along as well.
Sam Kleiner: Thank you for having us. Longtime listener and first-time caller, as they say.
Katherine Forrest: All right, great. Let's just jump right in and situate our listeners a bit, and you can tell us about your practice and also about how it intersects with AI in particular. Roberto, why don't we just sort of jump over to you and start there.
Roberto Gonzalez: Thanks. So I previously served as Deputy General Counsel at the Treasury Department during the late Obama years. And now that I'm at Paul, Weiss, I advise a range of financial institutions—so, these are banks, fintechs, crypto firms—on regulatory compliance issues, including the Bank Secrecy Act, otherwise known as the anti-money laundering laws, and economic sanctions. And we've seen a lot of interest from financial institutions in adopting AI, really, to do various functions. But this is a really heavily regulated industry, as you know. And traditionally, regulators have treated this area with some scrutiny. So we've been closely following how regulators are discussing these issues.
Katherine Forrest: Yeah, that's great. And I'm really excited that you're able to be here and tell us about them with that kind of background. And Sam, we were recently discussing these very issues and some of the changes and how regulators are talking about them. Can you give us a preview of that?
Sam Kleiner: Thanks, Katherine, really excited for this discussion. Like Roberto, I joined Paul, Weiss after serving at the Treasury Department. I was there up through 2023, so really as AI was starting to see more widespread adoption. And just to preview some of the topics that we'll be discussing, I think it's important to keep in mind that regulators are, you know, just like us, ordinary people, and they're watching how this technology is evolving and they're seeing new and interesting use cases for AI. And what we're seeing right now is that the primary regulators for these financial institutions are showing a new degree of openness to the adoption of AI, including for core functions like compliance with the Bank Secrecy Act. And we're hearing a lot more emphasis on the idea that a failure to innovate is itself a risk to financial institutions. So looking forward to jumping into that.
Katherine Forrest: I find that really fascinating that a failure to innovate is itself a risk. It sort of flips the script from where things were in 2023, and it's incredibly interesting. So, we know that regulators are paying attention, and what do you think is a good signal of where things are going?
Roberto Gonzalez: Well, Katherine, just before the end of the year, the FSOC, which is the Financial Stability Oversight Council—this is an interagency body that identifies risks to financial stability, and it's chaired by the Treasury Secretary and includes the heads of all the banking regulators, the SEC, the CFTC, and various other agencies—they issued a report, and they had a lot to say about AI in the financial services industry. They talked about, you know, the regulators themselves adopting AI more extensively, but they also talked about using the FSOC's AI working group to have what they called, quote, a public-private dialogue to identify regulatory impediments to the responsible adoption of AI by financial institutions. So that tells us a lot about where this is going. It seems like the regulators are looking closely at the existing regulatory framework, and they want to engage to see what impediments there are to financial services institutions adopting AI more broadly. And so this suggests that we will probably hear more this year from the regulators about how to go about doing so.
Katherine Forrest: Right, you know, the regulators and legislators continue to consider these issues really all over the government. And there's been a fast-paced adoption, in particular by financial institutions, of AI generally. You know, as we all know, financial institutions were very early adopters—some of the earliest adopters—of AI; they've been using ML, machine learning techniques, in some of their algorithmic trading for years and years, but more recently with generative AI, they've been using chatbots to, you know, handle increasingly sophisticated consumer inquiries. And Jamie Dimon, the CEO of JPMorgan Chase, recently stated that, quote—and the “it” there is AI—“It affects everything—risk, fraud, marketing, idea generation, customer service—and this is the tip of the iceberg.” So it's absolutely timely that we're having this conversation because financial institutions are looking at adopting agentic AI tools to augment their systems as well. And how this discussion is going to play out in the context of the money laundering space and the Bank Secrecy Act space is really an open question. Roberto, what do you think about that?
Roberto Gonzalez: Yeah, well, the Bank Secrecy Act, which is the foundational set of laws that requires banks and fintechs to have anti-money laundering programs—I mean, that's been around for decades. Banks and others have really made massive investments in their AML programs. And in fact, a study last year said that U.S. and Canadian financial institutions invest $61 billion a year on financial crimes compliance. This involves extensive technical systems to monitor transactions for unusual activity, and then teams—sometimes thousands of people at one particular bank—that need to review these alerts and decide whether there is suspicious activity that requires them to file what's called a SAR, a suspicious activity report, which is a report that's filed with Treasury and that is made available to law enforcement. This is no small undertaking. Financial institutions file millions of these reports each year, and the cost and the scale of this continue to grow. And there was a growing recognition that technological innovation and ways of streamlining all of this was critical. And so this is also similar on the sanctions side, too, where sanctions compliance requires evaluating lots of data. And there's recognition that artificial intelligence can help, you know, detect patterns and identify issues on the sanctions side as well.
Katherine Forrest: And just as a question before we get to Sam and a couple of questions we've got for Sam, but how were SAR reports generated before technology was able to assist these institutions? Were they being done by hand?
Roberto Gonzalez: Yeah, they were being done by hand in a manual process. You know, there's been innovation over time to make that process more efficient, and I think there are systems that help with that. But I think right now it's still very much a manual approach. And right now, at least for most SARs being filed today, AI is not really playing a role, although that may very well change.
Katherine Forrest: Wow, it sounds like an incredible opportunity for AI to be able to assist with that and create some real efficiency. So, Sam, turning to you, tell us a little bit about FinCEN and, you know, what it is and how it's fitting into this whole picture.
Sam Kleiner: Yeah, absolutely. So FinCEN, the Financial Crimes Enforcement Network, is the part of Treasury that's responsible for AML regulations. And so the SARs that Roberto was talking about are filed with them, and they establish sort of the rules of the road for AML along with other banking regulators. And we have seen an interest from them over the years in trying to use AI, machine learning, and technological innovation as a way to augment and streamline some of these processes. So going all the way back to 2018, FinCEN put out a statement that said that, quote, innovation has the potential to augment aspects of banks’ BSA/AML compliance programs, such as risk identification, transaction monitoring, and suspicious activity reporting. And building on that, in 2020, Congress passed a very transformational law in the AML space, the Anti-Money Laundering Act of 2020, which really sought to modernize the Bank Secrecy Act, which had been on the books for decades. And one of the core purposes of this new law was to encourage technological innovation and the adoption of new technology by financial institutions to more effectively counter money laundering and the financing of terrorism. So this is a conversation that's been ongoing for a few years about how financial institutions can use this technology to improve their AML compliance programs.
Katherine Forrest: Right. So we saw as far back as 2018 and 2020 that there were these, perhaps, high-level commitments to innovation in the AML space. But how did financial institutions proceed from there?
Roberto Gonzalez: Well, they generally proceeded cautiously. You know, banks are subject to frequent examinations from their regulators. These are pretty invasive processes that really get under the hood of what's happening at each bank. And regulators would ask a lot of questions about the models that banks were using. And what banks were hearing from regulators was to proceed cautiously. So, for example, Comptroller of the Currency Michael Hsu, in 2023, suggested that banks adopt an asking-for-permission-not-forgiveness approach when it comes to innovation. On the other hand, unlike banks, there were some fintech companies and crypto companies that were a little more forward-leaning. And so, for example, they've been adopting machine learning in transaction monitoring. But, you know, there's still a ways to go to really fully adopt, I think, AI in those processes.
Sam Kleiner: And just building on what Roberto said, the hurdle in the AML context has really been a concern that there could be regulatory questions, including during an exam, about whether the AI missed something—like if there was a suspicious activity report that should have been filed but wasn't. And this is what is referred to as the explainability challenge. The OCC has model risk management guidance, which states that if the bank uses AI models, examiners should assess if model ratings take explainability into account, which is defined as the extent to which AI decisioning processes and outcomes are reasonably understood by bank personnel. So, in effect, that sets up a requirement that the AI model has to be explainable, and the decisions it makes need to be something that can be explained and defended. And as we know, that can be quite challenging, particularly when you're using an AI model that comes from a third-party vendor.
Scott Caravello: Yeah, and Sam, just jumping in here, I mean, given those challenges, right, would it be possible to just walk us through an example of where that might come up?
Sam Kleiner: Absolutely. I'll try and make this really tangible. Let's just suppose that the Department of Justice arrests a drug kingpin and finds out that he was using a particular bank. And then the DOJ or an examiner goes to that bank and says, you know, there were some suspicious activity reports that were filed on this drug kingpin, but we think that there were a lot of transactions that should have been reported and were missed. And, you know, that could become a pretty major issue for the bank. It could have regulatory findings. The Department of Justice even has the authority to bring criminal charges against a bank if they find willful violations of the Bank Secrecy Act. And so, in the traditional transaction monitoring context, the bank could see exactly what the rules were for where an alert would get created and then could see exactly who reviewed each alert. The bank would be able to explain exactly how they reached a decision on whether or not to file a SAR on a particular alert, and everything would be pretty explainable. But if the transaction monitoring was using AI as a way to try and identify unusual patterns of activity, that may be able to help them identify complex patterns that the older heuristics were missing, but it may not be as explainable in terms of what the exact rules were and when an alert was created and how it was disposed of. So it does create some challenges if there is a regulatory expectation for clear explainability.
Katherine Forrest: Yeah, that's really interesting because we hear a lot about the need for explainability and the challenges of explainability throughout the AI space. While there's certainly progress in terms of how some systems are able to explain their decision-making, you know, AI is not by its nature a single, clear, rules-based system. We've got all kinds of different models and different tools from different models and ways of proceeding. And so it poses challenges if regulators are going to have a strict view about what should be explainable and what kind of explanation might suffice. So going back to where we started, are we beginning to see any changes in this arena?
Roberto Gonzalez: Katherine, yeah, we have been seeing changes under this new administration. So, for example, the Comptroller of the Currency who's currently in place, Jonathan Gould, was at a forum in September of last year on AI and banking. And he made some really interesting comments that shed light on where this is going. He emphasized that it could be a very real source of risk to the banking system if banks don't innovate over time, again framing the failure to embrace new technologies as a risk factor in itself. And he spoke to the issue of explainability that we've been discussing. He said that our current approach to model risk management needs to be revamped and revised because it is impeding the ability of banks to take advantage of things like AI. So this is a pretty important statement from the top banking regulator. And it shows that they want to update the guidance to create a clearer path for innovation. Gould also testified to Congress in December, and he said that the OCC is looking to balance innovation with prudence and is looking to help banks conduct the very old business of banking while embracing new technologies like AI.
Scott Caravello: Yeah, and, you know, maybe it's worth a quick mention that that viewpoint really connects back to the White House's AI action plan, which we're talking about all the time on the podcast. And that plan hits on the fact that the complex regulatory landscape in America's critical sectors can slow down adoption of AI. And so it sounds to me like the Comptroller of the Currency's statements really embody that view.
Sam Kleiner: That's right, Scott. The OCC has really come out strongly in suggesting that it's going to revise these model risk management policies. And we've already seen them take concrete action in revising the guidance for community banks. They emphasize that these smaller banks don't have the same obligations as larger banks and noted that this was the first step in refining model risk management guidance for all of the OCC's regulated institutions. So, looking ahead for the year to come, we're expecting to see some updated guidance that may offer clarifications on the use of AI by financial institutions, and we've seen this is something that trade associations have been closely paying attention to and calling on the administration to address.
Katherine Forrest: Yeah, you know, that really is something that we're going to need to keep an eye on. It's obviously critical to pay attention to what the regulatory expectations are going to be in this area. And it seems like the regulators are eyeing potential changes. We don't quite know what they are, but change may be a-coming. And returning to the Bank Secrecy Act and anti-money laundering specifically, have we heard anything from regulators in that space?
Sam Kleiner: Yes, we have. Along the same lines as what the OCC has been saying, the Treasury Department leadership has really been embracing technology and innovation, including AI. So Under Secretary of the Treasury John Hurley, who's the official who's in charge of FinCEN as well as OFAC, recently emphasized in a speech that, quote, well-governed technology is a force multiplier in AML programs. And he said that where a financial institution invests the time and money to experiment with AI and successfully drops its false-positive ratio and escalates vital information to law enforcement more rapidly, their team should be celebrated, not written up. I think that's a pretty important statement. It reflects that regulators are supportive of this type of innovation. And Congress has also been focused on this issue. In the Genius Act, which was passed last year, there was an emphasis on modernizing financial crimes compliance, including requiring Treasury to go through a public comment period on how innovative tools, including AI, can be used to detect money laundering involving digital assets. So Treasury issued that request for public comment, and a number of industry groups weighed in to emphasize that AI is really critical to improving the detection of financial crimes. And we've seen continued congressional interest in this and expect there could be further action this year.
Roberto Gonzalez: And as we look ahead over this new year, we're also expecting FinCEN to issue a new proposed rule on AML compliance programs. This is something we're keeping a pretty close eye on, and we're looking to see what the proposed rule says about the use of innovative technology, including AI and machine learning, in supporting a risk-based AML approach. And this is something we've heard over and over again from the Treasury leadership, which is, you know, in this administration, when it comes to anti-money laundering, financial crimes compliance, they really want financial institutions to take a risk-based approach. And that means putting more resources to higher-risk activities and putting fewer resources into lower-risk activities. And there's really significant potential for AI and machine learning to improve efficiencies and let, you know, analysts and their teams really turn their attention to the harder cases, which is really where this Treasury Department wants people to be spending their time. And there's certainly a view amongst a number of banks and other financial institutions that a lot of BSA work that happens day to day—such as doing first drafts of these suspicious activity reports—can be made much more efficient through the use of AI.
Sam Kleiner: This is really a timely conversation as we think about how regulators are changing in their views on these issues and the possibility of new guidance, changes to longstanding practices, and potentially some new rules. So as we look ahead in the coming year, we're really fastening our seat belts and buckling up for some potentially significant and interesting developments in this space that'll help give more direction to financial institutions to know how to use AI. And so far, a lot of this regulatory guidance that we've talked about has been really general, but we may start to see some real AI-specific regulation and guidance in this space.
Roberto Gonzalez: Yeah, and there's definitely a demand on the bank side. And there's real interest in this. We're hearing from a number of banks that they are looking to integrate AI into these functions, and they just want to make sure that they can do it in a way that's responsible, with appropriate controls and governance and documentation in place.
Katherine Forrest: Right. And as we've talked about on this podcast before, these systems need to be really thought about and implemented with clear governance systems in mind, which is especially critical when we're talking about systems that are being used for AML. A framework with clear roles, responsibilities, and procedures, all keyed to regulatory requirements and with sufficient oversight and monitoring efforts in place, is something that I think all of our listeners who are in the compliance space will want to be keeping top of mind. So what are some of the other AI trends that you folks are monitoring in the financial services space?
Roberto Gonzalez: Well, we are monitoring very closely agentic commerce. This is a really interesting area. There are these new protocols that are being developed that are allowing agentic AI to communicate across different systems. So, for example, you may be able to say that you want to book a trip to London, and an AI agent would go out and do that within the parameters that you give it. So this raises a lot of interesting legal questions that I know, Katherine, you've been thinking a lot about, because our financial laws have always presumed the presence of a human at the moment of the transaction. So this will be a really interesting set of new questions for companies to grapple with as they design and implement these systems.
Sam Kleiner: And I'm keeping a pretty close eye on how financial services firms are adapting to the growing risks from AI-enabled deepfakes. This is a topic that FinCEN has warned about. They've said that criminals are using generative AI to create falsified documents, photographs, and videos to circumvent financial institutions’ customer identification and verification processes. And the technology there has really improved for voice generation so that it can be pretty hard to distinguish now between someone who is the bank's actual customer and a fraudster who is using AI to impersonate them. And since identity verification is really central to how financial institutions operate, this creates real challenges that financial institutions have to figure out a way to work through. So Michael Barr, who's one of the governors on the Federal Reserve Board, has been talking about this in a number of recent speeches, and he has emphasized the importance of specialized training programs and, really, the use of AI-enabled technology to detect these deepfakes. So there's a bit of a game of cat and mouse here where the criminals are improving their methods, and financial institutions will need to stay one step ahead of them.
Katherine Forrest: Yeah, you know, we've been calling that whack-a-mole, which is, you know, as the AI technology in the deepfake and interactive deepfake space improves, we also have technical developments to detect what's a deepfake or an interactive deepfake. And then the fraudsters get ahead of that, and we're back to developing even better technologies. So it's really an area where we don't know exactly where it's going to land. But both of those topics that you folks were just talking about are really interesting and worth paying attention to. We're going to actually have to come back to you guys and have you on here again and tell us where these things start to develop over the course of 2026, which, by the way, Scott, I still can't believe we're in 2026. I know that you just, like, threw that off because it was too shocking to your system at the beginning of the episode. But we are, Roberto and Sam, going to be coming back to you. I want to thank you so much for joining us. You are our very first guests on this podcast. And so thank you. And we'll be circling back over the course of the year.
Roberto Gonzalez: Great, thanks, Katherine, this was a lot of fun.
Sam Kleiner: Thank you, it was great to be here.
Katherine Forrest: Okay, until next time, and don't forget to like and subscribe. Hey, Scott, are you too starstruck by 2026 to even say goodbye?
Scott Caravello: Yeah, a little bit. But, you know, I like that you took the line. It sounds nicer when you say it.
Katherine Forrest: All right, okay. See you folks next week.