
Paul, Weiss Waking Up With AI

AI Inference: Connecting the Dots

Following last week’s deep dive into semiconductor chips, Katherine and Anna revisit the concept of AI inference. They explore AI reasoning capabilities, why inference quality matters for lawyers, and recent advancements that can improve how models infer.


Episode Transcript

Katherine Forrest: Hey, welcome everyone to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel.

Katherine Forrest: And Anna, I can see you. Nobody else can see you, but I can see that you've got your home background behind you. And you're back from Silicon Valley.

Anna Gressel: I know, I went and I sought out sunnier shores. I was up in San Francisco and the peninsula, and then actually up in Seattle, which was lovely. And it was great to just meet with a ton of fun folks, but I would say, in particular, just so many amazing women lawyers who are working on AI and chips these days. And so shout out to all of our friends. It was great to see you guys.

Katherine Forrest: Well, that's terrific. And we're glad to have you back on this side of the time change. That is, so that there is no time change.

Anna Gressel: Definitely.

Katherine Forrest: And I'm thrilled that in today's episode we get to follow up on an aspect of the one that I did solo last week, Anna.

Anna Gressel: Mm-hmm.

Katherine Forrest: That was on the exciting, scintillating topic of chips and GPUs. Actually, it was a good episode, but I had to keep reminding our audience not to turn me off. Don't turn me off.

Anna Gressel: Now I have to wait and listen to it with everyone else. I'm so on the edge of my seat.

Katherine Forrest: And it will keep you on the edge of your seat. But today we're planning to talk about another concept that's thrown around all the time in the AI world, and that is a word called inference. I-N-F-E-R-E-N-C-E, inference. And as a former judge, I have to say that I'm used to the word inference meaning sort of an educated view where you arrive at a fact based upon putting together other facts. So it's like, in a way, like circumstantial evidence, where you use almost like a connect-the-dot kind of process to infer a fact based on other facts. And so, we used to use an example when I was a judge about inference, which was, you wake up in the morning and you see that the sidewalk outside your building is wet, and you can infer that it rained the night before. But if you look around further, and you add additional information to just the wetness, and you see that there's a hose nearby, then that additional information would allow you to add to your inference and to predict that the water may have even come from the hose. And so you could test both hypotheses with additional information in the mix. You'd look for wetness, say, in the street, if it rained, or whether or not the hose was dripping if it was the hose.

And so, I used to say to juries all the time that an inference is not speculation, it's not guesswork, but it's, again, using one or more facts to make a determination as to another. And today we're going to be talking about the special meaning of the word inference here.

Anna Gressel: Yeah, in the AI space, inference is really a very particular term of art. It has some relationship to the use of the term you used when you were a judge. I definitely think there are some similarities, but it's also got a particularized difference between the normal usage of the word and what's specialized about it in the AI context. So let's talk about that a little bit. When we talk about inferences from an AI tool, we're essentially talking about the output from the model. So if I query a model, then based on all of the information at its disposal, the model infers an answer.