If consumers don’t have confidence that the AI tools they’re interacting with are respecting their privacy, aren’t embedding bias and discrimination, and aren’t causing safety problems, then all the marvelous possibilities really aren’t going to materialize. Nowhere is that more true than in national security and law enforcement.
I’ll give you a great example. Facial recognition technology is an area where there have been horrific, inappropriate uses: take a grainy video from a convenience store and identify a Black man who has never even been in that state, who’s then arrested for a crime he didn’t commit. (Editor’s note: Prabhakar is referring to this story). Wrongful arrests based on a really poor use of facial recognition technology have got to stop.
In stark contrast to that, when I go through security at the airport now, it takes my picture and compares it to my ID to make sure I am the person I say I am. That’s a very narrow, specific application that matches my image to my ID, and the sign tells me (and I know from our DHS colleagues that this is really the case) that they’re going to delete the image. That’s an efficient, responsible use of that kind of automated technology. Appropriate, respectful, responsible: that’s where we’ve got to go.
Were you surprised that the AI safety bill was vetoed in California?
I wasn’t. I followed the debate, and I knew there were strong views on both sides. I think the opponents of that bill expressed a view, which I think was accurate, that it was simply impractical: it was an expression of desire about how to assess safety, but we actually just don’t know how to do those things. No one knows. It’s not a secret; it’s a mystery.
To me, it’s a reminder that while all we want is to know how safe, effective, and trustworthy a model is, we actually have very limited capacity to answer those questions. Those are actually very deep research questions, and a great example of the kind of public R&D that now needs to be done at a much deeper level.
Let’s talk about talent. Much of the recent National Security Memorandum on AI was about how to help the right talent come from abroad to the US to work on AI. Do you think we’re handling that in the right way?
It’s a hugely important issue. This is the ultimate American story: people have come here throughout the centuries to build this country, and that’s as true now in science and technology fields as it’s ever been. But we’re living in a different world. I came here as a small child because my parents came here in the early 1960s from India, and in that period there were very limited opportunities [to emigrate to] many other parts of the world.