
An interview with Dr. Stuart Russell, author of “Human Compatible: Artificial Intelligence and the Problem of Control”

(UC Berkeley’s Dr. Stuart Russell’s new book, “Human Compatible: Artificial Intelligence and the Problem of Control,” goes on sale Oct. 8. I’ve written a review, “‘Human Compatible’ is a provocative prescription to re-think AI before it’s too late,” and the following is an interview I conducted with Dr. Russell in his UC Berkeley office on September 3, 2019.)


Ned Desmond: Why did you write Human Compatible?

Dr. Russell: I’ve been thinking about this problem – what if we succeed with AI? – on and off since the early 90s. The more I thought about it, the more I saw that the path we were on doesn’t end well.

(AI researchers) had mostly just been doing toy stuff in the lab, or games, none of which represented any threat to anyone. It’s a little like a physicist playing with tiny bits of uranium. Nothing happens, right? So we’ll just make more of it, and everything will be fine. But it just doesn’t work that way. When you start crossing over to systems that are more intelligent, operating on a global scale, and having real-world impact, like trading algorithms, for example, or social media content selection, then all of a sudden you are having a big impact on the real world, and it’s hard to control. It’s hard to undo. And that’s just going to get worse and worse and worse.

Stuart Russell, author of “Human Compatible” (Photo credit: Peg Skorpinski)

Desmond: Who should read Human Compatible?

Dr. Russell: I think everyone, because everyone is going to be affected by this. As progress occurs towards human-level (AI), each big step is going to magnify the impact by another factor of 10, or another factor of 100. Everyone’s life is going to be radically affected by this. People need to understand it. More specifically, it would be policymakers, the people who run the large companies like Google and Amazon, and people in AI and related disciplines, like control theory, cognitive science and so on.

My basic view was that so much of this debate is going on without any understanding of what AI is. It’s just this magic potion that will make things intelligent. And in these debates, people don’t understand the building blocks, how it fits together, how it works, how you make an intelligent system. So chapter two (of Human Compatible) was sort of mammoth, and some people said, “Oh, this is too much to get through,” and others said, “No, you absolutely have to keep it.” So I compromised and put the pedagogical stuff in the appendices.

Desmond: Why did computer scientists tend to overlook the issue of uncertainty in the objective function for AI systems?

Dr. Russell: Funnily enough, in AI, we took uncertainty (in the decision-making function) to heart starting in the 80s. Before that, most AI people said let’s just work on cases where we have definite knowledge, and we can come up with guaranteed plans.
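
(To make the idea of uncertainty about objectives concrete, here is a minimal toy sketch in Python. It is not from the book; the objectives, actions, probabilities and utilities are entirely made up for illustration. It contrasts an agent that assumes one objective is definitely correct with an agent that is uncertain which objective the human actually has and so maximizes expected utility over that uncertainty.)

```python
# Toy illustration only: a fixed-objective agent vs. an agent that is
# uncertain about the true objective. All names and numbers are hypothetical.

# Possible objectives the human might have, with the agent's prior beliefs.
candidate_objectives = {
    "maximize_clicks":    0.6,
    "maximize_wellbeing": 0.4,
}

# Utility of each action under each objective (made-up values).
action_utilities = {
    "aggressive_content_push": {"maximize_clicks": 10, "maximize_wellbeing": -20},
    "balanced_recommendation": {"maximize_clicks": 4,  "maximize_wellbeing": 5},
}

def best_action_fixed(objective):
    """Classical approach: assume a single objective is definitely correct."""
    return max(action_utilities, key=lambda a: action_utilities[a][objective])

def best_action_uncertain(beliefs):
    """Maximize expected utility over the agent's uncertainty about the objective."""
    def expected_utility(action):
        return sum(p * action_utilities[action][obj] for obj, p in beliefs.items())
    return max(action_utilities, key=expected_utility)

print(best_action_fixed("maximize_clicks"))        # -> aggressive_content_push
print(best_action_uncertain(candidate_objectives)) # -> balanced_recommendation
```

(With these made-up numbers, the fixed-objective agent pushes aggressive content, while the uncertain agent hedges toward the safer recommendation because the downside under the other possible objective is large.)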
