Once we get computers to match human-level intelligence, they won’t stop there. With deep knowledge, machine-level mathematical abilities, and better algorithms, they’ll create superintelligence, right?
Yeah, there’s no question that machines will eventually be smarter than humans. We don’t know how long it’s going to take—it could be years, it could be centuries.
At that point, do we have to batten down the hatches?
No, no. We’ll all have AI assistants, and it will be like working with a staff of super smart people. They just won’t be people. Humans feel threatened by this, but I think we should feel excited. The thing that excites me the most is working with people who are smarter than me, because it amplifies your own abilities.
But if computers get superintelligent, why would they need us?
There is no reason to believe that just because AI systems are intelligent they will want to dominate us. People are mistaken when they imagine that AI systems will have the same motivations as humans. They just won’t. We’ll design them not to.
What if humans don’t build in those guardrails, and superintelligent systems wind up hurting humans by single-mindedly pursuing a goal? Like philosopher Nick Bostrom’s example of a system designed to make paper clips no matter what, which takes over the world to make more of them.
You would be extremely stupid to build a system and not build any guardrails. That would be like building a car with a 1,000-horsepower engine and no brakes. Putting drives into AI systems is the only way to make them controllable and safe. I call this objective-driven AI. This is sort of a new architecture, and we don’t have any demonstration of it at the moment.
That’s what you’re working on now?
Yes. The idea is that the machine has objectives that it needs to satisfy, and it cannot produce anything that does not satisfy those objectives. Those objectives might include guardrails to prevent dangerous things or whatever. That’s how you make an AI system safe.
Do you think you’re going to live to regret the consequences of the AI you helped bring about?
If I thought that was the case, I would stop doing what I’m doing.
You’re a big jazz fan. Could anything generated by AI match the elite, euphoric creativity that so far only humans can produce? Can it produce work that has soul?
The answer is complicated. Yes, in the sense that AI systems eventually will produce music—or visual art, or whatever—with a technical quality similar to what humans can do, perhaps superior. But an AI system doesn’t have the essence of improvised music, which relies on the communication of mood and emotion from a human. At least not yet. That’s why jazz is meant to be heard live.