Mar 17, 2021

The greatest risk to humankind: Superintelligence, by Nick Bostrom

Machines are getting ever smarter at more and more tasks. Eventually they will get better than us at the task of programming smarter machines. At that point their intelligence could explode, as smarter machines make themselves smarter at ever-increasing rates. Because they think thousands of times faster than we do, within a few days they could dwarf us more than we dwarf ants.

In this shocking book, Nick Bostrom lays out a comprehensive account of the types of AI that might cause this, when it might happen, and how quickly it might take off. He shows that there are terrible dangers inherent in AI. The first is that even with a simple goal, like making as many paperclips as possible, the AI could obey by turning the whole world into paperclips and colonizing the universe with self-replicating probes to best fulfill its mandate. It could predict human opposition, interpret it as a threat to its goal, and preemptively wipe us out with nanofabricated drone fleets. If we ask it to produce 1000 paperclips, it will devise a method to estimate how many it has produced; but if even a tiny uncertainty remains about whether it has reached 1000, it will keep producing ever more to raise the probability. If we ask it to produce between 999 and 1001, it will unerringly strive to increase the chance of landing within that range, and it might turn the entire universe into a computer in an effort to calculate that chance as precisely as possible. It is actually very hard to specify goals in a programming language in a way that avoids these problems, and even harder to program rules like "don't harm humans" or "minimize your impact on the world as much as possible" - think of the impossibility of concretely defining "harm", "human", and "possible", or of weighing that minimization against the maximization of its primary goal.
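
To see why the 1000-paperclip goal spirals, here is a minimal sketch of the logic - my own toy model, not from the book. Assume each clip the AI produces truly exists with probability 0.99 (imperfect sensors and actuators), so the number of real clips after producing n is binomially distributed. The constants P_REAL and TARGET below are illustrative assumptions:

```python
from math import comb

P_REAL = 0.99   # assumed: each produced clip truly exists with this probability
TARGET = 1000   # the goal: at least 1000 real paperclips

def p_target_met(n: int) -> float:
    """P(at least TARGET real clips exist | n produced),
    with the real count modeled as Binomial(n, P_REAL)."""
    return sum(comb(n, k) * P_REAL**k * (1 - P_REAL)**(n - k)
               for k in range(TARGET, n + 1))

# The probability of success climbs toward 1 as more clips are made,
# but it never actually reaches 1.
for n in (1000, 1005, 1010, 1050):
    print(f"produced {n:>4} clips -> P(goal met) = {p_target_met(n):.6f}")
```

Because producing one more clip always strictly increases P(goal met), yet the probability never reaches 1 (with probability 0.01^n every clip is fake), a pure expected-utility maximizer is always rewarded for making just one more.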

It could be even worse than extinction. If we ask the AI to make us smile, it could pin our mouths back. If we tell it to do so without directly interfering with our facial muscles, it could implant brain stimulators that contract those muscles indirectly. If we ask it to make us maximally happy, we have to define that mathematically; if we define it as activity in our pleasure circuitry, it could turn the universe into stimulated pleasure circuits in vats. Even if we devise a goal that seems to rule out any hideous means or ends, we can't guarantee that a superintelligence wouldn't find a devious, perverse way to fulfill it - a route it judges easier than the one we intended.

Bostrom is masterly at cataloguing many strategies to reduce this enormous existential risk. It is a foundational text of a movement to save humanity. If you care about the world as we know it, read this book and spread the word about AI safety.
