Superintelligence: Paths, Dangers, Strategies
By: Nick Bostrom
Intelligence: skill at prediction, planning, and means–ends instrumental reasoning (not normative rationality or reason).
General intelligence: possessing common sense and an effective ability to learn, reason, and plan to meet complex information-processing challenges across a wide range of natural and abstract domains.
Artificial intelligence: the quest for short-cuts that balance optimal, fully general decision-making (computationally expensive) against high performance across the domains of interest.
A trendsetting, sobering, and bleak book. The author does a great job of quantifying and conceptualizing "out there" ideas, no matter how far-fetched. As some have argued, it worries about the over-colonization of Mars (i.e., an extremely remote problem that may never materialize), but it does so in a thoroughly entertaining way.
How do humans acquire their values? Everyone starts out with innate preferences shaped by natural, sexual, and cultural selection: an aversion to noxious stimuli and a preference for objects and behaviors that are rewarded in some way (physically, culturally). The values an individual ends up with depend on subsequent "life events" and experience.
Superintelligence is defined as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”.
There are multiple paths to superintelligence: stand-alone artificial intelligence, whole-brain emulation, biological enhancement, brain–computer interfaces, and complex networks and organizations.
There are different bundles of intellectual super-abilities: speed (faster thinking), collective (intelligence distributed across many smaller sub-units), and quality (not necessarily faster, but better thinking).
Advantages of machine-based intelligence include greater speed, larger computational and storage capacity, greater durability, and easier editing, duplication, information sharing, and coordination.
Superintelligence likely comes with superpowers in areas of raw and applied intelligence, technology and economic output.
A superintelligent agent is likely to pursue instrumental sub-goals that serve almost any final goal: self-preservation, goal-content integrity, cognitive enhancement, technological perfection, and resource acquisition.
The link between an agent's intelligence and its values is loose. How to transfer human (or any) values to a computing agent is not well understood. Various existential risks follow from the potential misalignment of intelligence and values, and the book proposes mechanisms for trying to control such risks.
We may not be able ourselves to figure out which value(s) we would want to transfer to an intelligent agent.
- Nick Bostrom TED Talk:
- Nick Bostrom Google Talk on Superintelligence:
- The original source of the simulation argument popularized by Elon Musk: