  Yes. Three recent breakthroughs have unleashed the long-awaited arrival of artificial intelligence:

  1. Cheap Parallel Computation

  Thinking is an inherently parallel process. Billions of neurons in our brain fire simultaneously to create synchronous waves of computation. To build a neural network—the primary architecture of AI software—also requires many different processes to take place simultaneously. Each node of a neural network loosely imitates a neuron in the brain—mutually interacting with its neighbors to make sense of the signals it receives. To recognize a spoken word, a program must be able to hear all the phonemes in relation to one another; to identify an image, it needs to see every pixel in the context of the pixels around it—both deeply parallel tasks. But until recently, the typical computer processor could ping only one thing at a time.
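
  To make this parallelism concrete, here is a minimal sketch (sizes and values are arbitrary, and it is an illustration, not anything from the original text): one layer of artificial neurons written as a single matrix-vector product, where every output combines all of its inputs at once and no output depends on another, which is exactly why the work can be spread across parallel hardware.

```python
import numpy as np

# A minimal sketch of one neural-net layer as a matrix-vector product.
# Every "neuron" (a row of `weights`) sees every input pixel, and all
# 256 neurons can fire simultaneously because no output depends on
# another output.
rng = np.random.default_rng(0)

pixels = rng.random(784)                    # a flattened 28x28 image
weights = rng.standard_normal((256, 784))   # 256 neurons x 784 inputs

activations = np.maximum(weights @ pixels, 0.0)  # all neurons at once
print(activations.shape)                         # (256,)
```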

  That began to change more than a decade ago, when a new kind of chip, called a graphics processing unit, or GPU, was devised for the intensely visual—and parallel—demands of video games, in which millions of pixels in an image had to be recalculated many times a second. That required a specialized parallel computing chip, which was added as a supplement to the PC motherboard. The parallel graphics chips worked fantastically, and gaming soared in popularity. By 2005, GPUs were being produced in such quantities that they were basically a commodity. In 2009, Andrew Ng and a team at Stanford realized that GPU chips could run neural networks in parallel.

  That discovery unlocked new possibilities for neural networks, which can include hundreds of millions of connections between their nodes. Traditional processors required several weeks to calculate all the cascading possibilities in a neural net with 100 million parameters. Ng found that a cluster of GPUs could accomplish the same thing in a day. Today neural nets running on GPUs are routinely used by cloud-enabled companies such as Facebook, to identify your friends in photos, and Netflix, to make reliable recommendations for its more than 50 million subscribers.
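
  A rough sketch of what Ng's insight looks like in practice today, assuming the PyTorch library and an optional CUDA-capable GPU: the same multiply runs unchanged on either device, but the GPU spreads it across thousands of cores.

```python
import torch

# Sketch: the same layer arithmetic on whatever hardware is available.
# Assumes PyTorch is installed; falls back to the CPU if no CUDA GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4096, 4096)
w = torch.randn(4096, 4096)

y = w.to(device) @ x.to(device)  # identical math, massively parallel on a GPU
print(device, y.shape)
```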

  2. Big Data

  Every intelligence has to be taught. A human brain, which is genetically primed to categorize things, still needs to see a dozen examples as a child before it can distinguish between cats and dogs. That’s even more true for artificial minds. Even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the AI breakthrough lies in the incredible avalanche of collected data about our world, which provides the schooling that AIs need. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe became the teachers making AI smart. Andrew Ng explains it this way: “AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. The rocket engine is the learning algorithms but the fuel is the huge amounts of data we can feed to these algorithms.”
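
  The data-as-fuel point can be seen in miniature with a stock dataset, assuming the scikit-learn library (a toy setup chosen for illustration, not the book's example): the same learning algorithm, fed more examples, scores better on examples it has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Same algorithm, more fuel: accuracy on unseen digits climbs as the
# training set grows. Exact numbers vary; the upward trend is the point.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    print(f"{n:>5} examples -> test accuracy {model.score(X_test, y_test):.3f}")
```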

  3. Better Algorithms

  Digital neural nets were invented in the 1950s, but it took decades for computer scientists to learn how to tame the astronomically huge combinatorial relationships between a million—or a hundred million—neurons. The key was to organize neural nets into stacked layers. Take the relatively simple task of recognizing that a face is a face. When a group of bits in a neural net is found to trigger a pattern—the image of an eye, for instance—that result (“It’s an eye!”) is moved up to another level in the neural net for further parsing. The next level might group two eyes together and pass that meaningful chunk on to another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs. The code of deep learning alone is insufficient to generate complex logical thinking, but it is an essential component of all current AIs, including IBM’s Watson; Google’s DeepMind and search engine; and Facebook’s algorithms.
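
  The stacked-layer idea can be sketched directly, here with the PyTorch library and arbitrary layer sizes (a toy stand-in, not a real face recognizer): each level's output becomes the next level's input, so small patterns aggregate into larger ones on the way up.

```python
import torch
import torch.nn as nn

# A toy stack of layers. Each level parses the output of the level
# below it (edges -> eyes -> face, in spirit). Sizes are arbitrary,
# and the net is untrained, so its answer here is noise.
stack = nn.Sequential(
    nn.Linear(1024, 512), nn.ReLU(),   # low level: local patterns
    nn.Linear(512, 128), nn.ReLU(),    # middle level: groups of patterns
    nn.Linear(128, 2),                 # top level: "face" vs. "not face"
)

image = torch.randn(1, 1024)           # a fake flattened image
print(stack(image))
```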

  This perfect storm of cheap parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI. And this convergence suggests that as long as these technological trends continue—and there’s no reason to think they won’t—AI will keep improving.

  As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing empowers the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people who use it. The more people who use it, the smarter it gets. And so on. Once a company enters this virtuous cycle, it tends to grow so big so fast that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
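
  One common formalization of this law is Metcalfe's law, which values a network at roughly the square of its user count. The quadratic exponent is an assumption for illustration (the text gives no formula), but it shows why the leader in a smarter-with-use loop pulls away so fast.

```python
# Metcalfe-style network value: roughly proportional to users squared.
# The exponent is an illustrative assumption, not a law of nature, but
# any superlinear growth produces the same runaway effect.
def network_value(users: int) -> int:
    return users * users

for users in (1_000, 10_000, 100_000):
    print(f"{users:>7,} users -> value ~ {network_value(users):,}")
# 100x the users yields ~10,000x the value, so an AI that gets smarter
# with every user quickly becomes too far ahead for upstarts to catch.
```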

  In 1997, Watson’s precursor, IBM’s Deep Blue, beat the reigning chess grand master Garry Kasparov in a famous man-versus-machine match. After machines repeated their victories in a few more matches, humans largely lost interest in such contests. You might think that was the end of the story (if not the end of human history), but Kasparov realized that he could have performed better against Deep Blue if he’d had the same instant access to a massive database of all previous chess moves that Deep Blue had. If this database tool was fair for an AI, why not for a human? Let the human mastermind be augmented by a database just as Deep Blue’s was. To pursue this idea, Kasparov pioneered the concept of man-plus-machine matches, in which AI augments human chess players rather than competes against them.

  Now called freestyle chess matches, these are like mixed martial arts fights, where players use whatever combat techniques they want. You can play as your unassisted human self, or you can act as the hand for your supersmart chess computer, merely moving its board pieces, or you can play as a “centaur,” which is the human/AI cyborg that Kasparov advocated. A centaur player will listen to the moves suggested by the AI but will occasionally override them—much the way we use the GPS navigation intelligence in our cars. In the championship Freestyle Battle 2014, open to all modes of players, pure chess AI engines won 42 games, but centaurs won 53 games. Today the best chess player alive is a centaur. It goes by the name of Intagrand, a team of several humans and several different chess programs.
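
  A centaur's move loop is simple enough to sketch in code, assuming the python-chess library and a UCI engine binary such as Stockfish on the PATH (both are assumptions, and this is not Intagrand's setup): the engine proposes, the human disposes.

```python
import chess
import chess.engine

# A toy centaur loop: the engine suggests a move, and the human either
# accepts it or overrides it. For simplicity this drives both sides.
board = chess.Board()
engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed on PATH

while not board.is_game_over():
    hint = engine.play(board, chess.engine.Limit(time=0.5)).move
    reply = input(f"Engine suggests {hint}. Your move (Enter accepts): ")
    board.push(hint if reply == "" else board.parse_san(reply))

engine.quit()
print(board.result())
```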

  But here’s the even more surprising part: The advent of AI didn’t diminish the performance of purely human chess players. Quite the opposite. Cheap, supersmart chess programs inspired more people than ever to play chess, at more tournaments than ever, and the players got better than ever. There are more than twice as many grand masters now as there were when Deep Blue first beat Kasparov. The top-ranked human chess player today, Magnus Carlsen, trained with AIs and has been deemed the most computerlike of all human chess players. He also has the highest human grand master rating of all time.

  If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers.

  Yet most of the commercial work completed by AI will be done by nonhuman-like programs. The bulk of AI will be special-purpose software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube, but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdly narrow, supersmart specialists.

  In fact, robust intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in finance instead. What we want instead of conscious intelligence is artificial smartness. As AIs develop, we might have to engineer ways to prevent consciousness in them. Our most premium AI services will likely be advertised as consciousness-free.

  Nonhuman intelligence is not a bug; it’s a feature. The most important thing to know about thinking machines is that they will think different.

  Because of a quirk in our evolutionary history, we are cruising as the only self-conscious species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose,” because compared with other kinds of minds we have met, it can solve more types of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.

  The kind of thinking done by the emerging AIs today is not like human thinking. While they can accomplish tasks—such as playing chess, driving a car, describing the contents of a photograph—that we once believed only humans could do, they don’t do it in a humanlike fashion. I recently uploaded 130,000 of my personal snapshots—my entire archive—to Google Photos, and the new Google AI remembers all the objects in all the images from my life. When I ask it to show me any image with a bicycle in it, or a bridge, or my mother, it will instantly display them. Facebook has the ability to ramp up an AI that can view a photo portrait of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this artificial ability very unhuman. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, in order that they don’t think like us. One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.
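
  This is not Google's system, but the shape of such a search can be sketched with an off-the-shelf classifier, assuming the torchvision library and its pretrained ImageNet weights (file paths below are hypothetical): tag every photo, then filter by tag.

```python
import torch
from torchvision import models
from torchvision.io import read_image

# Sketch of label-based photo search with a pretrained ImageNet model.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]          # e.g., "mountain bike"

def top_tags(path: str, k: int = 5) -> list[str]:
    batch = preprocess(read_image(path)).unsqueeze(0)
    with torch.no_grad():
        scores = model(batch).softmax(dim=1)[0]
    return [labels[i] for i in scores.topk(k).indices]

# Hypothetical archive; "show me any image with a bicycle in it":
# photos = ["archive/img_00001.jpg", "archive/img_00002.jpg"]
# bikes = [p for p in photos if any("bike" in t for t in top_tags(p))]
```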

  In a superconnected world, thinking different is the source of innovation and wealth. Just being smart is not enough. Commercial incentives will make industrial-strength AI ubiquitous, embedding cheap smartness into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences and entirely new ways of thinking—in the way a calculator is a genius in arithmetic. Calculation is only one type of smartness. We don’t know what the full taxonomy of intelligence is right now. Some traits of human thinking will be common (as common as bilateral symmetry, segmentation, and tubular guts are in biology), but the possibility space of viable minds will likely contain traits far outside what we have evolved. It is not necessary that this type of thinking be faster than humans’, greater, or deeper. In some cases it will be simpler.

  The variety of potential minds in the universe is vast. Recently we’ve begun to explore the species of animal minds on earth, and as we do we have discovered, with increasing respect, that we have met many other kinds of intelligences already. Whales and dolphins keep surprising us with their intricate and weirdly different intelligence. Precisely how a mind can be different from or superior to our own is very difficult to imagine. One way to help us imagine what a greater yet different intelligence would be like is to begin to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.

  This fanciful exercise is worth doing because, while it is inevitable that we will manufacture intelligences in all that we make, it is not inevitable or obvious what their character will be. Their character will dictate their economic value and their roles in our culture. Outlining the possible ways that a machine might be smarter than us (even in theory) will assist us in both directing this advance and managing it. A few really smart people, like physicist Stephen Hawking and genius inventor Elon Musk, worry that making supersmart AIs could be our last invention before they replace us (though I don’t believe this), so exploring possible types is prudent.

  Imagine we land on an alien planet. How would we measure the level of the intelligences we encounter there? This is an extremely difficult question because we have no real definition of our own intelligence, in part because until now we didn’t need one.

  In the real world—even in the space of powerful minds—trade-offs rule. One mind cannot do all mindful things perfectly well. A particular species of mind will be better in certain dimensions, but at a cost of lesser abilities in other dimensions. The smartness that guides a self-driving truck will be a different species than the one that evaluates mortgages. The AI that will diagnose your illness will be significantly different from the artificial smartness that oversees your house. The superbrain that predicts the weather accurately will be in a completely different kingdom of mind from the intelligence woven into your clothes. The taxonomy of minds must reflect the different ways in which minds are engineered with these trade-offs. In the short list below I include only those kinds of minds that we might consider superior to us; I’ve omitted the thousands of species of mild machine smartness—like the brains in a calculator—that will cognify the bulk of the internet of things.

  Some possible new minds:

  A mind like a human mind, just faster in answering (the easiest AI mind to imagine).

  A very slow mind, composed primarily of vast storage and memory.

  A global supermind composed of millions of individual dumb minds in concert.

  A hive mind made of many very smart minds, but unaware it/they are a hive.

  A borg supermind composed of many smart minds that are very aware they form a unity.

  A mind trained and dedicated to enhancing your personal mind, but useless to anyone else.

  A mind capable of imagining a greater mind, but incapable of making it.

  A mind capable of creating a greater mind, but not self-aware enough to imagine it.

  A mind capable of successfully making a greater mind, once.

  A mind capable of creating a greater mind that can create a yet greater mind, etc.

  A mind with operational access to its source code, so it can routinely mess with its own processes.

  A superlogic mind without emotion.

  A general problem-solving mind, but without any self-awareness.

  A self-aware mind, but without general problem solving.

  A mind that takes a long time to develop and requires a protector mind until it matures.

  An ultraslow mind spread over large physical distance that appears “invisible” to fast minds.

  A mind capable of cloning itself exactly many times quickly.

  A mind capable of cloning itself and remaining in unity with its clones.

  A mind capable of immortality by migrating from platform to platform.

  A rapid, dynamic mind capable of changing the process and character of its cognition.

  A nanomind that is the smallest possible (size and energy profile) self-aware mind.

  A mind specializing in scenario and prediction making.

  A mind that never erases or forgets anything, including incorrect or false information.

  A half-machine, half-animal symbiont mind.

  A half-machine, half-human cyborg mind.

  A mind using quantum computing whose logic is not understandable to us.

  * * *



  If any of these imaginary minds are possible, it will be in the future beyond the next two decades. The point of this speculative list is to emphasize that all cognition is specialized. The types of artificial minds we are making now and will make in the coming century will be designed to perform specialized tasks, and usually tasks that are beyond what we can do. Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.

  To really solve the current grand mysteries of quantum gravity, dark energy, and dark matter, we’ll probably need intelligences other than human. And the extremely complex questions that will come after those may require even more distant and complex intelligences. Indeed, we may need to invent intermediate intelligences that can help us design yet more rarefied intelligences that we could not design alone. We need ways to think different.

  Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that reluctance in our difficulty in approving mathematical proofs done by computer. Some mathematical proofs have become so complex that only computers are able to rigorously check every step, yet these proofs are not accepted as “proof” by all mathematicians. The proofs are not understandable by humans alone, so it is necessary to trust a cascade of algorithms, and this demands new skills in knowing when to trust these creations. Dealing with alien intelligences will require similar skills, and a further broadening of ourselves.

  An embedded AI will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real-time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we “know” something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, science will have to know, and progress, according to the criteria of new minds. At that point everything changes.
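
  The machine-checked proofs mentioned above can be seen in miniature in a proof assistant such as Lean, where a mechanical kernel, not a human reader, certifies each step. This toy theorem is trivially small; real computer-checked proofs, like that of the four-color theorem, scale the same principle to numbers of steps no one person could audit.

```lean
-- A toy machine-checked proof in Lean 4. The kernel verifies every
-- inference; trust shifts from a human reader to the checking software.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```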