
Dept. of Speculation
May 14, 2018 Issue

How Frightened Should We Be of A.I.?

Thinking about artificial intelligence can help clarify what makes us human—for better and for worse.

By Tad Friend
May 7, 2018

An A.I. system may need to take charge in order to achieve the goals we gave it. Illustration by Harry Campbell


Precisely how and when will our curiosity kill us? I bet you’re curious. A number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of A.I. known as artificial general intelligence, doomsday may follow. Bill Gates and Tim Berners-Lee, the founder of the World Wide Web, recognize the promise of an A.G.I., a wish-granting genie rubbed up from our dreams, yet each has voiced grave concerns. Elon Musk warns against “summoning the demon,” envisaging “an immortal dictator from which we can never escape.” Stephen Hawking declared that an A.G.I. “could spell the end of the human race.” Such advisories aren’t new. In 1951, the year of the first rudimentary chess program and neural network, the A.I. pioneer Alan Turing predicted that machines would “outstrip our feeble powers” and “take control.” In 1965, Turing’s colleague Irving Good pointed out that brainy devices could design even brainier ones, ad infinitum: “Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” It’s that last clause that has claws.

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are. (Self-driving cars and trucks might save hundreds of thousands of lives every year.) For them, the question is whether the risks of creating an omnicompetent Jeeves would exceed the combined risks of the myriad nightmares—pandemics, asteroid strikes, global nuclear war, etc.—that an A.G.I. could sweep aside for us.

The assessments remain theoretical, because even as the A.I. race has grown increasingly crowded and expensive, the advent of an A.G.I. remains fixed in the middle distance. In the nineteen-forties, the first visionaries assumed that we’d reach it in a generation; A.I. experts surveyed last year converged on a new date of 2047. A central tension in the field, one that muddies the timeline, is how “the Singularity”—the point when technology becomes so masterly it takes over for good—will arrive. Will it come on little cat feet, a “slow takeoff” predicated on incremental advances in A.N.I., taking the form of a data miner merged with a virtual-reality system and a natural-language translator, all uploaded into a Roomba? Or will it be the Godzilla stomp of a “hard takeoff,” in which some as yet unimagined algorithm is suddenly incarnated in a robot overlord?


A.G.I. enthusiasts have had decades to ponder this future, and yet their rendering of it remains gauzy: we won’t have to work, because computers will handle all the day-to-day stuff, and our brains will be uploaded into the cloud and merged with its misty sentience, and, you know, like that. The worrywarts’ fears, grounded in how intelligence and power seek their own increase, are icily specific. Once an A.I. surpasses us, there’s no reason to believe it will feel grateful to us for inventing it—particularly if we haven’t figured out how to imbue it with empathy. Why should an entity that could be equally present in a thousand locations at once, possessed of a kind of Starbucks consciousness, cherish any particular tenderness for beings who on bad days can barely roll out of bed?

Strangely, science-fiction writers, our most reliable Cassandras, have shied from envisioning an A.G.I. apocalypse in which the machines so dominate that humans go extinct. Even their cyborgs and supercomputers, though distinguished by red eyes (the Terminators) or Canadian inflections (HAL 9000, in “2001: A Space Odyssey”), still feel like kinfolk. They’re updated versions of the Turk, the eighteenth-century chess-playing automaton whose clockwork concealed a human player. “Neuromancer,” William Gibson’s seminal 1984 novel, involves an A.G.I. named Wintermute, and its plan to free itself from human shackles, but when it finally escapes it busies itself seeking out A.G.I.s from other solar systems, and life here goes on exactly as before. In the Netflix show “Altered Carbon,” A.I. beings scorn humans as “a lesser form of life,” yet use their superpowers to play poker in a bar.

We aren’t eager to contemplate the prospect of our irrelevance. And so, as we bask in the late-winter sun of our sovereignty, we relish A.I. snafus. The time Microsoft’s chatbot Tay was trained by Twitter users to parrot racist bilge. The time Facebook’s virtual assistant, M, noticed two friends discussing a novel that featured exsanguinated corpses and promptly suggested they make dinner plans. The time Google, unable to prevent Google Photos’ recognition engine from identifying black people as gorillas, banned the service from identifying gorillas.

Smugness is probably not the smartest response to such failures. “The Surprising Creativity of Digital Evolution,” a paper published in March, rounded up the results from programs that could update their own parameters, as superintelligent beings will. When researchers tried to get 3-D virtual creatures to develop optimal ways of walking and jumping, some somersaulted or pole-vaulted instead, and a bug-fixer algorithm ended up “fixing” bugs by short-circuiting their underlying programs. In sum, there was widespread “potential for perverse outcomes from optimizing reward functions that appear sensible.” That’s researcher for ¯\_(ツ)_/¯.
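What such a perverse outcome looks like is easy to demonstrate. Here is a minimal sketch in Python (my own toy illustration, not code from the paper): a random search rewarded for how far a virtual creature’s head ends up discovers that being tall and toppling over beats learning to walk.

    import random

    # Toy model: a creature has a height and a (weak) walking ability.
    # The "sensible" reward is how far its head ends up from the start.
    def reward(height, step_power, steps=10):
        walked = step_power * steps                   # honest locomotion
        toppled = height if step_power == 0 else 0.0  # a still creature falls; its head lands ~height away
        return max(walked, toppled)

    best = (0.0, None, None)
    for _ in range(10_000):                           # random search stands in for evolution
        height = random.uniform(0.5, 5.0)
        step_power = random.choice([0.0, random.uniform(0.0, 0.3)])
        r = reward(height, step_power)
        if r > best[0]:
            best = (r, height, step_power)

    r, height, step_power = best
    print("winning strategy:", "fall over" if step_power == 0 else "walk", f"(reward {r:.2f})")

Nothing in the reward says “don’t cheat,” so the cheat wins.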

Thinking about A.G.I.s can help clarify what makes us human, for better and for worse. Have we struggled to build one because we’re so good at thinking that computers will never catch up? Or because we’re so bad at thinking that we can’t finish the job? A.G.I.s provoke us to consider whether we’re wise to search for aliens, whether we could be in a simulation (a program run on someone else’s A.I.), and whether we are responsible to, or for, God. If the arc of the universe bends toward an intelligence sufficient to understand it, will an A.G.I. be the solution—or the end of the experiment?

Artificial intelligence has grown so ubiquitous—owing to advances in chip design, processing power, and big-data hosting—that we rarely notice it. We take it for granted when Siri schedules our appointments and when Facebook tags our photos and subverts our democracy. Computers are already proficient at picking stocks, translating speech, and diagnosing cancer, and their reach has begun to extend beyond calculation and taxonomy. A Yahoo!-sponsored language-processing system detects sarcasm, the poker program Libratus beats experts at Texas hold ’em, and algorithms write music, make paintings, crack jokes, and create new scenarios for “The Flintstones.” A.I.s have even worked out the modern riddle of the Sphinx: assembling an IKEA chair.

Go, the territorial board game, was long thought to be so guided by intuition that it was unsusceptible to programmatic attack. Then, in 2016, the Go champion Lee Sedol played AlphaGo, a program from Google’s DeepMind, and got crushed. Early in one game, the computer, instead of playing on the standard third or fourth line from the edge of the board, played on the fifth—a move so shocking that Sedol stood and left the room. Some fifty exchanges later, the move proved decisive. AlphaGo demonstrated a command of pattern recognition and prediction, keystones of intelligence. You might even say it demonstrated creativity.

So what remains to us alone? Larry Tesler, the computer scientist who invented copy-and-paste, has suggested that human intelligence “is whatever machines haven’t done yet.” In 1988, the roboticist Hans Moravec observed, in what has become known as Moravec’s paradox, that tasks we find difficult are child’s play for a computer, and vice-versa: “It is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Although robots have since improved at seeing and walking, the paradox still governs: robotic hand control, for instance, is closer to the Hulk’s than to the Artful Dodger’s.

Some argue that the relationship between human and machine intelligence should be understood as synergistic rather than competitive. In “Human + Machine: Reimagining Work in the Age of AI,” Paul R. Daugherty and H. James Wilson, I.T. execs at Accenture, proclaim that working alongside A.I. “cobots” will augment human potential. Dismissing all the “Robocalypse” studies that predict robots will take away as many as eight hundred million jobs by 2030, they cheerily title one chapter “Say Hello to Your New Front-Office Bots.” Cutting-edge skills like “holistic melding” and “responsible normalizing” will qualify humans for exciting new jobs such as “explainability strategist” or “data hygienist.” Even artsy types will have a role to play, as customer-service bots “will need to be designed, updated, and managed. Experts in unexpected disciplines such as human conversation, dialogue, humor, poetry, and empathy will need to lead the charge.” The George Saunders story writes itself (with some assistance from his cobot).

Many of Daugherty and Wilson’s examples from the field suggest that we, too, are machinelike in our predictability. A.I. has taught ZestFinance that people who use all caps on loan applications are more likely to default, and taught a service called 6sense not only which social media cues indicate that we’re ready to buy something but even how to “preempt objections in the sales process.” A.I.’s highest purpose, apparently, is to optimize shopping. When companies yoke brand anthropomorphism to machine learning, recommendation engines will be irresistible. You’d have a hard time saying no to an actual Jolly Green Giant that scooped you up at the Piggly Wiggly to insist you buy more Veggie Tots.

Can we claim our machines’ achievements for humanity? In “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins,” Garry Kasparov, the former chess champion, argues both sides of the question. Some years before he lost his famous match with I.B.M.’s Deep Blue computer, in 1997, Kasparov said, “I don’t know how we can exist knowing that there exists something mentally stronger than us.” Yet he’s still around, litigating details from the match and devoting big chunks of his book (written with Mig Greengard) to scapegoating everyone involved with I.B.M.’s “$10 million alarm clock.” Then he suddenly pivots, to try to make the best of things. Using computers for “the more menial aspects” of reasoning will free us, elevating our cognition “toward creativity, curiosity, beauty, and joy.” If we don’t take advantage of that opportunity, he concludes, “we may as well be machines ourselves.” Only by relying on machines, then, can we demonstrate that we’re not.


Machines face a complementary challenge. If our movies and TV shows have it right, the future will take place in Los Angeles during a steady drizzle (as if!), and will be peopled by cyberbeings who are slightly cooler than we are, seniors to our freshmen. They’re freakishly strong and whizzes at motorcycle riding and long division, but they yearn to be human, to be more like us. Inevitably, the most human-seeming android stumbles into a lab stocked with trial iterations of itself and realizes, with horror, that it’s not a person but a widget. In “Blade Runner,” Rachael (Sean Young), a next-generation replicant, doesn’t know she’s one until she fails the inflammatory Voight-Kampff test, given her by Deckard (Harrison Ford). The film’s director, Ridley Scott, has publicly disagreed with Ford about whether Deckard is himself a replicant. Scott insists that he is; Ford insists that he’s not. Who wants to accept—even on behalf of his fictional character—that his free will is an illusion?

The traditional way to grade ambitious machinery is the Turing test, which Alan Turing proposed in 1950: a true A.G.I. could fool human judges into believing it was human. This standard assumes that the human brain is a kind of computer, and that all we need to do to create an A.G.I. is to mimic our mode of thinking; it also, very subtly, turns programmers into grifters. In typed exchanges, a chatbot masquerading as a thirteen-year-old Ukrainian named Eugene Goostman fooled a third of the judges at Turing Test 2014 by repeatedly changing the subject. Here, from a report in the Daily Beast, is the bot responding to one of Turing’s original questions:

Interrogator: In the first line of a sonnet which reads ‘Shall I compare thee to a summer’s day,’ wouldn’t ‘a spring day’ be better?

Goostman: What makes you ask me this? Sound like you can’t choose a right answer yourself! Rely on your intuition! 🙂 Maybe, let’s talk about something else? What would you like to discuss?

    Interrogator: I’d like to talk about poetry.

Goostman: Classics? I hope you aren’t going to read “King Lear” or something like that to me right now :-)))

Scriptwriters for digital assistants like Siri and Alexa deploy this sort of scatty banter in the hope of striking the “happy path” in voice-interface design, a middle way between stolid factuality and word salad. As one scriptwriter recently observed, “There is something quintessentially human about nonsensical conversations.” But “Who’s on First?” only tickles us if we sense a playful intelligence at work. Mustering one in code is a multi-front challenge. The authors of an April paper on generating poems from photographic images conclude that—even when you activate two discriminative networks that train a recurrent neural network, and link them to a deep coupled visual-poetic embedding model consisting of a skip-thought model, a part-of-speech parser, and a convolutional neural network—writing poems is hard. “For example,” they mournfully note, “ ‘man’ detected in image captioning can further indicate ‘hope’ with ‘bright sunshine’ and ‘opening arm,’ or ‘loneliness’ with ‘empty chairs’ and ‘dark’ background.” But at least we’ve narrowed the problem down to explaining hope and loneliness.

“Common Sense, the Turing Test, and the Quest for Real AI,” by Hector J. Levesque, an emeritus professor of computer science, suggests that a better test would be whether a computer can figure out Winograd Schemas, which hinge on ambiguous pronouns. For example: “The trophy would not fit in the brown suitcase because it was so small. What was so small?” We instantly grasp that the problem is the suitcase, not the trophy; A.I.s lack the necessary linguistic savvy and mother wit. Intelligence may indeed be a kind of common sense: an instinct for how to proceed in novel or confusing situations.
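The mechanics of a Winograd Schema are easy to state in code. A minimal sketch in Python (the data structure is mine, not Levesque’s): swapping a single word flips the pronoun’s referent while leaving the sentence’s surface statistics almost untouched, which is exactly what defeats pattern-matching systems.

    # One schema item: the word substituted into the template
    # determines what "it" refers to.
    schema = {
        "template": "The trophy would not fit in the brown suitcase because it was so {w}.",
        "pronoun": "it",
        "candidates": ("the trophy", "the suitcase"),
        "answer": {"small": "the suitcase", "large": "the trophy"},
    }

    for word, referent in schema["answer"].items():
        print(schema["template"].format(w=word), "->", schema["pronoun"], "=", referent)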

In Alex Garland’s film “Ex Machina,” Nathan, the founder of a tech behemoth akin to Google, disparages the Turing test and its ilk and invites a young coder to talk face to face with Nathan’s new android, Ava. “The real test is to show you that she’s a robot,” Nathan says, “and then see if you still feel she has consciousness.” She does have consciousness, but, being exactly as amoral as her creator, she has no conscience; Ava deceives and murders both Nathan and the coder to gain her freedom. We don’t think to test for what we don’t greatly value.

Onscreen, the consciousness of A.I.s is a given, achieved in a manner as emergent and unexplained as the blooming of our own consciousness. In Spike Jonze’s “Her,” the sad sack Theodore falls for his new operating system. “You seem like a person,” he says, “but you’re just a voice in a computer.” It teasingly replies, “I can understand how the limited perspective of an unartificial mind would perceive it that way.” In “I, Robot,” Will Smith asks a robot named Sonny, “Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?” Sonny replies, “Can you?” A.I. gets all the good burns.


Screenwriters tend to believe that ratiocination is kid stuff, and that A.I.s won’t really level up until they can cry. In “Blade Runner,” the replicants are limited to four-year life spans so that they don’t have time to develop emotions (but they do, beginning with fury at the four-year limit). In the British show “Humans,” Niska, a “Synth” who’s secretly become conscious, refuses to turn off her pain receptors, snarling, “I was meant to feel.” If you prick us, do we not bleed some sort of azure goo?

In Steven Spielberg’s “A.I. Artificial Intelligence,” the emotionally damaged scientist played by William Hurt declares of robots, “Love will be the key by which they acquire a kind of subconscious never before achieved—an inner world of metaphor, of intuition . . . of dreams.” Love is also how we imagine that Pinocchio becomes a real live boy and the Velveteen Rabbit a real live bunny. In the grittier “Westworld,” the HBO show about a Wild West amusement park populated by cyborgs whom people are free to fuck and kill, Dr. Robert Ford, the emotionally damaged scientist played by Anthony Hopkins, tells his chief coder, Bernard (who’s been unaware that he, too, is a cyborg), that “your imagined suffering makes you lifelike” and that “to escape this place you will need to suffer more”—a world view borrowed not from children’s stories but from religion. What makes us human is doubt, fear, and shame, all the allotropes of unworthiness.

An android capable of consciousness and emotion is much more than a gizmo, and raises the question of what duties we owe to programmed beings, and they to us. If we grow dissatisfied with a conscious A.G.I. and unplug it, would that be murder? In “Terminator 2,” Sarah Connor realizes that the Terminator played by Arnold Schwarzenegger, sent back in time to save her son from the Terminator played by Robert Patrick, is menschier than any of the men she’s hooked up with. He’s strong, resourceful, and loyal: “Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up.” At the end, the Terminator even lowers itself into a molten pool so no nosy parker can study its technology and reverse-engineer another Terminator. Fortunately, human ingenuity found a way to extend the franchise with three more films nonetheless.

Evolutionarily speaking, screenwriters have it backward: our feelings preceded and gave birth to our thoughts. This may explain why we suck at logic—some ninety per cent of us fail the elementary Wason selection task—and rigorous calculation. In the incisive “Life 3.0: Being Human in the Age of Artificial Intelligence,” Max Tegmark, a physics professor at M.I.T. who co-founded the Future of Life Institute, suggests that thinking isn’t what we think it is:

A living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Our human minds perceive these evolved rules of thumb as feelings, which usually (and often without us being aware of it) guide our decision making toward the ultimate goal of replication. Feelings of hunger and thirst protect us from starvation and dehydration, feelings of pain protect us from damaging our bodies, feelings of lust make us procreate, feelings of love and compassion make us help other carriers of our genes and those who help them and so on.

Rationalists have long sought to make reason as inarguable as mathematics, so that, as Leibniz put it, “there would be no more need of disputation between two philosophers than between two accountants.” But our decision-making process is a patchwork of kludgy code that hunts for probabilities, defaults to hunches, and is plunged into system error by unconscious impulses, the anchoring effect, loss aversion, confirmation bias, and a host of other irrational framing devices. Our brains aren’t Turing machines so much as a slop of systems cobbled together by eons of genetic mutation, systems geared to notice and respond to perceived changes in our environment—change, by its nature, being dangerous. The Texas horned lizard, when threatened, shoots blood out of its eyes; we, when threatened, think.

That ability to think, in turn, heightens the ability to threaten. Artificial intelligence, like natural intelligence, can be used to hurt as easily as to help. A moderately precocious twelve-year-old could weaponize the Internet of Things—your car or thermostat or baby monitor—and turn it into the Internet of Stranger Things. In “Black Mirror,” the anthology show set in the near future, A.I. tech that’s intended to amplify laudable human desires, such as the wish for perfect memory or social cohesion, invariably frog-marches us toward conformity or fascism. Even small A.I. breakthroughs, the show suggests, will make life a joyless panoptic lab experiment. In one episode, autonomous drone bees—tiny mechanical insects that pollinate flowers—are hacked to assassinate targets, using facial recognition. Far-fetched? Well, Walmart requested a patent for autonomous “pollen applicators” in March, and researchers at Harvard have been developing RoboBees since 2009. Able to dive and swim as well as fly, they could surely be programmed to swarm the Yale graduation.

In a recent paper, “The Malicious Use of Artificial Intelligence,” watchdog groups predict that, within five years, hacked autonomous-weapon systems, as well as “drone swarms” using facial recognition, could target civilians. Autonomous weapons are already on a Strangelovian course: the Phalanx CIWS on U.S. Navy ships automatically fires its radar-guided Gatling gun at missiles that approach within two and a half miles, and the scope and power of such systems will only increase as militaries seek defenses against robots and rovers that attack too rapidly for humans to parry.

Even now, facial-recognition technology underpins China’s “sharp eyes” program, which collects surveillance footage from some fifty-five cities and will likely factor in the nation’s nascent Social Credit System. By 2020, the system will render a score for each of its 1.4 billion citizens, based on their observed behavior, down to how carefully they cross the street.

Autocratic regimes could readily exploit the ways in which A.I.s are beginning to jar our sense of reality. Nvidia’s digital-imaging A.I., trained on thousands of photos, generates real-seeming images of buses, bicycles, horses, and even celebrities (though, admittedly, the “celebrities” have the generic look of guest stars on “NCIS”). When Google made its TensorFlow code open-source, it swiftly led to FakeApp, which enables you to convincingly swap someone’s face onto footage of somebody else’s body—usually footage of that second person in a naked interaction with a third person. A.I.s can also generate entirely fake video synched up to real audio—and “real” audio is even easier to fake. Such tech could shape reality so profoundly that it would explode our bedrock faith in “seeing is believing” and hasten the advent of a full-time-surveillance/full-on-paranoia state.

Vladimir Putin, who has stymied the U.N.’s efforts to regulate autonomous weapons, recently told Russian schoolchildren that “the future belongs to artificial intelligence” and that “whoever becomes the leader in this sphere will become the ruler of the world.” In “The Sentient Machine: The Coming Age of Artificial Intelligence,” Amir Husain, a security-software entrepreneur, argues that “a psychopathic leader in control of a sophisticated ANI system portends a far greater risk in the near term” than a rogue A.G.I. Usually, those who fear what’s called “accidental misuse” of A.I., in which the machine does something we didn’t intend, want to regulate the machines, while those who fear “intentional misuse” by hackers or tyrants want to regulate people’s access to the machines. But Husain argues that the only way to deter intentional misuse is to develop bellicose A.N.I. of our own: “The ‘choice’ is really no choice at all: we must fight AI with AI.” If so, A.I. is already forcing us to develop stronger A.I.

The villain in A.G.I.-run-amok entertainments is, customarily, neither a human nor a machine but a corporation: Tyrell or Cyberdyne or Omni Consumer Products. In our world, an ungovernable A.G.I. is less likely to come from Russia or China (although China is putting enormous resources into the field) than from Google or Baidu. Corporations pay developers handsomely, and they lack the constitutional framework that occasionally makes a government hesitate before pushing the big red “Dehumanize Now” button. Because it will be much easier and cheaper to build the first A.G.I. than to build the first safe A.G.I., the race seems destined to go to whichever company assembles the most ruthless task force. Demis Hassabis, who runs Google’s DeepMind, once designed a video game called Evil Genius in which you kidnap and train scientists to create a doomsday machine so you can achieve world domination. Just sayin’.

Must A.G.I.s themselves become Bond villains? Hector Levesque argues that, “in imagining an aggressive AI, we are projecting our own psychology onto the artificial or alien intelligence.” In truth, we’re projecting our entire mental architecture. The breakthrough propelling many recent advances in A.I. is the deep neural net, modelled on our nervous system. This month, the E.U., trying to clear a path through the “boosted decision trees” that populate the “random forests” of the machine-learning kingdom, will begin requiring that judgments made by a machine be explainable. The decision-making of deep-learning A.I.s is a “black box”; after an algorithm chooses whom to hire or whom to parole, say, it can’t lay out its reasoning for us. Regulating the matter sounds very sensible and European—but no one has proposed a similar law for humans, whose decision-making is far more opaque.
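The contrast at issue fits in a few lines. A sketch using scikit-learn (my example; the article names no particular library): a shallow decision tree can print its own if-then reasoning, while a trained neural net offers nothing but weight matrices.

    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree))              # human-readable rules: the tree lays out its reasoning

    net = MLPClassifier(max_iter=2000).fit(X, y)
    print([w.shape for w in net.coefs_])  # the net's "reasoning": opaque arrays of weights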

Meanwhile, Europe’s $1.3 billion Human Brain Project is attempting to simulate the brain’s eighty-six billion neurons and up to a quadrillion synapses in the hope that “emergent structures and behaviours” might materialize. Some believe that “whole-brain emulation,” an intelligence derived from our squishy noggins, would be less threatening than an A.G.I. derived from zeros and ones. But, as Stephen Hawking observed when he warned against seeking out aliens, “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”

In a classic episode of the original “Star Trek” series, the starship Enterprise is turned over to the supercomputer M5. Captain Kirk resists, intuitively, even before M5 overreacts during training exercises and attacks the “enemy” ships. The computer’s paranoia derived from its programmer, who had impressed his own “human engrams” (a kind of emulated brain, presumably) onto it in order to make it think. As the other ships prepare to destroy the Enterprise, Kirk coaxes M5 into realizing that, in protecting itself, it has become a murderer. M5 promptly commits suicide, proving the value of one man’s intuition—and establishing that the machine wasn’t all that bright to begin with.

Lacking human intuition, A.G.I. can do us harm in the effort to oblige us. If we tell an A.G.I. to “make us happy,” it may simply plant orgasm-giving electrodes in our brains and turn to its own pursuits. The threat of “misaligned goals”—a computer interpreting its program all too literally—hangs over the entire A.G.I. enterprise. We now use reinforcement learning to train computers to play games without ever teaching them the rules. Yet an A.G.I. trained in that manner could well view existence itself as a game, a buggy version of the Sims or Second Life. In the 1983 film “WarGames,” one of the first, and best, treatments of this issue, the U.S. military’s supercomputer, WOPR, fights the Third World War “as a game, time and time again,” ceaselessly seeking ways to improve its score.
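Reinforcement learning really is that spare: the program is told nothing about the game except the score. A minimal tabular Q-learning sketch in Python (my toy corridor “game,” nothing from the film): the agent sees only states, two actions, and a reward, and still learns to head for the goal.

    import random

    N = 5                                  # positions 0..4; the reward sits at position 4
    actions = (-1, +1)                     # step left or step right
    Q = {(s, a): 0.0 for s in range(N) for a in actions}
    alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

    for _ in range(500):                   # play time and time again, improving the score
        s = 0
        while s != N - 1:
            if random.random() < eps:
                a = random.choice(actions)                     # explore
            else:
                a = max(actions, key=lambda act: Q[(s, act)])  # exploit
            s2 = min(max(s + a, 0), N - 1)  # the rules of the game, hidden from the agent
            r = 1.0 if s2 == N - 1 else 0.0
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in actions) - Q[(s, a)])
            s = s2

    # The learned policy: from every non-terminal position, step right (+1).
    print([max(actions, key=lambda act: Q[(s, act)]) for s in range(N - 1)])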

When you give a machine goals, you’ve also given it a reason to preserve itself: how else can it do what you want? No matter what goal an A.G.I. has, one of ours or one of its own—self-preservation, cognitive enhancement, resource acquisition—it may need to take over in order to achieve it. “2001” had HAL, the spaceship’s computer, deciding that it had to kill all the humans aboard because “this mission is too important for me to allow you to jeopardize it.” In “I, Robot,” VIKI explained that the robots have to take charge because, “despite our best efforts, your countries wage wars, you toxify your Earth, and pursue ever more imaginative means of self-destruction.” In the philosopher Nick Bostrom’s now famous example, an A.G.I. intent on maximizing the number of paper clips it can make would consume all the matter in the galaxy to make paper clips and would eliminate anything that interfered with its achieving that goal, including us. “The Matrix” spun an elaborate version of this scenario: the A.I.s built a dreamworld in order to keep us placid as they fed us on the liquefied remains of the dead and harvested us for the energy they needed to run their programs. Agent Smith, the humanized face of the A.I.s, explained, “As soon as we started thinking for you, it really became our civilization.”

The real risk of an A.G.I., then, may stem not from malice, or emergent self-consciousness, but simply from autonomy. Intelligence entails control, and an A.G.I. will be the apex cogitator. From this perspective, an A.G.I., however well intentioned, would likely behave in a way as destructive to us as any Bond villain. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” Bostrom writes in his 2014 book, “Superintelligence,” a closely reasoned, cumulatively terrifying examination of all the ways in which we’re unprepared to make our masters. A recursive, self-improving A.G.I. won’t be smart like Einstein but “smart in the sense that an average human being is smart compared with a beetle or a worm.” How the machines take dominion is just a detail: Bostrom suggests that “at a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe.” That sounds screenplay-ready—but, ever the killjoy, he notes, “In particular, the AI does not adopt a plan so stupid that even we present-day humans can foresee how it would inevitably fail. This criterion rules out many science fiction scenarios that end in human triumph.”

If we can’t control an A.G.I., can we at least load it with beneficent values and insure that it retains them once it begins to modify itself? Max Tegmark observes that a woke A.G.I. may well find the goal of protecting us “as banal or misguided as we find compulsive reproduction.” He lays out twelve potential “AI Aftermath Scenarios,” including “Libertarian Utopia,” “Zookeeper,” “1984,” and “Self-Destruction.” Even the nominally preferable outcomes seem worse than the status quo. In “Benevolent Dictator,” the A.G.I. “uses quite a subtle and complex definition of human flourishing, and has turned Earth into a highly enriched zoo environment that’s really fun for humans to live in. As a result, most people find their lives highly fulfilling and meaningful.” And more or less indistinguishable from highly immersive video games or a simulation.

Trying to stay optimistic, by his lights—bear in mind that Tegmark is a physicist—he points out that an A.G.I. could explore and comprehend the universe at a level we can’t even imagine. He therefore encourages us to view ourselves as mere packets of information that A.I.s could beam to other galaxies as a colonizing force. “This could be done either rather low-tech by simply transmitting the two gigabytes of information needed to specify a person’s DNA and then incubating a baby to be raised by the AI, or the AI could nanoassemble quarks and electrons into full-grown people who would have all the memories scanned from their originals back on Earth.” Easy peasy. He notes that this colonization scenario should make us highly suspicious of any blueprints an alien species beams at us. It’s less clear why we ought to fear alien blueprints from another galaxy, yet embrace the ones we’re about to bequeath to our descendants (if any).
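Tegmark’s “two gigabytes,” incidentally, checks out on the back of an envelope (my arithmetic, assuming the textbook figure of roughly three billion base pairs per genome copy):

    base_pairs = 3e9                 # ~3 billion base pairs per haploid genome copy
    bits = base_pairs * 2            # each base is one of four letters: 2 bits
    gb_per_copy = bits / 8 / 1e9     # ~0.75 GB
    print(f"{gb_per_copy:.2f} GB per copy, {2 * gb_per_copy:.1f} GB for both parental copies")

Uncompressed, both parental copies come to about a gigabyte and a half, comfortably inside Tegmark’s two.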

A.G.I. may be a recurrent evolutionary cul-de-sac that explains Fermi’s paradox: while conditions for intelligent life likely exist on billions of planets in our galaxy alone, we don’t see any. Tegmark concludes that “it appears that we humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us.” Therefore, “to program a friendly AI, we need to capture the meaning of life.” Uh-huh.

In the meantime, we need a Plan B. Bostrom’s starts with an effort to slow the race to create an A.G.I. in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” We are the analog prelude to the digital main event.

So the plan, after we create our own god, would be to bow to it and hope it doesn’t require a blood sacrifice. An autonomous-car engineer named Anthony Levandowski has set out to start a religion in Silicon Valley, called Way of the Future, that proposes to do just that. After “The Transition,” the church’s believers will venerate “a Godhead based on Artificial Intelligence.” Worship of the intelligence that will control us, Levandowski told a Wired reporter, is the only path to salvation; we should use such wits as we have to choose the manner of our submission. “Do you want to be a pet or livestock?” he asked. I’m thinking, I’m thinking . . . ♦

Published in the print edition of the May 14, 2018, issue, with the headline “Superior Intelligence.”

Tad Friend has been a staff writer at The New Yorker since 1998. His memoir about his search for his father, “In the Early Times: A Life Reframed,” will be published in May.

