The Plain View
Last week the Center for Humane Technology summoned over 100 leaders in finance, philanthropy, industry, government, and media to the Kissinger Room at the Paley Center for Media in New York City to hear how artificial intelligence might wipe out humanity. The two speakers, Tristan Harris and Aza Raskin, began their doom-time presentation with a slide that read: “What nukes are to the physical world … AI is to everything else.” We were told that this gathering was historic, one we would remember in the coming years as, presumably, the four horsemen of the apocalypse, in the guise of Bing chatbots, would descend to replace our intelligence with their own. It evoked the scene in old science fiction movies—or the more recent farce Don’t Look Up—where scientists discover a menace and attempt to shake a slumbering population by its shoulders, explaining that this deadly threat is headed right for us and we will all die if we don’t do something NOW.
At least that’s what Harris and Raskin seem to have concluded after, by their account, some people working inside companies developing AI approached the Center with concerns that the products they were creating were phenomenally dangerous, saying an outside force was required to prevent catastrophe. The Center’s cofounders repeatedly cited a statistic from a survey that found that half of AI researchers believe there is at least a 10 percent chance that AI will make humans extinct.
In this moment of AI hype and uncertainty, Harris and Raskin have predictably chosen themselves to be the ones who break the glass to pull the alarm. It’s not the first time they’ve sounded the sirens. Tech designers turned media-savvy communicators, they cofounded the Center to inform the world that social media was a threat to society. The ultimate expression of their concerns came with their involvement in a popular Netflix documentary-cum-horror-film called The Social Dilemma. While the film is nuance-free and somewhat hysterical, I agree with many of its complaints about social media’s attention-capture, its incentives to divide us, and its weaponization of private data. These were presented through interviews, statistics, and charts. But the doc torpedoed its own credibility by cross-cutting to a hyped-up fictional narrative straight out of Reefer Madness, showing how a (made-up) wholesome heartland family is brought to ruin—one kid radicalized and jailed, another depressed—by Facebook posts.
This one-sidedness also characterizes the Center’s new campaign called, guess what, the AI Dilemma. (The Center is coy about whether another Netflix doc is in the works.) As with the previous dilemma, many of the points Harris and Raskin make are valid—such as our current inability to fully understand how bots like ChatGPT produce their output. They also gave a nice summary of how AI has so quickly become powerful enough to do homework, power Bing search, and express love for New York Times columnist Kevin Roose, among other things.
I don’t want to dismiss entirely the worst-case scenario Harris and Raskin invoke. That alarming statistic about AI experts believing their technology has a shot at killing us all actually checks out, kind of. In August 2022, an organization called AI Impacts reached out to 4,271 people who authored or coauthored papers presented at two AI conferences and asked them to fill out a survey. Only 738 responded, and some of the results are a bit contradictory, but, sure enough, 48 percent of respondents saw at least a 10 percent chance of an extremely bad outcome, namely human extinction. AI Impacts, I should mention, is supported in part by the Centre for Effective Altruism and other organizations that have shown an interest in far-off AI scenarios. In any case, the survey didn’t ask the authors why, if they thought catastrophe possible, they were writing papers to advance this supposedly destructive science.
But I suspect this extinction talk is just to raise our blood pressure and motivate us to add strong guardrails to constrain a powerful technology before it gets abused. As I heard Raskin and Harris, the apocalypse they refer to is not some kind of sci-fi takeover like Skynet, or whatever those researchers thought had a 10 percent chance of happening. They’re not predicting sentient evil robots. Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force.
For instance, consider one of the slides, among many, that Harris and Raskin shared about AI’s potential harm. It was drawn from a startling study in which researchers applied advanced machine-learning techniques to data from brain scans. With the help of AI, they could determine, from the scans alone, which objects the subjects were looking at. The message was seemingly clear: In the dystopian AI world to come, authorities will be looking inside our heads! It’s something that Bob Dylan probably didn’t anticipate nearly 60 years ago when he wrote, “If my thought dreams could be seen / they’d probably put my head in a guillotine.” Sitting in the Kissinger Room, I wondered whether certain politicians were sharpening their decapitation blades right now.
But there’s another side to that coin—one where AI is humanity’s partner in improving life. This experiment also shows how AI might help us crack the elusive mystery of the brain’s operations, or communicate with people with severe paralysis.
Likewise, some of the same algorithms that power ChatGPT and Google’s bot, LaMDA, hold promise for helping us identify and fight cancers and other medical issues. Though it’s not a prominent theme in the Center’s presentation, the cofounders understand this. In a conversation I had with Raskin this week, he acknowledged that he’s an enthusiastic user of advanced AI himself. He uses machine learning to help understand the language of whales and other animals. “We’re not saying there’s not gonna be a lot of great things that come out of it,” he says. Let me use my biological large language model to strip away the double negative—he’s saying there will be a lot of great things coming out of it.
What’s most frustrating about this big AI moment is that the most dangerous thing is also the most exciting thing. Setting reasonable guardrails sounds like a great idea, but doing that will be cosmically difficult, particularly when one side is going DEFCON and the other is going public, in the stock market sense.
So what’s their solution? The Center wants two immediate actions. First, an AI slowdown, in particular “a moratorium on AI deployment by the major for-profit actors to the public.” Sure, Microsoft, Meta, Google, and OpenAI can develop their bots, but keep them under wraps, OK? Nice thought, but at the moment every one of those companies is doing the exact opposite, terrified that their competitors might get an edge on them. Meanwhile, China is going to do whatever it damn pleases, no matter how scary the next documentary is.
The recommended next step takes place after we’ve turned off the AI faucet. We use that time to develop safety practices, standards, and a way to understand what bots are doing (which we don’t have now), all while “upgrading our institutions adequately to meet a post-AI world.” I’m not sure how you do that last part, though pretty much all the big companies doing AI assure us they’re already working through the safety and standards stuff.
Of course, if we want to be certain about those assurances, we need accountability—meaning law. No accident that this week, the Center repeated its presentation in Washington, DC. But it’s hard to imagine ideal AI legislation from the US Congress. This is a body that’s still debating climate change when half the country is either on fire, in a drought, flooded by rising sea levels, or boiling at temperatures so high that planes can’t take off. The one where a plurality of members are still trying to wish away the reality of a seditious mob invading their building and trying to kill them. This Congress is going to stop a giant nascent industry because of a bunch of slides?
AI’s powers are unique, but the struggle to contain a powerful technology is a familiar story. With every new advance, companies (and governments) have a choice of how to use it. It’s good business to disseminate innovations to the public, whose lives will be improved and even become more fun. But when the technologies are released with zero concern for their negative impact, those products are going to create misery. Holding researchers and companies accountable for such harms is a challenge that society has failed to meet. There are endless cases where human beings in charge of things make conscious choices that safeguarding human life is less important than, say, making a profit. It won’t be surprising if they build those twisted priorities into their AI. And then, after some disaster, claim that the bot did it!
I’m almost tempted to say that the right solution to this “dilemma” is beyond human capability. Maybe the only way we can prevent extinction is to follow guidance by a superintelligent AI agent. By the time we get to GPT-20, we may have our answer. If it’s still talking to us by then.