Science and Technology

What the Machines Want

Recently at The Signal: Seth Masket on why U.S. conservatives are standing so resolutely behind Donald Trump. Today: How will new regulation affect the course of AI? Daron Acemoglu on the challenges of trying to rein in a powerful and fast-moving tech industry. Also: Victor Shih on the growing structural problems with the Chinese economy.

Grappling With the Giants

With anxiety rising globally over the rapid spread of increasingly sophisticated artificial-intelligence technologies, governments in America and Europe are making their first attempts to create rules for them. In October, the U.S. administration issued a broad set of regulations and guidelines. Soon after, in December, European Union officials agreed on the final version of the EU’s AI Act.

The U.S. initiative—an executive order from the White House—requires companies working on AI models that affect national security to share the results of safety tests with the government. It also directs federal agencies to develop standardized testing for both safety and performance, suggests watermarks for AI-generated content, and recommends independent oversight of the technology’s potentially catastrophic risks.

The EU rules ban AI models that create scoring systems for individual people based on recorded behaviors—systems used pervasively in China—and establish transparency and oversight requirements for AI applications classified as high-risk.

Still, the U.S. regulations are mostly a framework of recommendations and suggestions, without the force of law; and the EU legislation won’t take effect until next year—an eternity in the world of AI development. Meanwhile, Microsoft, Amazon, and Google have invested $18 billion in AI start-ups, representing about two-thirds of all global venture investment in the new technology. Tech giants and their emerging competitors continue to rush out new AI applications built on the latest advances, and to keep up their intense and sustained lobbying of the U.S. and European governments. So what do these new rules mean for AI?

Daron Acemoglu is a professor of economics at MIT and a co-author of the recent book Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity. In his view, the new regulations will likely have little effect on the industry, because they don’t address the fundamental problem underlying the creation of all AI applications: the massive, global harvesting of data—including nearly all online purchases, social-media posts, photos, personal details, and copyrighted material—without the consent or compensation of the people who create it.

The new U.S. and EU rules, Acemoglu says, label consumer privacy and copyrighted data as important concerns, but they don’t include any limits on AI’s collection of individual data or creative content. Instead, they try to set guardrails that can prevent the worst abuses of the technology—but for the most part, they also leave the industry with all the leeway it needs to pursue its own goals. It’s an approach that doesn’t affect the venture-capital model underwriting AI’s development—and so, won’t affect investors’ incentives to push tech firms to develop AI applications that have the potential to become monopolies, or at least dominant in their markets. And that kind of dominance tends to hinge on one thing above all: having ever more data than your competitors.

Michael Bluhm: Back in October, you made the point that the focus in the tech industry was now to push generative AI—artificial intelligence that produces text, images, and other media—as fast as possible. There’s been some tension, in the meantime, between those pushing and those calling for a slowdown. What’s going on in the AI industry now?
Daron Acemoglu: Generative AI is like a fever dream that’s gripped the technology industry. Everyone’s crazy about it. To be fair, generative AI has made some impressive achievements. Even industry experts didn’t predict the extent of the advances it’s made over the last 12 months. The basis of this technology is the ability to predict the next word in a sentence, yet now it has wound up sounding like a human, making jokes, and providing sophisticated answers to almost any question.

I don’t find the distinction between the accelerationists and the doomsayers useful. Almost everyone in a position of power in the industry right now is in favor of going very fast, and they have similar goals: to collect data, automate tasks, and empower algorithms. Some industry leaders pay lip service to what seem the most frightening ideas about AI, like existential risk, but don’t end up doing anything about them—understanding as well as anyone that, at this point, these ideas are just unfounded.

Bluhm: You said in October that tech companies were developing AI applications largely for search and social media, though also for some automation in white-collar jobs and a few manufacturing jobs. Overall, however, you described AI’s adoption in the economy as “minuscule.”

What do you see as having changed since then?

Acemoglu: Not much. The percentage of U.S. companies that have adopted AI is still tiny—in the low single digits. Its use is mostly among larger firms.

Still, as consumers, a lot of us are already exposed to AI regularly through our smartphones and other online services. We most often see it in search, social-media algorithms, and other algorithmic platforms—like Amazon or Netflix—that make tailored recommendations for you. But AI is being taken up very slowly in production processes, whether for automating tasks or other work.

More from Daron Acemoglu at The Signal:

“One of the things that worries me about the tech industry, in all this, is the vision that dominates it: machines doing more and more of what humans do. For AI, the implication of this vision is that tech leaders want mostly to throw a lot of data and computing power into generative AI models, which will become ‘smart.’ And then, once they’re smart, we’ll need fewer workers; we’ll just need AI plus tech executives, talented engineers, computer scientists, and other scientists. It’s a vision that’s extremely prevalent in the tech industry—and powerfully shapes how the industry approaches investment, development, and the question of regulation.”

“As a whole, the regulation of AI is about the regulation of data. In the economy of the 21st century, the control of data will arguably be more important than the control of land was in earlier centuries. Control of data is one of the foundations of production. Laws don’t allow private corporations to expropriate all the land they want. But in America and Europe, the regulatory structure we live in today allows the biggest companies in the world to expropriate everyone’s data without paying anything for it or even asking permission.”

“An AI algorithm can very cleverly figure out a person’s vulnerabilities and then tailor behavioral manipulation to them on an individual basis. Regulators aren’t yet asking, What should algorithms do or not do? So their new regulations don’t consider whether outside agencies should even have a say on the question. They don’t address how to treat libel, freedom of speech, or editorial responsibility in content generated by an AI algorithm. There’s no infrastructure to audit what algorithms are doing. Yet what they do has tremendous implications for society, and these implications are only proliferating.”

Members can read the full interview here
FROM THE FILES

Bridges to Nowhere

On January 28, a court in Hong Kong ordered the liquidation of the Chinese company Evergrande—not long ago the largest real-estate developer in the People’s Republic. The company had more than US$300 billion in debt, after collecting down payments and mortgages from hundreds of thousands of Chinese homebuyers whose apartments remained unbuilt. In late 2021, when Evergrande first defaulted on some of its debt payments, Victor Shih—the Ho Miu Lam Chair in China and Pacific Relations at the University of California, San Diego—looked at the growing structural problems with the Chinese economy.
To access our full articles and archive, and to support The Signal as we build a new approach to current affairs, become a member.
Join The Signal
