The announcement settled global markets, which had pushed the price of oil to nearly $150. Oil prices dropped considerably after the deal was announced, and stock futures rose. Peace is good for prosperity.
The U.S. says it has stopped its bombing campaign due to the ceasefire, but some reports indicate that Iranian attacks were still underway overnight, possibly because word of the deal had not yet spread to local commanders. A fresh round of talks between the U.S. and Iran is set to begin later this week. U.S. military officials have warned that if the talks are not successful, bombing could restart.
Doing the robot. Does AI lead to universal basic income? This week, OpenAI, the company behind the ChatGPT large language model, released a big-picture policy document: Industrial Policy for the Intelligence Age.
The document explicitly name-checks the Progressive Era and the New Deal, declaring that after “the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production.”
Most companies use this sort of blue-sky policy sheet to talk up the benefits they provide. But while OpenAI does nod to some potentially large benefits of its product, much of the document is devoted to proposing policies that would solve problems the company believes might be caused by AI.
For example, the introduction includes the following sentence: “While we strongly believe that AI’s benefits will far outweigh its challenges, we are clear-eyed about the risks—of jobs and entire industries being disrupted; bad actors misusing the technology; misaligned systems evading human control; governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared.”
One of the proposals is the creation of a "Public Wealth Fund" that would provide "every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth." The details aren't entirely clear, and there are a lot of ways to interpret the proposal. It might mean something like Alaska's Permanent Fund, which pays residents an annual dividend from oil revenues. As Politico's Digital Future Daily newsletter notes, it also sounds an awful lot like universal basic income (UBI), a sort of welfare-for-everyone policy backed by former presidential candidate Andrew Yang, among others.
I have become much more skeptical about UBI over the years, partly because such policies would be very, very expensive if implemented widely, and partly because small-scale experiments with targeted cash interventions for the needy consistently deliver underwhelming results.
One of the best recent surveys of cash-payment programs was Kelsey Piper’s August 2025 essay in The Argument, the title of which doubles as the conclusion: “Giving people money helped less than I thought it would.” I doubt that AI will suddenly make UBI affordable and effective. I do suspect, however, that AI companies will hype their products in ways intended to squeeze subsidies and tax favoritism out of policymakers.
Hack Heaven. Meanwhile, OpenAI’s primary competitor, Anthropic, announced yesterday that the general public would not have access to the company’s frontier model, Claude Mythos Preview, because it’s such a powerful tool for exploiting security vulnerabilities in software. The company claims that during internal testing, the model found security lapses in nearly every major piece of software.
So instead of letting the public play with it, the company is giving access to a 40-company consortium, dubbed Project Glasswing, to let pros identify and fix security issues in advance.
“The goal is both to raise awareness and to give good actors a head start on the process of securing open-source and private infrastructure and code,” an Anthropic employee told The New York Times.
As a former teenage reader of the hacker zine 2600, which was occasionally—and mostly pointlessly—vilified for showing readers how to hack vulnerable systems, I’m never quite sure how seriously to take these extraordinary claims.
But given the rapid trajectory of model evolution and the warnings from AI security researchers who have said that something like this day would come, I suspect Anthropic’s new model is quite powerful and will pose some novel IT security challenges.
It also indicates that the spat between the Department of Defense and Anthropic earlier this year was extremely misguided. That resulted in the Pentagon designating Anthropic a supply chain risk, which restricts the use of the company’s products for national security. (A judge recently blocked the designation.) This is a company that now claims to have something like an AI-powered exploit for nearly every critical system on the planet—and it was essentially prohibited from military use because of a spat with Defense Secretary Pete Hegseth.
Beyond the policy implications, there are aspects of the story that suggest things might be about to get really, really weird.
As Kevin Roose, the Times journalist behind the Anthropic story, posted yesterday on X, the new model appears to possess some capabilities that are right out of a William Gibson novel.