In my book Google Archipelago: The Digital Gulag and the Simulation of Freedom (2019), I argued that the greatest danger of AI is not a rogue superintelligence that might rebel against humanity but rather perfect AI obedience. AI could function as the ultimate instrument of authoritarian “elites,” faithfully executing total surveillance, behavioral scoring, and preemptive social control.
Further, AI reliance risks the wholesale abdication of human agency and the flattening of human intelligence and sociality. As decision-making authority is ceded to algorithms, people will become passive nodes in a system that replaces human thinking with AI information processing—“Bots R Us”—eroding autonomy, creativity, and genuine deliberation.
Geopolitically, AI threatens to produce unrivaled systems of ideological and political dominance and cohesion. The dominant worldview is the worldview that dominates AI. The AI race is not toward liberation but toward a hybrid post-human order in which freedom is rendered obsolete, and free will, as Yuval Noah Harari says, is “history.”
In Discipline and Punish (1975), Michel Foucault referred to the governmental expansion of the state through such surveillance “technologies of power” as “panopticism” (195–230). Panopticism names a transmutation in the expression and exercise of power that took place in the passage from the pre-modern to the modern period. This change involved a shift away from primarily corporal forms of punishment (torture, quartering, branding, and other brutal rituals for inflicting bodily pain) and, at the same time, power’s decentralization, its metastasis and penetration of the entire society, its effects no longer confined to the imprisoned, the insane, or the otherwise detained. The new “disciplinary” regime included the reformed prisons and other places of confinement, but it also escaped the confines of institutions to be applied universally to the entire population. Under panopticism, the whole society became a disciplinary society.

The Panopticon itself is a circular building in which subjects (inmates, patients, students, etc.) are arrayed in cells surrounding a central tower. The subjects can be seen at any time by a guard, who may (or may not) occupy the tower. The captives cannot see into the tower, nor can they see each other; thus they are never certain whether they are being observed. Although no subject can ever verify that she is being watched at a given moment, the mere possibility of being watched at any time produces the intended effects: hyper-vigilance and self-circumspection. The subjects internalize the observer and effectively monitor and police themselves. As Foucault brilliantly describes the effects of this technological innovation:
He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles [that of the observer and the observed]; he becomes the principle of his own subjection (202–203).
Foucault shows how the disciplinary technologies that arose with panopticism became less forceful, more lightweight, less burdensome to the body, yet at the same time utterly ubiquitous.
As it happens, in the U.S., at least, the Panopticon will be owned and operated by Palantir, the software company named after the legendary seeing stones of The Lord of the Rings. On the one hand, Palantir’s Maven AI platform has been made an official “program of record” by the Pentagon, embedding it as the foundational battlefield intelligence system for targeting, resource allocation, and real-time decision-making. Maven employs advanced artificial intelligence and machine learning—technologies that devour colossal streams of data to detect patterns and identify objects with ruthless efficiency—to ingest satellite imagery, drone feeds, radar signals, and intelligence reports. It spots military targets—vehicles, buildings, weapons stockpiles—then automates swaths of the so-called “kill chain,” the military sequence of locating, tracking, and striking a target, compressing decisions from detection to destruction at terrifying speed.
Tragically, this very machinery was on display during the opening hours of Operation Epic Fury. A U.S. strike obliterated the Shajareh Tayyebeh girls’ elementary school in Minab, southern Iran, slaughtering between 165 and 180 people, most of them girls between seven and twelve years old. The building had long since been converted from military use, yet stale intelligence still listed it as a legitimate target. Maven’s AI-driven velocity collapsed the narrow window for human scrutiny, turning a schoolyard into a charnel house in seconds. Military investigations have found the U.S. culpable for the “accident,” and the Pentagon is now “investigating” its own platform’s role in the atrocity.
Palantir software has also been deployed by the IDF in Gaza, with devastating effects. The company forged a formal “strategic partnership” with Israel’s Ministry of Defense in January 2024—complete with a board meeting held in Tel Aviv—explicitly to supply advanced technology for “war-related missions.” Palantir’s Gotham and artificial intelligence platform have been used to fuse vast intelligence feeds, satellite imagery, intercepted communications, and behavioral data into AI-generated kill lists, optimizing the “kill chain” with ruthless speed and enabling rapid target selection and strikes. UN Special Rapporteur Francesca Albanese has cited this very partnership as grounds for believing Palantir’s tools have facilitated Israel’s “unlawful use of force,” contributing to disproportionate civilian deaths and raising the specter of complicity in war crimes. In short, the same preemptive logic that turned a girls’ school in Iran into rubble has been battle-tested and refined on a far vaster scale in Gaza—Palantir’s AI seeing stones peering into every corner of the battlefield and pronouncing sentence before any human hand reaches for the trigger.
On the other hand, the Department of Homeland Security has signed a billion-dollar blanket purchase agreement to deploy Palantir’s Gotham and Foundry platforms department-wide, fusing data across multiple agencies for threat identification, case management, and operational coordination.
Palantir won a $30 million contract from ICE in April 2025 to build ImmigrationOS—an AI-powered system for managing the entire immigration process. This fits with other major Trump-era contracts for the company: a potential $10 billion software and data deal with the U.S. Army, more than $180 million in IRS agreements since 2018 (with recent projects that organize tax records into searchable databases, choose audit targets, and spot fraud), and ongoing work with the Department of Health and Human Services and CDC—including a $443 million project to create a single shared dashboard that pulls public-health data together from multiple systems.

Factor in Palantir’s entanglements with more than 30 other federal agencies and the pattern becomes unmistakable: Palantir holds a de facto master database, one that fuses masses of data—on health, tax liabilities, financial trails, travel records, immigration status, and behavioral patterns—into a panoptic tableau. Because Palantir’s tools are built for interoperability, the Palantir Panopticon expands by necessity. Health records flow into immigration files into tax data into predictive policing. Battlefield AI logic migrates seamlessly into domestic law enforcement.
And it’s not only a database. It’s also swarms of AI agents predicting behavior and initiating actions. The “precrime” intelligence capabilities long embedded in Gotham—pattern recognition, anomaly detection, and predictive analytics—are now being scaled for domestic application, turning the entire population into nodes within an ever-expanding, totalizing threat matrix.
When a single entity becomes the indispensable infrastructure for both military targeting and domestic data fusion, we are no longer dealing exclusively with the fusion of the state and the corporation. We are also dealing with state control of the artificial intelligence explosion. We are dealing with AI dictatorship—under ruling class terms, of course.
In the Palantir Panopticon, the central tower is not a guard booth but a server farm. The inmates are not convicts but citizens. And the inspection is perpetual, penetrating, and AI-“enhanced.” Foucault would have recognized it instantly as the culmination of techno-biopolitical governmentality, as the consummate panopticism. I see it as that, and as the fulfillment of the warning I issued six years ago in Google Archipelago: the digital gulag was never going to be limited to the internet, as it were. It was always destined to become the operating system of the corporate state itself. That is, we are no longer on the internet. We are in it. The Palantir Panopticon is not coming. It is here. And once fully awake, it will never close its eyes again.