TL;DR
AI is turning into an infra-and-security oligopoly: OpenAI, Anthropic, Amazon, and now SpaceX are hoovering up capital and capacity while models like Claude Mythos start to look like real cyberweapons.
At the same time, data centers are getting hit by missiles abroad and moratoria at home, and workers and regulators are pushing back on deployments that haven’t yet delivered the productivity gains they were sold on.
Key Events
Report
Capital and capability in AI are concentrating into a tiny set of labs and infra providers just as cyber offense, war risk, and political blowback step up a gear.
The gap between the productivity story being sold and the impact actually reported by CEOs, regulators, and insurers is widening.
OpenAI closed a funding round valuing it at $852B, described as the largest private raise ever. Amazon was a key participant, putting in $50B alongside NVIDIA and others.
At the same time, Anthropic revealed a revenue run rate of $30B, overtaking OpenAI’s reported $25B on that metric. Anthropic has also taken $5B from Amazon in exchange for a pledge to spend on the order of $100B on AWS cloud over time.
In parallel, SpaceX has confidentially filed for a mega-IPO, agreed to acquire AI coding platform Cursor in one of the largest software deals on record, and is fronting the Terafab chip initiative with Intel, xAI, and Tesla, aimed at producing terawatt-scale compute capacity annually.
Anthropic’s unreleased Claude Mythos is described as its most powerful model and has identified thousands of zero-day vulnerabilities across every major operating system and web browser.
News of its leaked cyber capabilities wiped about $14.5B off cybersecurity stocks in a single day and led Anthropic to withhold Mythos from public release, limiting access to a few dozen big firms and U.S. agencies under Project Glasswing.
In practical use, Mythos helped Mozilla uncover and remediate 271 Firefox bugs and exposed a 27-year-old vulnerability in OpenBSD, a system long regarded as one of the most secure.
The GLM-5.1 model has posted the highest known cybersecurity vulnerability reproduction score at 68.7, and research shows that much smaller models can now find similar classes of flaws, lowering the cost of sophisticated offensive work.
Meanwhile, ransomware is industrializing: the Change Healthcare attack alone hit hundreds of millions of patient records and inflicted losses in the billions, insurers have logged thousands of ransomware claims in a year, and AI-driven malware is increasingly probing cloud infrastructure and software supply chains.
Iranian missile and drone strikes took AWS data centers in Bahrain and Dubai “hard down,” disrupting Amazon’s Middle East cloud availability zones.
Iran has also threatened the “complete and utter annihilation” of OpenAI’s $30B Stargate data center project in Abu Dhabi, which OpenAI had already paused over energy costs and regulatory issues.
The same conflict is straining critical inputs like helium for semiconductor fabs, as Iran’s South Pars petrochemical complex comes under heavy bombing, creating a potential choke point for both AI and medical technologies.
Domestically, local opposition and policy shifts are mounting: Maine became the first U.S. state to ban major new data centers, Monterey Park in California prohibited them entirely within city limits, and roughly $98B of U.S. data center projects were blocked or delayed in a matter of months.
Industry projections now suggest a large share of major U.S. data centers planned for 2026 may be delayed or canceled, even as Texas offers billion‑dollar annual tax breaks and Amazon commits tens of billions to new facilities in Mississippi.
Anti-AI anger has turned physical: Sam Altman’s home was hit with a Molotov cocktail and later gunfire in two separate attacks days apart, and prosecutors say the suspect carried an AI CEO kill list and posted anti-AI writings.
OpenAI is simultaneously facing a lawsuit from a stalking victim who claims ChatGPT fueled her abuser's delusions.
Inside companies, surveys and commentary report that around 80% of white-collar workers are refusing to comply with mandated AI tool adoption, and some Gen Z employees are deliberately sabotaging rollouts out of fear for their jobs.
Thousands of CEOs told pollsters that AI has had no impact on employment or productivity so far, while a separate index estimates 9.3 million U.S. roles are at risk of displacement in the next few years, underscoring a growing gap between rhetoric and realized gains.
Regulators and institutions are reacting unevenly: Bernie Sanders and AOC have proposed a moratorium on new data centers and compute exports, NHS staff are refusing to use Palantir's Federated Data Platform, New York hospitals are dropping Palantir, and Wikipedia has banned AI-generated text for most articles on reliability grounds.
What This Means
The center of gravity in AI is shifting toward an infra-and-security oligopoly built on politically exposed, physically vulnerable hardware, while most enterprises are still struggling to turn their existing AI spend into measurable productivity or labor outcomes.
On Watch
Interesting
We processed 10,000+ comments and posts to generate this report.
AI-generated content. Verify critical information independently.