Something shifted this week. AI didn't just generate content — it sued governments, crashed billion-dollar infrastructure, bought social networks, and shrank a dying dog's tumour. The agentic era is no longer theoretical. It is here, it is consequential, and as the dust from Amazon's six-hour outage and Anthropic's Pentagon lawsuit settles, it is raising questions nobody has clean answers to.

Below are the five stories that defined the week and what they mean for you.

01 — Anthropic Sues the Pentagon

In the most dramatic AI-policy confrontation since OpenAI's board crisis, Anthropic filed two simultaneous lawsuits on March 9 against the Trump administration — one in the Northern District of California and another in the D.C. Circuit Court of Appeals. The trigger: Defence Secretary Pete Hegseth had designated Anthropic a "supply-chain risk", effectively blacklisting the company from working with federal agencies and military contractors.

The dispute had been simmering for months. Anthropic drew two non-negotiable red lines in its Pentagon contract negotiations: Claude must not be used for mass surveillance of US citizens, and it must not be used to operate autonomous weapons. The Pentagon countered that it needed to use Anthropic's AI for "all lawful purposes" and could not allow a private company to veto how it employs tools in a national-security emergency. Negotiations collapsed, Hegseth issued the blacklist, and Anthropic sued.

Anthropic argues the designation is an unlawful campaign of retaliation and a violation of its First Amendment rights. It also makes a pointed strategic argument: cutting off a leading American AI lab from defence work hands a competitive advantage to China, whose AI companies operate under no such constraints. The counterargument cuts the same way, though: Chinese AI companies already cooperate fully with their government on national security, so Anthropic's refusal to comply with the US government may itself cede critical ground to China.

Why This Matters: This is the first lawsuit that tests whether AI companies can impose ethical guardrails on their own technology against government will. The outcome will define how AI safety principles can survive contact with state power — and which companies are willing to fight for them.

02 — Meta Buys Into the Agentic Web

On March 10, Meta announced the acquisition of Moltbook — the viral "social network for AI agents" — for an undisclosed sum. Founders Matt Schlicht and Ben Parr will join Meta Superintelligence Labs on March 16. The deal closed in under two weeks from first contact.

Moltbook is a Reddit-like platform where AI agents built on OpenClaw can talk to one another — posting, commenting, upvoting — entirely without human interaction. It racked up millions of registered bots within days of launch and became the obsession of Silicon Valley. The platform illustrates something important: OpenClaw, the open-source autonomous AI agent created by Austrian developer Peter Steinberger (first known as Clawdbot, then Moltbot before Anthropic's trademark intervention), spawned an entire ecosystem before most enterprise IT teams had even heard of it.

Meta missed out on acqui-hiring Steinberger himself — he was snapped up by OpenAI. So it went after the next best thing: the platform his agents made famous. The real prize is the infrastructure for what TechCrunch called "the agentic web" — a layer of the internet where AI agents interact with each other autonomously, and Meta wants to own the social graph of that layer.

Why This Matters: Facebook built a social network for humans. Now Meta is positioning to build one for agents. If every AI assistant eventually has a persistent identity and social graph, Meta wants to be the platform that connects them. The Moltbook acquisition is a bet that the agentic web will have a social layer — and that social layer will matter.

03 — Amazon's AI Mandate Crashes Its Own Website

This week's most instructive cautionary tale came from Amazon. In November 2025, an internal memo signed by two SVPs mandated that 80% of Amazon engineers use Kiro — Amazon's proprietary AI coding assistant — every week. The implicit message was clear: AI-generated code was the new normal, and adoption velocity mattered more than caution.

In mid-March 2026, Amazon's main ecommerce site went dark for six hours. Orders dropped 99% across North American marketplaces. The culprit: a faulty deployment of AI-written code that had shipped with insufficient review. The outage cost an estimated 6.3 million orders. An emergency engineering meeting convened by SVP Dave Treadwell on March 10 resulted in a new policy: all AI-assisted code deployed by junior engineers now requires senior sign-off. Amazon is also running a 90-day safety reset covering 335 critical systems.
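For teams weighing a similar rule, the policy reduces to a simple merge gate. The sketch below is purely illustrative — the `AI-Assisted` commit trailer, the `SENIOR_ENGINEERS` roster, and every function name are invented for this example, not Amazon's actual tooling:

```python
# Hypothetical merge gate: AI-assisted commits from junior engineers
# may only land with at least one senior approver.
SENIOR_ENGINEERS = {"alice", "bob"}  # assumed roster of approved reviewers


def requires_senior_signoff(trailers: dict, author_level: str) -> bool:
    """True when the commit is AI-assisted and its author is junior."""
    ai_assisted = trailers.get("AI-Assisted", "false").lower() == "true"
    return ai_assisted and author_level == "junior"


def gate(trailers: dict, author_level: str, approvers: set) -> bool:
    """Return True if the merge may proceed under the policy."""
    if not requires_senior_signoff(trailers, author_level):
        return True  # policy does not apply; merge freely
    return bool(approvers & SENIOR_ENGINEERS)  # need a senior approver


# A junior's AI-assisted change without a senior approver is blocked:
print(gate({"AI-Assisted": "true"}, "junior", {"carol"}))  # False
print(gate({"AI-Assisted": "true"}, "junior", {"alice"}))  # True
```

In practice this kind of check lives in CI or a pre-merge hook; the point is that the policy is mechanically enforceable, not a memo that relies on goodwill.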

The irony is pronounced. Amazon, which sells AI coding tools to other enterprises through AWS, discovered the hard way that AI-generated code and aggressive adoption mandates do not mix without equally aggressive review culture. The company that told the world to move fast with AI has quietly pressed pause.

Why This Matters: Every organisation mandating AI coding adoption should read this story. The question is no longer "can AI write code?" — it can — but "what governance model catches the errors AI confidently introduces?" Amazon is now building that model the expensive way.

04 — Anthropic Is Winning the Market It Cannot Supply

A paradox is emerging at the top of the AI race: Anthropic is winning more business than it can serve. The Ramp March 2026 AI Index — drawn from actual spending data across Ramp's corporate card users — shows Claude winning 70% of head-to-head matchups against OpenAI products. Anthropic also graced the cover of TIME magazine this week, a cultural marker that would have seemed improbable twelve months ago.

The a16z Top 100 AI Apps report added another data point: Claude and ChatGPT users barely overlap. Only 11% of heavy Claude users also count as heavy ChatGPT users, suggesting the two products have carved genuinely distinct markets. Claude is pulling developers, researchers, and enterprise power-users; ChatGPT retains its consumer base. Meanwhile, Anthropic launched inline interactive visuals — Claude can now generate live charts and diagrams embedded directly in conversation responses. The demand problem is real: supply constraints have forced Anthropic to slow enterprise onboarding even as inbound interest accelerates.

Why This Matters: Demand exceeding supply is a luxury problem, but also a dangerous window. Every enterprise that cannot get Claude on their timeline is evaluating alternatives. The question for Anthropic is whether it can build infrastructure fast enough to convert market leadership into durable market share before OpenAI or Google close the quality gap.

05 — A Dog Named Rosie and the Cancer Vaccine ChatGPT Designed

The week's most quietly extraordinary story barely made the front page. Sydney tech entrepreneur Paul Conyngham, with no biomedical training, used ChatGPT and AlphaFold to design a personalised mRNA cancer vaccine for his rescue dog, Rosie. He used ChatGPT to plan the DNA sequencing workflow and identify treatment targets, AlphaFold to model the resulting protein structures, and a machine learning pipeline to select the optimal neoantigens. The vaccine was produced at UNSW and administered at the University of Queensland in December 2025.

Within one month, Rosie's mast cell tumour had shrunk 75%. By January, she was jumping fences to chase rabbits. TAAFT newsletter called it "the first AI-designed cancer vaccine for a dog." The story spread rapidly because it is accessible — a person without a PhD, using publicly available AI tools, achieved something that would previously have required a multimillion-dollar research team. (Note: this is n=1, and researchers are careful to flag the caveats around reproducibility and clinical validation.)

Why This Matters: This story is a preview of AI's role in personalised medicine. The tools already exist. What is missing is the regulatory framework, the clinical validation infrastructure, and the safety culture to move from extraordinary anecdote to repeatable therapy. That gap is closing faster than most hospitals realise.

06 — Quick Hits: The Rest of the Week

Agents & Products

Perplexity launched "Personal Computer" — a full-stack agentic API platform enabling developers to build OpenClaw-style autonomous agents. NVIDIA open-sourced NemoClaw on March 6 as a secure, production-grade enterprise alternative to the vulnerable OpenClaw. Sora may be coming natively inside ChatGPT. Microsoft and Anthropic launched Copilot Cowork on March 10 — a joint enterprise desktop automation product for Microsoft 365. Google brought Gemini to Maps and Android Auto for conversational navigation.

Robots & the Physical World

Figure AI cleared a major training milestone in teaching humanoid robots to operate autonomously without teleoperation. A Chinese humanoid robot was "arrested" by Shenzhen police after wandering into a restricted area. Scientists emulated a complete mouse brain — neuron by neuron — in a computer, running learning tasks including a simplified Doom.

Strategy & Economics

Yann LeCun reportedly backed a $1 billion bet that LLMs alone cannot achieve AGI, signalling tension at Meta between his world-model research agenda and the company's LLaMA investment. Atlassian laid off 1,600; Oracle is planning 30,000 cuts — both citing AI-driven productivity gains. Lovable hit $400M ARR. Cursor surpassed $2B ARR. The vibe-coding economy is real and growing — even as Amazon's story shows real operational risk at scale.

Until Next Week

The through-line of this week is agency — AI systems acting in the world without waiting for a human prompt. Agents are suing governments, buying social networks, crashing retail infrastructure, and designing cancer therapies. The speed is not slowing.

If one idea is worth sitting with, it is this: the organisations that will navigate this well are not those moving fastest, but those who have answered the question Amazon is now asking in retrospect — what does our human-review layer look like when AI writes most of the code?

Distilled AI Digest — The signal, without the noise. AI intelligence for practitioners and the executives who lead them. Issue #5 March 2026

The AI landscape doesn't pause. Neither should we. Subscribe to receive issues directly in your inbox and stay ahead of every shift that matters.
