What a week to be watching AI from the outside. OpenAI buried one of its most-hyped products. A new benchmark exposed the gap between AI's marketing and its actual capabilities. Meta built something that sounds like science fiction. Washington finally showed up with a rulebook. And Google shipped an algorithm straight out of a tech comedy. Let's dig in.

1. OpenAI Kills Sora — The $1B Flameout

Less than six months after its splashy launch, OpenAI has shut down Sora — its AI video-generation app. The app peaked at 3.3 million downloads in November 2025, but by February had dropped to just 1.1 million. OpenAI quietly pulled the plug on March 24.

Why it failed: Sora became a deepfake factory almost immediately. Hollywood raised alarms about nonconsensual AI videos and realistic synthetic content. Moderation couldn't keep up, and the compute costs were enormous for a product with declining usage.

The Disney domino: Disney had been planning a $1 billion investment in OpenAI, tied partly to a content partnership that included Sora. When Sora went down, so did the deal. Disney officially ended its partnership the same week.

What OpenAI said: The Sora research team will pivot to "world simulation research" — the technology will inform robotics and other AI projects rather than power a consumer app.

THE LESSON: Consumer generative video is a harder problem than generative text. Speed of launch does not guarantee speed of adoption — especially when the product gets weaponised almost immediately.

2. ARC-AGI-3 — Every AI Just Flunked the Hardest Test

The ARC Prize Foundation launched ARC-AGI-3 on March 25, and the results were sobering. The best AI in the world — Google's Gemini 3.1 Pro — scored 0.37%. Humans score 100%. Let that gap sink in.

What makes it different: ARC-AGI-3 is not a knowledge test. Each challenge is a turn-based game with its own internal logic. There are no instructions, no descriptions, no stated win conditions. The agent must figure out what it's trying to do and how to win — entirely on the fly, from scratch, every time. It is a test of genuine learning, not pattern recall.
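To make that concrete, here is a toy sketch of the interaction loop such a benchmark implies. The `Env` class and its interface are hypothetical illustrations, not the actual ARC-AGI-3 API: the agent receives no rules, only a score, and must infer the hidden win condition purely from how its actions change that score.

```python
import random

class Env:
    """Hidden-rule toy game: the score rises only when the agent presses
    the secret correct button. The agent is told nothing in advance."""
    def __init__(self):
        self._target = random.randrange(4)  # hidden win condition
        self.score = 0

    def step(self, action):
        if action == self._target:
            self.score += 1
        return self.score  # the only feedback the agent ever gets

def play(env, steps=100):
    # Explore each action round-robin, credit whichever ones move the
    # score, then exploit the best guess -- learning the rule on the fly.
    counts = [0] * 4
    last = env.score
    for t in range(steps):
        a = t % 4 if t < 20 else max(range(4), key=lambda i: counts[i])
        s = env.step(a)
        counts[a] += s - last  # reward signal attributed to the action
        last = s
    return env.score
```

A trivial explore-then-exploit agent solves this toy game; the point of ARC-AGI-3 is that its games make exactly this kind of rule inference hard enough that frontier models score near zero.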

The leaderboard: Gemini 3.1 Pro led with 0.37%, followed by OpenAI's GPT-5.4 at 0.26% and Anthropic's Claude Opus 4.6 at 0.25%. These are the frontier models — the absolute best we have. All essentially scored zero.

The significance: AI companies have been claiming rapid progress toward AGI (Artificial General Intelligence). ARC-AGI-3 shows that "improves on benchmarks" mostly means "gets better at memorising answers." True on-the-fly learning remains an almost entirely unsolved problem.

THE SIGNAL: Benchmark scores are not the same as intelligence. ARC-AGI-3 is the clearest evidence yet that today's frontier AI can't learn — it can only recall. Real AGI is still a different category of problem entirely.

3. Meta Builds a Brain Decoder

While OpenAI retreated, Meta advanced into territory that sounds like science fiction. The company released TRIBE v2, an open-source AI model that predicts how your brain responds to what you see, hear, and read.

How it works: TRIBE v2 was trained on over 500 hours of fMRI brain scans from more than 700 people. It maps how the brain activates in response to video, audio, and text. Version 2 covers 70,000 distinct brain regions — up from just 1,000 in the original — and can make accurate predictions for people it has never scanned before.

What makes it remarkable: The model's predictions match population-level brain activity better than most real scans, which are often degraded by movement and noise. In other words, the AI predicts your brain better than an fMRI machine measures it.

Why Meta is doing this: The stated goals are neuroscience acceleration and assistive technology — helping people who have lost the ability to speak. The unstated subtext: brain-computer interface research, and a long-term bet that the next computing platform is the human nervous system.

WATCH THIS: TRIBE v2 is open-source. Researchers worldwide can now build on a foundation model of human brain activity. The applications — medical, commercial, and privacy-invasive — will arrive faster than regulators expect.

4. Washington's AI Blueprint — The Rulebook Arrives

On March 20, the Trump Administration released the first-ever National Policy Framework for Artificial Intelligence — a set of legislative recommendations that could reshape how AI is built and deployed in the United States.

The headline move: The framework calls on Congress to federally preempt state AI laws. That means one national standard — not a patchwork of 50 different state regulations. California, New York, and Texas have all been moving toward their own AI rules. This blueprint would override them.

The philosophy: "Light-touch" regulation, innovation-first. The framework explicitly discourages liability for AI developers when third parties misuse their systems. It favours streamlined permitting for AI infrastructure and incentives for small businesses to adopt AI tools.

Child safety carve-out: One area where regulation goes deep: protecting minors. The framework mandates age verification, limits on data collection from children, and parental oversight tools. This is the one domain where the administration is comfortable with strict rules.

What happens next: The White House wants Congress to codify the framework "this year." Whether that optimism survives contact with Congress remains to be seen.

THE CONTEXT: For AI companies, a single federal standard — even an imperfect one — is better than 50 conflicting state laws. Expect lobbying to intensify around the specific language, especially on liability and child safety.

5. Google's "Pied Piper" Moment — TurboQuant

If you watched HBO's Silicon Valley, you remember Pied Piper — the fictional startup that cracked middle-out compression and disrupted the internet. On March 25, Google shipped something eerily similar.

What it is: A new AI memory compression algorithm that reduces the working memory required by large language models by at least 6x, with up to 8x faster attention computation on NVIDIA H100 GPUs — and zero measurable accuracy loss.

Why this matters: The memory bottleneck is one of the biggest constraints on running powerful AI models. A 6x compression means you could run a model six times larger on the same hardware, or the same model for one-sixth the cost.
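A back-of-envelope calculation shows why the KV cache dominates serving memory. All model numbers below are illustrative assumptions (a 70B-class model with grouped-query attention), not figures from Google's paper:

```python
# KV-cache sizing for a hypothetical 70B-class transformer in fp16.
layers, kv_heads, head_dim = 80, 8, 128   # assumed GQA configuration
seq_len, bytes_fp16 = 32_768, 2           # 32k-token context, fp16 storage

# Both K and V are cached: per layer, per KV head, per token, per dim.
kv_cache_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_fp16

print(f"uncompressed KV cache: {kv_cache_bytes / 2**30:.1f} GiB")    # 10.0 GiB
print(f"with 6x compression:   {kv_cache_bytes / 6 / 2**30:.2f} GiB")
```

Under these assumptions a single 32k-token conversation pins roughly 10 GiB of GPU memory per request before weights are even counted; a 6x reduction brings that under 2 GiB, which is where the "six times larger model, same hardware" framing comes from.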

How it works: TurboQuant uses advanced vector quantization to compress the key-value (KV) cache — the structure that stores attention context during a conversation. The gains are in inference (running the model), not training.
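The core idea of vector quantization is simple to sketch: replace each cached vector with a one-byte index into a small shared codebook. The toy below (naive k-means over a fake KV cache; all shapes and the 256-entry codebook are illustrative choices, far cruder and lossier than whatever TurboQuant actually does) still lands in the right compression ballpark:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "KV cache": 2048 cached vectors of dimension 32, stored in fp16.
kv = rng.standard_normal((2048, 32)).astype(np.float16)

def build_codebook(vecs, k=256, iters=10):
    """Naive k-means codebook (illustrative; not TurboQuant's method)."""
    v = vecs.astype(np.float32)
    cb = v[rng.choice(len(v), k, replace=False)].copy()
    for _ in range(iters):
        # Squared distances via |a-b|^2 = |a|^2 + |b|^2 - 2ab (no giant
        # broadcast tensor needed).
        d2 = (v**2).sum(1)[:, None] + (cb**2).sum(1)[None, :] - 2 * v @ cb.T
        assign = d2.argmin(axis=1)
        for j in range(k):
            members = v[assign == j]
            if len(members):
                cb[j] = members.mean(axis=0)
    return cb

def quantize(vecs, cb):
    v = vecs.astype(np.float32)
    d2 = (v**2).sum(1)[:, None] + (cb**2).sum(1)[None, :] - 2 * v @ cb.T
    return d2.argmin(axis=1).astype(np.uint8)  # one byte per cached vector

codebook = build_codebook(kv)
codes = quantize(kv, codebook)
recon = codebook[codes]  # dequantized vectors, looked up at attention time

orig_bytes = kv.nbytes                                         # 2048 * 32 * 2
comp_bytes = codes.nbytes + codebook.astype(np.float16).nbytes # indices + codebook
print(f"compression ratio: {orig_bytes / comp_bytes:.1f}x")
```

Even this crude scheme exceeds 6x compression on paper; the hard part — and presumably where TurboQuant's contribution lies — is doing it with "zero measurable accuracy loss" while keeping dequantization fast enough to speed up attention rather than slow it down.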

Current status: Still experimental. Google Research will present findings at ICLR 2026. It has not yet been deployed in production systems. But if it holds up, TurboQuant could materially change the economics of AI deployment.

THE IMPLICATION: A 6x memory reduction would let smaller companies run frontier-scale models without frontier-scale infrastructure. If TurboQuant ships into production, it could be one of the most democratising developments in AI this year.

Quick Hits

  • Elon Musk's "Terafab" AI chip factory: xAI announced plans for a massive domestic chip fabrication facility, aiming to reduce dependence on TSMC for Grok's hardware supply chain.

  • ChatGPT Superapp incoming: OpenAI is reportedly merging ChatGPT, voice mode, and its productivity tools into a single desktop application. Think Microsoft Office, but for AI.

  • Meta swaps headcount for AI: Meta confirmed it is replacing some engineering roles with AI agents, particularly in code generation and review. CEO Mark Zuckerberg cited "mid-level engineer" tasks as the primary target.

  • Gemini Live gets a major overhaul: Google upgraded Gemini Live with better memory, longer context, and real-time screen-sharing capabilities, positioning it as a direct competitor to ChatGPT's voice mode.

  • ARC-AGI prize pool grows: Following the ARC-AGI-3 launch, the ARC Prize Foundation has raised its total prize pool. The grand prize for the first system to hit 85% on ARC-AGI-3 is currently unclaimed.

That's your signal for the week of March 22–28, 2026. If this was useful, forward it to one person who'd appreciate it. See you next week.

Distilled AI Digest — The signal, without the noise. AI intelligence for practitioners and the executives who lead them. Issue #7, March 2026

The AI landscape doesn't pause. Neither should we. Subscribe to receive issues directly in your inbox and stay ahead of every shift that matters.
