Nine issues in, and the story of AI in early 2026 is clearer than any single week made it look. Looking back across February, March, and April, three threads run through almost every story we covered — threads that were already visible in Issue #1 but are only fully legible now that we've watched them play out across nine consecutive weeks. This is what we learned, what we got right, what remains unresolved, and what to watch next.
Theme 1 — The Adoption Gap Is Real, and Getting Wider
Signal across: Issues #1, #5, #8, #9
The very first story we covered asked whether enterprise AI was delivering on its $650 billion promise. Nine issues later, the answer is more nuanced — and more specific — than it was in February.
What Issue #1 established: The ROI question was live but unanswered. Hyperscalers had committed $650 billion in AI infrastructure. Markets had erased $950 billion in market cap within days. The proof of return was beginning to emerge, but it was uneven.
What Issues #5 through #9 clarified: The unevenness has a structure. It is not random. The organisations seeing genuine returns are the 14% that entered deployment with a clear strategy and defined success metrics. The other 86% are spending heavily and learning expensively — running demos in production, verifying AI outputs manually, and often making their processes slower in the short term.
The pattern that emerged: The gap between deployment rate and strategy clarity has widened over the nine weeks we've been tracking it. By April, 97% of organisations had deployed AI agents and 54% of C-suite executives described adoption as "tearing their company apart." That those two numbers come from the same survey is the clearest single data point of the period.
The constructive read: The companies pulling ahead are not using better models. They are using the same frontier models with more organisational clarity about purpose, metrics, and governance. The advantage is managerial, not technological. That is actually good news — it means the gap is closable.
THE PATTERN: Enterprise AI is not failing. It is succeeding unevenly, and the distribution of success is not random. Strategy before deployment is the single biggest predictor of return. This was true in Issue #1 and it is truer now.
Theme 2 — The Workforce Reckoning Arrived Ahead of Schedule
Signal across: Issues #1, #6, #7, #8, #9
In February, the workforce question was still largely theoretical. By April, it had hard numbers, a policy manifesto from the world's most valuable AI company, and America's oldest bank deploying 20,000 autonomous agents as digital co-workers.
The acceleration: Issue #6 was the first to name it directly — AI had stopped asking permission. The agentic shift moved from assistants to actors. By Issue #8, Oracle had cut 30,000 jobs explicitly to fund AI infrastructure. By Issue #9, Q1 2026 data confirmed 78,557 tech sector layoffs, with nearly half attributed to AI and automation. In February, these were projections. By April, they were quarterly reports.
What we didn't fully predict: The speed of the policy response. OpenAI's 13-page economic manifesto in April — proposing robot taxes, a public wealth fund, and a four-day workweek — arrived earlier and with more specificity than anyone expected. The company producing the disruption became the first to propose the economic architecture to absorb it. Whether that is genuine moral leadership or sophisticated regulatory pre-emption, it shifted the frame of the entire conversation.
The BNY Mellon signal: The most instructive single deployment of the period was not from a tech company. It was from a 240-year-old bank quietly giving 20,000 AI agents their own credentials, email accounts, and Microsoft Teams access. BNY's Eliza 2.0 platform is the closest thing we have to a public blueprint for the agentic enterprise in a regulated industry. The fact that it came from financial services — not Silicon Valley — is the point.
Enterprise angle: Organisations that have invested in reskilling, human-AI teaming frameworks, and transparent workforce communication are already differentiated on talent. Those that have treated the workforce transition as an HR footnote to a technology project are beginning to feel the trust deficit. This gap will widen through 2026.
THE PATTERN: The workforce reckoning is not a future event. It is a present one, with a nine-week paper trail. The question for the next ten issues is not whether it will happen — it's whether the policy and organisational responses will keep pace with the deployment curve.
Theme 3 — The Frontier Is Bifurcating
Signal across: Issues #4, #7, #9
Nine weeks ago, "frontier AI" meant the best publicly available model. Today there are two frontiers: the public one, and an invitation-only tier running months ahead of it. This bifurcation is the most consequential structural development of the period.
How we got here: Issue #4 established the benchmark. GPT-5.4 cleared the human bar on desktop navigation — and the score felt like a ceiling being lifted. Issue #7 complicated that story: ARC-AGI-3 exposed a different kind of ceiling. Every frontier model — GPT-5.4, Claude Opus 4.6, Gemini 3.1 — scored near zero on genuine on-the-fly learning. The gap between "improves on benchmarks" and "can actually learn in real time" remained essentially unsolved.
The private tier emerges: Issue #9 introduced Project Glasswing and Claude Mythos, a model scoring 93.9% on SWE-bench Verified and restricted to roughly 40 organisations under NDA. No press release. No launch event. The most capable AI model we know of is not publicly accessible. That is a new kind of frontier.
What it means structurally: The AI market is developing a two-tier structure that resembles financial markets more than consumer technology. There is a public market — accessible, commoditising, increasingly competitive. And there is a private market — performance-tiered, relationship-gated, and compounding advantage for those inside it. The organisations in Project Glasswing are not just getting a better model. They are learning to work with tomorrow's models today, while their competitors are still learning yesterday's.
Enterprise angle: Access to the private frontier is becoming a genuine competitive moat. The practical implication is not technical — it is relational. Enterprise AI vendor relationships, early-access programme participation, and structured feedback commitments are now strategic assets, not procurement decisions.
THE PATTERN: The frontier is not a single line anymore. It is a layered system, public and private, and the distance between those layers is growing. Where your organisation sits in that stack is increasingly a strategic question, not a technical one.
What We Got Right — and What Remains Open
Every synthesis deserves an honest accounting. Here is ours.
What aged well — The Agentic Shift: From Issue #5 ("The Agentic Web Has Arrived") through Issue #9 (BNY Mellon's 20,000 agents), the pace and concreteness of the agentic transition exceeded most forecasts. We called it in March. By April, autonomous agents had system credentials, email accounts, and the ability to initiate trade remediation at one of the world's largest custodian banks. The agentic thesis aged well — faster deployment, broader industry adoption, and more sophisticated governance than most observers expected in Q1.
What remains genuinely unresolved — The Benchmark Problem: Issue #7 introduced ARC-AGI-3 and its near-zero frontier scores. The gap between "scores well on benchmarks" and "can actually learn on the fly" remains essentially unsolved across nine weeks and three model generations. We do not yet know whether this represents a fundamental architectural limit or an engineering problem that late-2026 models will close. It is the most important open question in AI, and it will likely produce the most important story of the next ten issues.
THE LESSON: The agentic deployment thesis is confirmed. The genuine intelligence thesis — whether today's models represent a path to on-the-fly learning — remains the most consequential open question of the year. Watch the ARC-AGI-3 leaderboard.
What to Watch — The Next 10 Issues
Five signals to track across Issues #11–19:
Robot tax legislation — OpenAI's April manifesto will produce actual congressional proposals. Watch for the first bill that references automated labour taxation. It will arrive before Issue #15.
Project Glasswing expansion — Which organisations get added to the private frontier, and from which industries? The first non-tech, non-financial sector entrant will be the signal that the private tier is scaling beyond its founding cohort.
The strategy gap narrowing — Will the 86% of enterprises without a clear AI strategy begin to close the distance on the 14% that do? Q2 2026 enterprise AI surveys will tell us whether the adoption paradox is resolving or deepening.
ARC-AGI-3 scores — Will any model break 5% by Issue #15? A meaningful score increase would signal that the benchmark problem is engineering, not fundamental. No meaningful increase would signal the opposite.
The BNY blueprint spreading — Who is the first non-financial enterprise to replicate large-scale agentic deployment with BNY's governance architecture? The sector it comes from will tell us where the agentic transition moves next.
That's the signal across the first nine weeks. The pattern was there from Issue #1. The next ten issues will tell us whether the world is catching up to it. See you in Issue #11.