The Race Is Over. The Supply Chain Race Just Started.
Five signals this week that tell you where AI is actually going.
The Bottom Line (No Jargon Edition)
Google committed up to $40 billion to Anthropic: $10 billion now, $30 billion contingent on performance targets, at a $350 billion valuation. This is not a normal investment. Google already has Gemini. This is infrastructure control disguised as a check.
Anthropic's Claude Code and its $30 billion annual run rate are driving the deal. Enterprise adoption of coding AI is scaling faster than Anthropic's compute supply. Google and Amazon are both providing chips and cloud capacity to close that gap. Two cloud giants are now co-funding the same AI lab they compete with.
DeepSeek V4 launched this week, with Pro and Flash versions delivering better reasoning and agentic task performance. China's AI infrastructure story is running in parallel to the U.S. one, backed by Huawei Ascend chips. The separation is deepening.
Meta signed a deal to become one of the world's largest AWS Graviton customers, using hundreds of thousands of Graviton 5 chips for CPU-intensive agentic AI workloads. Infrastructure diversification is now the stated strategy at the top of Big Tech.
The Musk vs. Altman trial starts Monday, April 27. The lawsuit claims OpenAI deviated from its mission and defrauded Musk into donating. Microsoft is named as a co-defendant. Watch for what comes out in discovery, not just the verdict.
Anthropic's Claude Opus 4.7 launched last week and sits atop current benchmarks. The more telling detail: Anthropic publicly acknowledged it trails its own unreleased Mythos model. The labs are now building tools they admit they will not sell.
OpenAI’s GPT-5.5 launched this week, while GPT-5.4-Cyber expanded access for vetted security teams. The model race is still moving fast, but the more important split is now obvious: general-purpose models for everyone, specialized high-risk models for vetted users, and unreleased frontier systems held back entirely.
The pattern across all of it: winners are not being chosen by who has the best model. They are being chosen by who controls the supply chain: compute, contracts, legal standing, and government access.
The Take That Started the Week
Google's $40 billion commitment to Anthropic is the kind of move that looks strange until you see the logic behind it. Google already has Gemini. Gemini is good. Gemini 3.1 Pro has a two-million token context window and strong multimodal capabilities. So why write a $40 billion check to a competitor?
Because Anthropic's Claude Code is generating $30 billion in annualized revenue and growing. Because enterprise teams are adopting Claude for agentic coding work at a rate that is outpacing Anthropic's compute supply. And because whoever provides that compute (Google Cloud, Amazon Web Services, the chips underneath them) ends up with the structural position in the AI supply chain that matters over the next ten years. This is not altruism. It is infrastructure acquisition with a minority equity stake attached.
The Amazon side of this is already visible. Meta signed a deal this week to bring tens of millions of AWS Graviton 5 cores into its compute portfolio, explicitly for agentic AI workloads. Graviton 5 delivers 192 cores and 25% better performance than the previous generation, with inter-core communication latency reduced by up to 33%. Meta's head of infrastructure said it plainly: diversifying computing resources is strategically essential as they scale the infrastructure behind Meta's AI business. That is not a vendor preference statement. That is a supply chain strategy statement.
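The "hundreds of thousands of chips" and "tens of millions of cores" framings are consistent once you multiply through the 192-cores-per-chip spec cited above. A quick sketch, using a hypothetical round chip count (the actual figure in the deal is not public):

```python
# Sanity check on the Graviton 5 scale language.
# 'chips' is a hypothetical round number, not a figure from the Meta-AWS deal.
chips = 300_000
cores_per_chip = 192  # Graviton 5 spec cited in the newsletter

total_cores = chips * cores_per_chip
print(f"{total_cores:,} cores")  # lands in the tens of millions
```

At any plausible count in the hundreds of thousands of chips, the core total clears ten million, which is the scale that matters for continuous agentic inference.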
I have watched this play out before. In the early cloud era, the conversation was about features and latency. The durable advantages were built in procurement, in multi-year contracts, in infrastructure commitments that shaped everything downstream. The labs and the hyperscalers have figured this out. The question for everyone else is whether you have.
Cloud Roundup
AWS The Meta-Graviton deal is the story this week. Amazon announced that Meta will adopt hundreds of thousands of AWS Graviton 5 chips, making Meta one of the largest Graviton customers on the planet. The use case is specific: CPU-intensive workloads behind agentic AI. Graviton 5's 192-core architecture and reduced inter-core latency are not generic server upgrades. They are purpose-built for the continuous inference and multi-step task execution that agentic AI requires at scale. AWS is not just selling compute. It is positioning Graviton as the CPU-side infrastructure layer for the agentic era. That framing is intentional, and it matters for how you evaluate your own compute strategy.
Azure Microsoft is named as a co-defendant in the Musk-Altman trial starting Monday. The lawsuit argues that OpenAI's shift from a nonprofit to a commercial entity violated commitments made to early donors, including Musk, and that Microsoft's involvement accelerated that shift. Whatever the legal outcome, discovery alone will generate months of internal communications that enterprise teams will want to read. If your AI strategy runs heavily through OpenAI APIs on Azure, this week is a good time to review your concentration risk. The trial starting April 27 is not background noise for enterprise procurement teams. It is front-page vendor risk.
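One way to make "review your concentration risk" concrete is to compute a Herfindahl-style index over your monthly model-API spend: the sum of squared vendor shares, where 1.0 means a single vendor and values near zero mean broad diversification. The vendor names and dollar figures below are hypothetical placeholders, not numbers from this newsletter:

```python
# Rough concentration-risk sketch: a Herfindahl-Hirschman-style index
# over AI vendor spend. All vendor names and amounts are hypothetical.

def concentration_index(spend_by_vendor):
    """Sum of squared spend shares: 1.0 = single vendor, near 0 = diversified."""
    total = sum(spend_by_vendor.values())
    if total == 0:
        return 0.0
    return sum((amount / total) ** 2 for amount in spend_by_vendor.values())

monthly_api_spend = {          # hypothetical monthly spend, USD
    "openai_on_azure": 80_000,
    "anthropic_on_gcp": 15_000,
    "self_hosted_oss": 5_000,
}

hhi = concentration_index(monthly_api_spend)
print(f"Concentration index: {hhi:.3f}")  # closer to 1.0 = riskier
```

A team spending 80% of its budget through one vendor on one cloud scores around 0.67, which is the kind of number that turns a courtroom headline into a procurement action item.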
GCP Google's Anthropic bet reshapes how you read its cloud positioning. Google Cloud is not competing against Anthropic in the traditional sense. It is competing to be the infrastructure layer that Anthropic runs on. That means GCP wins whether teams choose Gemini or Claude, as long as Claude runs on Google Cloud infrastructure. That is a more sophisticated market position than most coverage is giving Google credit for. Watch how Google begins to market GCP as the neutral infrastructure layer for AI workloads, including workloads that use models it did not build.
AI Model Roundup
OpenAI GPT-5.5 landed this week, and it changes the framing. The release was not just another leaderboard move. It reinforced the pattern that OpenAI is still pushing hard on general-purpose reasoning while also carving out specialized lanes like cybersecurity. GPT-5.4-Cyber expanded access for vetted security teams, and ChatGPT Extended Thinking hit a 94% reasoning score on ARC-AGI-1. The cybersecurity model expansion came one week after Anthropic rolled out Project Glasswing and previewed Mythos, the unreleased model restricted to a handful of companies for security testing. OpenAI is responding to Anthropic's security positioning in near-real time. The cybersecurity lane is now a second competitive track running alongside general-purpose capability, and the Trusted Access for Cyber program is OpenAI's infrastructure for controlling access to its most capable security tooling. Watch who gets in, and on what terms.
Anthropic Claude Opus 4.7 launched April 16 and currently leads on SWE-bench Pro benchmarks for agentic coding. Anthropic called it openly: Opus 4.7 is less broadly capable than Mythos, its unreleased flagship. That admission is notable. The lab is publicly acknowledging a two-tier model strategy: one tier you can buy, one tier you earn access to through vetted programs. Mythos Preview found and reported a 17-year-old remote code execution vulnerability in FreeBSD on its own (CVE-2026-4747). It also found bugs in OpenBSD, FFmpeg, and Linux kernel privilege escalation chains. The week also included a brief outage, with elevated error rates across Claude, the API, and Claude Code, resolved by 1:50 PM ET on April 15. At $30 billion in annualized revenue, even a short infrastructure incident surfaces fragility questions that enterprise buyers are actively asking.
Google AI Gemini 3.1 Pro's two-million token context window continues to be its sharpest differentiator. On agentic coding benchmarks, Opus 4.7 leads. On long-context research tasks, Gemini and Opus 4.7 tied at a 0.715 aggregate score. The $40 billion Anthropic investment does not signal that Google is abandoning Gemini. It signals that Google is building a portfolio position across the model layer and the infrastructure layer simultaneously. Gemini is the internal flagship. Anthropic is the external bet. GCP is the layer both run on. That is a three-part strategy, not a pivot.
The Pattern I'm Watching
Google's $40 billion Anthropic bet looks strange until you remember what happened in 1997. Microsoft invested in Apple. The investment saved Apple from bankruptcy, killed the antitrust argument that Microsoft was a pure monopolist, and gave Microsoft a browser distribution deal. Both companies got something they needed. The minority equity stake was the smallest part of the transaction. The infrastructure and distribution dynamics were the durable parts.
I am not saying Google and Anthropic are Microsoft and Apple. The dynamics are different. But the structure of the move is recognizable. Google is not buying Anthropic for the equity upside. Google is buying a supply chain position, a compute dependency, and an institutional relationship with the lab that enterprise teams are treating as the other serious AI option. If Anthropic runs on Google Cloud, uses Google TPUs, and takes Google capital, then Google is in the room regardless of which model your team chooses. That is the play.
DeepSeek V4 landing this week, backed by Huawei Ascend chips, is the other side of this pattern. China is building a parallel supply chain: models, chips, and cloud infrastructure that do not depend on NVIDIA or the U.S. hyperscalers. The V4 Pro's agentic performance claims are meaningful. But the Huawei backstory is the more durable signal. Two separate infrastructure stacks are forming at the global level, and every enterprise building AI systems in the next three years will eventually have to decide which supply chain they are willing to depend on. Most teams are not having that conversation yet.
After 30 years of watching these cycles, here is what I know: the consolidation phase always feels like a lot of separate stories until it snaps into a single picture. This week gave you the picture. Google, Amazon, and the hyperscalers are competing to be the infrastructure that the winning model runs on. The model scores will keep changing. The infrastructure dependencies will not. Which layer is your team actually building on, and do you know who controls it?
Hit reply and tell me. I read every response.

Darin
Weekly AI and cloud breakdowns from someone who's been in the game since the early days of the internet. No ads. No filler. The signal.

