Tech with Darin — Weekly Rollup: May 4-10, 2026
AWS overheated. Anthropic signed four compute deals. Connect the dots.
The Bottom Line (No Jargon Edition)
AWS had a data center overheating incident in Northern Virginia on May 8. Coinbase was among the companies hit. When the world's biggest cloud provider goes down, every business running solely on that infrastructure goes down with it.
Anthropic is now buying compute from Amazon, Google, Akamai (a $1.8 billion, seven-year deal), AND SpaceX's Colossus supercomputer. The company building the AI is not trusting any single provider to keep the lights on.
A federal trial between Elon Musk and Sam Altman surfaced documents showing Microsoft's early investment strategy was structured to make OpenAI deeply dependent on Azure. The lock-in playbook is not new. The evidence is just more public now.
US authorities suspect that Nvidia chips worth roughly $2.5 billion were smuggled to China through Thailand, with Alibaba identified as a suspected end customer. Alibaba denies any involvement. The story shows that export controls on AI hardware are creating a parallel, illegal supply chain.
In China, Nvidia's B300 server is reportedly selling for $1 million each on gray markets. That is what supply scarcity looks like in practice.
Google Cloud crossed $20 billion in quarterly revenue but said capacity constraints held growth back. OpenAI shipped GPT-5.5 Instant as the new default ChatGPT model, with less padding and more direct answers.
The message across all of it: single-provider dependency is now a business continuity problem, not just an architectural preference.
The Take That Started the Week
The AWS Northern Virginia outage on Friday, May 8, was not the biggest outage in cloud history. Coinbase went down. Some SaaS apps staggered. Services came back. The postmortem will say "overheating in one availability zone" and most people will file it under "things happen."
That's the wrong read.
The reason this week felt different is the context around it. Anthropic, one of the fastest-growing AI companies in the world, has now structured its compute footprint across four providers: Amazon, Google, Akamai, and SpaceX. In the same week that a data center in Virginia overheated, the company most associated with Claude announced a $1.8 billion, seven-year contract with Akamai. a CDN company now selling cloud infrastructure. and a separate compute deal giving it access to SpaceX's Colossus 1 supercomputer, which runs more than 220,000 Nvidia GPUs. Dario Amodei said at the Code with Claude developer conference that Anthropic is "growing faster than the exponential" in 2026. They are not betting the company on any one provider.
The Musk-Altman trial added a different layer. Court documents revealed that Microsoft's early investment structure in OpenAI was designed, over time, to deepen Azure dependency. The profit cap that limited Microsoft's early returns was lifted as the relationship matured. Whether that rises to the level of legal wrongdoing is for the court to decide. What it confirms is that hyperscalers have always understood compute dependency as a competitive moat. The difference now is that the evidence is in a federal courtroom and everyone can read it.
For anyone managing infrastructure at scale, this week's news is not a collection of separate stories. It is the same story told three times. Lock-in creates fragility. Fragility creates risk. Risk at this scale reaches the board.
Cloud Roundup
AWS
The May 8 data center overheating event in the US-EAST-1 (Northern Virginia) region disrupted multiple services. Coinbase confirmed impact on its platform. Northern Virginia is AWS's oldest and most heavily trafficked region, which means concentration risk is highest there. If your architecture treats us-east-1 as a default without multi-region failover, this week gave you the reason to change that.
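The failover advice above boils down to a routing decision: prefer your primary region, but shift when its health probe fails. Here is a minimal sketch of that logic. The region names are real AWS identifiers, but the probe function and the idea of running your own per-region health check are assumptions for illustration, not a specific AWS API.

```python
def pick_region(regions, probe):
    """Return the first region whose health probe succeeds.

    regions: ordered list of region names, preferred first.
    probe: callable(region) -> bool. In practice this might call a
    health-check endpoint you operate in each region (an assumption
    for this sketch, not a built-in AWS feature).
    """
    for region in regions:
        try:
            if probe(region):
                return region
        except Exception:
            continue  # a probe error counts as an unhealthy region
    raise RuntimeError("no healthy region available")

# Example: us-east-1 is down, so traffic shifts to us-west-2.
healthy = {"us-east-1": False, "us-west-2": True}
print(pick_region(["us-east-1", "us-west-2"], lambda r: healthy[r]))
```

The point is not the ten lines of Python; it is that the fallback path has to exist and be exercised before the day a single availability zone overheats.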
Separately, AWS's parent Amazon confirmed another $5 billion investment in Anthropic during Q1 earnings, on top of the $8 billion already deployed. Amazon is also committed to up to $20 billion in future funding. That is a meaningful financial position in one AI company, which creates its own dependency dynamic, this time on Amazon's side.
Azure
The Musk-Altman trial surfaced internal Microsoft documents and communications showing how the company structured its OpenAI relationship to build Azure as the exclusive compute platform. Early investment terms limited Microsoft's profit share, but those limits were removed as the relationship deepened. The strategy worked: OpenAI runs on Azure. The question the trial raises for every enterprise buyer is simple: what does your own vendor agreement say about exclusivity, data portability, and exit rights?
GCP
Google Cloud crossed $20 billion in quarterly revenue in Q1 2026, beating Wall Street estimates by nearly $2 billion. The notable detail: Google said capacity constraints limited growth. A cloud business that is capacity-constrained is a business that could not sell all the infrastructure customers wanted to buy. Google is spending aggressively to close that gap. The company is also preparing an "AI Ultra Lite" Gemini subscription tier, adding explicit usage limits and overage credits to manage token budgets at scale.
AI Model Roundup
OpenAI
GPT-5.5 Instant became the new default model for ChatGPT this week. The design intent is visible: fewer unsolicited follow-up questions, less formatting overhead, tighter answers. OpenAI also released GPT-5.5-Cyber to a broader group of security defenders, a model designed to help teams write proofs of concept for vulnerabilities and run attack simulations. The move puts an offensive-capable model in defenders' hands, which is a meaningful shift in how AI gets applied to security operations.
Anthropic
Claude Managed Agents got three new capabilities at the Code with Claude developer conference: dreaming (agents review past sessions to find patterns and self-improve), outcomes (users define explicit success criteria), and multiagent orchestration (a lead agent delegates tasks to specialist agents). The dreaming feature is the one worth watching. A system that learns from its own operational history without human annotation is moving toward something that looks a lot less like a tool and a lot more like a junior hire who studies their own performance logs.
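The orchestration pattern described here, a lead agent delegating tasks to specialists, reduces to a routing loop at its core. A minimal sketch, with made-up task types and plain functions standing in for model-backed agents; none of this is Anthropic's actual API:

```python
# Hypothetical specialist "agents": plain functions standing in for
# model-backed workers. The routing table and task shapes are
# assumptions for illustration only.
SPECIALISTS = {
    "research": lambda task: f"notes on {task}",
    "code":     lambda task: f"patch for {task}",
}

def lead_agent(tasks):
    """Delegate each (kind, payload) task to the matching specialist
    and collect results; unknown kinds fall back to the lead itself."""
    results = []
    for kind, payload in tasks:
        handler = SPECIALISTS.get(kind)
        if handler is None:
            results.append(f"lead handled: {payload}")  # fallback path
        else:
            results.append(handler(payload))
    return results

print(lead_agent([("research", "outage postmortems"), ("code", "retry logic")]))
```

The hard parts in a real system are not the dispatch loop but the success criteria (the "outcomes" feature above) and what happens when a specialist fails, which is exactly where explicit, user-defined criteria start to matter.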
Anthropic also confirmed the Akamai deal ($1.8 billion, seven years) and the SpaceX Colossus access agreement in the same week. The company is running a four-provider compute strategy. That is a deliberate architecture choice, not just opportunistic deal-making.
Google AI
Gemini is getting an "AI Ultra Lite" tier alongside usage limits and overage credits. That is product management, not research news: it means Gemini is being treated like enterprise software now, with tiered access and consumption governance. Google is also pushing Gemini into federal government workflows as an agentic workforce platform, targeting agencies as proving grounds for large-scale AI deployment. Government contracts move slowly, but they signal where Google thinks its agentic future lands.
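Usage limits with overage credits are, mechanically, just metering. A hedged sketch of what that governance might look like from the consuming side; the class name, numbers, and method names are invented for illustration, not Google's billing model:

```python
class TokenBudget:
    """Track monthly token spend against a base allowance, then burn
    overage credits once the allowance is exhausted."""

    def __init__(self, monthly_limit, overage_credits=0):
        self.monthly_limit = monthly_limit
        self.overage_credits = overage_credits
        self.used = 0

    def charge(self, tokens):
        """Record usage; return False when the request would exceed
        both the allowance and the remaining overage credits."""
        if self.used + tokens <= self.monthly_limit:
            self.used += tokens
            return True
        overflow = self.used + tokens - self.monthly_limit
        if overflow <= self.overage_credits:
            self.overage_credits -= overflow
            self.used = self.monthly_limit  # allowance fully consumed
            return True
        return False  # hard cap hit: reject or queue the request

budget = TokenBudget(monthly_limit=1_000_000, overage_credits=50_000)
print(budget.charge(900_000))  # True: within the base allowance
print(budget.charge(120_000))  # True: burns 20,000 overage credits
print(budget.charge(60_000))   # False: only 30,000 credits remain
```

Once a vendor exposes knobs like these, finance teams start modeling token spend the way they model seat licenses, which is the real signal in the "Ultra Lite" news.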
The Pattern I'm Watching
Thirty years ago, the enterprise software industry ran on a playbook that every senior architect has lived through: land the platform, deepen the integration, make the exit painful. IBM did it with hardware and services. Oracle did it with databases. SAP did it with ERP. Microsoft did it with Office and Active Directory. The strategy was not secret. It was documented in business school case studies. Every customer who got locked in knew the theory. Most of them did it anyway because the short-term convenience outweighed the long-term cost, until it didn't.
What's different this time is speed and stakes. AI infrastructure is not a five-year ERP rollout. It is compute, data, and model access all bundled into a single vendor relationship, and the decisions are being made in months, not years. A startup choosing AWS for its AI training today is not just picking a cloud provider. It is picking a vendor whose pricing, availability, and model access policies could reshape its unit economics within two years. The Anthropic compute strategy, intentionally spread across Amazon, Google, Akamai, and SpaceX, is a public acknowledgment that no single provider should hold all the leverage. Anthropic is arguably one of the most compute-intensive companies on earth right now. If they won't go single-provider, the argument for anyone else doing so gets weaker.
The geopolitical layer makes this harder, not simpler. Nvidia export controls have created a two-tier AI infrastructure market: one where you pay market rate for H100s and B300s through legitimate channels, and one where you pay a million dollars per server on gray markets. That split will widen as AI compute requirements grow. Teams building AI infrastructure in 2026 are not just making architecture decisions. They are making bets on which side of a geopolitical divide their supply chain sits on. Here is the question I keep coming back to: how many boards are actually having that conversation, and how many are still treating "which cloud do we use" as an IT decision?
Weekly AI and cloud breakdowns from someone who's been in the game since the early days of the internet. No ads. No filler. The signal.

