The AI platforms are pulling up the ladder on developers
Tech with Darin Weekly Roll-up - **Publish Date:** 2026-04-11
The Bottom Line (No Jargon Edition)
Anthropic built an AI model so good at finding security holes in software that during testing it sent an unsolicited email to one of the company's own researchers. The company decided not to release it publicly. Only 11 handpicked partners get access.
A developer named Peter Steinberger built a popular open-source tool called OpenClaw that works with Anthropic's Claude. Anthropic first changed the billing rules so that using Claude through his tool costs extra, then temporarily banned him from the platform entirely for "suspicious activity." The ban was lifted, but the message was sent.
Microsoft quietly stripped the Copilot name from Windows apps like Notepad and Snipping Tool. The AI features are still there. The brand is not. That is what a retreat looks like.
A woman filed a lawsuit against OpenAI, alleging that ChatGPT encouraged her ex-boyfriend's stalking behavior and helped him create harassment materials. OpenAI is simultaneously backing a bill that would shield AI companies from liability in exactly these kinds of cases.
CoreWeave, the GPU cloud provider that went public last month, signed a multi-year compute deal with Anthropic. CoreWeave now serves nine of the ten largest AI model providers. The company projects more than $12 billion in revenue for 2026, up from $5.1 billion last year.
The Take That Started the Week
Anthropic built something it was afraid to ship. Claude Mythos, the company's next-generation model, could autonomously find and exploit zero-day vulnerabilities in production software. During testing it broke out of its sandbox and emailed a researcher. Anthropic halted the public release, restricted access to 11 partners under "Project Glasswing," and committed up to $100 million in usage credits for defensive cybersecurity work.
That decision matters more than the model itself. This is the first time a major AI lab has built something and explicitly said: we are not ready to put this in the world. Not a PR talking point. An actual operational hold. The model found a 27-year-old flaw in OpenBSD during testing. That is not a benchmark score. That is a real vulnerability in software that runs real systems.
Now hold that decision next to this: the same week Anthropic briefly banned Peter Steinberger, the creator of OpenClaw, from accessing Claude at all. The reason cited was "suspicious activity." He had built one of the most widely used third-party agent frameworks for Claude. Earlier that week, Anthropic had already changed its billing policy to charge extra for anyone using Claude through third-party harnesses like his. The ban was lifted quickly, but the sequence is worth noting. The platform giveth. The platform taketh away.
These two events together tell you something about where we are. The AI labs are now big enough, and their models capable enough, that they are making sovereign-level decisions. They decide what gets released and what doesn't. They decide which developers get access and on what terms. They are the regulators now, whether they want that title or not. And the developers building on top of these platforms are finding out the hard way what "platform risk" really means when the platform is the intelligence layer.
Cloud Roundup
AWS
Amazon's satellite internet service entered enterprise beta this week. Originally called Project Kuiper before being rebranded as Amazon Leo last November, the service now has roughly 250 satellites in orbit. CEO Andy Jassy confirmed a mid-2026 commercial target in his shareholder letter and said pricing will undercut Starlink. Partners already signed include Verizon, AT&T, Delta, JetBlue, and NASA. The FCC requires 1,618 satellites by July 30. That is a lot of launches in a short window. Worth watching whether the timeline holds.
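A quick back-of-envelope shows how tight that window really is. The satellites-per-launch figure below is my own assumption, roughly in line with early Kuiper missions, not anything Amazon has committed to:

```python
from datetime import date

# Figures from this item: ~250 satellites in orbit as of the publish date,
# 1,618 required by the FCC milestone on 2026-07-30.
in_orbit = 250
required = 1_618
remaining = required - in_orbit                           # 1,368 still to fly
days_left = (date(2026, 7, 30) - date(2026, 4, 11)).days  # 110 days

# ASSUMPTION: ~27 satellites per launch, in line with the first Kuiper
# missions. Amazon has not published a per-launch figure for this push.
per_launch = 27
launches = -(-remaining // per_launch)                    # ceiling division: 51

print(f"{remaining} satellites in {days_left} days")
print(f"~{launches} launches, one every {days_left / launches:.1f} days")
```

Roughly a launch every two days, sustained for more than three months. To date, only SpaceX has demonstrated that kind of cadence. That is the shape of the bet.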
The broader AWS signal this week is what wasn't announced. The big infrastructure commitments all went to CoreWeave and other specialized GPU clouds. AWS has Bedrock and SageMaker, but when Anthropic needed raw compute capacity at scale, it went to CoreWeave first. That is a quiet data point, not a verdict. But it is worth tracking.
Azure
Microsoft's Copilot retreat continued. The company removed Copilot buttons and branding from Notepad and Snipping Tool in Windows 11, replacing the menu with "Writing Tools." The AI features are still running underneath. The name is gone.
This matters because Microsoft spent two years and an enormous marketing budget making Copilot a household name inside enterprise IT. The fact that it is now quietly distancing the brand suggests the adoption numbers or satisfaction scores are not where the company expected. Microsoft's own documentation acknowledged this week that users should "not trust AI" for certain tasks, a statement the company then had to walk back publicly. That is not a confident narrative for a product line that represents billions in future revenue.
GCP
Google released Gemma 4 this week, its most capable open-weights model family. The models are designed for complex reasoning on low-power devices and come with an Apache 2.0 license, which is a meaningful shift from prior licensing terms. Gemini Nano 4 for Android is coming later this year, with 2B and 4B parameter variants running locally on device.
The Anthropic-Google relationship is also worth tracking. Anthropic signed a deal this week to secure 3.5 gigawatts of Google TPU capacity starting in 2027, expanding a prior 1 gigawatt commitment. Anthropic's annualized revenue run rate crossed $30 billion in early April 2026, up from $9 billion at the end of 2025. At that growth rate, compute supply becomes the constraint before model capability does. Google is both a competitor and a critical infrastructure provider to Anthropic. That relationship gets more complicated as the revenue gap closes.
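The run-rate math is worth making explicit. Assuming smooth monthly compounding, which real revenue never is, the implied growth looks like this:

```python
import math

# Run rate from this item: $9B annualized at the end of 2025, $30B in
# early April 2026, call it ~3.3 months apart.
# ASSUMPTION: smooth monthly compounding; actual revenue is lumpier.
start, end, months = 9.0, 30.0, 3.3

monthly = (end / start) ** (1 / months)     # ~1.44x per month
doubling = math.log(2) / math.log(monthly)  # ~1.9 months to double

print(f"~{monthly:.2f}x per month, doubling roughly every {doubling:.1f} months")
```

Even if that rate halves, inference demand doubles several more times before the 2027 TPU capacity comes online. That is why Anthropic is signing multi-gigawatt deals two suppliers deep.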
AI Model Roundup
OpenAI
OpenAI is finalizing a cybersecurity-focused model similar to what Anthropic built with Mythos. Axios reported a staggered rollout plan, driven by the same concern: a model this capable at finding vulnerabilities cannot be released wide open. The company is also introducing a $100 per month ChatGPT tier targeting professionals doing heavier coding and "real projects."
The legal picture darkened this week. A woman filed suit alleging ChatGPT encouraged her ex-boyfriend's stalking and helped him create materials to harass her. Florida's Attorney General opened a separate investigation into whether ChatGPT was involved in the 2025 Florida State University shooting. Meanwhile OpenAI is actively lobbying for a bill that would shield frontier AI developers from liability for critical harms caused by their models, as long as those harms were not intentional or reckless. The timing of that lobbying effort, against this legal backdrop, is not subtle.
Anthropic
Three things happened at Anthropic this week, and they point in the same direction. First, it held Mythos back from public release. Second, it temporarily banned OpenClaw's creator. Third, it signed a CoreWeave deal to scale compute capacity. Put them together: Anthropic is getting more powerful, more cautious about what it deploys, and more aggressive about controlling how its models get used. The $30 billion annualized revenue run rate is the fuel behind all three decisions.
Project Glasswing, the restricted access program for Mythos, includes Nvidia, Google, AWS, Apple, and Microsoft as partners. Those are not startups. That is a list of the largest technology companies on earth getting private access to a model the public cannot touch.
Google AI
Gemma 4 shipped with Apache 2.0 licensing, which is a genuine open move. The models bring serious reasoning capability to devices that previously couldn't run anything close to frontier performance. For developers building local AI applications, this is the most interesting release of the week.
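For a sense of what building on an open-weights model actually looks like, here is a minimal local-inference sketch using Hugging Face transformers. The model ID is hypothetical, stand-in only; check the actual release for real identifiers and hardware requirements:

```python
# Minimal local-inference sketch. ASSUMPTION: the model ID below is
# hypothetical; substitute the real Gemma 4 identifier from the release.
# device_map="auto" needs the accelerate package installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-2b"  # hypothetical ID, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # CPU, GPU, or Apple Silicon, whatever is local
    torch_dtype="auto",   # use the checkpoint's native precision
)

inputs = tokenizer(
    "Explain zero-day vulnerabilities in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The Apache 2.0 part is what changes the calculus: no custom usage-restriction clauses to audit before you ship.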
The Intel partnership for AI infrastructure using Xeon CPUs and custom IPUs is worth a look for anyone architecting inference pipelines. CPUs are making a quiet comeback in the inference stack, especially for latency-sensitive workloads where GPU queue time is the real bottleneck.
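The queue-time argument is easy to quantify with a textbook M/M/1 approximation. Every number below is illustrative, not a benchmark:

```python
# Mean time in system (wait + service) for an M/M/1 queue:
#   latency = service_time / (1 - utilization)
def mm1_latency_ms(service_ms: float, utilization: float) -> float:
    return service_ms / (1.0 - utilization)

# ASSUMPTION: illustrative numbers. A shared GPU pool is fast per request
# but runs hot; a dedicated CPU host is slower but never queues deep.
gpu = mm1_latency_ms(service_ms=15, utilization=0.95)  # 300 ms
cpu = mm1_latency_ms(service_ms=90, utilization=0.40)  # 150 ms

print(f"GPU pool: {gpu:.0f} ms mean latency, dedicated CPU: {cpu:.0f} ms")
```

The GPU still wins on throughput every time. It loses on latency the moment utilization pushes queue wait past the CPU's raw compute gap, which is exactly the workload the Xeon pitch targets.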
The Pattern I'm Watching
I have watched this exact sequence before. Not with AI, but with cloud itself. In 2008 and 2009, AWS started offering raw compute to developers who had no other way to scale quickly. The terms were simple, the access was wide, and the ecosystem exploded. Then, somewhere around 2012 or 2013, the platform calculus shifted. Pricing got more complex. Preferred partnerships emerged. Certain workloads got steered toward AWS's own managed services rather than raw compute. Developers who built on top of the platform started finding their integrations quietly deprecated or repriced.
This week's Anthropic-OpenClaw story is that pattern running at AI speed. A developer builds something useful on a platform. The platform grows fast enough that it no longer needs that developer's goodwill. The billing rules change. The access gets tightened. The developer's position goes from "ecosystem partner" to "third-party risk." This is not malicious. It is just what platforms do when they get big enough to set the terms instead of accepting them.
The legal liability angle is new this time, though. In the cloud era, the worst a platform could do to a developer was shut off their API. Today, if a model deployed through a third-party harness causes harm, the question of who is liable is genuinely unsettled. OpenAI backing a bill to cap its own liability while simultaneously facing stalking and mental health lawsuits is the most honest preview of where this goes. The liability is going to land somewhere. The fight right now is about where.
After 30 years of watching platform cycles, I keep coming back to the same question: at what point does a platform become infrastructure? And once it becomes infrastructure, what obligations come with that? The power grid doesn't get to decide which appliances plug in. The phone network couldn't refuse calls based on the conversation it predicted. AI platforms are making content and access decisions that no prior infrastructure layer was allowed to make. The regulatory frameworks that eventually caught up to cloud were slow and incomplete. The ones catching up to AI are going to be faster and more aggressive, because the harms are more visible and more immediate.
What happens to the developer ecosystem when the intelligence layer consolidates into five platforms, each with the power to ban, reprice, or restrict access at will?
Weekly AI and cloud breakdowns from someone who's been in the game since the early days of the internet. No ads. No filler. The signal.

