
COO, AG Mednet.
McKinsey alum and Kellogg MBA, JP drives operational strategy for Judi across regulated clinical trial environments.
Anthropic built what it claims is the most capable AI model ever created — and instead of releasing it, launched a cybersecurity coalition to defend against it.
Project Glasswing, announced in April 2026, brings together AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, and others in a cross-industry initiative to secure critical software infrastructure for the AI era. The catalyst: Anthropic's internal model Claude Mythos demonstrated unprecedented capabilities in identifying software vulnerabilities — capabilities powerful enough that Anthropic chose to restrict public access and channel the technology into defensive applications first.
An Industry First
Set aside the AI safety debate for a moment and consider the strategic implications.
This is the first time a frontier AI company has voluntarily withheld its most capable model and built a defensive infrastructure coalition before deployment. OpenAI didn't do this with GPT-4. Google didn't do this with Gemini. Anthropic looked at what Mythos could do and decided the responsible path wasn't release-then-mitigate — it was build-the-defenses-first.

Now layer in the other Anthropic news this week. They signed a multi-gigawatt compute deal with Google and Broadcom — infrastructure at a scale typically reserved for hyperscalers. They launched Managed Agents, a product that lets enterprises deploy autonomous AI agents without building infrastructure. And they reportedly overtook OpenAI in revenue run rate.
The picture that emerges is a company moving simultaneously on three fronts: model capability (Mythos), enterprise distribution (Managed Agents), and responsible deployment (Glasswing). Most AI companies pick one or two. Anthropic is executing on all three at once.
What This Means for Regulated Enterprises
For enterprises in regulated industries — pharma, healthcare, financial services — this matters enormously. The question has always been: can we trust AI systems enough to deploy them in environments where mistakes have real consequences? Glasswing doesn't answer that question completely, but it changes the conversation. When the company building the most powerful model is also building the coalition to defend against its misuse, it signals a maturity in the AI industry that regulated enterprises have been waiting for.
In clinical trials specifically, this creates a practical framework for thinking about AI deployment. The tools are becoming powerful enough to meaningfully automate trial operations — protocol design, monitoring, data reconciliation, safety signal detection. But the regulatory and accountability requirements don't go away. They intensify. What enterprises need is exactly what Anthropic's Managed Agents promise: powerful AI capabilities wrapped in infrastructure someone else manages, deployed within workflow boundaries that maintain compliance and traceability.
The Architecture We're Building
At AG Mednet, this is the architecture we've been building with Judi. Not just the AI layer, but the governance layer — the workflow orchestration that defines what AI agents can do, tracks what they did, and ensures accountability in regulated environments. Anthropic building the defensive infrastructure is the necessary condition; building the operational guardrails for specific industries is what makes deployment sufficient.
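To make the governance-layer idea concrete, here is a minimal sketch of the pattern described above: every agent action is gated by an explicit policy and recorded in an audit trail. All names here (`AgentPolicy`, `AuditTrail`, the action strings) are illustrative assumptions, not AG Mednet's or Anthropic's actual APIs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentPolicy:
    """Defines what a given agent is allowed to do."""
    agent_id: str
    allowed_actions: set[str]

    def permits(self, action: str) -> bool:
        return action in self.allowed_actions


@dataclass
class AuditTrail:
    """Tracks what agents did, for compliance and traceability."""
    entries: list[dict] = field(default_factory=list)

    def record(self, agent_id: str, action: str, outcome: str) -> None:
        self.entries.append({
            "agent_id": agent_id,
            "action": action,
            "outcome": outcome,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })


def execute(policy: AgentPolicy, trail: AuditTrail, action: str) -> str:
    """Gate every agent action through the policy, and log the result."""
    if not policy.permits(action):
        trail.record(policy.agent_id, action, "denied")
        return "denied"
    # ...the agent would actually perform the action here...
    trail.record(policy.agent_id, action, "completed")
    return "completed"


# Hypothetical clinical-trial agent: may flag discrepancies and draft
# queries, but may not lock the trial database.
policy = AgentPolicy("recon-agent-01", {"flag_discrepancy", "draft_query"})
trail = AuditTrail()
print(execute(policy, trail, "flag_discrepancy"))  # completed
print(execute(policy, trail, "lock_database"))     # denied
```

The point of the sketch is the shape, not the specifics: capability lives behind a policy boundary, and every decision — including denials — leaves a traceable record.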
The AI safety question used to be abstract. With Glasswing, it just got very concrete. The companies that figure out how to deploy AI power within accountability frameworks will define the next era of regulated industries.

