GPT-5.5: Democratizing Cyber Capabilities
Today, OpenAI released GPT-5.5, its answer to Anthropic’s Mythos. From early access in our security labs, GPT-5.5 performs at a similar level to Mythos for offensive security.
The two companies took very different paths. This is the same old security question: who gets access to powerful tools and research?
OpenAI made a clear call. GPT-5.5 is open to everyone in ChatGPT and Codex. We think that’s right. It’s ChatGPT and Codex only for now, but the same principles should extend to the API.
History Repeats Itself
For anyone who’s been in security long enough, this feels familiar. Not because of AI, but because of disclosure.
Around 2000, the vendor-sec mailing list embodied the “private club” model: invite-only vendors plus a few trusted researchers, sharing vulnerabilities in secret to coordinate fixes before disclosure. The theory was sound.
In practice, it failed. Vulnerabilities leaked. The channel didn’t prevent exploitation, it created a two-tier system: insiders knew, everyone else didn’t. That asymmetry didn’t protect users, it exposed them.
The full disclosure debate settled the point. This wasn’t recklessness, it was structural. Private clubs fail. Secrecy doesn’t reduce risk, it concentrates it.
We learned that lesson. Now we’re replaying the same debate with AI-powered offensive tools.
Why the Private Club Model Fails Again
Mythos is limited to a small elite circle: Glasswing partners, vetted vendors, selected researchers. The logic mirrors vendor-sec: restrict access to reduce harm, give defenders a head start. The flaw is the same.
Capability doesn’t vanish when you gate it. It gets rediscovered, leaked, replicated. And the gap between proprietary and open models is shrinking fast. Defensive exclusivity exists, but it’s brief.
Meanwhile, smaller teams are locked out. The mom-and-pop business, the regional hospital, the mid-market company with a part-time security lead. They’re not in Glasswing. They don’t have elite offensive tooling, and they need it most. Restricting access doesn’t protect them, it widens the gap.
OpenAI is making a different bet: give broad access, but do it intelligently.
The Smarter Approach: Trust, Verify, Enforce
What OpenAI built around 5.5 isn’t unstructured openness. Access is staged: Codex and ChatGPT now, API later with stronger safeguards. That’s the right shape.
Every user is a registered company. KYC for a security tool. Clear guardrails against obvious misuse. And no blanket zero data retention, so abuse leaves a trail. You can find and validate vulnerabilities on your own systems, but you can’t quietly operationalize exploits at scale without accountability.
That combination matters. Not a private club, not naive openness. Broad access with a real accountability chain. Trust but verify, and verify means KYC, not invitations.
This gets the disclosure debate right: democratize the tools, enforce responsibility. Accountable access, like gun licensing in most jurisdictions: you’re registered, and you’re responsible for how it’s used.
We're partnering with OpenAI, and as soon as GPT-5.5 API access lands, XBOW customers can run assessments on top of it. Offensive security testing, moving at machine speed, is the only way defense keeps up.