
OpenAI has launched Daybreak, a new cybersecurity initiative that combines frontier artificial intelligence (AI) model capabilities with Codex Security to help organizations identify and patch vulnerabilities before attackers can find and exploit them.
“Daybreak combines the intelligence of OpenAI models, the extensibility of Codex as an agentic harness, and our partners across the security flywheel to help make the world safer for everyone,” the AI upstart said. “Defenders can bring secure code review, threat modeling, patch validation, dependency risk analysis, detection, and remediation guidance into the everyday development loop so software becomes more resilient from the start.”
Like Anthropic’s Mythos, Daybreak aims to leverage AI to tilt the balance in favor of defenders by detecting and addressing security issues before bad actors find them. Access to the tooling remains tightly controlled for now, with OpenAI urging interested organizations to request a vulnerability scan or contact its sales team.
Daybreak leverages Codex Security to build an editable threat model for a given repository, one that focuses on realistic attack paths and high-impact code, to identify and test vulnerabilities in an isolated environment, and to propose fixes.
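At a high level, that amounts to a three-stage pipeline. The Python sketch below is a minimal illustration of that shape only; the function and field names (build_threat_model, test_in_sandbox, propose_fix) are hypothetical and do not reflect OpenAI's actual interface.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Daybreak's described workflow; all names are illustrative.

@dataclass
class Finding:
    path: str           # file where the candidate issue lives
    description: str    # summary of the suspected flaw
    confirmed: bool = False

@dataclass
class ThreatModel:
    repo: str
    # Editable by reviewers: attack paths and high-impact code are the scan's focus.
    attack_paths: list[str] = field(default_factory=list)
    high_impact_files: list[str] = field(default_factory=list)

def build_threat_model(repo: str) -> ThreatModel:
    """Stage 1: enumerate realistic attack paths and high-impact code."""
    return ThreatModel(
        repo=repo,
        attack_paths=["unauthenticated upload -> path traversal"],
        high_impact_files=["server/upload.py"],
    )

def test_in_sandbox(model: ThreatModel) -> list[Finding]:
    """Stage 2: validate candidate vulnerabilities in an isolated environment."""
    return [
        Finding(path=f, description="possible path traversal", confirmed=True)
        for f in model.high_impact_files
    ]

def propose_fix(finding: Finding) -> str:
    """Stage 3: emit remediation guidance for a confirmed finding."""
    return f"Patch suggestion for {finding.path}: normalize and validate paths."

if __name__ == "__main__":
    threat_model = build_threat_model("example/repo")
    for finding in test_in_sandbox(threat_model):
        if finding.confirmed:
            print(propose_fix(finding))
```

Notably, the threat model is described as editable, meaning reviewers can adjust the attack paths and scope before the scanning and validation stages run.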
The effort is built on three models: GPT-5.5 (which carries standard safeguards for general-purpose use), GPT-5.5 with Trusted Access for Cyber (for verified defensive work in authorized environments), and GPT-5.5-Cyber (a more permissive model for red teaming, penetration testing, and controlled validation).
Several major companies, including Akamai, Cisco, Cloudflare, CrowdStrike, Fortinet, Oracle, Palo Alto Networks, and Zscaler, are already integrating these capabilities under the Trusted Access for Cyber initiative, OpenAI said, adding that it's working with industry and government partners to deploy "more cyber-capable models" in the future.
The rollout comes as AI tools have sharply shortened the time it takes to discover latent security issues that might otherwise have escaped notice, compressing what was once a long, labor-intensive effort into a fraction of the time. As a result, the patching process can struggle to keep up even under ideal conditions.
Earlier in March, HackerOne paused its bug bounty program, citing a shift in the balance between vulnerability discoveries and the ability of open-source maintainers to address them. The company attributed the imbalance to AI-assisted research, which has increased both the volume of new flaws and the speed at which they are identified.
This has also produced what's known as triage fatigue, in which project maintainers must sift through a flood of vulnerability reports, some of them plausible-sounding but entirely hallucinated by AI models.
As AI lowers the barrier to finding security flaws, companies like Anthropic, Google, and OpenAI have increasingly positioned AI security agents as a new operational layer to address the remediation bottleneck and safeguard digital infrastructure from potential exploitation.
In a post published last week, security researcher Himanshu Anand declared that "the 90 day disclosure policy is dead," arguing that large language models (LLMs) compress disclosure and exploit timelines to near zero.
“When 10 unrelated researchers find the same bug in six weeks, and AI can turn a patch diff into a working exploit in 30 minutes, what exactly is the 90-day window protecting? Nobody,” Anand said.