I have spent much of my career working on how AI can improve security, especially across vulnerability management, prioritization, training domain-specific language models, detection engineering, and purple teaming. At IBM Research, I worked on projects like NL2Vul, using language models to score vulnerabilities and connect them to attack techniques. At Microsoft, I worked on efforts like TIPS and SecEncoder, using domain-specific models to connect vulnerabilities, threat actors, and prioritization more directly to operational action. Some of this work shipped and was used by customers and practitioners, including through IBM X-Force Red's vulnerability management services. That gave me a close look at how organizations actually manage security risk, not just how they report it.
I have seen what the dashboards show and what the spreadsheets hide.
The uncomfortable truth is that, despite years of progress, most enterprise security postures are still weaker than leaders think. That is not a provocative claim. It is an observation.
Progress Is Real. Posture Is Not.
The industry has made real progress. CVSS gave us a shared language for scoring vulnerabilities. EPSS improved the conversation by adding probability and helping teams ask not just how bad a vulnerability is, but how likely it is to be exploited. Risk-based prioritization pushed organizations away from "patch everything" toward "patch what matters first."
Those are meaningful advances. I believe in them. I helped build some of them.
But frameworks do not patch systems. Scores do not close tickets. Prioritization only matters if the organization can actually act on it.
In many enterprises, the gap between knowing and doing is still enormous. Critical vulnerabilities sit in spreadsheets. Patching cycles stretch into quarters. Security teams can explain the risk clearly and still lack the authority, staffing, or operational support to reduce it. The tools have improved faster than the posture.
The Lights-On Dilemma
Every organization lives with the same tension: keep the business running today while investing in resilience for tomorrow.
Security rarely loses because no one cares. It loses because it is forced to compete with everything else.
Product deadlines, reliability work, revenue goals, and customer commitments all draw from the same pool of budget and engineering time. When the choice is between shipping the release and fixing the security issue that has not yet caused an outage, the release often wins. That is not irrational. It is how incentives work.
I do not say that to criticize teams. Most are doing the best they can inside real business constraints. But the consequences of being slow have changed.
This Is Not a Forecast
The shift to agentic AI in cybersecurity is not theoretical. It is already changing how attacks are carried out and how quickly they unfold.
The clearest signal is not just better tooling. It is compressed time. Mandiant recently highlighted cases where the time from initial access to actions on objectives fell to 22 seconds. Whether the path is exploitation, credential use, or lateral movement, the pattern is the same: attackers can now operate in continuous loops at machine speed.
Anthropic also disclosed the first reported AI-orchestrated cyber espionage campaign, with AI reportedly carrying out 80 to 90 percent of the campaign and humans stepping in only at a handful of decision points. That is not a distant edge case. It is another sign that attack execution is moving closer to continuous, machine-speed loops.
That changes the meaning of "slow."
If your organization still runs on quarterly patch cycles, you are not simply behind. You are operating on a cadence built for a different era.
Shifts Versus Loops
John Lambert famously wrote, "Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win." The point was structural. Defenders tend to see inventories, point findings, and ticket queues. Attackers see relationships, paths, and opportunities to pivot.
In the agentic age, there is a second asymmetry:
Defenders think in shifts. Attackers think in loops.
Most SOCs still run on a human-time model. An alert fires. Someone triages it. It gets escalated. Another person investigates. A response decision waits on context, staffing, approvals, or the next handoff. Even very good teams are constrained by the way the work is organized.
Attackers are increasingly less constrained. Autonomous systems do not sleep, lose context at shift change, or wait in a queue. They probe, adapt, retry, pivot, and continue. Block one path and they test another. Burn one credential and they try the next. The attack is no longer just a sequence of human actions. It is a loop.
That is the mismatch. Defenders often optimize for throughput within shifts. Attackers optimize for continuity across time.
Your Own Tools Are Part of the Attack Surface
There is another part of this story that deserves more attention: the same AI systems companies are deploying to move faster also expand the attack surface.
The LiteLLM supply chain compromise was a useful reminder that AI infrastructure is still infrastructure. Model gateways, agent frameworks, MCP servers, plugins, connectors, orchestration layers, and secrets all become part of the security boundary. Every agent you deploy is also a non-human identity with credentials, permissions, and access paths.
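To make "agents are non-human identities" concrete, here is a minimal sketch of the kind of inventory check I mean: flagging agent credentials that are over-scoped or have not been rotated recently. All field names and thresholds here are illustrative assumptions, not any vendor's API.

```python
from datetime import datetime, timedelta, timezone

def risky_identities(identities, max_scopes=10, max_age_days=90):
    """Flag non-human identities (agents, connectors, service accounts)
    that look over-permissioned or stale.

    `identities` is assumed to be a list of dicts with hypothetical
    fields: "name", "scopes" (list of permission strings), and
    "last_rotated" (timezone-aware datetime).
    """
    now = datetime.now(timezone.utc)
    flagged = []
    for ident in identities:
        # Wildcard or very broad scope sets are the classic
        # over-permissioned agent pattern.
        too_broad = "*" in ident["scopes"] or len(ident["scopes"]) > max_scopes
        # Long-lived, never-rotated credentials widen the "logging in"
        # attack path described above.
        stale = now - ident["last_rotated"] > timedelta(days=max_age_days)
        if too_broad or stale:
            flagged.append(ident["name"])
    return flagged
```

The point is not this particular heuristic; it is that once agents are treated as identities, the same hygiene questions we ask of human accounts (scope, rotation, access paths) become askable in code.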
Most organizations are still treating these as features, not as new attack surface.
That is one reason so many modern attacks look less like breaking in and more like logging in. If an attacker can steal a token, hijack an integration, abuse an over-permissioned agent, or compromise part of the software supply chain, they may not need an exotic exploit at all.
Speed without architecture does not create advantage. It creates shortcuts, and shortcuts become backdoors.
AI Can Help Defenders Too, But Only on the Right Foundation
None of this is an argument against AI. Quite the opposite.
I have seen firsthand that AI can materially improve defensive work. Projects like AVDA, SecEncoder, NL2Vul, and TIPS showed that domain-specific models can help teams understand vulnerabilities faster, connect technical findings to attacker behavior, and prioritize remediation more intelligently.
But AI does not repair a broken operating model.
You cannot bolt an AI onto a workflow that still depends on sequential human handoffs and expect machine-speed defense. You cannot deploy autonomous tools across the enterprise without identity governance for non-human actors, behavioral monitoring, clear permissions, and automated containment where the signal is strong. The architecture enables the AI, not the other way around.
The practical work is not glamorous. Govern non-human identities. Instrument agent workflows. Reduce exposure windows on internet-facing systems. Automate the high-confidence parts of detection and response. Design operations to preserve context across time instead of losing it at every handoff.
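"Automate the high-confidence parts" can be sketched as a simple confidence-gated routing rule: contain automatically only above a calibrated threshold, and keep humans in the loop everywhere else. The names, thresholds, and actions below are hypothetical, a sketch of the pattern rather than a real pipeline.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str        # e.g. "edr", "identity", "network" (illustrative)
    confidence: float  # 0.0-1.0, assumed calibrated by the detection pipeline
    entity: str        # host, account, or agent identity involved

# Hypothetical thresholds; in practice these would be tuned per signal source.
AUTO_CONTAIN_THRESHOLD = 0.9
TRIAGE_THRESHOLD = 0.5

def decide(signal: Signal) -> str:
    """Route a signal: contain automatically, queue for triage, or log."""
    if signal.confidence >= AUTO_CONTAIN_THRESHOLD:
        # High-confidence: act at machine speed (isolate host, revoke token).
        return f"contain:{signal.entity}"
    if signal.confidence >= TRIAGE_THRESHOLD:
        # Ambiguous: human-in-the-loop review.
        return f"triage:{signal.entity}"
    # Low-confidence: retain for correlation across time, not per shift.
    return f"log:{signal.entity}"
```

The design choice worth noticing: the automation boundary is explicit and auditable, so the loop runs continuously while humans only own the decisions that genuinely need them.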
That is the foundation that lets AI amplify defenders rather than amplify complexity.
The Real Shift
The point is not to be alarmist. It is to be honest about the environment we are already in.
Most organizations do not need more slogans about AI. They need to update the assumptions behind their security programs. The world those programs were built for was slower, more manual, and more forgiving than the one we have now.
Defenders still think in shifts. Attackers increasingly think in loops.
The organizations that adapt will be the ones that redesign their architecture, workflows, and decision-making so they can operate at the speed of the systems they are defending.
That is the same lesson as my last post: speed matters, but only on good architecture.
Disclaimer: These are my personal thoughts and do not reflect the views of my current employer or any previous employers.