AI Doesn’t Break Cybersecurity: A First Principles Perspective
There’s a growing narrative that AI is about to break cybersecurity.
A first-principles analysis doesn't support it.
The thesis is simple. Two fundamental asymmetries matter in the AI landscape: the ability to train models, and the cost of running them, measured in token spend. These forces shape what is possible for both attackers and defenders.
A First-Principles Lens: Asymmetry
Asymmetry is a useful lens for understanding adversarial systems.
Casinos are profitable because the odds are slightly tilted in their favor. Modern cryptography relies on the asymmetry between multiplication (easy) and factoring (hard). Markets reward those who act on information before it is fully absorbed. In each case, outcomes are driven by an imbalance that one side can consistently exploit.
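The cryptographic asymmetry can be made concrete with a toy sketch (illustrative only: the primes here are vastly smaller than real key sizes, and trial division stands in for far more sophisticated factoring algorithms). Multiplying two primes is one operation; recovering them by brute force takes hundreds of thousands.

```python
import time

def multiply(p: int, q: int) -> int:
    """Multiplying two primes is a single cheap operation."""
    return p * q

def factor_trial_division(n: int) -> tuple[int, int]:
    """Recovering the factors by brute force requires dramatically more work."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1

p, q = 1_000_003, 1_000_033  # two odd primes, tiny compared to real keys
n = multiply(p, q)           # effectively instant

start = time.perf_counter()
factors = factor_trial_division(n)  # ~500,000 division attempts
elapsed = time.perf_counter() - start

assert set(factors) == {p, q}
print(f"factored {n} in {elapsed:.3f}s")
```

One direction is trivial, the other is expensive, and that gap is the entire security property: the same structural imbalance the casino and the market examples describe.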
Cybersecurity has always worked the same way, and AI does not change that.
What an Attacker Actually Needs
To meaningfully attack using AI, an adversary needs two capabilities: a viable model and the ability to run it at scale.
Start with the model. Training frontier AI systems is extremely expensive. It requires capital, infrastructure, and specialized talent at a level that only a small number of organizations and some state actors can sustain.
That creates a natural moat.
Most malicious actors are not building differentiated models. They are using the same ones available to everyone else. Whatever capability AI introduces is largely shared, which limits how much asymmetry an attacker can create at the model level.
That leaves token spend.
Where the Constraint Shifts: Token Spend
At scale, AI becomes an economic system, not just a technical one.
Every AI-driven attack, whether generating phishing variants, probing systems, or iterating on exploits, requires inference. Inference consumes tokens. This reduces a technical problem to an economic one.
To create an advantage, an attacker must outspend the defender, or spend more efficiently, relative to the value they expect to extract. If it costs $X to successfully compromise a company, the expected return has to exceed $X.
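That constraint can be written down as a one-line expected-value check. Every figure below is an illustrative assumption, not real pricing or attack data; the point is the shape of the calculation, not the numbers.

```python
def attack_is_rational(
    tokens_per_attempt: int,
    price_per_million_tokens: float,
    attempts_needed: int,
    expected_payoff: float,
) -> bool:
    """An AI-driven attack only makes economic sense when the expected
    payoff exceeds the total inference (token) cost of mounting it."""
    cost_per_attempt = tokens_per_attempt / 1_000_000 * price_per_million_tokens
    total_token_spend = cost_per_attempt * attempts_needed
    return expected_payoff > total_token_spend

# Hypothetical scenario: iterating on phishing variants against a hardened target.
viable = attack_is_rational(
    tokens_per_attempt=50_000,       # prompt + generation per variant (assumed)
    price_per_million_tokens=10.0,   # $ per 1M tokens (assumed)
    attempts_needed=200_000,         # variants needed before one lands (assumed)
    expected_payoff=50_000.0,        # $ the attacker expects to extract (assumed)
)
print(viable)  # False: $100k of token spend against a $50k expected return
```

Raising `attempts_needed` is exactly what defensive investment does: every additional control the attacker must iterate against pushes the token bill up until the inequality flips.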
For most organizations, especially those allocating resources intentionally, that constraint limits what attacks are worth pursuing in the first place.
What This Means in Practice
This leads to a simple takeaway.
Cybersecurity remains a resource allocation problem, and the objective is to shape asymmetries in your favor.
AI does not change these fundamentals.
How This Is Applied
This way of thinking is exactly how security is approached at Gamma Force.
The focus is on building asymmetries that work in the client’s favor, balancing strategy with pragmatic execution.
This perspective also surfaced in a conversation between Warner Moore and Andrew Wolfe on a recent Project Gamma podcast, where the adversarial dynamic was described as "token spend vs token spend," a phrase that captures one of the core constraints discussed here.
The Bottom Line
AI changes the tooling. It does not change the fundamentals.
The game is still asymmetry: one governed by economics and strategic resource allocation.
About Project Gamma: Project Gamma, where technology meets leadership. Hosted by Warner Moore, vCISO and Founder of Gamma Force, this podcast features insightful conversations with industry leaders who are shaping the future of tech.