On February 20th, Anthropic announced Claude Code Security: a new capability (currently in Claude Code Web, with other Claude Code surfaces to follow). The key functionality is the ability to intelligently scan a codebase for security bugs, triage them (to reduce false positives), prioritize them (assign severity), and generate fixes. It works at scale (one data point: it recently found 500+ bugs that had eluded the open source world). And it can find issues that rule-based SAST cannot, such as business logic flaws and broken authentication.
Within hours, LinkedIn was flooded with hot takes: "AppSec is dead." "SAST is over." "Shift-left is obsolete."
I'm a huge fan of Anthropic and Claude. My company, boostsecurity.io, has been using AI for AppSec for quite some time (including implementing our own "Claude Code Security" capabilities).
However, having built two AppSec companies and helped some of the largest, most complex software teams in the world build secure software, I believe the "death of code security/AppSec" claims are exaggerated.
The hot-take logic goes like this: kill shift-left, kill traditional SAST/code security, let the coding agent handle it, and enjoy perfect code forever.
This conflates LLMs getting better at finding vulnerabilities with what an enterprise with tens or hundreds of thousands of code bases actually needs to reduce risk, achieve compliance, and respond to incidents. These are not the same problem.
"Old library, now risky." An application was deployed six months ago. It has a library that, until yesterday, was known to be safe to use. However, as of today, this library has a critical vulnerability (newly disclosed CVE) that is being actively exploited (very high EPSS) in the wild. The code didn't change, but the threat did. No amount of code-generation-time scanning would have caught this.
"Supply chain attack du jour." A popular open source package gets compromised. The next time someone (or some pipeline) runs make update, you're owned. Claude wasn't in the loop. Claude is not analyzing all the transitive dependencies' sources.
"Misconfigured pipeline." An organization has 50 open source repositories. One of them has a vulnerability in how the CI/CD pipeline is configured, so when an attacker submits a carefully crafted pull request, it causes real damage. This has nothing to do with application code.
"Oops, Claude didn't catch this one." A developer used Replit or Codex instead. A developer told Claude: "I don't care about your suggestion, just do as I say." A developer copied terrible code off the internet. A developer is malicious. Claude didn't generate it. Claude didn't review it. Now what?
I can go on ... but you get the point.
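To make the first scenario concrete: the code is frozen, but the risk is not, which is why deployed inventories have to be re-checked continuously against live threat data. Below is a minimal sketch of that re-check. All package names, services, CVE IDs, and scores are invented for illustration; a real implementation would read from an SBOM store and a live vulnerability feed such as OSV or NVD.

```python
# Minimal sketch: re-check a deployed app's dependency inventory (SBOM)
# against today's vulnerability feed. All data here is hypothetical.

# Inventory captured at deploy time -- the code hasn't changed since.
deployed_sbom = {
    "web-frontend": [("left-pad", "1.3.0"), ("parsely", "2.1.4")],
    "billing-api": [("parsely", "2.1.4"), ("fastjson", "0.9.1")],
}

# Today's feed: a CVE disclosed *after* deployment, with a high EPSS
# (exploitation likelihood) score.
vuln_feed = [
    {"package": "parsely", "versions": {"2.1.4"}, "cve": "CVE-2025-00000",
     "severity": "critical", "epss": 0.94},
]

def affected_services(sbom, feed, epss_threshold=0.5):
    """Flag services running a vulnerable version of any package,
    prioritized by exploitation likelihood (EPSS)."""
    findings = []
    for service, deps in sbom.items():
        for name, version in deps:
            for vuln in feed:
                if (name == vuln["package"] and version in vuln["versions"]
                        and vuln["epss"] >= epss_threshold):
                    findings.append((service, name, version, vuln["cve"]))
    return findings

for service, pkg, ver, cve in affected_services(deployed_sbom, vuln_feed):
    print(f"{service}: {pkg} {ver} affected by {cve} -- patch now")
```

The point of the sketch: nothing here looks at source code at all. It is an inventory joined against a feed, run on a schedule, and no generation-time scan can substitute for it.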
Modern software security is not just SAST (or application security testing, or even ASPM). Your software factory has many moving parts, each with distinct risks and distinct controls.
Claude Code Security will be (and in some respects already is) good at a small slice of this. That slice matters, but calling it the end of AppSec is like saying the invention of spell-check killed the need for editors.
The best security engineers I know are already using AI to build capabilities that would have taken years to develop, and they're doing it in weeks. Threat modeling at design time. Analyzing pull requests for security issues beyond what any rule engine could catch. Validating whether a vulnerability is actually exploitable in a given context. Improving triage so teams stop wasting cycles on noise.
These are real, meaningful applications. AI is a superpower for the people who know how to wield it.
But it is part of the solution, not the entire solution.
Scanning a codebase of millions of lines with an LLM on every commit is neither practical nor cost-effective. The economics don't work at enterprise scale, and non-deterministic results complicate repeatable, auditable scanning. The right question isn't "can an LLM find this vulnerability?" It's "what is the most effective way to find, prioritize, and fix this class of risk across an organization with thousands of repositories?" and "how do we keep our applications continuously protected against real-time threats?"
Sometimes that answer is an LLM. Sometimes it's a deterministic rule. Sometimes it's a runtime signal. Many times it will be a combination.
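One way to picture that combination is a routing table: each risk class goes to the cheapest control that can actually catch it, and LLM review is reserved for the classes rules cannot express. The risk classes and control names below are hypothetical, purely to illustrate the shape of the decision.

```python
# Hypothetical sketch of "right tool per risk class": cheap deterministic
# controls first, with LLM review reserved for what rules can't express.

RISK_ROUTING = {
    "hardcoded-secret": "regex-rule",     # deterministic, runs on every commit
    "known-cve":        "sca-feed",       # inventory vs. vulnerability feed
    "sql-injection":    "sast-dataflow",  # taint analysis, still rule-driven
    "broken-authz":     "llm-review",     # business logic; no rule captures it
    "logic-flaw":       "llm-review",
}

def pick_control(risk_class: str) -> str:
    """Route a risk class to its control; unknown classes fall back to
    the most expensive option rather than going unchecked."""
    return RISK_ROUTING.get(risk_class, "llm-review")

print(pick_control("known-cve"))      # sca-feed
print(pick_control("broken-authz"))   # llm-review
```

The design choice worth noticing is the fallback: when you don't know what a risk is, you escalate to the broadest control rather than skip it.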
And yes, of course, getting the coding agent (or a closely associated agent) to generate secure code from the get-go is a huge plus, and it's incredible when the coding agent finds and fixes issues in code.
This is not a threat to AppSec. It is the biggest unlock AppSec has had in a decade.
The vendors who win will not be the ones claiming AI replaces everything that came before. They will be the ones using this extra power in practical ways to help enterprises build secure applications, securely. That means combining AI with the operational backbone that enterprises actually need: inventory, prioritization, policy enforcement, compliance, and incident response, for the entire software factory, not just the code that the agents write.
Software development is changing in fundamental ways. AppSec is changing as a result. AppSec is not dead. AppSec that does not adapt to the new world is.