
The Death of AppSec Is Greatly Exaggerated

Written by Zaid Al Hamami | Feb 23, 2026 2:25:22 PM

On February 20th, Anthropic announced Claude Code Security: a new capability (for now within Claude Code Web, though it will make its way to other Claude Code interface points). The key functionality is the ability to intelligently scan a codebase for security bugs, triage them (to reduce false positives), prioritize them (assign severity), and generate fixes. This works at scale (data point: it recently found 500+ bugs that had eluded the open source world). It can find issues that rule-based SAST cannot, such as business logic flaws and broken authentication.

Within hours, LinkedIn was flooded with hot takes: "AppSec is dead." "SAST is over." "Shift-left is obsolete."

I'm a huge fan of Anthropic and Claude. My company, boostsecurity.io, has been using AI for AppSec for quite some time (including implementing our own "Claude Code Security" capabilities).

However, having built two AppSec companies and helped some of the largest, most complex software teams in the world build secure software, I believe the "death of code security/AppSec" claims are exaggerated.

The Claim

The argument goes something like this:

  1. Since Claude Code (and eventually other models) can now find and fix security issues, and ...
  2. secure code has to happen at agentic generation time — waiting until code is written is too late, then
  3. shift-left is unnecessary; why detect and fix in CI/CD when the LLM can find and fix while generating code? Especially since...
  4. AI SAST > rule-based SAST; rule-based SAST can't catch everything — business logic flaws, auth issues, design problems.

The logic therefore follows: kill shift-left, kill traditional SAST/Code Security, let the coding agent handle it, and enjoy perfect code forever.

This conflates LLMs getting better at finding vulnerabilities with what an enterprise with tens or hundreds of thousands of code bases actually needs to reduce risk, achieve compliance, and respond to incidents. These are not the same problem.

A few quick scenarios

Here are some scenarios showing why scanning at agentic code-generation time is not sufficient.

"Old library, now risky." An application was deployed six months ago. It has a library that, until yesterday, was known to be safe to use. However, as of today, this library has a critical vulnerability (newly disclosed CVE) that is being actively exploited (very high EPSS) in the wild. The code didn't change, but the threat did. No amount of code-generation-time scanning would have caught this.

"Supply chain attack du jour." A popular open source package gets compromised. The next time someone (or some pipeline) runs make update, you're owned. Claude wasn't in the loop. Claude is not analyzing all the transitive dependencies' sources.

"Misconfigured pipeline." An organization has 50 open source repositories. One of them has a vulnerability in how the CI/CD pipeline is configured, so when an attacker submits a carefully crafted pull request, it causes real damage. This has nothing to do with application code.

"Oops, Claude didn't catch this one." A developer used Replit or Codex instead. A developer told Claude: "I don't care about your suggestion, just do as I say." A developer copied terrible code off the internet. A developer is malicious. Claude didn't generate it. Claude didn't review it. Now what?

I can go on ... but you get the point.

What Software Security Actually Is

Modern software security is not just SAST (or AppSecTesting, or even ASPM). Your software factory has many moving parts, each with distinct risks and distinct controls:

  • Developers — insider threats, compromised accounts.
  • Developer endpoints — malicious packages, compromised IDE extensions, supply chain threats at the workstation.
  • Developer infrastructure — malicious GitHub Actions, poisoned Terraform modules, misconfigured source control.
  • First-party code — this is what SAST covers. One piece of a much larger picture.
  • Third-party components — open source packages, container base images, ML models.
  • Cloud infrastructure — from IaC templates to the runtime cluster hosting your application.
  • Logical components — APIs, microservice boundaries, access control surfaces.

Claude Code Security addresses a small slice of this.

What Enterprises Actually Ask Us

We can debate naming conventions: ProdSec vs. AppSec vs. DevSecOps. It doesn't matter. The enterprises we serve are not asking for "a SAST tool" or "an AI-native SAST tool." They are asking us to help them protect their entire software development operation. Today, that means:

  • Inventory. What do I have? Developers, developer tools, CI jobs, container images, ML models, coding agents, IDE extensions, Kubernetes clusters, end-of-life libraries, and much more. Which of these are important? Where is the code that touches PII? Where is the code that ends up internet-facing? Are my pipelines configured securely? Where are my API endpoints?
  • Prioritization. What are the highest-risk issues, and why? Help me triage the critical ones. Help me power the AI coding agents to get fixes shipped.
  • Enterprise complexity. We use tags to label different applications. We need different reports, workflows, and policies based on those tags. We have RBAC requirements. Our code stays in our infrastructure. We have a secure coding standard that we want applied across all SCMs and CI/CD pipelines. For securing the developer endpoint, we use every IDE and coding agent you can think of. You use our LLMs, not yours.
  • Incident response. When the next tj-actions or shai-hulud happens, help us figure out if we're impacted, and what we need to do about it. Fast.

Claude Code Security will be (and in some respects already is) good at a small portion of this. That portion matters, but calling it the end of AppSec is like saying the invention of spell-check killed the need for editors.

What Is Actually Changing

AppSec is not dying. It is evolving (again). And what AI makes possible now is genuinely exciting.

The best security engineers I know are already using AI to build capabilities that would have taken years to develop, and they're doing it in weeks. Threat modeling at design time. Analyzing pull requests for security issues beyond what any rule engine could catch. Validating whether a vulnerability is actually exploitable in a given context. Improving triage so teams stop wasting cycles on noise.

These are real, meaningful applications. AI is a superpower for the people who know how to wield it.

But it is part of the solution, not the entire solution.

Scanning a codebase of millions of lines with an LLM on every commit is neither practical nor cost-effective. The economics don't work at enterprise scale, and non-determinism complicates things further. The right question isn't "can an LLM find this vulnerability?" It's "what is the most effective way to find, prioritize, and fix this class of risk across an organization with thousands of repositories?" and "how do we keep our applications continuously protected in light of real-time threats?"

Sometimes that answer is an LLM. Sometimes it's a deterministic rule. Sometimes it's a runtime signal. Many times it will be a combination.
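One way to picture that combination is a tiered pipeline: cheap deterministic rules run over everything, and a limited budget of expensive LLM reviews is spent only on what the rules flag. A toy sketch, with the LLM call stubbed out as a plain callable since no real model API is implied:

```python
from typing import Callable


def tiered_review(diffs: list, rule_check: Callable, llm_check: Callable,
                  llm_budget: int) -> list:
    """Run cheap deterministic rules over every diff; escalate only the
    suspicious ones to the expensive, non-deterministic LLM reviewer,
    up to a fixed budget."""
    findings = []
    escalated = 0
    for diff in diffs:
        rule_hits = rule_check(diff)
        findings.extend(rule_hits)
        if rule_hits and escalated < llm_budget:
            findings.extend(llm_check(diff))  # expensive second opinion
            escalated += 1
    return findings
```

The design choice is the point: the deterministic layer keeps cost linear and predictable, while the LLM layer is reserved for the cases where its deeper reasoning actually pays for itself.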

The Opportunity

I am very bullish on AI-powered security analysis in all its forms: from research, to threat modeling, to vulnerability discovery and remediation, to exploit generation and validation. LLMs are genuinely excellent at reasoning about source code, and they can discover vulnerability classes that traditional SAST never could.

And yes, of course, getting the coding agent (or a closely associated agent) to generate secure code from the get-go is a huge plus, and yes, it's incredible when the coding agent finds and fixes issues in code.

This is not a threat to AppSec. It is the biggest unlock AppSec has had in a decade.

The vendors who win will not be the ones claiming AI replaces everything that came before. They will be the ones using this extra power in practical ways to help enterprises build secure applications, securely. That means combining AI with the operational backbone that enterprises actually need: inventory, prioritization, policy enforcement, compliance, and incident response, for the entire software factory, not just the code that the agents write.

Software development is changing in fundamental ways. AppSec is changing as a result. AppSec is not dead. AppSec that does not adapt to the new world is.