BoostSecurity Blog

Building a Do-It-Yourself Defect Discovery Practice

Written by John Steven | Nov 4, 2022 8:17:42 PM

With the exception of a few vendors and their heavily invested customers, many agree that application security’s era of “big box” defect discovery tools is over. Practitioners have successfully encouraged their organizations to shift to OSS defect discovery tools and leverage development platforms for triage as well as findings workflow management and even dashboarding. Software Security Groups (SSGs) have set to work implementing OSS SAST, DAST, and SCA tools, and have begun adding Infrastructure-As-Code and Container scanning to the mix. Security practitioners work with their DevOps teams to integrate these tools with SCM platforms, their CSP’s provided ‘Security Command Center’, or both. This approach is so developer-centric that product security engineers within development sometimes take the initiative to accomplish it before security arrives to mandate defect discovery. I call this the “Do-It-Yourself” or “DIY” Defect Discovery Practice.


Advantages of the DIY approach include:

  • Enterprise defect discovery tooling historically cost security six-to-seven figures in license fees alone; the DIY program’s cost savings are immediately compelling.
  • The accuracy of OSS results has varied but is often comparable, finding as much as 80% of what commercial equivalents find, and sometimes even outperforming them once filtered for critical findings accepted by developers for fix.
  • Using SCM or CSP platforms for results and dashboards means fewer questions about developer logins to security tools and how many different experiences a development team can stand. And,
  • A good portion of the ‘findings avalanche’ and triage effort vanishes along with commercial tools, whose core rule sets grew over time to win competitive benchmarks and customer bake-offs.

Practitioners use the bandwidth savings to customize the rulesets OSS tools offer, encoding the low-hanging fruit from their secure coding standards. Is DIY a panacea? What challenges persist?

Integration and Roll-out Cost

The effort and challenge of integrating a defect discovery regimen with SCM or build management is consistently the #1 SSG-killer: managers continually struggle to meet their rollout and defect discovery coverage goals because of the integration effort, whether they assign their own team, rely on security champions, or delegate to development teams directly. Interestingly, the integration effort of DIY is about the same as that of the previous enterprise-class commercial solutions. If there’s a difference, it’s more realistic expectations: the DIY approach more explicitly places roll-out responsibility on adopting organizations, whereas enterprise security tools combined SaaS with professional services and spun this burden as smaller than it was in reality. DIYers still face the prospect of an m-to-n integration space: [Jenkins, Circle, GitHub Actions, etc.]-to-[Semgrep, DependencyCheck, Checkov, etc.]. Sometimes the simplicity of these tools makes integration easier, but the challenge of successfully integrating a portfolio of apps often confines SSGs to covering only a fraction (10-50%) of their apps before they wave the white flag of surrender.
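One way to tame that m-to-n space is a thin wrapper per pipeline that collects each scanner’s SARIF output and merges it into a single report, so every new tool only needs to feed the same merge step. Here is a minimal, illustrative sketch; the tool names and rule IDs are invented for the example:

```python
def merge_sarif(reports):
    """Combine the `runs` arrays of several SARIF 2.1.0 documents into one report."""
    merged = {
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "version": "2.1.0",
        "runs": [],
    }
    for report in reports:
        merged["runs"].extend(report.get("runs", []))
    return merged

# Illustrative per-tool output, shaped like what each scanner would emit:
semgrep_out = {"version": "2.1.0", "runs": [
    {"tool": {"driver": {"name": "semgrep"}}, "results": [{"ruleId": "py.sqli"}]}]}
checkov_out = {"version": "2.1.0", "runs": [
    {"tool": {"driver": {"name": "checkov"}}, "results": [{"ruleId": "CKV_AWS_20"}]}]}

combined = merge_sarif([semgrep_out, checkov_out])
```

With a wrapper like this, each development platform needs one integration point rather than one per tool, which is essentially the intermediation an ASOC platform productizes.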


Security platforms with an ASOC (Application Security Orchestration and Correlation) capability address the integration challenge directly and dramatically reduce integration effort. In a previous life, I measured a consistent 75% reduction in roll-out effort using ASOC tooling. Previous generations of these tools were themselves larger enterprise security plays, with associated complexity, but modern ASOC platforms have followed defect discovery tools into a simple-to-use, developer-centric, even self-service model. Choose an ASOC platform that supports your chosen (m) OSS and commercial security tools, then use its ready-made SCM and build template integrations for your (n) development platforms. More than any other decision, the choice to intermediate defect discovery tools with an ASOC platform accelerates rollout and portfolio coverage.

What IS a ‘Finding’ Anyways?


The most common complaint product security and SSG leaders share with me, once integrations begin producing findings, is that they can’t reach agreement with engineers as to “What is a finding and what do I have to fix?”


* The Findings Plumbing Itself
In the bad-old-days, different defect discovery tools had dramatically different findings formats. Some reported “risks”, others “issues”, each with a myriad of colliding or disparate attributes. Today, almost all OSS and commercial tools output SARIF, so tool output is reliably in a standard format. Likewise, SCM platforms and CSPs alike were designed to consume SARIF. Having verified SARIF support in their chosen tools and platforms, security practitioners rest assured that, once integrated, their defect discovery plumbing won’t leak or clunk in the night. If only it were that easy. I’d argue that SARIF fails to solve all the interesting problems with which a defect discovery practice must contend, but that’s another topic for another day. With a DIY approach, SARIF assures security can get findings to engineers within their development and operations platforms’ security UI.


For now, 1) validate that the tools you’ve selected emit SARIF and 2) make sure you understand what kinds of enrichment you’ll need to apply to make that output work within your scoring regimen and vulnerability management workflows.
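As a sketch of what step 2 can look like, enrichment may be as simple as stamping each SARIF result with an organization-specific severity before it enters the workflow. The rule-to-severity map below is invented for illustration:

```python
def enrich(sarif, severity_map, default="medium"):
    """Stamp each SARIF result with an org-specific severity in its properties bag."""
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            # SARIF's properties bag is the standard place for custom attributes.
            result.setdefault("properties", {})["orgSeverity"] = \
                severity_map.get(result.get("ruleId"), default)
    return sarif

# Illustrative report: one rule is in our scoring regimen, one falls through to the default.
report = {"version": "2.1.0", "runs": [
    {"results": [{"ruleId": "py.sqli"}, {"ruleId": "py.weak-hash"}]}]}
enriched = enrich(report, {"py.sqli": "critical"})
```

Real enrichment pipelines also fold in asset criticality and exploitability context, but the mechanism — rewriting SARIF in flight before the platform consumes it — stays the same.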


* Routing Findings to the Accountable Party
Experienced security practitioners know the pain of developer pushback on a finding’s validity, or the priority of fixing reported findings: “Sure, but that’s not my code/resource/problem.” Ensuring each finding is reported to the accountable party is one of the most vital steps an organization can take to move developers and security toward reliably planned and scheduled fixes. This ‘attribution’ problem is most easily solved by SAST tools that integrate well with SCM platforms. The best of these tools correlate committers, committed changes, and scan results. Composition analysis, IAC, configuration, and container/image scanning tools present a greater challenge. In a DIY practice, different teams may select tools that take differing approaches to container scanning (one evaluating configuration specs while another looks at registry metadata, while yet another considers the binary). Even if a single tool is selected, some tools operate on multiple formats, including configuration or source files, build and orchestration metadata, resulting binary artifacts, or interrogation of runtime configuration and state. This matters to attribution because engineering may deem someone within the DevOps or platform team accountable for an operating container while a developer bears accountability for the container’s composition files.

Experiment with and validate that the output of each chosen defect discovery tool can reliably be routed to the correct accountable party. Scan output that indicates a vulnerable operational resource may be routed to a developer to fix the associated container, image, k8s config, or IAC that created the affected resources, but only if you can build a reliable means of making that correlation between vulnerable resource and underlying source artifact.
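When a finding does carry a source path, one workable attribution scheme is longest-prefix routing against a CODEOWNERS-style ownership table. The teams, paths, and fallback queue in this sketch are hypothetical:

```python
def route(path, owners, fallback="security-triage"):
    """Route a finding's file path to the owner with the longest matching prefix."""
    best_prefix, best_team = "", fallback
    for prefix, team in owners.items():
        if path.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, best_team = prefix, team
    return best_team

OWNERS = {
    "infra/": "platform-team",     # IaC and cluster config in general
    "infra/k8s/": "devops-team",   # the more specific prefix wins
    "services/api/": "api-team",
}
```

Note the precondition from the paragraph above: findings about a running resource only route correctly if you can first map that resource back to the artifact path that created it.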

* Normalization
One large problem with ‘big box’ commercial defect discovery tools is the sheer volume of rules, and therefore the dizzying number of finding categories on which they opine. Maturing practices ‘disable’ the vast majority of these rule categories before broad rollout. In a DIY practice, more modern tools typically have rulesets borne from standards like the CIS Benchmarks, various “best practices” from the CSA or CNCF, “cheat sheet” resources, or industry standards such as NIST SP 800-218, 800-53, and so forth. The challenge with this source material is that it defines overlapping sets of concerns at different levels of abstraction. As OSS tool maintainers sought to provide assurance against these standards, their rule sets began to reflect the overlap and ambiguities. As an example, when considering the insider threat of malware insertion into repos or builds, does the tool couch that as an authorization problem, identifying least privilege in repo permissions? A process problem, identifying a lack of commit and merge approval workflow? Or a behavioral problem, detecting anomalies in commit metadata and the committed material itself?

With enterprise security tools, maturing practices realized they’d have to normalize tool output: disabling what didn’t fit their risk model, adjusting the way detection and reporting happened for a particular security standard, then authoring or importing their own remediation guidance, tuned to that standard’s scope and resolution. As organizations adopting a DIY approach evolve from initial implementation to a growing, maturing practice, they should allot time for customizing tool rules in pursuit of normalized output, classification, and remediation guidance.
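In practice, normalization reduces to maintaining a mapping from each tool’s native rule IDs onto one canonical, org-owned taxonomy, with unmapped rules flagged for review rather than silently dropped. Everything in this sketch — tool names, rule IDs, category names — is illustrative:

```python
# Canonical taxonomy keyed by (tool, native rule ID); curated by the SSG as rules evolve.
CANONICAL = {
    ("semgrep", "py.sqli"): "injection.sql",
    ("checkov", "CKV_AWS_20"): "config.storage.public-access",
    ("dependency-check", "CVE-2021-44228"): "dependency.known-vuln",
}

def normalize(tool, rule_id):
    """Map a native rule ID to the canonical category, flagging unmapped rules."""
    return CANONICAL.get((tool, rule_id), f"uncategorized.{tool}.{rule_id}")
```

The `uncategorized.*` bucket is where the overlap and ambiguity described above surfaces; reviewing it periodically is the normalization work itself.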

* Vulnerability Management Workflow
Finally, there are always valid reasons why engineers need to push back on what’s reported as a finding, even after tuning. This may be because the tool is outright incorrect, because the tool can’t consider the context that defines exploitability and impact, or for other reasons. The main limitation of the OSS tools common to DIY practices is a lack of triage and suppression workflows, let alone scalable ones. The supposition was that the downstream SCM or CSP security platforms would handle triage and suppression.

An ASOC tool can overlay vulnerability management workflow on OSS tools. With or without one, organizations should establish a policy for what kinds of suppression are accepted, under what circumstances, and in what tool or platform they should be requested and enforced. Define and map suppression marking and workflow from native tool features to your vulnerability management workflow. Failing to do so might mean that developers circumvent governance by ignoring finding instances (or classes) without proper visibility and audit from the security organization and those responsible for risk sign-off.
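Such a suppression policy can be made machine-checkable. This sketch (the field names are invented) honors a suppression only when it carries a named approver and has not expired, which preserves exactly the visibility and audit trail the paragraph above warns about losing:

```python
from datetime import date

def is_active(suppression, today=None):
    """Honor a suppression only if it was approved and has not passed its expiry."""
    today = today or date.today()
    return (
        suppression.get("approved_by") is not None
        and date.fromisoformat(suppression["expires"]) >= today
    )

# Illustrative suppression record, as your workflow might store it:
suppression = {
    "finding_id": "py.sqli:services/api/app.py:42",
    "reason": "input is constant at this call site",
    "approved_by": "risk-officer",
    "expires": "2030-01-01",
}
```

Expiry dates force periodic re-review, and the `approved_by` field keeps risk sign-off with the governance function rather than with whoever edits the scan config.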

Scaling DIY Practices

DIY defect discovery practices, powered by OSS tools, solve some of the problems from which legacy programs, reliant on enterprise security tools, suffer: notably, DIY programs often suffer less from voluminous tool output and complex security-tool-to-SDLC integrations. On the other hand, problems from the legacy approach remain, such as:

  • Sufficiently reducing implementation level-of-effort to achieve defect discovery at application portfolio scale;

  • Supporting both developers and governance with vulnerability management workflow;

  • Routing findings to the right parties and compelling them to accept and fix the finding; and

  • Producing consistently detailed output about identified defects and remediation advice.

These problems remain, irrespective of the advantages to an OSS-driven DIY approach over legacy ‘big box’ enterprise security tools. Meeting the challenges above is more directly related to product security and the SSG having the experience and know-how to:

  • Design and code SCM and CSP integrations;

  • Define and implement then socialize and enforce vulnerability management workflow; and

  • Customize defect discovery tool rules, content, and output labeling/scoring to be consistent, coherent, and actionable.

Without the experience or subject matter expertise to do the above, DIY practices often fail to get beyond a prototype phase of < 10 apps. ASOC tools seek to encapsulate this experience and expertise – both in their integration functionality as well as in the configuration and content they overlay onto supported tools. Whether you leverage such a tool, or DIY, consider the above aspects of practice delivery. And, enjoy the hands-on collaboration this new era of developer-driven defect discovery entails.