The Scenario
InfoDynamics is a 2,500-person software company providing business intelligence tools to Fortune 500 companies. Their security team consists of 3 people: a CISO, a security architect, and one analyst. On a Wednesday morning in October, their quarterly vulnerability-scanning tool (Nessus) completed a full scan of their 8,000 production, staging, and development assets. The results were apocalyptic: 23,000 distinct vulnerability findings.
The security team’s previous vulnerability management process was straightforward: prioritize by Common Vulnerability Scoring System (CVSS) score. CVSS assigns a severity score from 0.0 (low) to 10.0 (critical). Their policy was simple: patch all vulnerabilities with CVSS above 7 within 30 days, CVSS 5–7 within 90 days, and everything else within six months.
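The old CVSS-only policy is simple enough to express as a threshold lookup. A minimal sketch, using the thresholds from the scenario (the function name is illustrative, not from any real tool):

```python
# Old CVSS-only policy: patch deadline depends solely on the score.
def sla_days(cvss: float) -> int:
    if cvss > 7:
        return 30    # critical/high: patch within 30 days
    if cvss >= 5:
        return 90    # medium: patch within 90 days
    return 180       # everything else: within six months

print(sla_days(8.8), sla_days(6.1), sla_days(3.0))  # 30 90 180
```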
Of the 23,000 findings, 4,200 had CVSS above 7 (critical or high severity). Under the traditional process, clearing that backlog in a single month would mean 4,200 vulnerabilities ÷ 3 security team members ÷ 20 working days per month = 70 vulnerabilities per person per day to track, validate, coordinate remediation, and verify. At any realistic pace, the security team would need 50+ months to clear just the “critical” vulnerabilities.
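The backlog arithmetic can be sanity-checked in a few lines; the monthly throughput figure is an assumption chosen to be consistent with the 50-month estimate, not a number from the scenario:

```python
critical = 4_200
team = 3
working_days = 20  # per month

# Clearing the backlog in one month would require each analyst to
# fully handle 70 findings per working day:
per_person_per_day = critical / team / working_days
print(per_person_per_day)  # 70.0

# At an assumed realistic throughput of ~28 findings per analyst per
# month (track, validate, coordinate, verify), the backlog takes:
months = critical / (team * 28)
print(months)  # 50.0
```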
The problem was that CVSS is not a measure of actual risk. CVSS measures technical severity (does the vulnerability allow remote code execution, does it require authentication, etc.). But CVSS doesn’t account for:
- Exploitability: Is there a public exploit? Is the vulnerability already being weaponized? How difficult is it to exploit?
- Asset criticality: Is the vulnerable system an internet-facing web server (high risk) or a development workstation (low risk)?
- Business context: Is this vulnerability in a system that actually needs to run, or in a service that could be disabled?
For example:
- A buffer overflow in an obscure photo editing library with CVSS 8.8 (critical) but no known exploits, affecting only development workstations = low business risk
- An authentication bypass in a production API with CVSS 7.2 (high) but active exploitation in the wild = very high business risk
The security analyst, Trevor, had been attempting to review the 4,200 critical vulnerabilities. He was spending 20 minutes per vulnerability: reviewing the CVSS details, checking whether a public exploit was available, checking the affected asset’s role (development vs. production), and estimating remediation effort. At 20 minutes each, the 4,200 critical findings would take 1,400 hours, roughly nine months of full-time work, just to prioritize them.
Trevor proposed a different approach: risk-based prioritization driven by exploitability, not just CVSS score. He redesigned the vulnerability triage process around exploitation indicators from threat intelligence: if a vulnerability is being actively exploited in the wild, it gets priority 1 (patch within 2 days). If a public exploit exists but isn’t being used, it’s priority 2 (patch within 2 weeks). If the vulnerable system is internet-facing, it’s priority 3 (patch within 30 days). Everything else is priority 4.
Using this new framework:
- 340 of the 4,200 critical findings were being actively exploited (threat intelligence indicated active use)
- 1,100 had public exploits available
- 2,200 were in internet-facing systems (but no public exploits)
- 560 were in internal systems or development environments (low exploitability in practice)
This recategorization meant:
- Priority 1 (2-day deadline): 340 vulnerabilities → requires coordination with ops to patch or disable, feasible for a 3-person team
- Priority 2 (2-week deadline): 1,100 vulnerabilities → significant effort but manageable within a focused sprint
- Priority 3 (30-day deadline): 2,200 vulnerabilities → can be batched with regular patch cycles
- Priority 4 (monitor/mitigate): 560 vulnerabilities → either disable service or implement compensating controls
This prioritization was based on real risk, not CVSS noise.
The new framework also revealed opportunities for remediation-versus-mitigation decisions: instead of patching every vulnerability immediately, some could be mitigated:
- Disable unnecessary services (the web server doesn’t need SSH, the development server doesn’t need to be internet-facing)
- Implement network-based IDS/IPS signatures for known exploits
- Segment networks so vulnerable systems can’t reach sensitive data
- Implement application allowlisting on systems running vulnerable software
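A patch-or-mitigate decision like the one above might be encoded as a simple fallback chain. This is a hedged sketch; the condition names and control mapping are illustrative assumptions, not the InfoDynamics process:

```python
# Map a blocking condition to a compensating control; prefer patching
# whenever it is feasible. Names here are purely illustrative.
COMPENSATING_CONTROLS = {
    "service_unnecessary": "disable the service",
    "exploit_signature_available": "deploy network IDS/IPS signature",
    "can_isolate": "segment the network away from sensitive data",
    "fixed_software_set": "enforce application allowlisting",
}

def decide(patch_feasible: bool, conditions: list[str]) -> str:
    if patch_feasible:
        return "patch"
    for c in conditions:
        if c in COMPENSATING_CONTROLS:
            return COMPENSATING_CONTROLS[c]
    return "accept risk with monitoring"

print(decide(False, ["service_unnecessary"]))  # disable the service
```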
By the end of Q4, using the new risk-based prioritization framework:
- All 340 actively exploited vulnerabilities had been patched (within 2 days of disclosure)
- 1,080 of 1,100 publicly exploitable vulnerabilities were either patched or mitigated
- 1,800 of 2,200 internet-facing vulnerabilities were patched
- 200 of 560 low-priority vulnerabilities were mitigated through network segmentation or service disabling
The security team had gone from “we can’t possibly remediate 23,000 vulnerabilities” to “we’ve effectively mitigated the business risk from the 3,400+ most dangerous vulnerabilities.”
What Went Right
- Risk-based prioritization aligned security effort with actual business risk: Instead of being driven by CVSS scores (which don’t reflect business context), the team prioritized based on exploitability and asset criticality.
- Threat intelligence integration for exploitability assessment: Using threat-intelligence feeds to determine which vulnerabilities were being actively exploited enabled accurate prioritization.
- Remediation-versus-mitigation decisions reduced effort: Some vulnerabilities could be addressed through network segmentation or service disabling rather than patching, reducing the patching burden.
- Asset criticality classification: Understanding which systems were internet-facing, which were production, and which were development enabled appropriate prioritization.
What Could Go Wrong
- Purely CVSS-based prioritization: If the security team had continued using only CVSS scores, they would have wasted resources patching low-risk vulnerabilities while missing high-risk ones.
- No exploitability assessment: Without understanding which vulnerabilities actually had public exploits or were being weaponized, prioritization would be random.
- No asset criticality classification: If the team didn’t know which systems were internet-facing or critical to operations, they couldn’t weight the risk appropriately.
- Insufficient threat-intelligence integration: If the team didn’t have access to threat feeds indicating which vulnerabilities were being exploited, they would miss active threats.
- No false-positive/false-negative handling: Some vulnerability scanners report false positives (findings that aren’t actually exploitable due to mitigating factors); these need to be identified and filtered out of the prioritization process. False negatives (vulnerabilities the scanner misses) are also possible, so scan coverage should be validated.
Key Takeaways
- CVSS is a technical severity score, not a business risk score: CVSS measures vulnerability technical properties (does it allow RCE, does it require auth) but doesn’t reflect whether the vulnerability is exploitable in your environment or critical to your business.
- Risk-based prioritization must incorporate asset criticality and exploitability: Vulnerability risk = (likelihood of exploitation) × (asset criticality) × (impact if exploited). All three factors matter. The reduction from 23,000 raw findings to roughly 3,600 truly urgent ones came from understanding that many “critical” vulnerabilities had low exploitability or affected low-criticality assets.
- Threat-intelligence integration identifies which vulnerabilities matter: Threat feeds reveal which vulnerabilities are being actively exploited, which have public exploits, and which are targeted at your industry. This dramatically accelerates prioritization.
- Remediation-versus-mitigation decisions can reduce patching burden: Some vulnerabilities can be addressed without patching: disabling unnecessary services, segmenting networks, implementing application allowlisting, or retiring vulnerable systems.
- Vulnerability scanning must be continuous, not quarterly: With a quarterly scan, a vulnerability can sit undiscovered for up to three months. Monthly or even weekly scanning enables faster response to new threats.
- Credentialed and non-credentialed scans must be used together: Non-credentialed scans (probing from outside the network, without system credentials) identify internet-facing vulnerabilities as an attacker would see them. Credentialed scans (authenticated, with administrative access to the host) identify system-level vulnerabilities such as missing patches and misconfigurations. Both are necessary.
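The risk product in the takeaways above can be made concrete with the two examples from the scenario; the 0–1 factor values here are illustrative assumptions, not measured data:

```python
def risk_score(likelihood: float, criticality: float, impact: float) -> float:
    """Risk = likelihood of exploitation x asset criticality x impact (each 0..1)."""
    return likelihood * criticality * impact

# Photo-editing library: CVSS 8.8 but no known exploits, dev workstations only.
photo_lib = risk_score(likelihood=0.05, criticality=0.2, impact=0.6)

# Production API auth bypass: CVSS 7.2 but actively exploited in the wild.
api_bypass = risk_score(likelihood=0.95, criticality=0.9, impact=0.9)

print(photo_lib < api_bypass)  # True: lower CVSS, far higher business risk
```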
Related Cases
- case-penetration-testing — A penetration test can validate which vulnerabilities in the prioritized list are actually exploitable, improving the prioritization model over time.
- case-hardening — Disabling unnecessary services and ports and applying the least-functionality principle reduce the attack surface, eliminating many vulnerabilities without patching (the vulnerable service doesn’t exist).
- case-risk-management — Vulnerability management is a subset of risk management. Asset criticality assessment and impact evaluation are essential to prioritization.