For most of the last decade, the consensus answer to "how should I prioritise vulnerabilities?" has been some combination of CVSS, EPSS, and CISA KEV. CVSS for technical severity. EPSS for exploitation probability. KEV for confirmation of in-the-wild use. Stack the three, and you have an industry-standard prioritisation pipeline.
That pipeline is a real improvement over CVSS alone. But it has a structural blind spot: every input describes the vulnerability at internet scale, not in your environment. CVSS is the same number on every host. EPSS is the same probability for every organisation. CISA KEV is a binary that doesn't care whether you run the vulnerable software or whether your team can remediate quickly. None of these signals tell you what attacking you would actually look like, or what risk a given finding actually poses to your specific operations.
That gap is the entire reason Centraleyezer scores risk differently. Real RBVM is not "CVSS plus two more internet feeds." Real RBVM is contextual: a score that is built from the finding, the asset it lives on, the network around the asset, the threat landscape relevant to you, and the team that has to fix it. That is what this post is about.
What's wrong with CVSS for prioritisation?
CVSS, the Common Vulnerability Scoring System, was never designed as a prioritisation engine. It was designed as a technical-severity rubric for a CVE in the abstract: how bad is the bug itself, on a scale of 0 to 10, on a hypothetical fully-exposed system with no mitigations? That is a useful question, but it is not the question security teams actually need answered. The question they need answered is: given my assets, my network, and my team, which of these findings will cause an incident first?
Most organisations end up running a CVSS-driven queue anyway because nothing else is standardised. The result is a queue that systematically buries moderate-CVSS findings on critical assets and floats high-CVSS findings on irrelevant ones. CVSS is not the wrong number; it is the wrong question for prioritisation.
CVSS vs EPSS vs contextual risk scoring
The standard answer to "CVSS isn't enough" has been to layer on EPSS (the Exploit Prediction Scoring System from FIRST.org, which gives a 0 to 1 probability that a CVE will be exploited in the wild within 30 days) and to overlay CISA KEV, which flags CVEs confirmed exploited somewhere on the internet. That stack is an improvement over CVSS alone. It is still not contextual.
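To see what that stack computes in practice, here is a minimal sketch of the usual triage rule. The `Finding` shape, the thresholds, and the tier names are illustrative assumptions, not any particular product's logic; the point is that every input is identical for every organisation, so every organisation gets the same queue.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float   # 0-10 technical severity, identical on every host
    epss: float   # 0-1 internet-wide 30-day exploitation probability
    in_kev: bool  # confirmed exploited somewhere in the wild

def standard_priority(f: Finding) -> str:
    """The common CVSS + EPSS + KEV stack; thresholds are illustrative."""
    if f.in_kev or f.cvss >= 9.0:
        return "critical"
    if f.cvss >= 7.0 and f.epss >= 0.1:
        return "high"
    if f.cvss >= 4.0:
        return "medium"
    return "low"
```

Note that nothing in `standard_priority` mentions an asset, a network, or a team. That is the blind spot the rest of this post is about.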
Here is the difference in plain terms:
- CVSS answers "how technically severe is this CVE in the abstract?" Useful for risk-comms; useless for triage on its own.
- EPSS answers "across the entire internet, what is the probability someone exploits this CVE in the next 30 days?" Useful as a tiebreaker; uninformative about your specific exposure.
- CISA KEV answers "has this CVE been confirmed exploited somewhere in the wild?" Useful as evidence; uninformative about whether you are exposed.
- Contextual risk scoring answers "given this finding, on this asset, in this network, with this team, against this threat landscape: how risky is it for me?" That is the question that decides the queue.
CVSS, EPSS, and KEV are still ingested in Centraleyezer for traceability (auditors love them, and they are useful in the technical report), but they are deliberately not used as inputs to the contextual score. The score is built from six factors that do describe your environment.
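As a rough sketch of that separation, the record for a vulnerability-asset pair could carry both kinds of data side by side. The field names below are hypothetical, not Centraleyezer's schema; they only illustrate which values are reference metadata and which feed the score (the six factors are described below).

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityAssetPair:
    # Reference metadata: ingested, reported, audit-visible,
    # but deliberately not scoring inputs.
    cvss: float
    epss: float
    in_kev: bool
    # Contextual inputs: the six factors that drive the score.
    dread: float               # five-dimension threat profile, 0-10
    asset_criticality: str     # "low" / "moderate" / "important" / "critical"
    network_exposure: str      # "internet-facing" / "dmz" / "internal" / "isolated"
    env_exploitability: float  # 0-1: practicality in this exact configuration
    cti_signal: float          # 0-1: relevance of active campaigns to you
    reaction_factor: float     # learned from the owning team's response history
```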
The structural problem with CVSS, EPSS, and CISA KEV
These three signals are useful, well-designed, and freely available. They are also all measuring the same kind of thing: properties of a CVE in the abstract. None of them know anything about your assets, your network topology, your threat landscape, or your team's response patterns. A complete prioritisation system has to add the things they can't see.
- CVSS describes technical severity in a vacuum: a 9.8 on a wide-open production database scores identically to a 9.8 on an air-gapped lab machine.
- EPSS describes the probability that someone, somewhere will exploit a CVE in the next 30 days, not whether anyone is targeting your sector, your stack, or your perimeter.
- CISA KEV confirms that a CVE has been exploited in the wild somewhere, but a "KEV-yes" finding on a network you don't expose may be far less risky than a "KEV-no" finding on your customer-facing API.
- None of the three account for whether your team will actually fix the issue this week or this quarter, yet that gap is one of the largest determinants of real-world exposure.
The result, in any organisation that runs CVSS-or-EPSS-driven triage at scale, is the same: a queue that systematically over-prioritises high-severity findings on low-impact assets and systematically under-prioritises moderate-severity findings on the assets that matter most. Audit-ready, but operationally wrong.
The six factors that drive a contextual risk score
Centraleyezer's contextual risk score for every vulnerability-asset pair is built from six factors. Each one answers a question that CVSS, EPSS, and KEV cannot answer on their own.
DREAD
A structured threat-modelling score across Damage, Reproducibility, Exploitability, Affected users, and Discoverability. DREAD captures the inherent severity of the finding the way an attacker would assess it: not as an abstract number, but as a five-dimension threat profile that maps to actual attacker behaviour.
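As a reference point, the classic DREAD calculation is an average over the five dimensions. The 0 to 10 per-dimension scale is one common convention (some teams use 1 to 3), and the example ratings below are illustrative, not taken from a real finding.

```python
def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """Classic DREAD: rate each dimension from the attacker's point
    of view on 0-10, then average."""
    dims = (damage, reproducibility, exploitability,
            affected_users, discoverability)
    return sum(dims) / len(dims)

# e.g. a stored XSS hitting privileged admin sessions: high Damage
# and Affected users, moderate elsewhere.
print(dread_score(8, 6, 6, 8, 5))  # 6.6
```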
Asset Criticality
Set per asset by the people who own it: low, moderate, important, critical. The same finding on a payment gateway is not the same finding on a developer sandbox. CVSS treats them identically. Contextual RBVM does not.
Network Exposure of the Asset
Where the asset actually lives: internet-facing, DMZ, internal, or fully isolated behind compensating controls. An attacker has to reach an asset before they can exploit it. An exposed asset elevates every finding it carries; a fully isolated one downgrades them.
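One simple way to express these two placement factors, asset criticality and network exposure, is as multipliers on the base threat profile. The tier names come from the descriptions above; the numeric weights are assumptions made for this sketch, not Centraleyezer's calibration.

```python
# Illustrative weights: criticality and exposure scale the base
# severity up or down for this specific asset.
ASSET_CRITICALITY = {
    "low": 0.5, "moderate": 0.8, "important": 1.0, "critical": 1.4,
}
NETWORK_EXPOSURE = {
    "isolated": 0.3, "internal": 0.7, "dmz": 1.0, "internet-facing": 1.4,
}
```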
Exploitability in your environment
Not "is exploit code published somewhere on the internet?" β but "given this asset's actual configuration, version, and compensating controls, how practical is weaponisation?" This is the question internet-wide signals like EPSS cannot answer.
CTI signals
Cyber threat intelligence about the finding and the technology stack: relevant active campaigns, the threat actors targeting your sector, and exploitation chatter from feeds that are tuned to what matters to your organisation, not generic exploit-prediction averages.
Human-AI Reaction Loop
The factor no internet-wide signal can ever capture: how your team actually responds. Centraleyezer learns from each asset owner's acknowledgement time, remediation time, and risk-acceptance patterns. A vulnerability owned by a slow team is operationally riskier than the same vulnerability owned by a fast one, and the score reflects it.
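To show how six factors like these could interact, here is a toy multiplicative model that reuses the sketches above. The multiplicative structure, every coefficient, the 7-day reference window, and the clamp bounds are assumptions for illustration, not the product's actual formula.

```python
def reaction_factor(median_remediation_days: float,
                    reference_days: float = 7.0) -> float:
    """Learned exposure-window multiplier: a team that historically
    fixes this class of finding faster than the reference window
    shrinks the risk; a slower team stretches it."""
    return max(0.25, min(2.0, median_remediation_days / reference_days))

def contextual_score(p: VulnerabilityAssetPair) -> float:
    """Toy combination of the six factors on a 0-10 scale."""
    score = p.dread                                   # 1: DREAD base
    score *= ASSET_CRITICALITY[p.asset_criticality]   # 2: asset criticality
    score *= NETWORK_EXPOSURE[p.network_exposure]     # 3: network exposure
    score *= 0.5 + 0.5 * p.env_exploitability         # 4: exploitability here
    score *= 1.0 + 0.5 * p.cti_signal                 # 5: CTI relevance
    score *= p.reaction_factor                        # 6: human-AI loop
    return min(10.0, score)
```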
Why the Human-AI reaction loop is the unfair advantage
The first five factors describe the finding and the asset. The sixth describes the people. That is the factor no internet-wide signal will ever provide, and the one that most accurately predicts whether a vulnerability will turn into an incident.
A "high" finding owned by a team that historically remediates in 48 hours is operationally much less dangerous than a "medium" finding owned by a team that historically takes six weeks. The exposure window is what attackers exploit. CVSS, EPSS, and KEV are blind to it. The Human-AI loop is not.
In practice, this means the same finding on the same asset can receive a different contextual score across two organisations (or even two teams in the same organisation), depending on their actual response patterns. Security leaders get a far more honest picture of where their real risk concentrates: not on the vulnerabilities that score highest in the abstract, but on the ones that will live unfixed the longest in their reality.
Same finding, different verdict: contextual RBVM in practice
The clearest way to see why the contextual model produces different, and better, prioritisation than CVSS+EPSS+KEV is to walk through real scenarios.
OpenSSL vulnerability in a TLS library
- CVSS+EPSS+KEV verdict: Critical (emergency patch)
- Contextual RBVM verdict: Low (schedule with next maintenance)
Why: The library is used only on an internal monitoring agent. The asset is in an isolated management VLAN with no inbound exposure, asset criticality is "low", and the agent's owner remediates this class of finding within hours every time. DREAD is mid; exploitability in this configuration is near-zero. Real risk: low.
Stored XSS in an internal admin panel
- CVSS+EPSS+KEV verdict: Medium (backlog)
- Contextual RBVM verdict: High (fix this week)
Why: The admin panel sits on a customer-facing host that also carries the owning team's payment-processing pipeline. Asset criticality is "critical", network exposure is "internet-facing", and the owning team has a 21-day historical median acknowledgement time on findings of this class. DREAD scores high on Damage and Affected users (privileged admin sessions). Real risk: high, and CVSS+EPSS would have buried it.
Buffer overflow in a desktop printer driver
- CVSS+EPSS+KEV verdict: High (fix in 7 days)
- Contextual RBVM verdict: Low (accept with review)
Why: The driver is on print servers in branch offices. No network exposure beyond the local LAN, asset criticality is "low", DREAD reproducibility is low (requires local user interaction with a malicious print job), and CTI shows no active campaigns targeting this vendor. Real risk: low.
Authentication bypass in a customer-facing identity service
- CVSS+EPSS+KEV verdict: High (fix in 7 days)
- Contextual RBVM verdict: Critical (fix today)
Why: Internet-facing, asset criticality "critical", DREAD scores extreme on Damage and Affected users, and CTI flags credential-stuffing campaigns active in your sector this week. The owning team's last similar finding took 11 days to remediate. Operational risk is well above what internet-wide signals would suggest. Real risk: critical.
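Pushing two of these scenarios through the toy model sketched earlier shows how the verdicts flip. Every input number below is invented to match the narrative above; none of them are real scores.

```python
openssl_agent = VulnerabilityAssetPair(
    cvss=9.8, epss=0.92, in_kev=True,      # "critical" in the abstract
    dread=5.4, asset_criticality="low", network_exposure="isolated",
    env_exploitability=0.05, cti_signal=0.1,
    reaction_factor=reaction_factor(0.5),  # owner fixes within hours
)
identity_bypass = VulnerabilityAssetPair(
    cvss=8.1, epss=0.30, in_kev=False,     # merely "high" in the abstract
    dread=9.2, asset_criticality="critical",
    network_exposure="internet-facing",
    env_exploitability=0.9, cti_signal=1.0,   # active campaigns in-sector
    reaction_factor=reaction_factor(11),      # last similar fix took 11 days
)
print(round(contextual_score(openssl_agent), 1))   # 0.1: next maintenance window
print(round(contextual_score(identity_bypass), 1)) # 10.0: fix today
```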
What changes when you switch to contextual RBVM
Teams that move from CVSS-driven (or CVSS+EPSS-driven) triage to a contextual model consistently see four shifts in operations within the first quarter:
- The "critical" queue shrinks dramatically. The findings that remain are the ones that actually warrant emergency response.
- The "medium" queue starts producing real surprises β the findings on critical assets owned by slow teams now show up where they always belonged: at the top.
- Risk acceptance becomes meaningful. When a finding is contextually low, accepting it is a defensible decision, not a quiet capitulation.
- Conversations with auditors get easier. "Why didn't you patch this 9.8?" has a real answer when the score reflects exposure, asset criticality, and compensating controls, not raw severity.
This is what we mean when we say Centraleyezer prioritises by actual business risk rather than raw severity. CVSS, EPSS, and CISA KEV remain useful as ingestion metadata and for traceability (your CVE records carry them, your technical reports show them, your auditors can see them), but they are not what drives the prioritisation. The contextual score does.
Common questions
Does Centraleyezer ignore CVSS, EPSS, and CISA KEV entirely?
No. They are ingested for traceability and shown in technical reports so analysts and auditors can see the full picture of every CVE. They are simply not used as scoring inputs. The contextual risk score is built from DREAD, asset criticality, network exposure, exploitability in your environment, CTI signals, and the Human-AI reaction loop. CVSS, EPSS, and KEV remain available as reference data alongside the score.
Why DREAD instead of CVSS?
DREAD captures severity across five dimensions that map directly to attacker behaviour (Damage, Reproducibility, Exploitability, Affected users, Discoverability). It is more expressive than a single CVSS number and integrates cleanly with the other contextual factors. CVSS describes a vulnerability in the abstract; DREAD describes it as a threat to the specific asset it sits on.
How does the Human-AI reaction loop avoid penalising overworked teams?
The loop measures operational reality; it is not a performance review. The signal is used to allocate attention, not blame. If a team is consistently slow on a class of finding because they lack capacity, the higher contextual score on those findings makes the case for additional resourcing visible to leadership. The model is a mirror, not a judge.
Does this approach support compliance frameworks like NIS2, DORA, ISO 27001 and PCI-DSS?
Yes, and arguably better than CVSS-only triage does. Auditors are not looking for a specific scoring algorithm; they are looking for evidence of a structured, risk-based, documented process. A contextual model with explicit factors, full audit trail, and SLA tracking gives auditors exactly what NIS2 Article 21, DORA Article 9, ISO 27001 A.8.8, and PCI-DSS Requirement 6 expect to see.