
Cutting the Vulnerability Noise with Context


Chasing Patches

Why does 20% of hacking activity focus on vulnerabilities? The answer is no surprise. The cycle of scanning and patching continues as it always has. The discussion of which patches get deployed, and in what order, ranges from friendly conversation to heated argument. There are never enough cycles to patch them all. And some systems cannot be patched at all because of deprecated code or critical functionality.

By Brandon Hoffman, CISO at Netenrich

Certainly, we could shift the heavy lifting of ordering and prioritization left and have machines do the work. The idea of machines learning and crunching enough data to make decisions for us is appealing. Many smart people and vendors have worked hard on this problem for decades, yet the need for a human analyst remains deeply embedded.

So, an organization builds a risk model and aligns it to its business. The machines spin away on the risk model and the data inputs. Yet something still feels missing: even though we all accept that there is no such thing as 100% security, the damage is still not being controlled. Setting aside the technological difficulty and the dependence on future improvements, there remains an opportunity to dissect the inputs to the risk model and make course corrections.

Risk Model Elements

Every organization has, or should have, a risk model that is aligned with the specifics of an organization’s business and exposure areas. Some elements of this risk model are generic or more widely applicable. These elements are commonly adopted by technology providers and include all the usual suspects such as CVSS scoring, patch availability, and assessments related to the function of the system impacted.

Beyond these lies a core component of risk valuation: contextualization. Context is fundamental to any intelligence-driven model, and context is what we are really chasing. Some variables of context are more easily identified and remain relatively constant. Examples present in the vulnerability risk model include the operational area of the system, the system's uptime requirements, and its user(s).

More difficult components that require constant reassessment include the downstream impact and accessibility of the system in question. How exposed is this system to somebody targeting it, and if it is exploited, where can an attacker pivot from it? Technology providers have done a reasonably good job of building systems that can generate this type of analysis: technology that assesses risk exposure from the outside in, and systems that calculate network paths and potential access through the network.

The final piece needed for context comes down to adversary capability, intent, and opportunity. These contextualized data points are critical inputs to any risk model, especially when considering risk to business systems and data. To evaluate capability, intent, and opportunity, you need to assess the “who”, and this analysis is what bolsters the model to create further efficiency and gains in risk reduction.
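To make this concrete, the inputs above can be blended into a single contextual score. The following Python sketch is illustrative only: the field names, weights, and multipliers are assumptions for demonstration, not a published scoring model, but it shows how a generic CVSS input can be scaled by asset context, exposure, and observed adversary capability, intent, and opportunity.

```python
from dataclasses import dataclass

@dataclass
class VulnContext:
    cvss_base: float          # 0.0-10.0 generic CVSS base score
    patch_available: bool     # is a vendor patch available?
    asset_criticality: float  # 0.0-1.0, business importance of the host
    exposure: float           # 0.0-1.0, external reachability of the host
    adversary_interest: float # 0.0-1.0, observed capability/intent/opportunity

def contextual_risk(v: VulnContext) -> float:
    """Scale the generic CVSS score by contextual multipliers (assumed weights)."""
    score = v.cvss_base
    # Asset and exposure context scale the generic score up or down.
    score *= 0.5 + 0.5 * v.asset_criticality
    score *= 0.5 + 0.5 * v.exposure
    # Observed adversary activity can escalate an otherwise quiet finding.
    score *= 1.0 + v.adversary_interest
    # A missing patch adds residual risk.
    if not v.patch_available:
        score *= 1.2
    return min(score, 10.0)  # cap on the familiar 0-10 scale
```

Note how the same CVSS 6.0 finding can land anywhere from well below 6 (isolated, low-value asset) to a capped 10 (exposed, high-value asset with active adversary interest), which is the whole point of contextualization.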

Bolstering the Model by Adding Context

The majority of attacks on business systems are driven by cybercriminals. Their motivation is the potential financial gain from selling access, data, or tools that empower other criminals. From the adversary's perspective, they carefully consider which tools to equip themselves with when targeting specific high-value victims, but they also approach it from the other angle: developing tools and exploits against vulnerable systems that can be attacked opportunistically.

They, too, weigh risk against a similar model: Is there a patch? How many of these systems exist (availability and exposure of targets)? What data may be present on these systems, and where are they generally positioned in the organization? Depending on the adversary's capability, they may also consider whether exploit code is available, whether it is weaponized, and whether it is easy to use or repurpose. The starting point for this perspective is outside in: the initial attack vector will most likely be an externally facing company asset, which presents a unique challenge and opportunity at the same time.

Analyzing the adversary's perspective can be a daunting and complex task. Understanding the attack surface available to the adversary is just the beginning; the task is compounded by the difficulty of obtaining the prerequisite information for interpreting motive and intent. A statistical analysis of how frequently a specific exploit is mentioned or discussed does not provide enough contextual data to satisfy this component of the risk model. The observables necessary for this analysis include the credibility of an adversary over time (have they been successful at this before?), where an adversary is sharing information about an exploit (with the world, or exclusively with other credible criminals?), and, most importantly, the technical status and maturity of the exploit being discussed and shared. These contextual data points are the critical inputs to the risk model that can have a significant impact on scoring and prioritization.

Consider a vulnerability on a system of low value, rated as low criticality. Understanding where that system exists and what downstream access it provides is important, yes. But if there is no observed targeting or meaningful discussion of that vulnerability among the adversaries who normally have the capability, intent, and opportunity to attack the system, or if only rudimentary code is available to exploit it, the criticality would remain low.

Conversely, if the people who typically perpetrate these attacks are having meaningful conversations and advancing the technology to make exploiting this vulnerability more accessible, that low-priority vulnerability becomes critical.
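That escalation logic can be sketched in a few lines. The severity levels, signal names, and step sizes below are assumptions chosen for illustration, not a specific vendor's algorithm; the point is that credible adversary discussion and exploit maturity each push a finding up the priority ladder.

```python
def reprioritize(base_severity: str,
                 credible_actor_discussion: bool,
                 exploit_maturity: str) -> str:
    """Escalate a vulnerability's priority based on adversary signals.

    exploit_maturity is one of: "none", "poc", "weaponized".
    """
    levels = ["low", "medium", "high", "critical"]
    idx = levels.index(base_severity)
    if credible_actor_discussion:
        idx += 1      # credible criminals are actively discussing it
    if exploit_maturity == "weaponized":
        idx += 2      # easy-to-use exploit in circulation
    elif exploit_maturity == "poc":
        idx += 1      # proof-of-concept code only
    return levels[min(idx, len(levels) - 1)]
```

With these assumed weights, a "low" finding that credible actors are discussing and for which a weaponized exploit circulates comes back as "critical", mirroring the scenario above.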

Context remains the key input to risk modeling, and that context comes from a challenging combination of closed-source access, data collection and triage, and human-driven intelligence analysis.

About the Author

Brandon Hoffman is the Chief Information Security Officer (CISO) at Netenrich. He is a veteran CTO and security executive well-known for driving sales growth and IT transformation. Hoffman is responsible for Netenrich's technical sales and security strategy for both the company and its customers. Most recently, he oversaw Intel 471's dark web threat intelligence business. As former CTO at Lumeta Corporation and RedSeal Networks, Hoffman led technical and field development in network security, vulnerability, and risk management. He has also held key practitioner roles in security architecture, penetration testing, networking, and data center operations. Hoffman holds an M.S. degree from Northwestern University and a B.S. degree from the University of Illinois at Chicago.


Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.

