Relying solely on the Common Vulnerability Scoring System (CVSS) is insufficient when it comes to effective vulnerability management. While the CVSS score provides a quantitative measure of a vulnerability’s severity, it fails to capture the contextual nuances that can significantly impact the actual risk to an organization. In this article, we will walk through a vulnerability assessment and prioritization workflow, and discuss what to look for when choosing a vulnerability management solution.
Factors such as the network architecture, asset value, exploit availability, and the organization’s specific environment are not adequately accounted for in the current CVSS score calculations.
FIRST (the Forum of Incident Response and Security Teams), the organization that maintains CVSS, recently announced CVSS v4.0 in preview. In this new version, there is an intention to incorporate additional metrics that capture the contextual aspects of vulnerability management. While specific details on how these metrics will be implemented in CVSS v4.0 remain unclear at this point, it is promising to see recognition of the need for a more comprehensive approach.
Assessing Vulnerabilities and Prioritizing Risk
To ensure comprehensive vulnerability management, it is crucial to establish a workflow that incorporates additional metrics beyond the CVSS score, such as exploitability analysis, asset criticality, business impact, Runtime Insights, fix availability, and workaround availability.
Let’s dig deeper with the following example:
In this example, we will use an HTTP web server image called security_playground, but feel free to follow along with an image of your own.
Severity assessment
The vulnerability severity is determined by calculating the CVSS score. The resulting CVSS score ranges from 0 to 10, with 10 being the most severe. The severity assessment is typically categorized as follows:
CVSS score 0.0 to 3.9: Low severity
CVSS score 4.0 to 6.9: Medium severity
CVSS score 7.0 to 8.9: High severity
CVSS score 9.0 to 10.0: Critical severity
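As a quick illustration, here is a minimal Python sketch that maps a raw CVSS base score onto the severity bands above (the function name and thresholds simply mirror this table):

def cvss_severity(score: float) -> str:
    # Map a CVSS base score (0.0-10.0) to the severity bands listed above.
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

print(cvss_severity(9.8))  # prints "Critical"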
Let’s scan our image. In this example, we used the Sysdig CLI scanner – you can read more about it in the Sysdig documentation.
~ % ./sysdig-cli-scanner docker.io/sysdiglabs/security-playground:latest --apiurl https://eu1.app.sysdig.com/
The results show 3,969 vulnerabilities in this image, 191 of which are critical.
Indeed, facing 3,969 vulnerabilities in a single image can be overwhelming and challenging to handle. Incorporating additional metrics to add context to the results can greatly assist in prioritizing and managing these vulnerabilities effectively.
Exploitability analysis
Exploitability analysis assesses the likelihood of each vulnerability being exploited. Many vulnerabilities are theoretical and cannot easily be exploited in real life. Exploitability information is reported by security analysts, and the exploitability of a vulnerability is confirmed when:
- Attacks targeting this vulnerability have been reported
- Proof of concept (PoC) code is publicly available
There are several reputable sources for feeds and databases that provide information about exploited vulnerabilities, such as National Vulnerability Database (NVD), Cybersecurity and Infrastructure Security Agency (CISA), and Exploit Database (Exploit-DB).
Some vulnerability scanners can provide this information as part of the scanning process. We filtered the scan results from the previous example, and the good news is that out of the initial 3,969 vulnerabilities, we identified 203 as exploitable, only 6 of which are critical.
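If your scanner does not expose exploitability data, you can also cross-check CVE IDs against the CISA Known Exploited Vulnerabilities (KEV) catalog yourself. The following is a minimal Python sketch; the feed URL and the cveID field name match what CISA publishes at the time of writing, but verify them before relying on this in automation.

import json
import urllib.request

# CISA KEV catalog feed (public JSON); URL current at the time of writing.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_ids() -> set[str]:
    # Download the catalog and collect the CVE identifiers it lists.
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {item["cveID"] for item in catalog.get("vulnerabilities", [])}

kev_ids = load_kev_ids()
print("CVE-2021-3711 in KEV:", "CVE-2021-3711" in kev_ids)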
Runtime insights analysis
Most of the vulnerabilities reported in container environments are actually noise. Only vulnerabilities that are tied to packages used at runtime offer a real chance of exploitation. Runtime insights provide deep visibility into system calls to identify what packages are loaded at runtime.
A powerful runtime insights mechanism should be able to track every binary used at runtime, link it back to its package, and then filter the vulnerability list down to the packages that are actually loaded in memory and can therefore be exploited.
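As a rough approximation of what such a mechanism does, the sketch below walks /proc on a Linux host and collects the shared objects currently mapped into running processes. It needs enough privileges to read other processes' maps, and a production runtime engine (such as Falco, covered next) instruments syscalls rather than polling /proc, so treat this purely as an illustration.

import glob
import re

def loaded_libraries() -> set[str]:
    # Collect the paths of shared objects mapped into any running process.
    libs = set()
    for maps_file in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(maps_file) as f:
                for line in f:
                    fields = line.split()
                    path = fields[-1] if len(fields) > 5 else ""
                    if re.search(r"\.so(\.|$)", path):
                        libs.add(path)
        except (PermissionError, FileNotFoundError):
            continue  # process exited or is not readable
    return libs

for lib in sorted(loaded_libraries()):
    print(lib)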
Detecting loaded packages at runtime with Falco
In our example, one of the critical CVEs is CVE-2021-3711, reported in the openssl package. As the scan results show, the affected packages are libssl-dev, libssl1.1, and openssl.
With Falco, you can simply create a rule to monitor opened files inside each container.
customRules:
  my_rules: |-
    - rule: Monitor Opened Files in Containers
      desc: Detect when files are opened inside containers
      condition: evt.type in (open, openat, openat2) and container and container.image != "host" and k8s.ns.name = "default"
      output: >
        Opened file: %fd.name
        Process: %proc.name
        Process ID: %proc.pid
        Container ID: %container.id
        Container Name: %container.name
      priority: NOTICE
      tags:
        - file_open
In this rule:
- The condition field specifies the conditions for triggering the rule: evt.type in (open, openat, openat2) captures the file open events, while container and container.image != "host" and k8s.ns.name = "default" limits matching to containers that are not running with the “host” image and are within the Kubernetes namespace “default.”
- The output field defines the message emitted when the rule is triggered. It includes the name of the opened file (%fd.name), the process name (%proc.name), the process ID (%proc.pid), and the container ID and name (%container.id, %container.name).
- The priority field sets the priority level of the alert.
- The tags field includes relevant tags to categorize the alert.
By enabling this rule for the workloads in the default namespace, Falco will log every syscall that opens a file. By checking the Falco log and filtering for libssl, we can tell that libssl is opened and loaded into memory, which means the vulnerability in that package is actually reachable and can be exploited.
That was an example of leveraging runtime insights to prioritize vulnerabilities. However, in a production environment, it is essential to automate this process. By automatically detecting binaries, linking them to their respective packages, and comparing them to the vulnerability scan results, you can efficiently filter and generate alerts based on the identified vulnerabilities.
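As a sketch of that automation, assuming a Debian-based image, the snippet below maps library paths observed at runtime (for example, from the Falco file-open events above) to their owning packages with dpkg -S, then intersects them with the packages flagged by the scanner. The scan_report.json format here is hypothetical; adapt the parsing to whatever your scanner actually exports.

import json
import subprocess

def owning_package(path: str) -> str | None:
    # Ask dpkg which Debian package owns the given file path.
    result = subprocess.run(["dpkg", "-S", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None
    return result.stdout.split(":", 1)[0].strip()

# Library paths observed at runtime (e.g., collected from Falco output).
runtime_paths = ["/usr/lib/x86_64-linux-gnu/libssl.so.1.1"]

# Hypothetical scanner export: a list of {"cve": ..., "package": ...} entries.
with open("scan_report.json") as f:
    findings = json.load(f)

loaded_packages = {pkg for p in runtime_paths if (pkg := owning_package(p))}
in_use = [v for v in findings if v["package"] in loaded_packages]

for v in in_use:
    print(f'{v["cve"]} affects loaded package {v["package"]}')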
Fix or workaround availability
When vulnerabilities are discovered, it is crucial to determine if there are known solutions or actions that can be taken to mitigate or resolve the security issue. Fix availability helps determine the feasibility and urgency of applying a remediation measure to address the vulnerability.
The availability of a fix can vary depending on the nature of the vulnerability and the software or system affected. It may be provided by the software vendor as a software update, patch, or specific configuration change. In some cases, interim workarounds or mitigation steps may be suggested until a permanent fix is available.
Example of fix availability:
A suggested fix for our openssl vulnerability is to upgrade the openssl version to 1.1.1d-0+deb10u2.
Example of workaround availability:
Log4j (Log4Shell) exploitation happens when the Log4j2 library receives attacker-controlled data through a JNDI lookup, fetches it from an attacker’s LDAP server, and executes it without verification. In this scenario, creating a security rule that blocks any unauthorized LDAP traffic from the servers running Log4j would be a good workaround until the development team patches these deployments.
Asset criticality and business impact
Asset criticality and business impact are essential components of vulnerability management assessments. Based on the importance of the asset and the business impact to the organization, you can prioritize remediation efforts by focusing on the most valuable and critical assets. Usually, these metrics are measured in terms of confidentiality, integrity, and availability.
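To make the prioritization concrete, here is a purely illustrative Python sketch that folds these metrics into a single priority score. The weights and parameter names are arbitrary examples rather than a standard formula; tune them to your own environment.

def priority_score(cvss: float, exploitable: bool, loaded_at_runtime: bool,
                   asset_criticality: float, fix_available: bool) -> float:
    # Combine severity, exploitability, runtime usage, asset value, and fix
    # availability into one number for ranking remediation work.
    score = cvss / 10.0                           # normalize severity to 0-1
    score *= 2.0 if exploitable else 0.5          # weight known-exploitable issues up
    score *= 2.0 if loaded_at_runtime else 0.3    # is the package actually in use?
    score *= asset_criticality                    # 0-1, from the business impact review
    score *= 1.2 if fix_available else 1.0        # slight bump when a fix is ready to apply
    return round(score, 2)

# Example: CVE-2021-3711 on a business-critical, internet-facing workload.
print(priority_score(9.8, exploitable=True, loaded_at_runtime=True,
                     asset_criticality=0.9, fix_available=True))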
Outcome
By following this workflow, we were able to effectively narrow down the initially discovered 3,969 vulnerabilities to just 2 that require immediate attention.
The described steps show how vulnerability management assessment and prioritization work under the hood. However, in a production environment, implementing a robust vulnerability management solution that automates and facilitates these steps is crucial.
Finally, consolidating your vulnerability management system into a CNAPP solution and ensuring seamless integration within your ecosystem and software development lifecycle (SDLC) brings numerous advantages. By centralizing security functionalities, such as vulnerability management, within a single CNAPP solution, you can streamline operations, reduce complexity, and improve overall efficiency. Integration within your ecosystem and SDLC enables automated vulnerability scanning, assessment, and remediation processes at various stages, including development, testing, and deployment.
Conclusion
- When you choose a vulnerability management solution, make sure you understand how it handles each of these steps.
- Leveraging runtime insights to prioritize remediation efforts is a core component of the vulnerability management lifecycle, and it requires a robust runtime security engine.
- Try to consolidate your security tools by looking at a comprehensive CNAPP solution that integrates with and fits into your ecosystem.