The First CNAPP with Out-of-the-Box NIS2 and DORA Compliance

In an era where cloud attacks unfold in minutes and threats evolve constantly, the European Union (EU) has stepped up its cybersecurity game with two new regulations: the Digital Operational Resilience Act (DORA) and the revised Directive on Security of Network and Information Systems (NIS2). With stricter requirements on compliance controls and breach disclosures, these regulations are set to transform how businesses manage their cyber risks in Europe. If you’re feeling overwhelmed by these changes, you’re not alone. That’s where Sysdig comes in. As the first CNAPP to offer out-of-the-box policies for DORA and NIS2 compliance, we’re here to guide you through these new requirements, ensuring your business isn’t just compliant, but also more secure.

Overview of DORA and NIS2

In the past, most regulations were checked periodically for compliance – maybe monthly, quarterly, or annually. However, to address the ongoing surge of cyberattacks and the speed at which they move, these new regulations implement stricter controls and, more importantly, very aggressive requirements around time to disclosure to regulatory authorities in the case of a security event, privacy event, or breach. Under DORA, you have only four hours from the moment an incident is classified as major to disclose it. Under NIS2, you have 24 hours.

The Digital Operational Resilience Act (DORA) is a regulation introduced by the European Union to address and enhance the security and resilience of digital operations within the financial sector. It aims to consolidate and standardize digital operational resilience practices across financial entities, ensuring that they can withstand, respond to, and recover from all types of ICT (Information and Communication Technology) related disruptions and threats. The regulation applies from Jan. 17, 2025, which means financial companies have less than a year to become compliant with DORA.

DORA applies to the vast majority of the financial services sector. This includes, but is not limited to:

  • Banks and credit institutions
  • Investment firms
  • Insurance companies
  • Asset managers
  • Payment service providers
  • Crypto-asset service providers

Additionally, DORA extends its reach to third-party ICT service providers, including cloud services, which are integral to the operations of financial entities. This is significant as it marks the first time financial services supervisors are given authority to oversee these third-party vendors directly. As it pertains to cloud, DORA also specifies that financial entities should use multi-cloud approaches to improve resiliency. Multi-cloud strategies can indirectly create other security gaps due to varied technology. This approach necessitates that appropriate unified controls and monitoring are implemented to ensure those security gaps aren’t exploitable.

Network and Information Systems Directive (NIS2) 

Unlike regulations, which are directly applicable, NIS2 is an EU directive that sets general objectives for Member States’ national laws on cybersecurity and ICT systems and networks, with the aim of strengthening security across the EU. 

The main goal of NIS2 is to significantly raise the level of cybersecurity across the EU by expanding the scope of the original directive, introducing stricter security requirements, and increasing the accountability of entities within critical sectors. 

NIS2 broadens the scope of cybersecurity obligations to include a wide range of sectors critical to the EU’s economy and society. It encompasses entities in energy, transport, banking, financial markets, healthcare, water supply, digital infrastructure, public administration, and space.

Sysdig’s Role in Facilitating NIS2 and DORA Compliance

Sysdig is the first Cloud-Native Application Protection Platform (CNAPP) to provide out-of-the-box compliance policies specifically designed to help organizations satisfy the technical elements of the European Union’s new regulatory frameworks, DORA and NIS2, as they pertain to cloud resources.

The DORA and NIS2 specifications can be complex to read; a best practice is to break them down into their elementary building blocks. That’s what we’ll do in the following sections.

DORA 

Sysdig facilitates DORA compliance by providing comprehensive controls covering various aspects of Linux, Kubernetes, cloud environments, and identity management.

Sysdig NIS2 and DORA compliance

These are some of the technical requirements that apply to cloud environments. We will explain these requirements and look at some examples of security controls from Sysdig that ensure cloud assets meet DORA compliance conditions. 

CHAPTER II, ICT risk management
Article 5, Governance and organization

Financial entities shall have in place an internal governance and control framework that ensures an effective and prudent management of ICT risk, in accordance with Article 6(4), in order to achieve a high level of digital operational resilience.

The management body of the financial entity shall define, approve, oversee, and be responsible for the implementation of all arrangements related to the ICT risk management framework referred to in Article 6(1).


Sysdig provides around 300 controls to ensure availability, authenticity, integrity, and confidentiality of data under this article.

Here are some examples (a shell sketch of the idle-timeout check follows the list):

API Server:
– Defined tls-cert-file and tls-private-key-file

IAM:
– Appropriate Service Accounts Access Key Rotation

Storage:
– S3 – Blocked Public Access (Account-wise)

Networking:
– Disabled Endpoint Public Access in Existing Clusters

Linux Security:
– /etc/bashrc, or /etc/bash.bashrc contains appropriate `TMOUT` setting
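
As an illustration of what the last control above inspects (a sketch, not Sysdig’s actual implementation), a shell idle-timeout check boils down to something like:

# Verify an idle-session timeout is configured in the shell init files;
# hardening guides typically expect a read-only TMOUT value (e.g., 900 seconds or less).
grep -E '^[[:space:]]*(readonly[[:space:]]+)?TMOUT=' /etc/bashrc /etc/bash.bashrc 2>/dev/null
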
CHAPTER II, ICT risk management
Article 6, ICT risk management framework, Art 6.2

“The ICT risk management framework shall include at least strategies, policies, procedures, ICT protocols, and tools that are necessary to duly and adequately protect all information assets and ICT assets. This will include computer software, hardware, servers as well as to protect all relevant physical components and infrastructures, such as premises, data centers and sensitive designated areas. It will also ensure that all information assets and ICT assets are adequately protected from risks including damage and unauthorized access or usage.”

The ICT risk management framework must encompass comprehensive strategies, policies, procedures, and tools designed to safeguard all information and ICT assets. This includes software, hardware, servers, physical components, and more.

Sysdig supports these requirements through 190 controls and a multi-layered security approach that includes:

Identity security:
– IAM – No Multiple Access Keys

Workload protection:
– Workload mounting ServiceAccount Token

CHAPTER II, ICT risk management
Article 7, ICT systems, protocols, and tools

“In order to address and manage ICT risk, financial entities shall use and maintain updated ICT systems, protocols and tools that are:

(a) appropriate to the magnitude of operations supporting the conduct of their activities, in accordance with the proportionality principle as referred to in Article 4;
(b) reliable;
(c) equipped with sufficient capacity to accurately process the data necessary for the performance of activities and the timely provision of services, and to deal with peak orders, message or transaction volumes, as needed, including where new technology is introduced;
(d) technologically resilient in order to adequately deal with additional information processing needs as required under stressed market conditions or other adverse situations.”

This section of DORA is all about utilizing and keeping up-to-date ICT systems, protocols, and tools that are scalable, reliable, resilient, and high-performance.

Sysdig aids financial entities in meeting these requirements by providing:

Workload security:
– Container running as privileged

Kubernetes:
– Kubelet – Defined streaming-connection-idle-timeout
– Kubelet – Disabled hostname-override
– Kubelet – Disabled read-only-port
– Kubelet – Enabled make-iptables-util-chains
– Kubelet – Enabled protect-kernel-defaults

Audit Log:
– Audit Log Events – file system mounts
– Audit Log Events – kernel module loading and unloading
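
For context, the audit configuration these two controls look for resembles the following auditd rules (illustrative rules in the spirit of common Linux hardening benchmarks, not Sysdig’s exact checks):

# Record file system mounts initiated by regular users
-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=unset -k mounts

# Record kernel module loading and unloading
-w /sbin/insmod -p x -k modules
-w /sbin/rmmod -p x -k modules
-a always,exit -F arch=b64 -S init_module,delete_module -k modules
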
CHAPTER II, ICT risk management
Article 9, Protection and prevention, Art 9.3
“In order to achieve the objectives referred to in paragraph 2, financial entities shall use ICT solutions and processes that are appropriate in accordance with Article 4. Those ICT solutions and processes shall:

(a) ensure the security of the means of transfer of data;
(b) minimize the risk of corruption or loss of data, unauthorized access and technical flaws that may hinder business activity;
(c) prevent the lack of availability, the impairment of the authenticity and integrity, the breaches of confidentiality and the loss of data;
(d) ensure that data is protected from risks arising from data management, including poor administration, processing-related risks and human error.”

This Article emphasizes that financial entities must employ ICT solutions and processes that secure data transfers; minimize risks such as data corruption, unauthorized access, and technical flaws; and prevent breaches of data availability, authenticity, integrity, and confidentiality, as well as data loss. These measures must also protect data from management-related risks, including administrative errors, processing hazards, and human mistakes.

Sysdig achieves this by means of controls like:

API Server:
– Defined strong cryptographic ciphers

Compute:
– Disabled connection to serial ports

Firewall Configuration:
– IPv4 – firewall rules
– Networking – disallowed default network

These are just some examples of the technical requirements of DORA. Our comprehensive policy extends beyond these examples.

NIS2

NIS2 requirements are very similar to DORA but with a different scope. NIS2 covers all critical infrastructure companies. The scope of critical infrastructure is massive, including the expected healthcare providers, utilities, and telecom providers, but also digital service providers. Entities fall within essential or important categories with different control requirements, monitoring provisions, and attestation levels. 

Sysdig covers the 14 technical requirements of NIS2 with 2,905 controls in total.

Most of the technical requirements are under Article 21, “Cybersecurity risk-management measures,” of Chapter IV, “Cybersecurity Risk-Management measures and reporting obligations.” Here are some of the technical requirements.  

Sysdig NIS2 and DORA compliance

“Member States shall ensure that essential and important entities take appropriate and proportionate technical, operational and organizational measures to manage the risks posed to the security of network and information systems which those entities use for their operations or for the provision of their services, and to prevent or minimize the impact of incidents on recipients of their services and on other services.

Taking into account the state-of-the-art and, where applicable, relevant European and international standards, as well as the cost of implementation, the measures referred to in the first subparagraph shall ensure a level of security of network and information systems appropriate to the risks posed. When assessing the proportionality of those measures, due account shall be taken of the degree of the entity’s exposure to risks, the entity’s size and the likelihood of occurrence of incidents and their severity, including their societal and economic impact.”
NIS2 requires entities to adopt suitable measures across technical, operational, and organizational domains to manage security risks for their network and information systems, aiming to reduce the impact of incidents. These measures should align with the latest standards and be cost-effective, reflecting the entity’s risk exposure, size, and potential incident impacts.

Sysdig addresses this through over 200 controls. Here are some examples:
– Compute – Installed latest OS patches
– Container permitting root
– Logging – Enabled Cluster Logging AKS/EKS
– SQL Server – Enabled periodic recurring scans
– SSH Server Configuration Permissions – public host key files

Article 21.2(d)

“The measures referred to in paragraph 1 shall be based on an all-hazards approach that aims to protect network and information systems and the physical environment of those systems from incidents, and shall include at least the following: supply chain security, including security-related aspects concerning the relationships between each entity and its direct suppliers or service providers.”

The key focus is on securing the supply chain, which involves addressing security aspects in the relationships between entities and their direct suppliers or service providers.

Sysdig can facilitate compliance with this requirement through over 200 controls, and here are some examples:

Secure SDLC:
– Registry – Enabled Vulnerability Scanning
– Registry – Read-only access

Logging:
– Logging – Enabled cluster logging

Access control:
– Over-permissive access to resource types in group

Secret:
– Secrets Management

These are just some examples of the technical requirements of NIS2. Our comprehensive policy extends beyond these examples.

Conclusion

In conclusion, the NIS2 directive and the DORA regulation mark significant milestones in the European Union’s journey towards stronger cybersecurity and operational resilience, particularly within critical sectors and the financial industry. With NIS2’s national transposition deadline in October 2024 and DORA applying from January 2025, these comprehensive frameworks necessitate that affected entities — spanning a broad array of sectors — implement robust measures to protect their network and information systems against a wide range of cyber threats.

In this pivotal moment, Sysdig stands out as the first Cloud-Native Application Protection Platform (CNAPP) to offer out-of-the-box policies to assist in NIS2 and DORA compliance. This unparalleled readiness positions Sysdig not just as a tool, but as a strategic partner for businesses seeking to navigate the impending regulatory landscape confidently.

To learn more about compliance and regulations in cloud-native environments, watch our panel conversation: Delivering Secure, Compliant Financial Services in the Cloud.

Sysdig Integration with Backstage

Developers are frequently tasked with working with multiple tools in the cloud-native era. Each of these tools plays a crucial role in the application life cycle, from development to deployment and operations. However, the sheer variety and diversity of these tools can increase the likelihood of errors or the accidental inclusion of critical vulnerabilities and misconfigurations.

To tackle this problem, Backstage provides a comprehensive developer portal that offers an integrated perspective on all software resources, documentation, and tools. It’s a one-stop shop that helps developers manage, monitor, and document the entire software development lifecycle (SDLC). In 2023, the Cloud Native Computing Foundation (CNCF) declared Backstage the third most rapidly expanding project of the year.

Backstage on its own already stands out as a robust resource for developers and DevOps teams. However, its utility is greatly enhanced when integrated with Sysdig, which layers on additional real-time insights into active vulnerabilities, misconfigurations, and runtime behaviors.

By embedding Sysdig’s security insights directly within Backstage, developers gain immediate visibility into security concerns, significantly accelerating the time to detect and respond to issues. This integration aligns with the cloud-native ethos of agility and efficiency, bringing critical security information to the forefront of the development process.

As we delve deeper into the benefits and workings of the Sysdig-Backstage integration, we will explore: 

  • How the integration speeds up issue detection by consolidating all relevant information in one place.
  • How runtime insights can assist developers in prioritizing vulnerable packages that are currently in use. 
  • How developers can gain comprehensive visibility into the complete software development lifecycle (SDLC) using runtime insights.
  • How this integration streamlines the process of vulnerability management, facilitating collaboration between developers and security teams.
  • How the integration empowers developers to take proactive responsibility for application security, minimizing the need for security operations (SecOps) teams to intervene in identifying and communicating vulnerabilities for remediation.

Let’s get started!

Backstage and Sysdig: Pillars of Modern Development

Backstage has emerged as a developer portal, offering a one-stop shop for developers to access the tools, services, and information crucial to their daily tasks. It was created at Spotify and later donated to the CNCF.

Before Backstage, developers were forced to juggle many different tools, from code repositories and continuous integration and delivery (CI/CD) pipelines to monitoring and observability platforms, security scanning, and compliance tools.

Sysdig integration backstage

The sheer number and variety of these tools complicate the development landscape and lead to complex, frequently manual processes. This complexity heightens the risk of errors or oversights, significantly increasing the chances that vulnerabilities or misconfigurations might inadvertently make their way into production.

Backstage introduces a single pane of glass that aggregates information and controls from various tools. However, deep security insight at build, deploy, and runtime is necessary to unlock its full power.

Sysdig’s Cloud Native Application Protection Platform (CNAPP) is designed to reduce the time it takes to detect and investigate risks and respond to incidents. By integrating Sysdig with Backstage, developers gain access to Sysdig’s insights on vulnerabilities, misconfigurations, and runtime behaviors directly within their primary workspace. This makes it easier for developers to identify and address potential issues in their applications earlier in the DevOps cycle.

Sysdig integration backstage

The integration of Sysdig with Backstage significantly enhances this ecosystem by bringing visibility into application behavior, security vulnerabilities, and potential misconfigurations directly to the forefront of the developer workspace. Enriching Backstage with Sysdig’s runtime insights improves developer efficiency by allowing them to identify and remediate the highest-priority issues while avoiding the need for multiple tools, logins, and context switches, which reduces the chances of errors or vulnerabilities being missed.

Sysdig integration backstage

Integrating Sysdig with Backstage 

Sysdig released an official plugin for Backstage. The plugin interacts with the Backstage backend and frontend through APIs and leverages annotations in the ‘catalog-info.yaml’ files of components.

APIs: The Sysdig plugin extends Backstage’s backend via APIs to perform various operations, such as fetching vulnerability scan results from the Sysdig backend.

Annotations: Annotations are a key concept in the Backstage Catalog, used to attach metadata to entities defined in ‘catalog-info.yaml’ files, such as links to documentation, system dependencies, and integration points with tools like Jenkins for CI/CD, or Sysdig for security insights.

To install the Sysdig plugin, please follow the steps on this GitHub page.  

Example Workflow

A service is registered in the Backstage Catalog with a catalog-info.yaml file, which includes annotations linking to its source code repository and other integrations. 

Adding a service to Backstage

Here is an example of a ‘catalog-info.yaml’ for a service called “sock-shop-cart”. It is linked to its source code in a GitHub repository using the “github.com/project-slug” annotation.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: sock-shop-cart
  annotations:
    github.com/project-slug: JosephYostos/secure-inline-scan-examples
spec:
  type: service
  lifecycle: experimental
  system: sock-shop
  owner: guests

Once the service is added to the catalog, the sock-shop-cart can be managed from Backstage.

Sysdig integration backstage

Scanning Images with GitHub Actions and Sysdig 

Sysdig and Backstage also integrate with GitHub Actions. Every time a commit changes the code, a pipeline action is triggered. The image is then scanned for vulnerabilities and misconfigurations by Sysdig to ensure its security before being pushed to the registry, or rejected if it does not pass the predefined security policy. You can read more about the Sysdig and GitHub Actions integration here.
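
As a rough illustration (a sketch, not Sysdig’s official example; the action version and input names are assumptions, so check the sysdiglabs/scan-action repository for the current interface), such a workflow might look like:

name: build-and-scan
on: push

jobs:
  scan-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the candidate image from the repository's Dockerfile
      - name: Build image
        run: docker build -t docker.io/josephyostos/testactions:${{ github.sha }} .
      # Scan with Sysdig; the job fails if the image violates the configured policy
      - name: Sysdig scan
        uses: sysdiglabs/scan-action@v3   # pinned version is an assumption
        with:
          image-tag: docker.io/josephyostos/testactions:${{ github.sha }}
          sysdig-secure-token: ${{ secrets.SYSDIG_SECURE_TOKEN }}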

Sysdig integration backstage

Pipeline scan 

Now, after the changes have been committed and the pipeline workflow results verified, the scanning results need to be checked. To facilitate this, the following annotation is added to the service.

  sysdigcloud.com/image-freetext: docker.io/josephyostos/testactions

“sysdigcloud.com/image-freetext” is a free-text query that can be used to search for anything in the pipeline scan results. In the example above, the registry and image name are defined to obtain the image scan results for vulnerabilities and misconfigurations found at image build time.

Sysdig integration backstage

Runtime Insights

As developers become responsible for the full application lifecycle, it is important for them to be aware of vulnerabilities at runtime and to know which vulnerable packages are in use. For this purpose, the following annotations can be used to fetch the runtime scan results of the sock-shop-cart application.

    sysdigcloud.com/kubernetes-namespace-name: sock-shop
    sysdigcloud.com/kubernetes-workload-name: sock-shop-carts
    sysdigcloud.com/kubernetes-workload-type: deployment

In-use information helps developers prioritize fixes for the packages that are actually loaded in memory and pose the highest risk.

Sysdig integration backstage

Secure more

Sysdig provides curated annotations that give developers detailed views into the potential risks associated with their current build. For example, in addition to what we have mentioned, application owners can fetch registry scanning results, compliance reports, and more.

All the available annotations from Sysdig are listed in this source file.

Here is an example of what the ‘catalog-info.yaml’ looks like in the end.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: sock-shop-cart
  annotations:
    github.com/project-slug: JosephYostos/secure-inline-scan-examples
    sysdigcloud.com/image-freetext: docker.io/josephyostos/testactions
    # VM Runtime
    # sysdigcloud.com/kubernetes-cluster-name: sock-shop-cluster
    sysdigcloud.com/kubernetes-namespace-name: sock-shop
    sysdigcloud.com/kubernetes-workload-name: sock-shop-carts
    sysdigcloud.com/kubernetes-workload-type: deployment
    # VM Registry
    sysdigcloud.com/registry-vendor: harbor
    sysdigcloud.com/registry-name: registry-harbor-registry.registry.svc.cluster.local:5443
    # Posture
    sysdigcloud.com/resource-name: sock-shop-carts

spec:
  type: service
  lifecycle: experimental
  system: sock-shop
  owner: guests

Conclusion

The integration of Sysdig with Backstage marks a pivotal advancement in the cloud-native development landscape. With this integration, software developers now have a centralized hub to manage, track, and protect their applications. By making vital security information readily accessible, it empowers developers to proactively manage application security, reducing the dependency on Security Operations teams to identify and relay vulnerabilities for resolution.

Consequently, this integration not only enhances developer efficiency but also accelerates the identification and mitigation of potential issues, reinforcing a culture of security and agility in cloud-native application development.

SBOM as a Core Element in Sysdig’s CNAPP Strategy for Enhanced Security

In the fast-paced world of application development, the use of open source components offers a quick path to building sophisticated applications. However, this approach introduces critical questions about software composition, licensing, and security.

Before pushing any new application to production or even staging, the security and compliance teams alongside the application owner must address the following:

  • The specific components within their software.
  • The open source libraries in use.
  • Internal dependencies of the application.
  • Scanning for vulnerabilities, including in third-party libraries.

This is where the importance of a Software Bill of Materials (SBOM) becomes clear. An SBOM is not just a standardized list of the image components; it’s a necessity for ensuring that cloud-native applications are secure, compliant, and trustworthy.

Gartner has listed SBOM as one of the core components of a Cloud-Native Application Protection Platform (CNAPP) in its latest CNAPP market guide.

In this blog, we will discuss how Sysdig uses the SBOM as a core component of its vulnerability management workflow to understand image (container and host) contents, and how it makes this content available for extraction in compliance and regulatory audits.

The Evolution of VM Scanning with SBOM Extraction

The concept of SBOM is not new; the open source community recognized the need for creating SBOMs over a decade ago. The Software Package Data Exchange (SPDX) open standard, initiated in 2010, marked an early effort to tackle this issue.

Prior to the adoption of SBOMs, there was a notable gap in understanding code dependencies, which represented a significant challenge for application security teams. Integrating SBOMs into vulnerability management tools has revolutionized this process, allowing for comprehensive scans of both the application and its dependencies, including third-party libraries and frameworks.

Beyond enhancing security, SBOMs offer additional advantages such as significantly reducing the resources required for vulnerability management. 

For a deeper dive into the fundamentals of SBOMs, consider exploring further in the “SBOM 101” article.

Incorporating SBOM into Sysdig’s Vulnerability Management Process

All Sysdig vulnerability management scanning options, both agent-based (CLI Scanner, Cluster, Host, and Registry) and agentless, now include the capability to extract the SBOM from the scanned source or image repository. The SBOM is then sent to the backend, along with other context and profiling data, to complete the vulnerability detection process.

All images are extracted into an SBOM format that is compatible with the CycloneDX standard.


Integrating SBOM as a core element (diagram)

After the initial scan, when the SBOM is extracted and stored in the SBOM Database, each subsequent scan request prompts the scan engine to retrieve the current SBOM via API. It then scans all listed components within this SBOM before sending the scan results back.

This workflow offers multiple benefits:

  • Resource efficiency: 
    • Significantly reduces the resources used on the customer’s side since vulnerability matching, policy evaluations, and other processes are conducted on the Sysdig backend, minimizing the load on client systems.
    • Once the SBOM is stored, if another image is found with the same SBOM, extraction and downloading of the SBOM are not needed, which saves time and resources.
  • Simplified client logic: Much of the business logic, such as vulnerability matching, is removed from client components, reducing the need to update components on the customer side. This also means that client components always use the most up-to-date logic for vulnerability matching, policies, risk acceptance, and Risk Spotlight without needing an update.

Expanding Export Capabilities: Streamlining Compliance Through SBOM API Integration

Sysdig has recently introduced the capability to export SBOMs directly from the Sysdig SBOM Database via an API, utilizing the widely recognized CycloneDX format. This advancement is particularly crucial for meeting compliance requirements and facilitating regulatory audits.

CycloneDX is a standardized format that has been adopted by various repositories and platforms for the seamless exchange and integration of SBOM data. By enabling SBOMs to be exported in this format, Sysdig significantly eases the integration with other supply chain security tools, thereby enhancing collaboration and compliance across the board.

To extract an SBOM for a specific image, you can use the following simple API query:

curl --request GET \
  --url 'https://secure.sysdig.com/secure/vulnerability/v1beta1/sboms?assetId=sha256:c276a3cc04187ca2e291256688a44a9b5db207762d14f21091b470f9c53794e2&assetType=container-image' | jq

In this query:

  • ‘assetId’ refers to the unique identifier of the asset for which the SBOM is being retrieved.
  • ‘assetType’ specifies the type of asset, which can be either “container-image” or “host.”
  • ‘bomIdentifier’ is used to specify the ID of a single SBOM.

When querying an SBOM via the API, you have the option to provide either the ‘bomIdentifier’ alone or both ‘assetId’ and ‘assetType’ to retrieve the desired SBOM.
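
As a quick illustration of consuming the response, the component inventory in the returned CycloneDX document can be summarized with jq. This is a sketch: the Authorization header shown here is an assumption, so authenticate according to your Sysdig API configuration.

# List each component as name@version from the exported SBOM
curl -s --request GET \
  --header "Authorization: Bearer $SYSDIG_API_TOKEN" \
  --url 'https://secure.sysdig.com/secure/vulnerability/v1beta1/sboms?assetId=sha256:c276a3cc04187ca2e291256688a44a9b5db207762d14f21091b470f9c53794e2&assetType=container-image' \
  | jq -r '.components[] | "\(.name)@\(.version)"'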

SBOM Details and Layer Analysis

Within every extracted SBOM, several distinct layers and types of components are identified, each contributing to the overall structure and functionality of the software. Here’s a breakdown of the layers and components you may find in an SBOM:

Operating system layer:

The SBOM specifies an “operating-system” component, which in the following example is Debian version “12.2.” This forms the foundational layer of the software, indicating the base environment on which other components are built and interact.

{
  "bom-ref": "1c32decb-cdcf-43a8-b0f5-ad78f376ff9e",
  "type": "operating-system",
  "name": "debian",
  "version": "12.2"
},

Library components:

Each library represents a different software package included in the system. These libraries include essential tools and utilities that the application relies on. Each library is detailed with its specific version and package URL (purl), providing a clear picture of the software dependencies.

{
  "bom-ref": "209522c2-79e8-48ec-a95f-ca1ec69243cf",
  "type": "library",
  "name": "adduser",
  "version": "3.134",
  "purl": "pkg:deb/debian/adduser@3.134?distro=debian-12.2&upstream=adduser&upstream-version=3.134"
},

Layer information at package level:

The SBOM provides detailed layer information for each package. Properties such as “sysdig:layer:digest” and “sysdig:layer:index” are included for each component, indicating the unique identifier (digest) of the layer in which the component resides and its position (index) within the layer stack. This information is crucial for understanding how the application is constructed and for pinpointing where in the build process each component is introduced.

"properties": [

        {

          "name": "sysdig:layer:digest",

          "value": "sha256:cb4596cc145400fb1f2aa56d41516b39a366ecdee7bf3f9191116444aacd8c90"

        },

        {

          "name": "sysdig:layer:index",

          "value": "0"

        },

Installation paths:

Each component’s entry in the SBOM includes an “installPath,” showing where the package is located within the system. For example, the “installPath” for many listed components is “var/lib/dpkg/status,” which is a common location for package metadata in Debian-based systems. This detail helps in identifying where the software components are stored and how they are organized within the file system.

{
  "name": "sysdig:package:installPath",
  "value": "var/lib/dpkg/status"
}

Conclusion

In conclusion, Sysdig’s integration of SBOMs into its CNAPP strategy represents a significant leap in securing cloud-native applications. By adopting the CycloneDX standard for SBOMs, Sysdig not only enhances vulnerability management but also streamlines compliance processes. The ability to export SBOMs directly enhances collaboration and ensures a transparent, secure software supply chain. This strategic move underscores Sysdig’s commitment to advancing cloud-native security in an ever-evolving digital landscape.

The Power of Library-Based Vulnerability Detection

With an ever-growing number of vulnerabilities being discovered annually, vulnerability management tools are rapidly evolving to handle and prioritize these risks. However, it remains one of the most overwhelming and time-consuming areas in cybersecurity. There’s still significant room for enhancement, especially in reducing false alerts and prioritizing genuine threats.

The vulnerability scanning process can be divided into four stages:

  • Asset Retrieval: Accessing and scanning the content of an asset
  • Analysis: Extracting the SBOM (Software Bill Of Materials)
  • Vulnerability Matching: Aligning vulnerabilities with the SBOM
  • Policy Evaluation & Risk Acceptance: Deciding on the risk levels of the identified vulnerabilities

While each phase has room for improvement, this blog focuses on the third stage—Vulnerability Matching—and the innovations recently introduced by Sysdig.

Challenges in Vulnerability Detection 

  • Software vs. Affected-Library Detection: A significant challenge in vulnerability detection arises from the inaccuracy in identifying affected packages, especially for Non-OS packages. For instance, many CVE data sources, including the NVD (National Vulnerability Database), sometimes provide detection information at the software level (e.g., Log4j) rather than the package level (e.g., org.apache.logging.log4j.LogManager). This discrepancy can lead to false positives, as not all packages within an application might be vulnerable.

Here is an example: the NVD page for the log4j vulnerability (CVE-2021-44228) only lists affected software without specifying the vulnerable libraries. 

Library-based Vulnerability Detection

In contrast, other data sources, like the GitHub advisory database, precisely pinpoint that the only affected package is “org.apache.logging.log4j:log4j-core.”

Library-based Vulnerability Detection
  • Versioning and naming discrepancies: Many data sources specify a range of vulnerable applications or packages by saying, for example, that anything below v2.4.1 is vulnerable. However, this becomes complicated because each manufacturer follows a different naming and versioning schema. For instance, one manufacturer might use a four-digit (quad-level) version number, while another adopts the three-part, period-separated scheme known as “Semantic Versioning” or “SemVer.” This discrepancy in versioning and naming requires a lot of curation and sometimes leads to false matching scenarios (see the snippet below).
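
A tiny shell illustration of the pitfall: plain lexicographic ordering disagrees with version-aware ordering, which is exactly the kind of mismatch that produces false matches when evaluating a range like “anything below v2.4.1”:

# Lexicographic sort: "2.10.0" sorts before "2.4.1", wrongly implying 2.10.0 < 2.4.1
printf '2.10.0\n2.4.1\n' | sort

# Version-aware sort (GNU coreutils): 2.4.1 correctly precedes 2.10.0
printf '2.10.0\n2.4.1\n' | sort -V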

Enhancing Non-OS Vulnerability Detection

Sysdig has taken several steps to improve the fidelity of package matching:

Incorporating GitHub + GitLab 

Sysdig unified affected-library-based detection for non-OS packages by incorporating security feeds from the GitLab Open Source and GitHub Security Advisory databases. These two advisory databases typically include detailed information about each vulnerable library and are regularly updated. The information is curated, often with input from the broader security community, ensuring a level of trustworthiness and transparency.

That being said, we will keep using the VulnDB dataset to complement vulnerability metadata: for example, the dates when a vulnerability was discovered and disclosed, exploit data, scores, and summaries/descriptions.

Curating results from multiple sources

Sysdig integrates results from over a dozen detection sources. Beyond GitHub and GitLab advisory databases, Sysdig recently started incorporating security feeds from Ruby, Python, and PHP. 

Cross-referencing vulnerabilities reported from multiple data sources helps verify their authenticity and severity. In addition, some feeds may provide richer contextual information about vulnerabilities, including potential mitigations, exploitability, or real-world impact. Having multiple feeds can ensure you obtain this detailed context where available.

Proactive vulnerability detection & identification

Sysdig has implemented an automated testing harness for its detections to monitor Recall, Precision, and F1 scores against previous datasets and industry open source benchmarks. This ensures proactive identification of detection variances.

The Outcome 

Sysdig’s approach of focusing on impacted libraries, instead of the broader software category, has shown tangible results. By prioritizing data from trusted sources like GitHub and GitLab, and integrating other diverse data sources, there has been a notable improvement in detection accuracy and a significant reduction in false positives. For instance:

  • Log4shell: Now has three affected libraries, down from 101 previously
  • SpringShell: Now has seven affected libraries, down from 21 previously
  • CVE-2017-16026: Now has one affected library, down from 13 previously
  • CVE-2015-9251: Now has two affected libraries, down from 11 previously

Conclusion

The realm of vulnerability management is complex and ever-evolving. As cyber threats become more sophisticated, it’s imperative for vulnerability detection tools to stay a step ahead. Sysdig’s recent advancements in refining vulnerability matching emphasize the importance of precision and comprehensive data sourcing. By centering their approach on affected libraries and diversifying their data sources, Sysdig not only improves the detection accuracy, but also instills greater confidence in the vulnerability management process. As the cybersecurity landscape continues to evolve, such innovations underline the importance of continuous adaptation and the relentless pursuit of perfection. 

Agentless Vulnerability Management: A Complete Guide to Strengthening Your Security

“Security doesn’t slow us down, and we’re here to prove that.”

This is how Maya, a DevSecOps team lead at a prominent software development company, began her presentation to security leaders, showing how the agentless vulnerability management approach helped her:

  • Streamline the onboarding of new deployments.
  • Significantly cut down complexity and setup time.

In this blog post, we’ll introduce you to Sysdig’s new agentless scanning for vulnerability management. We’ll explore how it works and how Sysdig has overcome the limitations of traditional agentless methods by combining the strengths of both agent-based and agentless approaches.

Team agentless: Balancing agility and security

Maya’s team, deeply rooted in DevOps practices, hesitated at the thought of adding more operational complexity. Their mantra was efficiency and agility. The agentless approach emerged as an appealing alternative to agent-based, promising security without encumbering their fast-paced development processes.

They have many applications managed by third parties where they can’t deploy agents, as well as many traditional applications they are hesitant to touch or install agents on, so this approach appealed to the security leaders.

However, using agentless scanning alone comes with certain limitations in system visibility. For example, it does not provide insight into whether vulnerable packages are already in use and loaded into system memory. Agentless scanning also lacks real-time visibility, potentially missing intermediate states of the system between scans.

Maya deploys agentless scanning in minutes and starts protecting her infrastructure right away. Over time, her team implements agents where possible to gain deeper insights.

Introducing Vulnerability Management Agentless Scanning in Sysdig Secure

Sysdig has developed a comprehensive scanning solution by integrating both agentless and agent-based deployments:

  • Sysdig Agentless scanning leverages existing cloud providers’ APIs to discover and scan resources very fast.
  • The Sysdig agent is a lightweight package that operates at the kernel level, utilizing eBPF (extended Berkeley Packet Filter) technology to provide minimal overhead and real-time visibility.
  • And they work together. For example, the Sysdig runtime agent creates a profile for each workload, which the agentless scanner can utilize to identify in-use packages and prioritize remediation efforts based on the presence of vulnerable packages already in use.

Let’s jump into the world of Agentless scanning and find out how it operates.

How Agentless Scanning Works

Agentless security tools generally rely on leveraging existing interfaces and APIs provided by the cloud service providers to collect information and perform vulnerability assessments.

The agentless scanning process begins with identifying cloud assets within the account. Next, a snapshot of the root volume associated with each instance is generated. This snapshot is mounted to a dedicated scanner instance operating within an isolated environment, where the scanner executes the VM analyzers on the designated mount point. The outcome is the generation and preservation of the Software Bill of Materials (SBOM). Once this is done, the volume is dismounted and the snapshot is removed. Lastly, the scan results are sent to the Sysdig UI and also become available through the API.

Sysdig’s Agentless scanning solution is designed with the principle of least privilege in mind. This means that we only request the specific permissions that are necessary for the scanning process. By doing so, we greatly reduce the attack surface and potential risks associated with excessive permissions.

You can use tags to exclude particular virtual private clouds (VPCs) or hosts in your account from being scanned.

Let’s dive into the onboarding and scanning flows.

Onboarding cloud account

The onboarding phase primarily entails creating permissions in the customer’s account to enable the discovery, scanning, and assessment of various workloads.

Part of the onboarding is to create the following resources:

  • Global resources
    • aws_iam_role
    • aws_iam_policy
  • Regional resources
    • aws_kms_key
    • aws_kms_alias

These roles/policies provide the minimal set of permissions needed to scan the running hosts.

An example of these permissions would include the ability to discover EC2 instances and create/delete snapshots to facilitate scanning and assessment. Refer to our documentation for the list of required permissions.

Permissions:
    ec2:Describe*
    ec2:CreateSnapshot
    ec2:CopySnapshot
    ec2:DeleteSnapshot (with the additional constraint of restricting deletion to only volumes created by Sysdig)
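
As a rough sketch (not Sysdig’s actual policy document), that DeleteSnapshot constraint could be expressed as an IAM condition that matches only snapshots tagged by the scanner; the tag key and value used here are hypothetical:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AgentlessSnapshotting",
      "Effect": "Allow",
      "Action": ["ec2:Describe*", "ec2:CreateSnapshot", "ec2:CopySnapshot"],
      "Resource": "*"
    },
    {
      "Sid": "DeleteOnlyScannerSnapshots",
      "Effect": "Allow",
      "Action": "ec2:DeleteSnapshot",
      "Resource": "*",
      "Condition": { "StringEquals": { "aws:ResourceTag/CreatedBy": "sysdig" } }
    }
  ]
}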

The great news is that you won’t have to configure any of these permissions manually. In Sysdig UI, choose your cloud provider, provide the account details, and all of the required permissions will be set up in minutes.

The swiftness of the onboarding process is a significant advantage of the agentless approach. It typically completes within 10 to 15 minutes, ensuring the security of all your workloads in no time.

Scanning mechanism and flow

The Agentless scanning process encompasses multiple stages. Each stage requires a certain level of permission. Sysdig emphasizes the minimum permission required at each stage and communicates this clearly to all stakeholders. Now, let’s delve into each phase of the process.

1) Discovery: Agentless scanning starts by discovering the cloud assets within the cloud environment. It accesses the cloud service provider’s API to obtain information about the assets. For each onboarded account, Sysdig detects the running instances. Here are the required permissions for this stage:

ec2:DescribeVpcs
ec2:DescribeInstances
ec2:DescribeVolumes

2) Snapshot creation: For each volume of running instances, we have to create/take snapshots. Here are the required permissions for this stage:

ec2:CreateSnapshot
ec2:DescribeSnapshots

3) Attach snapshot: Create a volume from the shared snapshot and then attach it to the scanner instance:

ec2:CreateVolume
ec2:AttachVolume

4) Extract SBOM: Execute the VM analyzers over the mount point, produce the SBOM, and save it to an S3 bucket.

5) Remove volume: Unmount the volume, then delete the volume.

ec2:DetachVolume
ec2:DeleteVolume

6) Report results: Scan results are pushed to the Sysdig UI and also become available through the API.

Once the scan is completed, you will be able to see all the hosts scanned via agent or agentless in one place.

Vulnerability management overview page

All in one place, you can easily select vulnerabilities with fixes available and exploits discovered.

If you also install the Sysdig agent, the agent-based and agentless approaches work together to provide more runtime insights. For example, you can prioritize based on vulnerable packages in use. By doing this, Sysdig reduces vulnerability noise by up to 95%.

Conclusion

In the end, Maya’s adept implementation of the agentless approach fortified her infrastructure without hindering rapid deployments, and her strategic use of lightweight agents, where required, unlocked runtime insights.

Sysdig synergizes the benefits of both agent-based and Agentless scanning methodologies. Through agentless instrumentation, you can swiftly secure your environment within minutes and reduce management and maintenance efforts. Simultaneously, the lightweight agent offers deep runtime insights with minimal overhead and real-time visibility. This combination empowers you to strengthen your security posture and effectively stay ahead of vulnerabilities.

If you want to learn more about the new Agentless scanner:

CVSS Version 4.0: What’s New

Over the last decade, many vulnerabilities were initially perceived as critical or high severity but later deemed less important due to various factors. One famous example was the “Bash Shellshock” vulnerability discovered in 2014. Initially, it was considered critical due to its widespread impact and the potential for remote code execution. However, as the vulnerability was further analyzed, it was found that its severity could vary based on environmental factors.

Vulnerability management often relies on CVSS (Common Vulnerability Scoring System) as a key component in assessing and prioritizing vulnerabilities. CVSS is a standardized framework used to assess and quantify the severity and impact of security vulnerabilities in computer systems and software.

The CVSS framework is maintained by the Forum of Incident Response and Security Teams (FIRST), an international nonprofit organization that focuses on incident response and coordination. FIRST recently announced the CVSS 4.0 Public Preview with a target official publication date of Oct. 1, 2023.

In this version, FIRST reinforces the concept that CVSS is not just the Base score and considers additional factors. Let’s discuss the key changes in CVSS 4.0.

Here is a list of highlighted changes:

  • Introducing a new level of granularity with Added Base Metrics and Values
  • Clearer Insight into Vulnerability Impact: Assessing Effects on Vulnerable and Subsequent Systems
  • Simplifying the Threat metrics to focus only on Exploit Maturity
  • Introducing a New Supplemental Metric Group for Enhanced Extrinsic Attributes

CVSS 4.0

CVSS 4.0: New Base Metrics and Values

What are the Base Metrics?

The Base metric group represents the intrinsic qualities of a vulnerability and provides a fundamental assessment of its severity. The Base metrics help determine the initial severity score for a vulnerability and it’s usually provided by the vendor.

In CVSS v3.1, the Base metric group consisted of four main metrics: Attack Vector (AV), Attack Complexity (AC), Privileges Required (PR), and User Interaction (UI).

The CVSS 4.0 framework introduces a new metric, Attack Requirements (AT), to increase the granularity and accuracy of the scoring system.

Attack Complexity (AC) vs. Attack Requirements (AT)

AC and AT can be confusing, so let’s differentiate between them.

AC: The Attack Complexity metric assesses the level of complexity required to exploit the vulnerability. It measures the steps an attacker must take to bypass or overcome existing security measures, such as Address Space Layout Randomization (ASLR) or Data Execution Prevention (DEP).

AT: Attack requirements encompass the necessary deployment and execution conditions of the vulnerable system that enable the successful execution of an attack. An example of attack requirements could be a specific race condition.

CVSS 4.0

How will the Attack Requirements (AT) metric enhance the scoring system? Let’s see an example!

The Dirty COW vulnerability, officially designated as CVE-2016-5195, is a serious privilege escalation vulnerability that affects the Linux kernel. “Dirty COW” stands for “Dirty Copy-On-Write” and refers to a race condition flaw in the way the kernel’s memory subsystem handles certain copy-on-write (COW) operations.

Here is how this vulnerability is scored using CVSS v3.1:

CVSS 4.0

The attack complexity for CVE-2016-5195 was low. This is because exploiting the vulnerability did not require complex actions or advanced skills from an attacker. In fact, it was a relatively straightforward vulnerability that could be exploited with a simple privilege escalation technique.

The missing part here is the attack requirements, since the Dirty COW vulnerability is not easy to exploit successfully. A race condition must actually occur, which introduces an additional layer of complexity for attackers and makes exploitation more challenging and less predictable. In some cases, the success of the attack may not solely depend on the attacker, as it could require brute force or a multitude of attempts to achieve the desired outcome. In the end, attackers may not even bother trying.

Taking the attack requirements into consideration enhances the scoring and yields a more accurate score that expresses the real risk.

CVSS 4.0

CVSS 4.0: Assessing Effects on Vulnerable and Subsequent Systems

In v3.1, the impact assessment was measured using the Scope (S) metric. The Scope metric is often regarded as one of the most complicated and least understood metrics, and it has indeed led to inconsistencies in scoring results among different vendors.

In the new version, the Scope metric is retired and replaced with impact metrics that account for the impact on both the vulnerable system and subsequent systems:

  • Vulnerable System Confidentiality (VC), Integrity (VI), Availability (VA)
  • Subsequent System(s) Confidentiality (SC), Integrity (SI), Availability (SA)

CVSS 4.0

CVSS 4.0: Supplemental Metric Group

What is the Supplemental Metric Group?

CVSS v4.0 introduces a new “Supplemental Metric Group.” In this group, vendors provide additional context that can be used to prioritize remediation efforts or tailor the score to your environment. These supplemental metrics are optional and do not impact the final calculated score.

The usage and interpretation of this contextual information may vary in different computing environments based on the consumer’s discretion.

Here are the metrics and their definitions as described by FIRST:

Safety: Measures the impact regarding the safety of a human actor or participant that can be predictably injured as a result of the vulnerability being exploited.

Automatable: Answers the question, “Can an attacker automate exploitation of this vulnerability across multiple targets?”

Recovery: Describes the resilience of a component/system to recover services, in terms of performance and availability, after an attack has been performed.

  • Automatic (A): The component/system recovers automatically after an attack.
  • User (U): The component/system requires manual intervention by the user to recover services after an attack.
  • Irrecoverable (I): The component/system is irrecoverable by the user after an attack.

Value Density: Describes the resources that the attacker will gain control over with a single exploitation event. It has two possible values, diffuse and concentrated.

  • Diffuse: The system that contains the vulnerable component has limited resources. That is, the resources that the attacker will gain control over with a single exploitation event are relatively small.
  • Concentrated: The system that contains the vulnerable component is rich in resources. Heuristically, such systems are often the direct responsibility of “system operators” rather than users.

Vulnerability Response Effort: How difficult it is for consumers to provide an initial response to the impact of vulnerabilities for deployed products and services in their infrastructure.

Provider Urgency: To facilitate a standardized method to incorporate additional provider-supplied assessment, an optional “pass-through” supplemental metric called Provider Urgency has been defined.

  • Red: Highest urgency
  • Amber: Moderate urgency
  • Green: Reduced urgency
  • Clear: Low or no urgency

CVSS 4.0

How to Benefit from the Supplemental Metric Group?

For example, many vulnerabilities today have impacts outside of the traditional C/I/A. In the healthcare sector, tangible harm can occur to humans due to a vulnerability exploit. In such a case, adding context related to safety could be a game changer.

Once CVSS 4.0 becomes available, make sure you incorporate the supplemental metrics assessment into your vulnerability management lifecycle.
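As a minimal sketch of what that could look like in practice, the following Python snippet (with entirely hypothetical findings data) reorders a remediation queue so that safety-impacting, automatable vulnerabilities come before higher-scoring but lower-context ones:

findings = [
    {"cve": "CVE-AAAA-0001", "base": 8.1, "safety": "Present", "automatable": "Yes"},
    {"cve": "CVE-AAAA-0002", "base": 9.1, "safety": "Negligible", "automatable": "No"},
    {"cve": "CVE-AAAA-0003", "base": 7.5, "safety": "Negligible", "automatable": "Yes"},
]

def priority(finding):
    # Safety impact first, then automatable exploitation, then base score.
    return (
        finding["safety"] == "Present",
        finding["automatable"] == "Yes",
        finding["base"],
    )

for finding in sorted(findings, key=priority, reverse=True):
    print(finding["cve"])  # CVE-AAAA-0001 tops the list despite its lower base score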

CVSS 4.0: Threat Metrics

In CVSS v3.1, we have the Temporal Score metrics, which provide information about the temporal aspects of a vulnerability, such as exploit availability, remediation level, and report confidence.

In other words, they measure the aspects that are expected to change over time and therefore need to be updated.

This metric group has limitations, the main one being subjectivity: the Temporal Score involves subjective judgments, such as the reliability of exploit code, the effectiveness of remediation measures, and the confidence in vulnerability reports. These judgments can vary among assessors and organizations, leading to inconsistent scoring.

In v4.0, the Temporal metric group has been renamed the Threat Metric Group, and it now includes only one metric: Exploit Maturity. Exploit Maturity measures the likelihood that a malicious actor will attempt an attack against the vulnerable system. Three values can be assigned to Exploit Maturity, based on the threat information gathered for vulnerability management consumers (see the vector example after this list):

  • Attacked (A): Attacks targeting this vulnerability have been reported.
  • Proof of concept (P): Proof of concept code is publicly available.
  • Unreported (U): Neither attacks nor POC availability reported.
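When a threat value is supplied, it is appended to the base vector as an E suffix; the v4.0 specification calls the combined result a CVSS-BT score. For example, a hypothetical vector for a vulnerability with public proof-of-concept code would end in:

  .../E:P

and would move to .../E:A once real-world attacks are reported.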

CVSS 4.0: Environmental (Modified Base Metrics)

The environmental metric group captures the specific characteristics of a vulnerability that are relevant and unique to an individual user’s environment. It is used to calculate the overall CVSS score by taking into account the specific characteristics and attributes of the target environment in which the vulnerability exists.

Let’s look at an example.

Your vulnerability scanner detected a critical vulnerability in the web application framework used by your in-store queue management system. The vulnerability is remotely exploitable (AV:N), but your application is network-isolated with no internet connection.

In this case, you can set the Modified Attack Vector (MAV) to Adjacent instead of Network to account for these environmental aspects.
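For a rough idea of the effect, here is a minimal sketch using the open source cvss Python package (pip install cvss); the package name and API are an assumption to verify against its documentation:

from cvss import CVSS3

# Base vector for the critical, remotely exploitable framework vulnerability,
# with MAV:A appended because the system is reachable only from an adjacent network.
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAV:A"
base, temporal, environmental = CVSS3(vector).scores()
print(base, environmental)  # base stays 9.8; the environmental score drops to roughly 8.8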

In the future, vulnerability management vendors are likely to incorporate environmental metrics into their systems. Vendors might offer increased flexibility and customization options, allowing organizations to define and configure their own set of environmental metrics based on their specific needs, risk tolerance, and industry regulations. This flexibility would enable organizations to align the scoring and prioritization of vulnerabilities with their unique operational context.

Automation and machine learning techniques can also be leveraged to analyze and correlate environmental data. Vendors may develop algorithms that automatically identify and factor in environmental metrics, such as asset criticality, network segmentation, or user privileges. This automation would streamline the vulnerability management process and enhance the accuracy of risk assessments.

Overall, incorporating environmental metrics into vulnerability management systems will provide organizations with a more tailored and accurate understanding of their risk exposure. By considering factors specific to their operational context, organizations can prioritize remediation efforts, allocate resources effectively, and make risk-based decisions that align with their business goals and compliance requirements.

CVSS 4.0: Takeaways

The new version strongly emphasizes integrating threat intelligence and environmental metrics into the scoring process, resulting in a more realistic and comprehensive risk assessment.

Think about how you will consume the environmental and supplemental metrics, and align with your organization's Enterprise Risk Management (ERM) process to define the environmental severities.

While specific details on how these metrics will be implemented remain unclear, it’s important to ask your vulnerability management vendor how they will incorporate the new version in their solution and how you can maximize the benefit from this CVSS release.

The post CVSS Version 4.0: What’s New appeared first on Sysdig.

How to Deal with Hundreds of Fixes? Choosing the Right Vulnerability Management Solution https://sysdig.com/blog/how-to-deal-with-hundreds-of-fixes/ Wed, 12 Jul 2023 14:00:00 +0000 https://sysdig.com/?p=75580
Relying solely on the Common Vulnerability Scoring System (CVSS) is insufficient when it comes to effective vulnerability management. While the CVSS score provides a quantitative measure of a vulnerability’s severity, it fails to capture the contextual nuances that can significantly impact the actual risk to an organization. In this article, we will discuss how best to choose a vulnerability management solution.

Factors such as the network architecture, asset value, exploit availability, and the organization’s specific environment are not adequately accounted for in the current CVSS score calculations.

FIRST (the Forum of Incident Response and Security Teams) recently announced CVSS v4.0 in preview. In this new version, there is an intention to incorporate additional metrics that capture the contextual aspects of vulnerability management. While specific details on how these metrics will be implemented in CVSS 4.0 remain unclear at this point, it is promising to see recognition of the need for a more comprehensive approach.

Assessing Vulnerabilities and Prioritizing Risk

To ensure comprehensive vulnerability management, it is crucial to establish a workflow that incorporates additional metrics beyond the CVSS score, such as exploitability analysis, asset criticality, business impact, Runtime Insights, fix availability, and workaround availability.

Let’s dig deeper with the following example:

In this example, we will use an HTTP web server image called security_playground; ideally, follow along with an image of your own.

Severity assessment

The vulnerability severity is determined by calculating the CVSS score. The resulting score ranges from 0 to 10, with 10 being the most severe. Severity is typically categorized as follows (a small code sketch follows the list):

  • CVSS score 0.0 to 3.9: Low severity
  • CVSS score 4.0 to 6.9: Medium severity
  • CVSS score 7.0 to 8.9: High severity
  • CVSS score 9.0 to 10.0: Critical severity
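As a minimal illustration, the banding translates directly into code (thresholds exactly as listed above):

def cvss_severity(score: float) -> str:
    # Map a CVSS score (0.0-10.0) to the qualitative bands listed above.
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # Critical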

Let’s scan our image. In this example, we used Sysdig CLI scanner – you can read more about Sysdig CLI scanner here.

~ % ./sysdig-cli-scanner docker.io/sysdiglabs/security-playground:latest --apiurl https://eu1.app.sysdig.com/

The results show 3,969 vulnerabilities in this image, 191 of which are critical.

Indeed, facing 3,969 vulnerabilities in a single image can be overwhelming and challenging to handle. Incorporating additional metrics that add context to the results can greatly assist in prioritizing and managing these vulnerabilities effectively.

Exploitability analysis

Exploitability analysis assesses the likelihood of each vulnerability being exploited. Many vulnerabilities are theoretical and cannot easily be exploited in real life. Exploitability information is reported by security analysts. The exploitability of a vulnerability is confirmed when:

  • Attacks targeting this vulnerability have been reported
  • Proof of concept (POC) code is publicly available

There are several reputable sources for feeds and databases that provide information about exploited vulnerabilities, such as National Vulnerability Database (NVD), Cybersecurity and Infrastructure Security Agency (CISA), and Exploit Database (Exploit-DB).

Some vulnerability scanners can provide this information as part of the scanning process. We filtered the scan results from the previous example, and the good news is that out of the initial 3,969 vulnerabilities, we identified 203 as exploitable, of which only 6 are critical.
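If your scanner does not surface exploitability data, you can cross-reference scan output against an exploited-vulnerability feed yourself. Here is a minimal sketch against CISA's Known Exploited Vulnerabilities (KEV) catalog; the feed URL and JSON field names are assumptions based on the public feed at the time of writing:

import json
import urllib.request

# CISA's Known Exploited Vulnerabilities (KEV) catalog.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

with urllib.request.urlopen(KEV_URL) as response:
    kev_ids = {entry["cveID"] for entry in json.load(response)["vulnerabilities"]}

# Hypothetical CVE list exported from your image scan results.
scan_cves = ["CVE-2021-3711", "CVE-2016-5195", "CVE-2023-0001"]
print([cve for cve in scan_cves if cve in kev_ids])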

Runtime insights analysis

Most of the vulnerabilities reported in container environments are actually noise. Only vulnerabilities that are tied to packages used at runtime offer a real chance of exploitation. Runtime insights provide deep visibility into system calls to identify what packages are loaded at runtime.

A powerful runtime insights mechanism should be able to track every binary used at runtime, link it back to its package, and then filter vulnerabilities down to the packages actually loaded in memory that could be exploited.

Detecting packages in use at runtime with Falco

In our example, one of the critical CVEs is CVE-2021-3711 reported in the openssl package. As you see, the affected packages are “libssl-dev, libssl1.1, openssl.”

With Falco, you can simply create a rule to monitor opened files inside each container.

customRules:
  my_rules: |-
    # Custom Falco rule (loaded alongside the default ruleset, for example via
    # the Falco Helm chart's customRules value): log file opens inside containers.
    - rule: Monitor Opened Files in Containers
      desc: Detect when files are opened inside containers
      condition: evt.type in (open,openat,openat2) and container and container.image != "host" and k8s.ns.name = "default"
      output: >
        Opened file: %fd.name
        Process: %proc.name
        Process ID: %proc.pid
        Container ID: %container.id
        Container Name: %container.name
      priority: NOTICE
      tags:
        - file_open

In this rule:

  • The condition field specifies the conditions for triggering the rule:
    • evt.type in (open,openat,openat2) captures the file open events
    • container and container.image != "host" and k8s.ns.name = "default" restricts the rule to containers that are not running with the “host” image and are within the Kubernetes namespace “default.”
  • The output field defines the output message when the rule is triggered. It includes the name of the opened file (%fd.name), the process name (%proc.name), the process ID (%proc.pid), and the container ID and name (%container.id, %container.name).
  • The priority field sets the priority level of the alert.
  • The tags field includes relevant tags to categorize the alert.

By enabling this rule for the workloads in the default namespace, Falco will log every syscall that opens a file. By checking the Falco log and filtering for libssl, we can tell that libssl is opened and loaded into memory, and can therefore be exploited.

That was an example of leveraging runtime insights to prioritize vulnerabilities. However, in a production environment, it is essential to automate this process. By automatically detecting binaries, linking them to their respective packages, and comparing them to the vulnerability scan results, you can efficiently filter and generate alerts based on the identified vulnerabilities.
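Conceptually, that automation boils down to intersecting what is installed and vulnerable with what is actually loaded. A minimal sketch with hypothetical data:

# Packages observed loaded at runtime (e.g., derived from Falco file-open events).
runtime_packages = {"libssl1.1", "bash", "coreutils"}

# Hypothetical findings from the image scan.
scan_findings = [
    {"cve": "CVE-2021-3711", "package": "libssl1.1", "severity": "Critical"},
    {"cve": "CVE-2023-0001", "package": "imagemagick", "severity": "Critical"},
]

for finding in scan_findings:
    if finding["package"] in runtime_packages:
        print(f"{finding['cve']} ({finding['package']}) is in use -> prioritize")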

Fix or workaround availability

When vulnerabilities are discovered, it is crucial to determine if there are known solutions or actions that can be taken to mitigate or resolve the security issue. Fix availability helps determine the feasibility and urgency of applying a remediation measure to address the vulnerability.

The availability of a fix can vary depending on the nature of the vulnerability and the software or system affected. It may be provided by the software vendor as a software update, patch, or specific configuration change. In some cases, interim workarounds or mitigation steps may be suggested until a permanent fix is available.

Example of fix availability:

A suggested fix for our openssl vulnerability is to upgrade the openssl version to 1.1.1d-0+deb10u2.

Example of workaround availability:

Log4j exploitation happens when the Log4j2 library receives variable data from an LDAP or JNDI lookup and executes it without verification. In this scenario, creating a security rule to block any unauthorized LDAP traffic from the Log4j servers would be a good workaround until the development team patches these deployments.

Asset criticality and business impact

Asset criticality and business impact are essential components of vulnerability management assessments. Based on the importance of the asset and the business impact to the organization, you can prioritize remediation efforts by focusing on the most valuable and critical assets. Usually, these metrics are measured in terms of confidentiality, integrity, and availability.
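One simple way to operationalize this is to weight each finding's score by the criticality of the asset it lives on. Here is a sketch with hypothetical asset weights rather than a prescribed formula:

# Hypothetical criticality weights per asset, derived from business impact.
ASSET_WEIGHT = {"payments-prod": 1.0, "internal-wiki": 0.4, "dev-sandbox": 0.1}

def contextual_risk(cvss_score: float, asset: str) -> float:
    # Unknown assets fall back to a middle-of-the-road weight.
    return round(cvss_score * ASSET_WEIGHT.get(asset, 0.5), 1)

print(contextual_risk(9.8, "payments-prod"))  # 9.8
print(contextual_risk(9.8, "dev-sandbox"))    # 1.0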

Outcome

By following this workflow, we were able to effectively narrow down the initially discovered 3,969 vulnerabilities to just 2 that require immediate attention.

The described steps show how vulnerability management assessment and prioritization work under the hood. However, in a production environment, implementing a robust vulnerability management solution that automates and facilitates these steps is crucial.

Finally, consolidating your vulnerability management system into a CNAPP solution and ensuring seamless integration within your ecosystem and software development lifecycle (SDLC) brings numerous advantages. By centralizing security functionalities, such as vulnerability management, within a single CNAPP solution, you can streamline operations, reduce complexity, and improve overall efficiency. Integration within your ecosystem and SDLC enables automated vulnerability scanning, assessment, and remediation processes at various stages, including development, testing, and deployment.

Conclusion

  • When you choose a vulnerability management solution, make sure you understand how it will handle each of these steps.
  • Leveraging runtime insights to prioritize remediation efforts is a core component of the vulnerability management lifecycle and requires a robust runtime security engine.
  • Try to consolidate your security tools by looking at a comprehensive CNAPP solution that integrates with and fits into your ecosystem.

The post How to Deal with Hundreds of Fixes? Choosing the Right Vulnerability Management Solution appeared first on Sysdig.

Track Risk Trends in your Container Images with Sysdig Risk-based Vulnerability Management https://sysdig.com/blog/sysdig-risk-based-vulnerability-management/ Wed, 24 May 2023 15:54:37 +0000 https://sysdig.com/?p=72859
The number of detected common vulnerabilities and exposures (CVEs) has significantly increased in the past decade. In the last five years, security researchers reported over 100,000 new CVEs. The highest annual figure was reported in 2022, with over 25,000 new CVEs. This volume can overwhelm any security team if it is not managed correctly across assessment, reporting, remediation, and monitoring.

The best approach to handling vulnerabilities in the cloud-native space is to adopt risk-based vulnerability management, a concept Sysdig introduced last year that leverages Runtime Insights to focus on in-use packages.

This approach will help security teams prioritize mitigation based on in-use exposure or risk, avoiding fatigue or stress caused by the accumulation of priority alerts.

Security and compliance teams can maximize vulnerability management operational efficiency if they can rapidly identify, track, and report risk trends.  

Sysdig Risk-based Vulnerability Overview

Sysdig Secure now offers a trend analysis dashboard, providing insight into how vulnerability scanning metrics vary over time. This level of information is strategic in the following ways:

  • Providing high-level executive reports showing the vulnerability risk trends in the environment.
  • Facilitating risk data-driven decisions to maintain an acceptable level of risk.
  • Increasing success in compliance programs by reducing vulnerability management policy failures.

Risk trends are the changes over time as new vulnerabilities are discovered and old vulnerabilities are remediated or excluded. Tracking risk trends helps you assess threats to your environment and make decisions to lower the risk level in your environment.

The new vulnerability management dashboard displays changes in the number of vulnerabilities over time, making it easy to identify problematic periods and dig deeper to address their potential causes.

For example, if you want to know if there has been any change in critical vulnerabilities in the last 30 days:

  • Review the metrics graph to see trends
  • Filter by severity: select only critical  

If you see a significant change on a specific day, you can click on that day and see whether a new vulnerability appeared in your environment or a new workload was added that introduced new vulnerabilities.

When you click on a particular day, the following dashboard widgets will be updated:

  • Top Pervasive Vulns
  • Top Recent Vulns
  • Top Critical Namespaces

Filtering and scoping of data

In many cases, you may have a namespace or an application and want to see if it is trending in the right direction. 

In this scenario, you can customize the dashboard to focus on a particular namespace or cluster.


With the risk-based vulnerability management concept in mind, you can lower the number of actionable vulnerabilities in your dashboard by adding the following filters:

  • Has Fix: Identifies if a fix is available to address the vulnerability.
  • Has Exploit: Indicates if there is a known path for exploiting the vulnerability.
  • In-Use: Vulnerable packages that are actually in use.

Reporting results 

Reporting this data and showing how the security team's efforts are driving vulnerability risk down over time is very important. Sysdig provides that through the following:

  • Export widget data to CSV or PDF.
  • Any data viewable in the dashboard is available via a public API.
  • Schedule reports based on CVEs, with affected asset information.

Stay Compliant

Compliance is not a one-time job. You set baseline policies to define expectations and the accepted level of risk, but you must keep maintaining and reassessing those policies.

Imagine you have some policies you created for either SLAs or compliance programs like PCI or GDPR. The vulnerability management dashboard will help you to understand what policies are failing the most, where they’re failing, and which controls are failing inside those policies.

In the following example, I’ve activated a predefined policy called “Sysdig Best Practices.” This policy alerts on any image with a critical vulnerability that has had a fix available for more than 30 days. Note how the rule is predefined.


From the dashboard in the following screenshot, you can quickly tell that 15 assets are failing to comply with this policy. In other words, 15 assets have vulnerabilities with fixes available for more than 30 days. 


You can export this list of assets to CSV and attach it to a Jira ticket so it gets addressed quickly.


Conclusion

The number of discovered CVEs is increasing massively year after year. However, adopting a risk-based vulnerability management approach and tracking risk trends in the environment helps security teams keep risk at accepted levels.

The Sysdig vulnerability management dashboard is your day-to-day tool to identify the riskiest areas in your cluster, detect policy failures as early as possible, and generate reports showing how vulnerability risk is trending.

To learn more about risk trends, visit our documentation page. 

The post Track Risk Trends in your Container Images with Sysdig Risk-based Vulnerability Management appeared first on Sysdig.
