End to End Vulnerability Scanning with Sysdig Secure

Scanning for vulnerabilities is a best practice and a must-have step in your application lifecycle to prevent security attacks. Implementing a strong Cloud Native Application Protection Platform (CNAPP) certainly includes this step. Where this step is performed also matters, but why? Let’s dig into the details of vulnerability scanning with Sysdig.

An application’s lifecycle involves a number of steps, from the developer workstation where fine art is created in the shape of lines of code, to the final production environment where customers use a web application, mobile application, or anything else. Vulnerabilities can be introduced at any of those steps, so it is highly recommended to put barriers in place to prevent them from ruining your environment.

The “defense in depth” concept recommends performing automatic vulnerability scanning on different steps of the application lifecycle (sometimes even overlapping them). This will reduce the number of vulnerabilities introduced in your production environment. Sysdig Secure can help.

Sysdig’s vulnerability management is distributed and provides flexible integration across the whole application lifecycle while offering centralized governance to define policies or create reports. It also provides a constant feedback loop of vulnerabilities with all the context needed to fix them in a developer-friendly way (which packages and versions need to be updated).

Development

Let’s start from the beginning, the developer workstation.

As a developer, you have your own tools which you are comfortable with: your IDE, your CLI tools, your headphones, and your preferred music. You are creating some amazing new applications using your preferred language. In this creative process, you are not starting from scratch (it doesn’t make sense); you rely on third-party frameworks or libraries that, in turn, rely on other third-party frameworks or libraries, which rely on other third-party frameworks, and on, and on.

Once you finish the piece of code you are working on, you probably want to package it as a container, as this is the standard way to deploy applications nowadays. But, before submitting your PR (pull request), you want to try a local deployment, just in case.

sysdig-cli-scanner is a binary that you can download and run on your workstation (x86_64 or arm64, Linux or macOS!) and it will scan your container image for known vulnerabilities in your dependencies. It’s as simple as:

OS=$(uname -s | tr "[A-Z]" "[a-z]")
VERSION=$(curl -L -s https://download.sysdig.com/scanning/sysdig-cli-scanner/latest_version.txt)
ARCH="arm64" # or "amd64", depending on your CPU architecture
mkdir -p ~/bin
curl -sL "https://download.sysdig.com/scanning/bin/sysdig-cli-scanner/${VERSION}/${OS}/${ARCH}/sysdig-cli-scanner" -o ~/bin/sysdig-cli-scanner
pushd ~/bin/
shasum -a 256 -c <(curl -sL "https://download.sysdig.com/scanning/bin/sysdig-cli-scanner/${VERSION}/${OS}/${ARCH}/sysdig-cli-scanner.sha256")
popd
chmod +x ~/bin/sysdig-cli-scanner
SECURE_API_TOKEN=<your-api-token> ~/bin/sysdig-cli-scanner --apiurl <sysdig-api-url> <image-name>

For example, performing a vulnerability scan with Sysdig of the mariadb:latest container image on M1 (arm64) Apple hardware running macOS 12 would look like:

SECURE_API_TOKEN="xxx" ~/bin/sysdig-cli-scanner mariadb:latest --apiurl https://eu1.app.sysdig.com
2023-01-27T11:48:33+01:00 Starting analysis with Sysdig scanner version 1.3.3
2023-01-27T11:48:33+01:00 Retrieving MainDB...
2023-01-27T11:48:33+01:00 Done, using cached DB
2023-01-27T11:48:33+01:00 Loading MainDB...
2023-01-27T11:48:33+01:00 Done
2023-01-27T11:48:33+01:00 Retrieving image...
2023-01-27T11:48:45+01:00 Done
2023-01-27T11:48:45+01:00 Scan started...
2023-01-27T11:48:45+01:00 Uploading result to backend...
2023-01-27T11:48:45+01:00 Done
2023-01-27T11:48:45+01:00 Total execution time 12.069945833s

Type: dockerImage
ImageID: sha256:a748acbaccae4dc8152ded948fa5a304df7b0888b4cea9116385e5e3bd812bfc
Digest: mariadb@sha256:8c15c3def7ae1bb408c96d322a3cc0346dba9921964d8f9897312fe17e127b90
BaseOS: ubuntu 22.04
PullString: mariadb:latest

42 vulnerabilities found
0 Critical (0 fixable)
1 High (1 fixable)
25 Medium (23 fixable)
5 Low (0 fixable)
11 Negligible (4 fixable)

             PACKAGE               TYPE           VERSION              SUGGESTED FIX       CRITICAL  HIGH  MEDIUM  LOW  NEGLIGIBLE  EXPLOIT
  github.com/opencontainers/runc  golang          v1.0.1                   v1.1.2             0       1      1      0       0          0
  libmysqlclient21                  os    8.0.31-0ubuntu0.22.04.1  8.0.32-0ubuntu0.22.04.1    0       0      18     0       0          0
  libgssapi-krb5-2                  os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libk5crypto3                      os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libkrb5-3                         os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libkrb5support0                   os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libpam-modules                    os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0
  libpam-modules-bin                os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0
  libpam-runtime                    os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0
  libpam0g                          os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0

                                                                POLICIES EVALUATION
    Policy: Sysdig Best Practices FAILED (1 failures - 0 risks accepted)

Policies evaluation FAILED at 2023-01-27T11:48:45+01:00
Full image results here: https://eu1.app.sysdig.com/secure/#/scanning/assets/results/173e24c1b551bf2190c7491c5dda6070/overview (id 173e24c1b551bf2190c7491c5dda6070)

Let’s highlight a couple of facts:

  • The total execution time was 12 seconds, but the scan itself took barely a second:
2023-01-27T11:48:45+01:00 Scan started...
2023-01-27T11:48:45+01:00 Uploading result to backend...
2023-01-27T11:48:45+01:00 Done
  • All the information on which packages are vulnerable, and the suggested fixes, is available right there:
             PACKAGE               TYPE           VERSION              SUGGESTED FIX       CRITICAL  HIGH  MEDIUM  LOW  NEGLIGIBLE  EXPLOIT
  github.com/opencontainers/runc  golang          v1.0.1                   v1.1.2             0       1      1      0       0          0
  libmysqlclient21                  os    8.0.31-0ubuntu0.22.04.1  8.0.32-0ubuntu0.22.04.1    0       0      18     0       0          0
  libgssapi-krb5-2                  os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libk5crypto3                      os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libkrb5-3                         os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libkrb5support0                   os           1.19.2-2            1.19.2-2ubuntu0.1        0       0      1      0       0          0
  libpam-modules                    os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0
  libpam-modules-bin                os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0
  libpam-runtime                    os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0
  libpam0g                          os        1.4.0-11ubuntu2        1.4.0-11ubuntu2.1        0       0      0      0       1          0

The scan results can also be viewed at the Sysdig URL, where you can see the same results in more detail (and with pretty colors!):

The Sysdig UI shows a detailed view of the vulnerabilities found:

The packages and versions affected:

The policies evaluation:

And some detail about the particular image:

linux/arm64 container images are supported.

CI/CD

Let’s assume you already fixed all those vulnerabilities by updating the library dependencies, and the PR has been submitted. The next step in the build chain is running a CI/CD pipeline to build the application, build the container image, run some tests… and check for vulnerabilities again. Wait, what? Why again?

  • Who can guarantee the vulnerability scan has been done religiously by all the developers locally on their workstations before submitting the pull request?
  • What if the developer performed the scan a couple of days ago and a new vulnerability has been found?

Additionally, as the vulnerability scan takes only a few seconds, it makes sense to run it again.

And not just that: the CI/CD scan can use different policies than the ones used in previous steps (read more about vulnerability policies). What about container image best practices, such as not running as root? Or PCI audit policies? What about the “My company baseline” policy?

You can perform vulnerability scanning with Sysdig at the development stage and enforce some more restrictions in your CI/CD as a gate for production workloads.

You can decouple the different policies and run them on different steps of the software supply chain! Using the sysdig-cli-scanner is just as easy as adding a --policy flag for the policies you want to check against.
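For example, a minimal sketch of such a CI gate (the policy name and image below are placeholders for illustration, not real Sysdig defaults):

SECURE_API_TOKEN=<your-api-token> ./sysdig-cli-scanner \
  --apiurl <sysdig-api-url> \
  --policy my-company-baseline \
  registry.example.com/myapp:1.0.0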

As explained before, the sysdig-cli-scanner can be executed in virtually any CI/CD system out there, it is just a single binary. If you want to learn more, we have some examples on how to integrate it with some popular tools such as Jenkins (hint: for Jenkins there is an official plugin already), GitHub actions, GitLab CI/CD, or Azure Pipelines.

When the news on Log4j came out, we received calls from our customers asking what the impact was. We were able to scan our containers quickly for vulnerabilities and we knew immediately if there were any issues. Using Sysdig Secure, we were able to find out in less than five minutes what the potential risk would be.
Sam Brown
Director, Information Security, Expel

Registry

The policies have been enforced already in the CI/CD pipeline, so, essentially, the last step of the CI/CD should be pushing the container image to the container registry. Then, why scan container images at the registry level again?

The idea is to follow the zero trust security model, which says “never trust, always verify.” What if the CI/CD was bypassed and someone pushed the container image directly? What about those images that were scanned weeks ago? Were new vulnerabilities discovered since that last scan?

But there is more. What about third-party container images? It is pretty common in enterprise environments to have air-gapped architectures where the container images needed for an application, and provided by third parties (such as SQL databases, event streaming platforms, in-memory databases, web servers, etc.), are mirrored into internal container registries. Those images bypass your CI/CD pipeline but you should still perform vulnerability scanning on them.

Sysdig supports container registry scanning and it scans all the container images periodically or based on events, such as pushing a new container image to the registry.

The scan results can be observed in the Sysdig web interface under the Vulnerabilities -> Registries section:

Registry Scanning is currently in a “Controlled Availability” phase. Please contact your Sysdig representative if you want to try it.

Runtime

All the guardrails are in place, the container image is stored safely in the registry, and it is time to run it in a production environment. The Sysdig agent also performs vulnerability scanning of the container images running in the Kubernetes cluster. The question is why?

First and foremost, having insight into the container images running in the production environment is very useful. If you find a vulnerability affecting a container image that is not running, fixing it is less urgent than for a container image running wildly in production.

But there is more.

Prioritization based on “in-use” exposure

The Sysdig agent performs kernel-level instrumentation (via a kernel module or an eBPF application) to observe every single Linux syscall. This means it is able to identify everything that happens under the hood, including the running processes, the files opened, or the network connections, so it can determine which processes are actively running in your container and which libraries are being used. Connecting that awesome feature with the vulnerability scanning capabilities means it can identify the vulnerable packages that are actually in use, so ideally those are fixed first. This is what we call “Risk Spotlight,” and based on customer feedback, we have discovered that up to 95% of vulnerabilities are considered noise.

I’m saving an hour and a half per vulnerability by not having to investigate when the package is not in use
Michal Pazucha
Security Architect, Beekeeper

The Sysdig runtime scanner is deployed by default with the new Vulnerability Management engine (using the nodeAnalyzer.secure.vulnerabilityManagement.newEngineOnly=true parameter), and “Risk Spotlight” can be enabled easily by following the official documentation and setting the nodeAnalyzer.runtimeScanner.settings.eveEnabled=true parameter when you deploy the Sysdig agent using the Helm charts.
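As a rough sketch, assuming the sysdig-deploy chart is available under the sysdig repository alias (the access key parameter name may vary between chart versions, so check the chart’s values), the deployment could look like:

helm repo add sysdig https://charts.sysdig.com
helm upgrade --install sysdig-agent sysdig/sysdig-deploy \
  --namespace sysdig-agent --create-namespace \
  --set global.sysdig.accessKey=<your-access-key> \
  --set nodeAnalyzer.secure.vulnerabilityManagement.newEngineOnly=true \
  --set nodeAnalyzer.runtimeScanner.settings.eveEnabled=true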

Host scanning

A compromised container image is bad. Depending on the vulnerability or the abilities of the threat actor, they can perform lateral movements to other containers or, in the worst case, to the host itself. Fortunately, all of the security isolation mechanisms on containers make this scenario rare (but not impossible). However, directly compromising a host is even worse. If the threat actor is able to compromise the host (depending on the level of compromise, of course), they have access to virtually all the containers running on top of it.

That’s why scanning the container host is also important (remember “zero trust”?). The Sysdig agent performs this procedure every 12 hours to avoid stale information. As demonstrated before, the vulnerability scanning procedure only takes a few seconds, so it is a no-brainer to do it.

Sysdig vulnerability management supports the most common Linux operating systems, even cloud or image-based/immutable operating systems (such as Google Container-Optimized OS (COS), RHCOS, or Flatcar). Basically, they use a packaging system under the hood (rpm-ostree for RHCOS or Gentoo’s ebuilds for Flatcar), which means there is an SBOM to check against. A number of package types are also supported, such as Java, Golang, or Python packages.

Host scanning is deployed by default when using the sysdig-deploy Helm chart version 1.5.0+ and HostScanner container version 0.3.0+, and the results are exposed in the same Sysdig web interface under the Vulnerabilities -> Runtime section by filtering the results by host asset type:

Reporting

Using Sysdig reporting capabilities, the security team can quickly find which containers or hosts are vulnerable and where they are running. Reports are generated by applying filters to focus on what matters most, and are useful to understand which vulnerabilities affect different teams or images so they can be fixed.

The following screenshot shows a report of running images that are vulnerable to the infamous Log4j CVE-2021-44228, sent to our inbox at 9 a.m. every day.

And this one shows Debian hosts in the “demo-kube-aws” cluster containing vulnerabilities with a CVSS score > 7:

Accepting risks

What if you are aware of a vulnerability but it is a low priority to fix (the vulnerable package is not being used, or the dependency is planned for removal soon)? You can “Accept risk” it!

Accepting a risk makes an exception to the vulnerability policy; it doesn’t make the CVE disappear. The CVE still shows in the list, but the policy violation associated with it is voided.

You can accept risk based on different contexts, such as:

  • Global: the CVE is accepted globally
  • Container image or host: the CVE is accepted for that particular container image or host
  • Package: the CVE is accepted for a particular package (or package + version)

Be careful with the accepted scope or context; overly broad exceptions can create false negatives.

Accepting risks is as simple as selecting the “Accept risk” option and filling in the form with the details:

Then, a shield icon is placed next to it to indicate the acceptance:

The Policies -> Risk Acceptance section shows all the risk acceptances, including the ones that have already expired.

Neat!

Conclusions

Sysdig Secure vulnerability management provides a single pane of glass across the entire application lifecycle: from the developer workstation, through the CI/CD pipeline and the registry, to the final production environment. Vulnerabilities can be introduced at any of those steps and at any time, so it is highly recommended to add as many layers of security as you can to prevent them from ruining your environment.

Start a Free Trial today!

Helm security and best practices

Helm is being used broadly to deploy Kubernetes applications as it is an easy way to publish and consume them via a couple of commands, as well as integrate them in your GitOps pipeline. But is Helm security taken seriously? Can you trust it blindly?

This post explains the benefits of using Helm, the pitfalls, and offers a few recommendations for Helm security. Let’s get started!

Why Helm?

Helm is a graduated open-source CNCF project originally created by DeisLabs. If you want to know more about how it works, we recommend you read the Helm 101 article of our Learning Cloud Native hub.

Let’s face it, managing an application’s lifecycle on Kubernetes is hard. Even if you just want to deploy an application, it requires at least a Deployment, usually tweaking the application configuration via modifications to a ConfigMap or a Secret, deploying CRDs if needed, and more. Clearly, it’s not really straightforward.

You can create your own deployment files with some environment variables and use envsubst to substitute them at runtime (envsubst < deploy.yml | kubectl apply -f -) to have a “DIY templating engine,” but that is probably not an optimal solution.
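A minimal sketch of that DIY approach (the deploy.yml file, the IMAGE variable, and the image tag are purely illustrative):

# deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: ${IMAGE} # substituted by envsubst at deploy time

# Substitute and apply:
IMAGE=nginx:1.25 envsubst < deploy.yml | kubectl apply -f -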

Kustomize improves the previous DIY solution but it has some limitations as well (it is mainly focused on templating, not packaging). Jsonnet can also be used for templating.

Helm security is not enforced by default; it is largely up to the user to make good use of it. Helm is not perfect, but it tries to make the process easier by providing a simple command interface, a repository with more than 9,000 charts available called Artifact Hub (plus the ability to host your own charts in your own repository), and a templating engine (with over 60 available functions, mostly based on the Go template language). That allows you to package complex applications to make them easily deployable by just providing specific parameters.

For example, you can deploy a whole MySQL cluster with replication enabled (a non-trivial task, let’s be honest) by just using the architecture=replication parameter.
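For instance, with the Bitnami MySQL chart (used here purely as an illustration):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/mysql --set architecture=replication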

It also has some advanced features, such as hooks (to run specific tasks at specific points on the deployment process such as ‘pre-install’), and can be integrated with GitOps tools, such as ArgoCD or Flux. You can leverage library charts or named templates, and even run post-render tasks (e.g., to run Kustomize).
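For instance, a ‘pre-install’ hook is just a regular manifest carrying the helm.sh/hook annotation; a minimal sketch (the Job name and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: prepare-database
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      containers:
        - name: prepare
          image: busybox
          command: ['sh', '-c', 'echo preparing the database...']
      restartPolicy: Never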

Helm security – How to secure Helm

We’ve covered a lot of ground, but we haven’t paid any attention to Helm security aspects yet, and most charts are not secure by default.

Securing Helm

There are a few angles to tackle depending on the process we want to cover. Are we just consuming the Helm charts, the Kubernetes objects created by the charts, or are we talking about custom Helm charts?

Custom Helm charts

If writing your own Helm charts, a few general recommendations apply, as well as some security-focused ones:

  • Store the charts in a Git repository. This may seem obvious in 2022, but Git will give you some benefits just by using it, such as easy rollbacks or the ability to track changes.
  • Store the Helm charts in a proper repository. Charts can be served via HTTP but everything is HTTPS these days, right?
  • Use helm lint or any other linter you prefer to verify the Helm charts are properly formed. You don’t want to break the production environment for a silly typo.

    For example, helm lint complains about this basic Chart.yaml file that is missing a version:

    
    apiVersion: v2
    name: hello-world
    description: A Helm chart for Kubernetes
    type: application
    appVersion: "0.0.1"
    $ helm lint --strict
    ==> Linting .
    [ERROR] Chart.yaml: version is required
    [INFO] Chart.yaml: icon is recommended
    [ERROR] templates/: validation: chart.metadata.version is required
    [ERROR] : unable to load chart
    		validation: chart.metadata.version is required
    Error: 1 chart(s) linted, 1 chart(s) failed
    
  • Use consistent versioning on your charts (Helm follows the SemVer2 standard). It is helpful for reproducibility and to be able to respond quickly in a situation where you need to update your charts because a vulnerability has been found. If your charts are unversioned or using “latest”, which one would you update?
    There are two different versions you can use: the version of the chart itself (version in the Chart.yaml file) and the version of the application (appVersion).
    $ helm show chart falcosecurity/falco | grep -E '^version|^appVersion'
    appVersion: 0.33.0
    version: 2.2.0
    

    Don’t forget to Keep a Changelog (like Falco does).
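    A minimal Chart.yaml sketch carrying both versions:

    apiVersion: v2
    name: hello-world
    description: A Helm chart for Kubernetes
    type: application
    version: 0.1.0      # chart version (SemVer2)
    appVersion: "0.0.1" # version of the packaged application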

  • Create test scenarios for your Helm charts to cover your use cases. The idea is to validate the success of the Helm deployment by creating Kubernetes objects (written as Helm templates) that will test your deployed chart when you run helm test <RELEASE_NAME>. For example, a test can be just a simple pod, running in the same namespace where your application has been deployed, that queries your application API to see if it has been deployed properly:
apiVersion: v1
kind: Pod
metadata:
  …
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ .Values.service.name }}:{{ .Values.service.port }}']
  restartPolicy: Never

Usually, tests are stored in the templates/tests/ folder and are required to have the "helm.sh/hook": test annotation to identify themselves as tests.

$ helm test hello-world
NAME: hello-world
...
Phase:          Succeeded
$ kubectl get po -n hello-world
NAME                           READY   STATUS      RESTARTS   AGE
hello-world-78b98b4c85-kbt58   1/1     Running     0          91s
hello-world-test               0/1     Completed   0          67s
  • Sign your charts easily with helm package --sign (and verify them with helm install --verify). Asserting the integrity of the software components is the most common task when securing the software supply chain. This usually means verifying a digital signature (either included with the software itself or close to it). Helm uses a PGP-based digital signature to create provenance records stored in provenance files (.prov), which are stored alongside a packaged chart. Let’s see an example:
$ helm package --sign --key 'Eduardo Minguez' hello-world --keyring ~/.gnupg/secring.gpg
Password for key "Eduardo Minguez (gpg key) <edu@example.com>" >
Successfully packaged chart and saved it to: /home/edu/git/my-awesome-stuff/hello-world-0.0.1.tgz

And this is what the provenance file looks like:

$ cat hello-world-0.0.1.tgz.prov
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
...
name: hello-world
...
files:
  hello-world-0.0.1.tgz: sha256:b3f75d753ffdd7133765c9a26e15b1fa89784e18d9dbd8c0c51037395eeb332e
-----BEGIN PGP SIGNATURE-----
wsFcB…
-----END PGP SIGNATURE-----

If the signature doesn’t match, Helm will complain:

$ helm verify hello-world-0.0.1.tgz
Error: openpgp: invalid signature: hash tag doesn't match

Running helm install --verify will automatically check the provenance files:

$ helm install --verify myrepo/mychart-1.2.3

Or, you can just pull the chart and verify it:

$ helm pull --verify myrepo/mychart-1.2.3

The public key needs to be trusted beforehand for --verify to work, so you must make it publicly available somewhere; otherwise, it will fail:

$ helm pull --verify myrepo/mychart-1.2.3
Error: openpgp: signature made by unknown entity
$ cat security/pubkey.gpg | gpg --import --batch
$ helm pull --verify myrepo/mychart-1.2.3
Signed by:..

There is also a sigstore Helm plugin that uses Rekor as signature storage, which is even better.
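A hypothetical usage sketch (check the plugin’s documentation for the exact commands and flags):

helm plugin install https://github.com/sigstore/helm-sigstore
helm sigstore upload hello-world-0.0.1.tgz
helm sigstore verify hello-world-0.0.1.tgz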

  • Automate all the previous steps (testing, versioning, signing, and releasing) in a CI/CD pipeline to make sure they are consistent with the best practices on every change, and to avoid potential problems when doing manual changes.
    You can use the helm/charts-repo-actions-demo for inspiration on how to create a GitHub actions workflow to test and release a chart:
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1.4.0
        with:
          charts_dir: charts
          config: cr.yaml
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

Kubernetes objects

When creating Kubernetes objects via templates, Helm doesn’t provide any security measures out of the box. You are on your own and can apply any bad practice you want, such as deploying a container with a root user or with full capabilities (OK, sometimes you may want to do that). Let’s talk about some recommendations:

  • Use Role-based access control (RBAC) to limit the object’s permissions (don’t use cluster-admin for everything). For example, the falcosidekick Helm chart creates a Role, ServiceAccount, and RoleBinding to minimize the required permissions used in the K8s Deployment:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "falcosidekick.fullname" . }}
…
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "falcosidekick.fullname" . }}
…
rules:
- apiGroups:
    - ""
  resources:
    - endpoints
  verbs:
    - get
…
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
…
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "falcosidekick.fullname" . }}
subjects:
- kind: ServiceAccount
  name: {{ include "falcosidekick.fullname" . }}
…
  • Provide sane defaults. For example, if your chart includes a MySQL pod, don’t ship a default password that anyone could look up. Instead, generate it randomly or force the user to specify it. There are a couple of things to consider, though, including how to deal with upgrades (see this GitHub issue and this blog post). You can use the lookup function and the resource policy annotation to prevent overwriting the secret when upgrading, as follows:
{{- if not (lookup "v1" "Secret" .Release.Namespace "hello-world") }}
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  annotations:
    "helm.sh/resource-policy": "keep"
type: Opaque
stringData:
  password: {{ randAlphaNum 24 }}
{{- end }}
Similarly, make optional components opt-in by disabling them by default. For example, the Falco chart disables the falcosidekick subchart by default in its values.yaml:

falcosidekick:
  # -- Enable falcosidekick deployment.
  enabled: false

Which is then used as a condition in the Chart.yaml dependencies:

dependencies:
  - name: falcosidekick
    condition: falcosidekick.enabled

All the rest of the Kubernetes recommendations apply (including the CIS benchmarks, for example), so make sure you scan your Kubernetes object definitions for best practices. If your preferred tool doesn’t support Helm charts, don’t worry. You can always render the Kubernetes objects in a previous step with the helm template command, as follows:

$ helm template falco falcosecurity/falco --namespace falco --create-namespace --set driver.kind=ebpf > all.yaml

And then verify them as:

$ myawesometool --verify all.yaml

Using Helm charts

  • Don’t trust the Helm charts blindly, especially third-party ones. Fortunately, as we’ve seen, the helm template command renders and outputs the Kubernetes objects created by the Helm chart, so it is a good practice to at least take a quick look at the results before deploying it in your Kubernetes cluster. You probably don’t want to use joaquinito2051‘s charts.
  • As explained before, use helm verify to check the digital signatures of the charts you use to make sure you are using the charts you are supposed to.
  • Uninstall unused releases: If you’re no longer using a Helm release, uninstall it to reduce your attack surface.
  • Always try to keep the Helm charts you use updated (as well as the helm binary and its plugins!). Let’s face it, mistakes and bugs happen, so it is a good idea to always use the latest version with the latest fixes, both for the Helm chart itself and for the objects the Helm chart creates (for example, if it uses a container image that has been found vulnerable). This also applies to subcharts. There are a couple of options to verify what an upgrade will change, including the helm diff plugin:
$ helm diff upgrade --install foo . --set image.tag=1.14.0

Or render the manifests via helm template and use kubectl to compute the diff:

$ helm template --is-upgrade --no-hooks --skip-crds foo --set image.tag=1.14.0 . | kubectl diff --server-side=false -f -

However, there are some corner cases when using both approaches, and ideally you should check both to cover all the scenarios. See the kubectl and Helm diff challenges article for more details.

Don’t store secrets in plain text in your values files; keep them in an external secrets manager instead. For example, the helm secrets plugin with the vals backend can fetch them from AWS SSM at render time. First, create an AWS SSM SecureString parameter:

$ aws ssm put-parameter --name mysecret --value "secret0" --type SecureString

Check which Helm parameter is required; in this example, “secretdata”:

$ cat hello-world/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  password: {{ .Values.secretdata }}

Verify it:

$ helm secrets --backend vals template hello-world -s templates/secret.yaml --set secretdata="ref+awsssm://mysecret"
---
# Source: hello-world/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  password: secret0
Finally, you can use a post-renderer to manipulate or validate the rendered manifests right before they are installed:

$ helm install mychart my-chart --post-renderer my-script.sh

Where the my-script.sh script can be almost anything: running kustomize to apply environment variables, verifying a specific parameter has not been used, a script that calls a webhook to get some data, a Windows batch script, and more! Your imagination is the limit!
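A post-renderer receives the fully rendered manifests on stdin and must print the (possibly modified) manifests on stdout; a non-zero exit code aborts the operation. A minimal validation sketch:

#!/bin/bash
# my-script.sh: abort the install if any rendered manifest requests privileged mode.
manifests=$(cat)                 # read all rendered manifests from stdin
if echo "${manifests}" | grep -q 'privileged: true'; then
  echo "privileged containers are not allowed" >&2
  exit 1
fi
echo "${manifests}"              # otherwise, pass the manifests through untouched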

Helm security conclusion

Helm is a useful tool to manage the lifecycle of Kubernetes applications. While security is not enforced by default, this article has covered some best practices and Helm security recommendations for consuming and creating Helm charts.

Container Image Scanning on Jenkins with Sysdig

Scanning a container image for vulnerabilities or bad practices on Jenkins using Sysdig Secure is a straightforward process. This article demonstrates a step-by-step example of how to do it using the Sysdig Secure Jenkins plugin.


This blog post is focused on the vulnerability scanner available since April 2022. If you are using the legacy scanner, see the official documentation for more information about it.


You can go straight to the pipeline definition here.

How Sysdig Secure Scans for Image Vulnerabilities

With image scanning you can discover vulnerabilities in your container images at a much earlier point in the production pipeline. Detecting compromised images before they are pushed to a container registry, or before the containers are deployed in your production environments makes the whole process far more secure. It means your team gets to shift security left, finding and fixing problems much more quickly.

Sysdig categorizes the vulnerability scan results in two sections, depending on the point where the scanning was performed:

  • Pipeline: Before the runtime phase (in the developer workstation, in a CI/CD pipeline, etc.) performed by the sysdig-cli-scanner tool.
  • Runtime: When the image is running in the execution node and the scanning is performed by a Sysdig agent.

In this article, we will cover how to perform scanning in the pipeline phase using Jenkins, as it is a best practice to adopt.

Running the scanner against a container image is as simple as running the sysdig-cli-scanner tool with a few flags (see the official documentation for more information), such as:

SECURE_API_TOKEN=<your-api-token> ./sysdig-cli-scanner --apiurl <sysdig-api-url> <image-name> --policy <my-policy>

The image is scanned locally on the host where the tool is executed (your laptop, or a container running the pipeline), and only the scanning results are sent to the Sysdig Secure backend.

The Sysdig Secure Jenkins plugin wraps the sysdig-cli-scanner so it can be consumed easily in your Jenkins environment. It can be used either in a Pipeline job or added as a build step to a Freestyle job to automate the process of running an image analysis, evaluating custom policies against images and performing security scans.

Vulnerability scanning with Jenkins

Jenkins is an open source automation server, allowing you to automate software development tasks by creating powerful CI/CD (continuous integration/continuous delivery or deployment) workflows triggered by different events. It is one of the best-known CI/CD tools and it has a rich plugin ecosystem of integrations and tools ready to be used (1,800+ community plugins!).

Image scanning has become a critical step in CI/CD workflows by introducing security earlier in the development process (security shift-left). Our workflow will build a container image whose definition is stored in a GitHub repository, then locally scan the image using the Sysdig Secure Jenkins plugin. The scan results will then be sent to Sysdig. If the scan fails, the workflow breaks, preventing the image from being uploaded to a registry.

Scanning container images with Sysdig Secure Jenkins plugin


The Sysdig Secure Jenkins plugin version used in this example is 2.2.5 with the new scanning engine. If you’re using the legacy scanner, see the official documentation and the README-legacy.md file in the Sysdig Secure Jenkins plugin GitHub repository.

Prerequisites

The requirements for getting Sysdig Image Scanning up and running are straightforward:

  • A Jenkins server already running (this example uses Jenkins version 2.361.2)


A quick way to run a Jenkins server for test purposes only (it uses docker-in-docker, which is not recommended from a security perspective) is to follow the Jenkins official documentation.

  • A Git repository where the container image definition is stored (e.g., GitHub).
  • A Sysdig Secure account where the results of the scans will be collected. You can request a free trial if you don’t have one.
  • A container Dockerfile ready to be built. You can fork and use our example, but using your own container is more fun!

Once you are ready, let’s move on!


The steps performed in this example can also be done using Jenkins’ configuration as code.

Install the Sysdig Secure Jenkins plugin

The Sysdig Secure plugin is published as a Jenkins plugin and is available for installation on any Jenkins server using the Plugin Manager in the web UI through the Manage Jenkins > Manage Plugins view, available to administrators of a Jenkins environment.

Credentials

For sensitive data, such as the registry password or the API token, it is recommended to create credentials using the Manage Jenkins > Manage Credentials view, available to administrators of a Jenkins environment.

In this example, we will use two credentials:

  • sysdig-secure-api-token: The Sysdig API token is required to query the Sysdig API and to send the scan results. See the official documentation for more information on how to get it.
  • registry-credentials: To store the username and password (or access token) used to push images into the container registry.

Set up an image scanning pipeline on Jenkins

Jenkins has a few different ways to express the automation, including the basic “Freestyle Project” (where you define the different steps in the UI) or Jenkins pipelines. Pipelines are expressed as Groovy code stored in a Jenkinsfile. There are also two different pipeline syntaxes, “declarative” and “scripted,” and they can be coded in the UI directly or stored in the code repository with the application code itself for easy consumption.

Pipelines can be parametrized and as complex as needed, but the example workflow used in this blog post (stored in the sysdiglabs/secure-inline-scan-examples repository) is a simple version of what can be achieved with Jenkins pipelines.

To create the pipeline, select “New Item” -> “Pipeline” and type the name of your pipeline. Then, paste the Jenkinsfile content into the “Pipeline” text box.

Let’s see the pipeline definition step by step.

pipeline {
  environment {
    image = "docker.io/edusysdig/myawesomeimage" + ":$BUILD_NUMBER"
    registryCredential = "registry-credentials"
    repository = 'https://github.com/sysdiglabs/secure-inline-scan-examples.git'
    SYSDIG_API_TOKEN_CRED = credentials('sysdig-secure-api-token')
    api_endpoint = 'https://eu1.app.sysdig.com'
    myimage = ''
  }
  agent any

Those first lines are intended to define the pipeline environment variables:

  • The image variable stores the full image path (including the registry "docker.io/edusysdig/myawesomeimage") as well as the tag (":$BUILD_NUMBER"). The BUILD_NUMBER variable (representing the build ID in Jenkins) is used as a trick to have different images built on different runs.
  • The registryCredential variable points to the "registry-credentials" credential Jenkins object that contains the registry credentials created earlier.
  • repository stores the URL where our container image is stored ('https://github.com/sysdiglabs/secure-inline-scan-examples.git').
  • The SYSDIG_API_TOKEN_CRED stores the content of the sysdig-secure-api-token credential created earlier.
  • api_endpoint stores the appropriate Sysdig vulnerability scanning endpoint depending on your region, see the official documentation.
  • myimage is an empty (for now) global variable that will be used later in the pipeline. It is defined here to be common to all the steps.

The agent section specifies where the pipeline (or just a specific stage) will be executed.

Then, the pipeline is defined in stages that contain single or multiple steps:

  stages {
    stage('Cloning Git') {
      steps {
        git branch: 'main', url: repository
      }
    }

The first stage is to get the repository content using the git function, where we specify the branch and the repository URL (stored in the repository environment variable).

    stage('Building image') {
      steps{
        script {
          myimage = docker.build(image, "./jenkins/new-scan-engine/")
        }
      }
    }

The second stage leverages the Docker pipeline plugin to build the container image. The docker.build function receives the image argument to name the container image being built, as well as the context where the docker build operation will be performed (in this case, the "./jenkins/new-scan-engine/" folder of the repo, which also contains the Dockerfile). The myimage result will be pushed to the container registry if the scan finishes successfully.


In this case, we used script to switch to a scripted pipeline section, leveraging the myimage variable later in the pipeline.

    stage('Scanning Image') {
        steps {
            sysdigImageScan engineCredentialsId: 'sysdig-secure-api-token', imageName: "docker://" + image, engineURL: api_endpoint
        }
    }

This is the stage where the Sysdig Secure Jenkins plugin comes into action. It is as easy as it looks! Just specify the credentials, the image, and an optional Sysdig Secure URL, and you are good to go!

There are a few other parameters in the official documentation that can be specified, such as the ability to allow the pipeline to progress even if the scan found vulnerabilities (bailOnFail). Those parameters can also be specified globally, configuring the plugin directly in the “Manage Jenkins” -> “Configure System” section:
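For instance, a hypothetical variation that lets the pipeline continue even if the policy evaluation fails (double-check the exact parameter names against your plugin version):

sysdigImageScan engineCredentialsId: 'sysdig-secure-api-token',
                imageName: "docker://" + image,
                engineURL: api_endpoint,
                bailOnFail: false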

Finally, the last step:

    stage('Deploy Image') {
      steps{
        script {
          docker.withRegistry('', registryCredential) {
            myimage.push()
            myimage.push('latest')
          }
        }
      }
    }
  }
}

The last stage is intended to push the container image to the registry after the scan finishes successfully (if it does). The docker.withRegistry function receives two arguments: the registry where the image is going to be pushed (in this case, we use the default docker.io registry, which is why the content is empty) and the credentials used to push the image to the container registry.

Finally, the image is pushed twice: first with the image:tag we provided, and then with the 'latest' tag (hint: it is just pushed once; the second push only tags it in the repo).

Image scanning on Jenkins: Lights, camera, action!

Once everything is in place, let’s run the pipeline by selecting the ‘Build Now’ button:

If everything went well, the pipeline finishes successfully and all the steps are green:

On every run, the Sysdig Secure Jenkins plugin generates some JSON files to describe the output of the execution:

As well as a summary on the “Sysdig Secure Report” section:

You can also observe the logs in the “Console Output” section:

The analysis results are posted to your Sysdig Secure account under Vulnerability -> Pipeline:

Success! No vulnerabilities were found and the image has been published by pushing it to the registry.

This example scan used the default “Sysdig Best Practices” policy (you can see it on the logs), but you can create and customize the policies you want to check against. See the official documentation to learn more about how to create and customize policies, including not just vulnerability policies, but also image best practices.

If the scan process wasn’t successful because a policy failed, the workflow stops and the image is not pushed to the registry (which is a good idea), as you can see here:

There weren’t any vulnerabilities found in this example (yay!), but if we look at another application, such as https://github.com/sysdiglabs/security-playground, we can see the details in the Jenkins UI directly:

And the Sysdig UI shows those as well:

You can filter the ones that have fixes already and/or are exploitable, focusing on the most urgent ones to fix or update:

You can see not only vulnerabilities, but also violations of best practices:

Automating the execution via triggers

We didn’t cover how to run the pipeline automatically on every repository change. This is a slightly more advanced topic, as there are different approaches depending on your particular needs, including polling the repository every X minutes or configuring webhooks so that Jenkins is notified when a change happens. You can see the available options in the following screenshot:
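As a minimal sketch of the polling approach, a declarative pipeline can declare a trigger (the schedule below is just an example):

pipeline {
  agent any
  triggers {
    // Poll the tracked repository roughly every 15 minutes
    pollSCM('H/15 * * * *')
  }
  stages {
    stage('Build, scan, and push') {
      steps {
        echo 'build, scan, and push steps go here'
      }
    }
  }
}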

Conclusions

As you can see, Jenkins pipelines are a powerful tool to automate your CI/CD. Now, it is easy and straightforward to include Sysdig Secure Inline Scan in your workflow, scanning images for vulnerabilities, enforcing best practices at build time, and providing several benefits over traditional image scanning within the registry:

  • Implementing image scanning in the CI/CD pipeline means that if vulnerabilities are found, you prevent the image from being published at all.
  • Analysis is performed inline (locally) in the runner, so the image is not sent anywhere else, including outside of the environment in which it’s built. During the analysis, only metadata information, and not the actual contents, is extracted from the image.
  • The metadata obtained from the analysis can be reevaluated later, if new vulnerabilities are discovered or policies are modified, without requiring a new scan of the image for alerting. If you want to break the build, you still need to trigger a new scan within the build.
  • Sysdig Secure provides out-of-the-box policies to enforce and adhere to various container compliance standards (like NIST 800-190 and PCI).

Sysdig Secure image scanning can be integrated seamlessly with most CI/CD pipeline tools.

If you are not yet using Sysdig Secure for scanning your images, wait no longer and request a demo now!

Image scanning for GitLab CI/CD

Scanning a container image for vulnerabilities or misconfigurations on your GitLab CI/CD using Sysdig Secure is a straightforward process. This article demonstrates a step-by-step example of how to do it.


The following proof of concept showcases how to leverage the sysdig-cli-scanner with GitLab CI/CD. Although possible, this procedure is not officially supported by Sysdig, so we recommend checking the documentation to adapt these steps to your environment.
This blog post is focused on the vulnerability scanner available since April 2022. If you are using the legacy scanner, see the official documentation for more information about it.


You can go straight to the pipeline definition here.

Image vulnerability scanning with Sysdig Secure

Image scanning allows DevOps teams to shift security left by detecting known vulnerabilities and validating container build configuration early in their pipelines, before the containers are deployed in production or images are pushed into any container registry. This allows teams to detect and fix issues faster, avoids vulnerabilities or credential leaks in production, and improves delivery-to-production time, all in a much more secure way.

The Sysdig image scanning process is based on policies that can be customized to include different rules, including ImageConfig checks (e.g., leakage of sensitive information) and checks not just for OS packages (rpm, deb, etc.), but also language-specific packages and libraries (Java, Python, etc.).

Sysdig vulnerability scanning classifies images differently depending on where the scanning procedure is performed:

  • Pipeline: Before the runtime phase (in the developer workstation, in a CI/CD pipeline, etc.) performed by the sysdig-cli-scanner tool.
  • Runtime: When the image is running in the execution node and the scanning is performed by a Sysdig agent.

In this article, we will cover how to perform scanning in the pipeline phase using GitLab CI/CD, as it is a best practice to adopt.

Running the scanner against a container image is as simple as running the sysdig-cli-scanner tool with a few flags (see the official documentation for more information), such as:

SECURE_API_TOKEN=<your-api-token> ./sysdig-cli-scanner --apiurl <sysdig-api-url> <image-name> --policy <my-policy>

The image is scanned locally on the host where the tool is executed, on your laptop or a container running the pipeline, and only the scanning results are sent to the Sysdig Secure backend.

Vulnerability scanning with GitLab CI/CD

GitLab CI/CD is an open source continuous integration and delivery server integrated with the GitLab software development and collaboration platform.

Once you have configured GitLab CI/CD for your repo, every time a developer pushes a commit to the tracked repository branches, the pipeline scripts will be automatically triggered.

You can use these pipelines to automate many processes. Common tasks include QA testing, building software distribution artifacts (like container images or Linux packages), validating configuration, vulnerabilities, and compliance.

Image scanning has become a critical step in CI/CD workflows by introducing security earlier in the development process (security shift-left). Our workflow will build a container image, then locally scan the image using the sysdig-cli-scanner tool. The scan results will then be sent to Sysdig. If the scan evaluation fails, the workflow breaks, preventing the image from being uploaded to a registry. Otherwise, the container image is pushed to the GitLab container registry.

Creating a GitLab CI/CD pipeline

The versions used in this example are:

  • Runner: 15.4.0~beta.5.gdefc7017 using Docker executor with image ruby:2.5.
  • Sysdig-cli-scanner version 1.2.9-rc, commit: e716ba6.

If you’re using the legacy scanner, the pipeline definition is different. See an example provided in the sysdiglabs/secure-inline-scan-example repository.

Prerequisites

The requirements for getting Sysdig Image Scanning up and running are straightforward:

  • A GitLab repository with administrative privileges.
  • A Sysdig Secure account where the results of the scans will be collected. You can request a free trial if you don’t have one.
  • A container Dockerfile ready to be built. You can fork and use our example, but using your own container is more fun!

Once you are ready, let’s move on!

Configure GitLab CI/CD in your repository

GitLab CI/CD is enabled by default on all new projects but if you did not activate GitLab CI/CD for your repository, the first step is making sure it is enabled. Navigate to your repository Settings, CI/CD section (https://gitlab.com/<user>/<repository>/edit), expand the “Visibility, project features, permissions” section, and make sure the “CI/CD” toggle is active:


Make sure the “Container registry” toggle is enabled as we will use it to store the container image as well.

Masked variables

For sensitive data, such as the registry password or the API token, it is recommended to create masked variables in the repository Settings -> CI/CD -> Variables -> “Add variable.”

In this example, we will add the SYSDIG_SECURE_TOKEN to store the Sysdig API token required to query the Sysdig API and send the scan results. See the Sysdig official documentation for more information on how to get it.


It is worth mentioning that the variable should be masked to avoid potential security issues or leaks. See the official documentation for more information.

Container registry

We will leverage GitLab’s container registry to store the container image once the scan has been successfully completed.

There are a few special CI/CD variables to use the Container registry (CI_REGISTRY*) that are populated automatically by GitLab, so there is no need to specify them in our pipeline if we want to use it. Cool!

GitLab’s official documentation explains this in more detail, but the following is an example of the variables’ content once they are automatically populated:

  • CI_REGISTRY="registry.example.com"
  • CI_REGISTRY_IMAGE="registry.example.com/gitlab-org/gitlab-foss"
  • CI_REGISTRY_USER="gitlab-ci-token"
  • CI_REGISTRY_PASSWORD="[masked]"

Set up an image scanning GitLab CI/CD pipeline

GitLab CI/CD pipelines are defined by default in the .gitlab-ci.yml YAML file inside your repository. Pipelines can have variables and be broken into different stages, where each one can have different properties or run multiple commands/scripts.

Pipelines can be edited by using the Pipeline editor in the GitLab UI or just committing changes to the .gitlab-ci.yml file inside your repository.

Let’s see the pipeline definition in detail:

Variables and stages definition

variables:
  SYSDIG_SECURE_ENDPOINT: "https://eu1.app.sysdig.com"
  CI_IMAGE_TAG: "my-tag"
stages:
  - build
  - scan
  - push

We use a couple of variables to store the Sysdig API endpoint as well as the container image tag we want to use for the container image we are building. Also, we define the stages we are going to use to build, scan, and push the container image.

Build stage

image:build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor --dockerfile Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_IMAGE_TAG --no-push --oci-layout-path $(pwd)/build/ --tarPath $(pwd)/build/$CI_IMAGE_TAG.tar
  artifacts:
    paths:
      - build/
    expire_in: 1 days

The image building step leverages the Kaniko project to build the container image using the instructions from your Dockerfile, and generates a new local image in the $(pwd)/build/$CI_IMAGE_TAG.tar file that will be scanned in the next step.

Scan stage

image:scan:
  stage: scan
  before_script:
    - export SECURE_API_TOKEN=$SYSDIG_SECURE_TOKEN
  script:
    - curl -LO https://download.sysdig.com/scanning/bin/sysdig-cli-scanner/$(curl -L -s https://download.sysdig.com/scanning/sysdig-cli-scanner/latest_version.txt)/linux/amd64/sysdig-cli-scanner
    - chmod +x ./sysdig-cli-scanner
    - ./sysdig-cli-scanner --console-log  --apiurl $SYSDIG_SECURE_ENDPOINT file://$(pwd)/build/$CI_IMAGE_TAG.tar
  artifacts:
    paths:
      - build/
    expire_in: 1 days
    when: always
  needs:
    - image:build

At this stage, we will be scanning the image for vulnerabilities and validating configuration, storing the results on the Sysdig backend. One of the benefits of Sysdig’s local scanning approach is that you don’t lose control over your images, as the image doesn’t need to be pushed to any registry or exposed externally in order to scan it. Instead, the scanning happens inside the runner and only the results are sent to Sysdig Secure.

The scanning process is as simple as downloading a binary file and executing it with a few parameters (including the SECURE_API_TOKEN environment variable from the SYSDIG_SECURE_TOKEN variable created before) against the container image built before.

Sysdig Secure will return an error code for this stage if the image contains any of the stop conditions configured in your policy (e.g., a critical vulnerability). Stopping the pipeline will prevent pushing vulnerable images to the container image registry.

Push stage

image:push:
  stage: push
  image:
    name: gcr.io/go-containerregistry/crane:debug
    entrypoint: [""]
  script:
    - crane auth login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - crane push build/$CI_IMAGE_TAG.tar $CI_REGISTRY_IMAGE:$CI_IMAGE_TAG
  needs:
    - image:scan

The final step authenticates and pushes the container image using crane, only if the image scanning step has been successful.

Image scanning on GitLab

Provided you have a Dockerfile in your repository and a valid Secure API token, the commit of the pipeline you just created should trigger the execution, build the image, and scan it.

You can navigate to the CI/CD -> Pipelines section of the repo to see the result of the execution:

And click into every step of the execution to get a detailed view:

The analysis results are posted to your Sysdig Secure account under Vulnerability -> Pipeline:

Success! No vulnerabilities were found and the image has been published to the registry. The build artifacts (the image tarball) are kept for one day, as configured with expire_in, and shared between stages.

This example scan used the default “Sysdig Best Practices” policy (you can see it in the logs), but you can create and customize the policies you want to check against, covering not just vulnerabilities, but also image best practices.

If the scan fails because a policy check fails, the pipeline stops and the image is not pushed to the registry (which is a good idea), as you can see here:

There weren’t any vulnerabilities found in this example (yay!), but if we look at another application, such as https://github.com/sysdiglabs/dummy-vuln-app, we can see some were discovered:

You can filter the ones that have fixes already and/or are exploitable, focusing on the most urgent ones to fix or update:

You can see not only vulnerabilities, but also failed best-practice checks:

Conclusions

As you can see, GitLab CI/CD is a powerful tool to automate your CI/CD pipelines directly on your GitLab repository. Now, it is easy and straightforward to include Sysdig Secure Inline Scan in your workflow, scanning images for vulnerabilities, enforcing best practices at build time, and providing several benefits over traditional image scanning within the registry:

  • Implementing image scanning in the CI/CD pipeline means that if vulnerabilities or misconfigurations are found, you prevent the image from being published at all.
  • As analysis is performed inline (locally) in the runner, the image is not sent anywhere else, including outside of the environment in which it’s built. During the analysis, only metadata information, and not the actual contents, is extracted from the image.
  • Sysdig Secure provides out-of-the-box policies to enforce and adhere to various container compliance standards (like NIST 800-190 and PCI).

Sysdig Secure Image Scanning can be integrated seamlessly with most CI/CD pipeline tools.

If you are not using Sysdig Secure for scanning your images yet, wait no longer and request a demo now!

Image Scanning with GitHub Actions (https://sysdig.com/blog/image-scanning-github-actions/, 26 Sep 2022)
Scanning a container image for vulnerabilities or bad practices in your GitHub Actions workflows using Sysdig Secure is a straightforward process. This article demonstrates a step-by-step example of how to do it.


The following proof of concept showcases how to leverage the sysdig-cli-scanner with GitHub Actions. Although possible, this integration is not officially supported by Sysdig, so we recommend checking the documentation to adapt these steps to your environment.

This blog post is focused on the vulnerability scanner available since April 2022. If you are using the legacy scanner, see the official documentation for more information about it.


You can go straight to the pipeline definition here.

Image vulnerability scanning with Sysdig Secure

Image scanning allows DevOps teams to shift security left by detecting known vulnerabilities and validating container build configuration early in their pipelines, before the containers are deployed in production or images are pushed to any container registry. This makes it possible to detect and fix issues faster, avoids vulnerabilities or credential leaks in production, and shortens delivery-to-production time, all in a much more secure way.

The Sysdig image scanning process is based on policies that can be customized to include different rules, including ImageConfig checks (for example, leakage of sensitive information) and checks for not just OS packages, but also third-party packages (java, python, etc.).

Sysdig vulnerability scanning classifies images differently depending on where the scanning procedure is performed:

  • Pipeline: before the runtime phase (in the developer workstation, in a CI/CD pipeline, etc.) performed by the sysdig-cli-scanner tool
  • Runtime: when the image is running in the execution node and the scanning is performed by a Sysdig agent

In this article, we will cover how to perform scanning at the pipeline stage using GitHub Actions, as it is a best practice to adopt.

Running the scanner against a container image is as simple as running the sysdig-cli-scanner tool with a few flags (see the official documentation for more information), such as:

SECURE_API_TOKEN=<your-api-token> ./sysdig-cli-scanner \
  --apiurl <sysdig-api-url> <image-name> --policy <my-policy>

The image is scanned locally on the host where the tool is executed, on your laptop or a container running the pipeline, and only the scanning results are sent to the Sysdig Secure backend.

Vulnerability scanning with GitHub Actions

GitHub Actions allow you to automate software development tasks directly in your Git repositories, creating powerful CI/CD (continuous integration/continuous delivery or deployment) workflows triggered by different events. A growing number of preexisting actions for the most common tasks can be found in the GitHub Marketplace, or you can create a customized workflow and write your own actions.

Image scanning has become a critical step in CI/CD workflows by introducing security earlier in the development process (security shift-left). Our workflow will build a container image, then it will locally scan the image using the sysdig-cli-scanner tool. The scan results will then be sent to Sysdig. If the scan fails, the workflow breaks, preventing the image from being uploaded into a registry.

Creating a GitHub Action

The versions used in this example are:

If using the legacy scanner, the pipeline definition is different. It requires the use of the sysdiglabs/scan-action@v3 action.

See an example provided in the sysdiglabs/secure-inline-scan-example repository.

Prerequisites

The requirements for getting Sysdig Image Scanning up and running are straightforward:

  • A GitHub repository and administrative permissions, as they are required to enable Actions and to manage Secrets.
  • A Sysdig Secure account where the results of the scans will be collected. You can request a free trial if you don’t have one.
  • A container Dockerfile ready to be built. You can fork and use our example, but using your own container is more fun!

Once you are ready, let’s move on!

Enable GitHub Actions in your repository

If you did not activate Actions for your repository, the first step is making sure they are enabled. Navigate to your repository Settings and look for the Actions section (https://github.com/<user>/<repo>/settings/actions).

There, make sure the Allow all actions and reusable workflows option is selected:

Once actions are enabled, you should get an Actions tab at the top navigation bar on your repository main page, like this:

Repository secrets

For sensitive data such as the registry password or the API token, it is recommended to create repository secrets instead of hardcoding them in the workflow. You can do so in the repository Settings -> Secrets -> Actions -> “New repository secret”.

In this example we will use three repository secrets:

  • REGISTRY_USER: self-explanatory
  • REGISTRY_PASSWORD: self-explanatory
  • SECURE_API_TOKEN: The Sysdig API token is required to query the Sysdig API and to send the scan results. See the official documentation for more information on how to get it.

Those variables will be referenced in the workflow using the syntax ${{ secrets.VARIABLE_NAME }}, such as ${{ secrets.REGISTRY_PASSWORD }}.
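If you prefer the command line, the GitHub CLI can create these repository secrets as well. This is just a sketch; it assumes the gh CLI is installed and authenticated, and that the values are already exported in your shell:

# Create the three repository secrets used by this workflow.
gh secret set REGISTRY_USER --body "$REGISTRY_USER"
gh secret set REGISTRY_PASSWORD --body "$REGISTRY_PASSWORD"
gh secret set SECURE_API_TOKEN --body "$SECURE_API_TOKEN"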

Set up an image scanning workflow on Github

GitHub Actions is the feature that allows the automation of CI/CD software workflows directly in your GitHub repositories. It borrows its name from actions, the automated tasks that are combined to create workflows. So our journey will start by creating and configuring a new workflow.

From the Actions tab, select the Skip this and set up a workflow yourself link:

You’ll be presented with a default workflow like this:



GitHub Actions workflows are defined by YAML files in the .github/workflows/ directory inside your repository. That means that there is no need to go through the Actions UI. You can create a new workflow without taking your hands off the keyboard by simply adding .github/workflows/my-shiny-workflow.yml into your repository, then committing and pushing the changes. For more details about using GitHub Actions, check the official documentation.
You can see the whole example workflow in the sysdiglabs/secure-inline-scan-examples repository.

Let’s change the default main.yml name to something more appropriate, like build-scan-and-push.yaml, and edit the workflow YAML in the embedded editor to use the following steps:

env:
    SYSDIG_SECURE_ENDPOINT: "https://eu1.app.sysdig.com"
    REGISTRY_HOST: "quay.io"
    IMAGE_NAME: "mytestimage"
    IMAGE_TAG: "my-tag"
    DOCKERFILE_CONTEXT: "github/"
name: Container build, scan and push
on: [push, pull_request]

jobs:
  build-scan-and-push:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2

    - name: Build and save
      uses: docker/build-push-action@v3
      with:
        context: ${{ env.DOCKERFILE_CONTEXT }}
        tags: ${{ env.REGISTRY_HOST }}/${{ secrets.REGISTRY_USER }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}
        load: true

We are first defining a few environment variables that contain some parameters that will be used later on, including the API endpoint where the scanner results are sent (see the official documentation for more information about this and where to find yours) and the registry details where the image will be pushed.

Then we define a workflow named “Container build, scan and push” (the name that will be displayed in the Actions UI of your repository). The workflow is triggered every time a push or pull request happens in the repo, and it defines a job with job_id build-scan-and-push that runs-on an ubuntu-latest runner hosted by GitHub and executes a list of steps sequentially. Of course, there are many other types of triggers available; you can execute multiple jobs in parallel, define dependencies and data flow between jobs, run on different types of workers (including your self-hosted runners), and much more.

Let’s review the first three steps:

  • The first step, named “Checkout,” uses the action actions/checkout@v2, which you can find on https://github.com/actions/checkout, to make our repository code available in the runner.
  • The second, named “Set up Docker Buildx,” is using the docker/setup-buildx-action@v2 official Docker action to prepare the environment to build container images (see https://github.com/docker/setup-buildx-action for more information about this particular action).
  • The third, “Build and save,” is using the docker/build-push-action@v3 official Docker action to build the container image and store it locally. It doesn’t push the image to the registry yet, it will be pushed later if the scan is successful (see https://github.com/docker/build-push-action for more information). It uses the environment variables we configured before (for example ${{ env.DOCKERFILE_CONTEXT }}), as well as the secrets (for example ${{ secrets.REGISTRY_USER }}).

Let’s skip the “Setup cache” and “Download sysdig-cli-scanner if needed” steps for now and focus on the image scanning, by adding:

    - name: Scan the image using sysdig-cli-scanner
      env:
        SECURE_API_TOKEN: ${{ secrets.SECURE_API_TOKEN }}
      run: |
        ${GITHUB_WORKSPACE}/cache/sysdig-cli-scanner \
          --apiurl ${SYSDIG_SECURE_ENDPOINT} \
          docker://${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/${IMAGE_NAME}:${IMAGE_TAG} \
          --console-log \
          --dbpath=${GITHUB_WORKSPACE}/cache/db/ \
          --cachepath=${GITHUB_WORKSPACE}/cache/scanner-cache/

This is a simple command execution where we run the sysdig-cli-scanner binary to scan the image. Let’s see the parameters in more detail:

  • --apiurl ${SYSDIG_SECURE_ENDPOINT}: The Sysdig API endpoint where the scan results are sent and from which the vulnerability databases are downloaded.
  • docker://${REGISTRY_HOST}/${{ secrets.REGISTRY_USER }}/${IMAGE_NAME}:${IMAGE_TAG}: The container image that was built in the previous step and stored locally in the container cache.
  • --dbpath=${GITHUB_WORKSPACE}/cache/db/ : The path where the vulnerability database is stored (more about this later).
  • --cachepath=${GITHUB_WORKSPACE}/cache/scanner-cache/ : The path where the scanner caches are stored (more about this later).
  • Also, the ${{ secrets.SECURE_API_TOKEN }} secret is converted into an environment variable to make it available to the sysdig-cli-scanner binary.

Finally, let’s push the container image to the registry if the scan has been successful:

    - name: Login to the registry
      uses: docker/login-action@v2
      with:
        registry: ${{ env.REGISTRY_HOST }}
        username: ${{ secrets.REGISTRY_USER }}
        password: ${{ secrets.REGISTRY_PASSWORD }}
    - name: Push
      uses: docker/build-push-action@v3
      with:
        context: ${{ env.DOCKERFILE_CONTEXT }}
        push: true
        tags: ${{ env.REGISTRY_HOST }}/${{ secrets.REGISTRY_USER }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}

Those two steps are straightforward: the first one logs into the registry, and the second one pushes the container image built earlier to the proper registry.

Caching

You may be asking yourself: where is the sysdig-cli-scanner binary stored? The answer is that it is downloaded from Sysdig the first time, then cached unless a new version of the binary is available.

How is that so? Let’s see:

    - name: Download sysdig-cli-scanner if needed
      run:  |
        curl -sLO https://download.sysdig.com/scanning/sysdig-cli-scanner/latest_version.txt
        mkdir -p ${GITHUB_WORKSPACE}/cache/db/
        if [ ! -f ${GITHUB_WORKSPACE}/cache/latest_version.txt ] || [ $(cat ./latest_version.txt) != $(cat ${GITHUB_WORKSPACE}/cache/latest_version.txt) ]; then
          cp ./latest_version.txt ${GITHUB_WORKSPACE}/cache/latest_version.txt
          curl -sL -o ${GITHUB_WORKSPACE}/cache/sysdig-cli-scanner "https://download.sysdig.com/scanning/bin/sysdig-cli-scanner/$(cat ${GITHUB_WORKSPACE}/cache/latest_version.txt)/linux/amd64/sysdig-cli-scanner"
          chmod +x ${GITHUB_WORKSPACE}/cache/sysdig-cli-scanner
        else
          echo "sysdig-cli-scanner latest version already downloaded"
        fi

This step is just a bash script that checks the latest version of the sysdig-cli-scanner. If it is different from the one already available in the environment, it is downloaded (it also creates the database folders if they don’t exist).

Okay, but every workflow execution is stateless, so how do you save that file? The answer is by using the cache GitHub action. The cache is a space available to store things such as library dependencies, assets, etc.

In this example, we will leverage that space to store not just the sysdig-cli-scanner binary, but also the vulnerability databases and other scanner assets:

    - name: Setup cache
      uses: actions/cache@v3
      with:
        path: cache
        key: ${{ runner.os }}-cache-${{ hashFiles('**/sysdig-cli-scanner', '**/latest_version.txt', '**/db/main.db.meta.json', '**/scanner-cache/inlineScannerCache.db') }}
        restore-keys: ${{ runner.os }}-cache-

This action needs to be executed before the download step so it can restore the data from the cache; a matching step runs automatically at the end of the workflow to store the resulting assets back in the cache.

The key and restore-keys settings are used to invalidate the cache if something changes and to store only the assets we want to save and restore. See the official documentation to learn more about this action.


You can see the whole example workflow in the sysdiglabs/secure-inline-scan-examples repository.

Image scanning on GitHub: Lights, camera, action!

Provided you have a Dockerfile in your repository and a valid Secure API token, committing the workflow you just created should trigger its execution, build the image, and scan it.

You can navigate to the Actions section of the repo to see the result of the Workflow execution:

And click into the Workflow execution to get a detailed view of every step, get the logs, etc.:



Notice that the “Post” steps are not defined in our workflow; they are executed automatically, including the “Post Setup Cache” step that stores the assets in the cache.

The analysis results are posted to your Sysdig Secure account under Vulnerability -> Pipeline:

Success! No vulnerabilities were found, the image has been published by pushing it to the registry, and the assets are stored automatically in the cache for the next executions to save time and avoid unnecessary downloads.

This example scan used the default “Sysdig Best Practices” policy (you can see it on the logs), but you can create and customize the policies you want to check against. See the official documentation to learn more about how to create and customize policies, including not just vulnerability policies, but also image best practices.

If the scan fails because a policy check fails, the workflow stops and the image is not pushed to the registry (which is a good idea), as you can see here:

There weren’t any vulnerabilities found in this example (yay!), but if we look at another application, such as https://github.com/sysdiglabs/dummy-vuln-app, we can see some were discovered:

You can filter the ones that have fixes already and/or are exploitable, focusing on the most urgent ones to fix or update:

You can see not only vulnerabilities, but also failed best-practice checks:

Conclusions

As you can see, GitHub Actions is a powerful tool to automate your CI/CD pipelines directly in your GitHub repository. Now, it is easy and straightforward to include the Sysdig container image scanning capabilities in your workflow, scanning images for vulnerabilities, enforcing best practices at build time, and providing several benefits over traditional image scanning within the registry:

  • Implementing image scanning in the CI/CD pipeline means that if vulnerabilities are found, you prevent the image from being published at all.
  • As analysis is performed inline (locally) in the runner, the image is not sent anywhere else, including outside of the environment in which it’s built. During the analysis, only metadata information, and not the actual contents, is extracted from the image.
  • The metadata obtained from the analysis can be reevaluated later, for alerting purposes, if new vulnerabilities are discovered or policies are modified, without requiring a new scan of the image. If you want to break the build, you still need to trigger a new scan within the build.
  • Sysdig Secure provides out-of-the-box policies to enforce and adhere to various container compliance standards (like NIST 800-190 and PCI).

Sysdig Secure image scanning can be integrated seamlessly with most CI/CD pipeline tools.

If you are not using Sysdig Secure for scanning your images yet, wait no longer and request a demo now!

Container Image Scanning for Azure Pipelines with Sysdig (https://sysdig.com/blog/container-image-scanning-for-azure-pipelines-with-sysdig/, 19 Sep 2022)
Scanning a container image for vulnerabilities or bad practices in your Azure Pipelines using Sysdig Secure is a straightforward process. This article demonstrates a step-by-step example of how to do it.

The following proof of concept showcases how to leverage the sysdig-cli-scanner in Azure Pipelines. Although possible, it is not officially supported by Sysdig, so we recommend checking the documentation to adapt these steps to your environment.

This blog post is focused on the vulnerability scanner available since April 2022. If you are using the legacy scanner, see the official documentation for more information about it.

Sysdig vulnerability scanning classifies images differently depending on where the scanning procedure is performed:

  • Pipeline: prior to the runtime phase (in the developer workstation, in a CI/CD pipeline, etc.) performed by the sysdig-cli-scanner tool
  • Runtime: when the image is running in the execution node and the scanning is performed by a Sysdig agent

Sysdig vulnerabilities UI screenshot

In this article, we will cover how to perform scanning at the pipeline stage using Azure Pipelines, as it is a best practice to adopt.

Running the scanner against a container image is as simple as running the sysdig-cli-scanner tool with a few flags (see the official documentation for more information), such as:

SECURE_API_TOKEN=<your-api-token> ./sysdig-cli-scanner --apiurl <sysdig-api-url> <image-name> --policy <my-policy>

The image is scanned locally on the host where the tool is executed, on your laptop or on a container running the pipeline, and only the scanning results are sent to the Sysdig Secure backend.

Azure Pipelines

Azure DevOps gives teams tools like code repositories, reports, project management, automated builds, lab management, testing, and release management. Azure Pipelines is a component of the Azure DevOps bundle and it automates the execution of CI/CD tasks, like running tests against your code, building the container images when a commit is pushed to your git repository, or performing vulnerability scanning on the container image.

Azure Pipeline for vulnerability scanning

An Azure Pipeline defines a set of tasks, written in a YAML file, that are executed automatically in response to an event, typically a new commit or a pull request in a linked repository.

Azure pipeline example steps diagram
This allows you to automatically build and push images into registries (like Azure Container Registry), and then deploy them into Kubernetes.

In the example that we use to illustrate this blog post, we will be pushing commits to a GitHub repository that will trigger an Azure Pipeline. Then, the pipeline will build our project into a local image, scan it for vulnerabilities, and publish it to a container registry.
Azure pipeline and Sysdig scan procedure diagram

The example application

In order to demonstrate a specific example, we will leverage a simple Go application stored in a GitHub repository that listens on port 8080/tcp and returns a string based on the path you request:

package main
import (
    "fmt"
    "log"
    "net/http"
)
func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "I love %s!\n", r.URL.Path[1:])
}
func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

This simple program is built and containerized as:

FROM golang:1.18-alpine as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN go build -o /love
FROM alpine
COPY --from=builder /love /love
EXPOSE 8080
ENTRYPOINT [ "/love" ]
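Before wiring the image into the pipeline, you can sanity-check it locally. This is a quick sketch; the image name is illustrative, and the paths assume the Containerfile lives in the love/ folder as in this example:

# Build the image and run a quick smoke test against it.
docker build -t my-example-app:dev -f love/Containerfile love/
docker run --rm -d -p 8080:8080 --name love-test my-example-app:dev
curl http://localhost:8080/GitOps   # prints: I love GitOps!
docker stop love-test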

Giving Azure Pipelines access to GitHub repositories

Azure will access our GitHub repository to download the code needed to build our project and generate the container image. It will also get the azure-pipelines.yaml file stored in the same repository, which contains the tasks that make up the pipeline.

Assuming you already have an Azure DevOps account and created a project, follow the next steps to give Azure Pipelines access to GitHub repositories:

If your repository is empty, you will start with a blank YAML file where you need to make the modifications, either in the Azure Pipelines editor or by pushing the code directly to GitHub. In our case, we already have the pipeline created as a file in our GitHub repository, so the code shown there is what exists in the repository. We will later explain every step of the pipeline in detail.

Giving Azure Pipelines access to publish container images

This time, we will leverage quay.io to publish our container images.
Assuming you already have:

  • A quay.io account
  • And an ’empty’ quay repository

It is required to:

  • Create a ‘robot’ account
  • And then configure Azure Pipelines service connection

Let’s do it:

Don’t forget the connection name. You will use it later to access the registry.

Azure Pipeline secret variables

The sysdig-cli-scanner requires a Sysdig API token to be able to send the scanning results to your Sysdig account.
This can be found in the Settings -> User profile section of your Sysdig account (see Retrieve the Sysdig API Token for more information), and it is a good idea to store it encrypted using secret variables.
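Secret variables can be created in the pipeline settings UI or, if you prefer the command line, with the Azure DevOps CLI extension. A sketch, where the organization, project, and pipeline names are illustrative:

# Requires the Azure DevOps extension: az extension add --name azure-devops
# Creates a secret pipeline variable named TOKEN (the name used later in the pipeline).
az pipelines variable create \
  --org https://dev.azure.com/my-org \
  --project my-project \
  --pipeline-name my-pipeline \
  --name TOKEN \
  --secret true \
  --value "$SYSDIG_API_TOKEN"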

Azure Pipeline YAML definition for image scanning with Sysdig Secure

Everything is now in place, so let’s dig into the pipeline definition. Basically, the workflow is as follows:

  1. Build the container image and store it locally
  2. Download the sysdig-cli-scanner tool if needed
  3. Perform the scan
  4. Push the container image to a remote registry

The workflow also leverages the Azure Pipeline cache to avoid downloading the binary, the databases, and the container images if they are available.
Let’s go through the different steps in the pipeline.

The versions used in this example are:

  • Azure agent: 2.209.0
  • Image: ubuntu-20.04 (version 20220828.1, included software: https://github.com/actions/runner-images/blob/ubuntu20/20220828.1/images/linux/Ubuntu2004-Readme.md)
  • Runner Image Provisioner: 1.0.0.0-main-20220825-1
  • sysdig-cli-scanner: 1.2.6-rc (commit: 17bb64a)


If using the legacy scanner, the pipeline definition is different. It requires using the sysdiglabs/secure-inline-scan container image. See the example provided in the sysdiglabs/secure-inline-scan-example repository.

Preparation

This is a pretty standard configuration: the pipeline is triggered when pushing to the main branch, and it uses the default container image (ubuntu-latest).

trigger:
- main
pool:
  vmImage: ubuntu-latest

We also define a few variables that will be used in different steps:

  • The CACHE_FOLDER where the assets are saved
  • The Sysdig Secure API endpoint (SYSDIG_SECURE_ENDPOINT)
  • The image details (REGISTRY_HOST, IMAGE_NAME, and IMAGE_TAG)
  • And the name of the connection to the container registry created before (REGISTRY_CONNECTION)

variables:
  CACHE_FOLDER: $(Pipeline.Workspace)/cache/
  SYSDIG_SECURE_ENDPOINT: "https://eu1.app.sysdig.com"
  REGISTRY_HOST: "quay.io"
  IMAGE_NAME: "e_minguez/my-example-app"
  IMAGE_TAG: "latest"
  REGISTRY_CONNECTION: "quayio-e_minguez"

Cache

Setup cache for the binary and assets (such as the vulnerability database) to avoid downloading it every single time:

- task: Cache@2
  inputs:
    key: |
      sysdig-cli-scanner-cache | "$(Agent.OS)" | "$(CACHE_FOLDER)/sysdig-cli-scanner" | "$(CACHE_FOLDER)/latest_version.txt" | "$(CACHE_FOLDER)/db/main.db.meta.json" | "$(CACHE_FOLDER)/scanner-cache/inlineScannerCache.db"
    restoreKeys: |
      sysdig-cli-scanner-cache | "$(Agent.OS)"
      sysdig-cli-scanner-cache
    path: $(CACHE_FOLDER)
  displayName: Cache sysdig-cli-scanner and databases

Set up a cache for the container images to avoid downloading them every single time:

- task: Cache@2
  displayName: Docker cache
  inputs:
    key: 'docker | "$(Agent.OS)" | cache'
    path: $(Pipeline.Workspace)/docker
    cacheHitVar: CACHE_RESTORED

Load the container images from the cache if they are available:

- script: |
    docker load -i $(Pipeline.Workspace)/docker/cache.tar
  displayName: Docker restore
  condition: and(not(canceled()), eq(variables.CACHE_RESTORED, 'true'))

Build

Build the container image using the Docker@2 task:

- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: 'love/Containerfile'
    buildContext: 'love/'
    repository: $(REGISTRY_HOST)/$(IMAGE_NAME)
    tags: $(IMAGE_TAG)
    addPipelineData: false
    addBaseImageData: false

This build process generates one container image per tag. We could build more than one image per commit with different tags, like the commit hash or commit tags, but we are keeping things simple.
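As a sketch of that idea, the tags input of the Docker@2 task accepts multiple values, so the same build could be tagged with both the static tag and the commit hash using the predefined Build.SourceVersion variable:

# Sketch: tag the build with both the static tag and the commit hash.
- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: 'love/Containerfile'
    buildContext: 'love/'
    repository: $(REGISTRY_HOST)/$(IMAGE_NAME)
    tags: |
      $(IMAGE_TAG)
      $(Build.SourceVersion)
    addPipelineData: false
    addBaseImageData: false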

Scan

Download the latest version of the sysdig-cli-scanner binary only if needed:

- script: |
    curl -sLO https://download.sysdig.com/scanning/sysdig-cli-scanner/latest_version.txt
    mkdir -p $(CACHE_FOLDER)/db/
    if [ ! -f $(CACHE_FOLDER)/latest_version.txt ] || [ $(cat ./latest_version.txt) != $(cat $(CACHE_FOLDER)/latest_version.txt) ]; then
      cp ./latest_version.txt $(CACHE_FOLDER)/latest_version.txt
      curl -sL -o $(CACHE_FOLDER)/sysdig-cli-scanner "https://download.sysdig.com/scanning/bin/sysdig-cli-scanner/$(cat $(CACHE_FOLDER)/latest_version.txt)/linux/amd64/sysdig-cli-scanner"
      chmod +x $(CACHE_FOLDER)/sysdig-cli-scanner
    else
      echo "sysdig-cli-scanner latest version already downloaded"
    fi
  displayName: Download the sysdig-cli-scanner if needed

Run the scanner. This is the real deal, and as you see, it is super simple:

- script: |
    $(CACHE_FOLDER)/sysdig-cli-scanner \
      --apiurl $(SYSDIG_SECURE_ENDPOINT) \
      --console-log \
      --dbpath=$(CACHE_FOLDER)/db/ \
      --cachepath=$(CACHE_FOLDER)/scanner-cache/ \
      docker://$(REGISTRY_HOST)/$(IMAGE_NAME):$(IMAGE_TAG)
  displayName: Run the sysdig-cli-scanner
  env:
    SECURE_API_TOKEN: $(TOKEN)

NOTE: We converted the TOKEN secret variable to an environment variable as required by the sysdig-cli-scanner tool.

Final tasks

Push the container image to the repository:

- task: Docker@2
  displayName: Push the container image
  inputs:
    containerRegistry: $(REGISTRY_CONNECTION)
    repository: $(IMAGE_NAME)
    command: push
    tags: $(IMAGE_TAG)
    addPipelineData: false
    addBaseImageData: false

Save the container images for future pipeline executions:

- script: |
    mkdir -p $(Pipeline.Workspace)/docker
    docker save $(docker images -q) -o $(Pipeline.Workspace)/docker/cache.tar
  displayName: Docker save
  condition: and(not(canceled()), or(failed(), ne(variables.CACHE_RESTORED, 'true')))

Running the pipeline

Finally, we are ready to execute the pipeline and see it in action in Azure!
After pushing a commit to our GitHub repository, we can see the pipeline working in the Azure web console. Here are the results of our pipeline:
Successful Azure pipeline output screenshot
As you can see, it worked: the scan finished properly and completed in about a second.

Image scanning results within Sysdig Secure

Back to Sysdig Secure, we can further analyze these results.

The legacy scanner results are slightly different, see the official documentation for more details.

Sysdig vulnerability with 0 issues found screenshot
There weren’t any vulnerabilities found in this basic application (yay!), but if we look at another application, such as https://github.com/sysdiglabs/dummy-vuln-app, we can see some were discovered:
Sysdig vulnerability scan with some issues found screenshot
Sysdig vulnerability scan UI showing some vulnerabilities found screenshot
You can filter the ones that have fixes already and/or are exploitable, so you can focus on the most urgent ones to fix or update:
Sysdig vulnerability scan UI showing filtered vulnerabilities by the ones that has a fix already available screenshot
You can see not just vulnerabilities, but also failed best-practice checks:
Sysdig vulnerability scanning UI showing the list of failed policies

Conclusions

Using Sysdig, we can scan the images we create in Azure DevOps Pipelines in a really straightforward process. Thanks to local scanning capabilities, you can scan your images without having them leave your infrastructure, and even scan images that are only built locally.
By detecting issues earlier in the CI/CD pipeline, image scanning allows DevOps teams to shift security left, improve delivery-to-production time, and raise the confidence of running their images in production.

Fixing potential security issues in your Infrastructure as Code at the source with Sysdig (https://sysdig.com/blog/security-infrastructure-as-code-sysdig/, 14 Sep 2022)
Infrastructure as Code (IaC) is a powerful mechanism to manage your infrastructure, but with great power comes great responsibility. If your IaC files have security problems (for example, a misconfigured permission because of a typo), they will be propagated along your CI/CD pipeline until they are hopefully discovered at runtime, where most security issues are found. What if you could fix potential security issues in your infrastructure at the source?

What is Infrastructure as Code?

IaC is a methodology of treating the building blocks of your infrastructure (virtual machines, networking, containers, etc.) as code using different techniques and tools. This means instead of manually creating your infrastructure, such as VMs, containers, networks, or storage, via your favorite infrastructure provider web interface, you define them as code and then those are created/updated/managed by the tools you choose (terraform, crossplane, pulumi, etc.).

The benefits are huge. You can manage your infrastructure as if it were code (it _is_ code now) and apply your development best practices (automation, testing, traceability, version control, etc.) to your infrastructure assets. There is plenty of information out there around this topic, but the following resource is a good starting point.

Why is securing your Infrastructure as Code assets important as an additional security layer?

Most security tools detect potential vulnerabilities and issues at runtime, which is too late. In order to fix them, either a reactive manual process needs to be performed (for example, directly modifying a parameter in your k8s object with kubectl edit) or ideally, the fix will happen at source and then it will be propagated all along your supply chain. This is what is called “Shift Security Left.” Move from fixing the problem when it is too late to fixing it before it happens. This principle is at the core of the Cloud Native Application Protection Platform (CNAPP) concept.

According to Red Hat’s “2022 state of Kubernetes security report,” 57% of respondents worry the most about the runtime phase of the container life cycle. But wouldn’t it be better if those potential issues could be discovered directly in the code definition instead?
Shifting security left in an application lifecycle graph

Introducing Sysdig Git Infrastructure as Code Scanning

Based on the current “CIS Kubernetes” and “Sysdig K8s Best Practices” benchmarks, Sysdig Secure scans your Infrastructure as Code manifests at the source. Currently, it supports scanning YAML, Kustomize, Helm, or Terraform files representing Kubernetes workloads (stay tuned for future releases), and it integrates seamlessly with your development workflow by showing potential issues directly in pull requests on repositories hosted in GitHub, GitLab, Bitbucket, or Azure DevOps. See more information in the official documentation.

As a proof of concept, let’s see it in action in a small EKS cluster using the example guestbook application as our “Infrastructure as Code,” where we will also apply GitOps practices to manage our application lifecycle with ArgoCD.

This is a proof of concept of a GitOps integration with Sysdig IaC scanning. The versions used in this PoC are ArgoCD 2.4.0, Sysdig Agent 12.8.0, and Sysdig Charts v1.0.3.

NOTE: Want to know more about GitOps? See How to apply security at the source using GitOps.

Preparations

This is what our EKS cluster (created with "eksctl create cluster -n edu --region eu-central-1 --node-type m4.xlarge --nodes 2") looks like:

❯ kubectl get nodes
NAME                                          STATUS   ROLES    AGE    VERSION
ip-10-0-2-210.eu-central-1.compute.internal   Ready    <none>   108s   v1.20.15-eks-99076b2
ip-10-0-3-124.eu-central-1.compute.internal   Ready    <none>   2m4s   v1.20.15-eks-99076b2

Installing Argo CD is as easy as following the instructions in the official documentation:

❯ kubectl create namespace argocd
❯ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.0/manifests/install.yaml

With Argo in place, let’s create our example application. We will leverage the example guestbook Argo CD application already available at https://github.com/argoproj/argocd-example-apps.git by creating our own fork directly at GitHub:

Or, using the GitHub cli tool:

❯ cd ~/git
❯ gh repo fork https://github.com/argoproj/argocd-example-apps.git --clone
✓ Created fork e-minguez/argocd-example-apps
Cloning into 'argocd-example-apps'...
...
From github.com:argoproj/argocd-example-apps
 * [new branch]      master     -> upstream/master
✓ Cloned fork

Now, configure Argo CD to deploy our application in our k8s cluster via the web interface. To access the Argo CD web interface, we are required to get the password (it is randomized at installation time) as well as make it externally available. In this example a port-forward is used:

❯ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
s8ZzBlGRSnPbzmtr
❯ kubectl port-forward svc/argocd-server -n argocd 8080:443 &

Then, we can access the Argo CD UI using a web browser pointing to http://localhost:8080:

Or, using Kubernetes objects directly:

❯ cat << EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-example-app
  namespace: argocd
spec:
  destination:
    namespace: my-example-app
    server: https://kubernetes.default.svc
  project: default
  source:
    path: guestbook/
    repoURL: https://github.com/e-minguez/argocd-example-apps.git
    targetRevision: HEAD
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true
EOF

After a few moments, Argo will deploy your application from your git repository, including all the objects:

Screenshot of Argo CD UI showing an example application successfully synced
❯ kubectl get all -n my-example-app
NAME                                READY   STATUS    RESTARTS   AGE
pod/guestbook-ui-85985d774c-n7dzw   1/1     Running   0          14m
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/guestbook-ui   ClusterIP   172.20.217.82   <none>        80/TCP    14m
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/guestbook-ui   1/1     1            1           14m
NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/guestbook-ui-85985d774c   1         1         1       14m

Success!

The application defined as code is already running in the Kubernetes cluster and the deployment has been automated using the GitOps practices. However, we didn’t take into consideration any security aspect of it. Is the definition of my application secure enough? Did we miss anything? Let’s see what we can find out.

Configuring Sysdig Secure to scan our new shiny repository is as easy as adding a new git repository integration:

Pull request policy evaluation

Now, let’s see it in action. Create a pull request with some code changes, for example to increase the number of replicas from 1 to 2:

Or, by using the cli:

❯ git switch -c my-first-pr
Switched to a new branch 'my-first-pr'
❯ sed -i -e 's/  replicas: 1/  replicas: 2/g' guestbook/guestbook-ui-deployment.yaml
❯ git add guestbook/guestbook-ui-deployment.yaml
❯ git commit -m 'Added more replicas'
[my-first-pr c67695e] Added more replicas
 1 file changed, 1 insertion(+), 1 deletion(-)
❯ git push
Enumerating objects: 7, done.
…

Almost immediately, Sysdig Secure will perform a scan of the repository folder and report any potential issues:

Screenshot of Sysdig's pull request policy evaluation in action

Here, you can see some potential issues based on the CIS Kubernetes V1.6 benchmark as well as the Sysdig Kubernetes best practices ordered by severity. An example is the “Container with writable root file system” one located in the deployment file of our example application.

You can apply those recommendations by modifying your source code yourself, but why not let Sysdig Secure do it for you?

Remediating the issues at source automagically

Let’s deploy the Sysdig Secure agent in our cluster via Helm so it can inspect the objects running on it, adding a couple of flags to enable the KSPM features.

❯ helm repo add sysdig https://charts.sysdig.com
❯ helm repo update
❯ export SYSDIG_ACCESS_KEY="XXX"
❯ export SAAS_REGION="eu1"
❯ export CLUSTER_NAME="mycluster"
❯ export COLLECTOR_ENDPOINT="ingest-eu1.app.sysdig.com"
❯ export API_ENDPOINT="eu1.app.sysdig.com"
❯ helm install sysdig sysdig/sysdig-deploy \
    --namespace sysdig-agent \
    --create-namespace \
    --set global.sysdig.accessKey=${SYSDIG_ACCESS_KEY} \
    --set global.sysdig.region=${SAAS_REGION} \
    --set global.clusterConfig.name=${CLUSTER_NAME} \
    --set agent.sysdig.settings.collector=${COLLECTOR_ENDPOINT} \
    --set nodeAnalyzer.nodeAnalyzer.apiEndpoint=${API_ENDPOINT} \
    --set global.kspm.deploy=true
# after a few moments
❯ kubectl get po -n sysdig-agent
NAME                                    READY   STATUS    RESTARTS   AGE
nodeanalyzer-node-analyzer-bw5t5        4/4     Running   0          9m14s
nodeanalyzer-node-analyzer-ccs8d        4/4     Running   0          9m5s
sysdig-agent-8sshw                      1/1     Running   0          5m4s
sysdig-agent-smm4c                      1/1     Running   0          9m16s
sysdig-kspmcollector-5f65cb87bb-fs78l   1/1     Running   0          9m22s

As an exercise for the reader, this step can also be achieved the GitOps way, using Argo CD with the Helm chart.

After a few minutes, the agent is deployed and has reported back the Kubernetes status in the new Posture -> “Actionable Compliance” section where the security requirements can be observed:

Screenshot of Sysdig's compliance view

Let’s fix the “Container Image Pull” policy control (see the official documentation for the detailed list of policy controls available).

There, you can see the remediation proposal, a Kubernetes patch, and a “Setup Pull Request” section. But will it really open a pull request with the fix?

Indeed! Sysdig Secure is now also able to compare the source and the runtime status of your Kubernetes objects and can even fix it for you, from source to run.

There’s no need for complex operations or manual fixes that create snowflakes. Instead, fix it at the source.
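For reference, the remediation for the “Container Image Pull” control boils down to a small patch along these lines. This is a hedged illustration of the kind of patch proposed; the exact patch Sysdig generates may differ:

# Illustrative (assumed) remediation patch for the "Container Image Pull"
# control: set an explicit imagePullPolicy on the workload's container.
spec:
  template:
    spec:
      containers:
        - name: guestbook-ui
          imagePullPolicy: Always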

Final thoughts

Adding IaC scanning, security, and compliance mechanisms to your toolbox will help your organization find and fix potential security issues directly at the source (shifting security left) of your supply chain. Sysdig Secure can even create the remediation directly for you!

Get started now with a free trial and see for yourself.

How to apply security at the source using GitOps (https://sysdig.com/blog/gitops-iac-security-source/, 26 Jul 2022)
If your GitOps deployment model has security issues (for example, a misconfigured permission because of a typo), these will be propagated until they are hopefully discovered at runtime, where most security events are found.

What if you can fix potential security issues in your infrastructure at the source?

Let’s start with the basics.

What is Git?

Git is an open source distributed version control system. It tracks changes made in files (usually text files such as source code) allowing and fostering a collaborative work model. It is the de facto standard in version control systems nowadays.

You can have your own git repo locally on your laptop, host it on your own server, or use some provider such as GitLab or GitHub.

There are different “flows” for managing a repository (git-flow, github-flow, etc.), but a basic example of how git is used goes something like this: changes to files are “committed” by users, who “fork” the repository and make the changes in a “branch”.

Then, the user creates a request (either “pull request”, “merge request”, or just “send a patch”) to include those changes in the repository.

After that, usually a discussion happens between the “owner” and the user creating the request, and if everything goes fine the change is accepted and added to the repository.
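For instance, a minimal fork-branch-pull-request round trip with the GitHub CLI looks roughly like this (the repository name and branch are illustrative):

# Fork and clone the repository you want to contribute to.
gh repo fork https://github.com/org/project.git --clone
cd project

# Make the change in a branch and push it.
git switch -c fix-typo
git commit -am "Fix typo in configuration"
git push -u origin fix-typo

# Open the pull request for the owner to review.
gh pr create --title "Fix typo" --body "Small configuration fix"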

NOTE: If you want to know more, here is much more detailed information about the git pull request mechanism.

To see a real world example, just browse your favorite open source GitLab or GitHub repository and browse the Pull Request (or Merge Request) tab (or see this for a fun one). You can see the proposed changes, comments, labels, who proposed the changes, tools running validations against the proposed changes, notifications sent to people watching the repository, etc.

What is GitOps?

To put it simply, GitOps is just a methodology that uses a git repository as the single source of truth for your software assets so you can leverage the git deployment model (pull requests, rollbacks, approvals, etc.) to your software.

There are books (The Path to GitOps, GitOps and Kubernetes, or GitOps Cloud-native Continuous Deployment), whitepapers, and more blog posts than we can count, but let us elaborate on the GitOps purpose by taking a quick look at how things have evolved in the last few years.

Before the cloud, adding a new server to host your application took weeks. You had to ask for permissions, purchase it, and perform a lot of manual tasks. Then, virtualization made things much easier. You request a virtual machine with some specs and after a few minutes, you have access to it.

Then, the cloud. Requesting servers, network, storage, and even databases, messaging queues, containers, machine learning stuff, serverless… is just an API call away! You request it and a few seconds later, you get it, just like that. You just need to pay for what you use. This also means the infrastructure can be managed as code performing API calls… and where do you store your code? In a git repository (or any other version control system).

The GitOps term was coined back in 2017 by Weaveworks, and paraphrasing OpenGitOps, a GitOps system is based on the following principles:

  • Declarative: it defines “what.”
  • Versioned and immutable: hence “git.”
  • Pulled automatically: an agent observes the desired state and the changes happening in the code.
  • Continuously reconciled: did someone mention Kubernetes?

The essence of the GitOps methodology is basically one or more Kubernetes controllers (or agents) running on your cluster that observe the Kubernetes objects running on top of it (defined by a CustomResource), comparing the current state against the state specified in the Git repo. If they don’t match, the controller remediates the application by applying the manifests found in the repository.
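Stripped of all the details, that loop could be sketched in a few lines of Go. This is a deliberately naive illustration of the pattern, not how Flux or Argo CD are actually implemented:

// A toy reconciliation loop: poll the desired state from Git, compare it
// with what is running, and re-apply the manifests on drift.
package main

import (
    "log"
    "reflect"
    "time"
)

// State is simplified to a map of object name -> manifest content.
type State map[string]string

func desiredState() State { /* clone/pull the Git repository */ return State{} }
func currentState() State { /* query the Kubernetes API */ return State{} }
func apply(s State)       { /* apply the manifests to the cluster */ }

func main() {
    for {
        desired := desiredState()
        if !reflect.DeepEqual(desired, currentState()) {
            log.Println("drift detected, reconciling")
            apply(desired)
        }
        time.Sleep(30 * time.Second) // e.g., the spec.interval in Flux
    }
}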

NOTE: There are slightly different approaches to GitOps, for example, push vs. pull, how to handle the configuration management, etc. Those are advanced topics, but for now, let’s stick to the basics.

The following diagram shows a simplified GitOps system:

GitOps diagram showing a developer sending changes, the GitOps process and the agent deployed on Kubernetes observing the changes
  • A code change is submitted to the Git repository by the user.
  • Then, a process is triggered on the repository to incorporate those changes into the application itself, including running automation tools against that new code to validate it.
  • Once everything is in place, the GitOps agent running in the Kubernetes cluster, which is observing the repository, performs the reconciliation between the desired state (the code in the repository) and the current state (the objects running on the Kubernetes cluster itself).

Being based on Git means a frictionless experience for developers. They don’t need to worry about a new tool to interact with; they simply apply the same practices used to manage the code in the Git repository.

Speaking about GitOps tools, there are a few already available, including open source tools such as Flux or ArgoCD, both CNCF incubating projects.

To get a feeling on what an application definition looks like via GitOps, this is an example of a simple application (stored in a GitHub repository) managed by Flux or ArgoCD.

With Flux:

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: my-example-app
  namespace: hello-world
spec:
  interval: 30s
  ref:
    branch: master
  url: https://github.com/xxx/my-example-apps.git
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: my-example-app
  namespace: hello-world
spec:
  interval: 5m0s
  path: ./myapp
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-example-app
  targetNamespace: hello-world

With ArgoCD:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-example-app
  namespace: hello-world
spec:
  destination:
    namespace: my-example-app
    server: https://kubernetes.default.svc
  project: default
  source:
    path: myapp/
    repoURL: https://github.com/xxx/my-example-apps.git
    targetRevision: HEAD
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true

Both reference the Git repository where the application manifests (Deployments, Namespaces, and a few other objects) are stored, plus a few more details.

GitOps vs. IaC

Infrastructure as Code is a methodology of treating the building blocks of your infrastructure as code using different techniques and tools. This means that instead of manually creating your infrastructure, such as VMs, containers, networks, or storage, via your favorite infrastructure provider’s web interface, you define it as code, and those assets are then created/updated/managed by the tools you choose, such as terraform, crossplane, or pulumi, among others.

The benefits are huge. You can manage your infrastructure as if it was code (it is code now) and leverage your development best practices (automation, testing, traceability, versioning control, etc.) to your infrastructure assets. In fact, there is a trend of using “Infrastructure as Software” as a term instead because it is much more than just code.

There is tons of information out there on this topic, but the following resource is a good starting point.

As you have probably figured, GitOps leverages Infrastructure as Code as the declarative model to define the infrastructure. In fact, IaC is one of the GitOps cornerstones! But GitOps is much more, as IaC alone doesn’t mandate the rest of the GitOps principles.

GitOps vs. DevOps

There are lots of definitions of the “DevOps” term. It depends on who you ask, but to put it simply: “DevOps is the combination of practices and tools to build and deliver software, reducing friction and at high speed.”

DevOps methodologies can leverage GitOps, as GitOps provides a framework that matches DevOps practices, but it is not strictly necessary.

What about NoOps?

NoOps was coined by Forrester in 2011 and it is a radical approach to handling operations where the IT environment is abstracted and automated to the point there is no need to manage it manually.

GitOps helps to reduce the manual changes by remediating those with the desired state in the Git repository, but applying a real NoOps to the whole IT environment is an aspirational goal rather than a real goal as of today.

Is GitOps just for Kubernetes?

No. Kubernetes, the controller pattern, and the declarative model used to define Kubernetes objects are a perfect match for the GitOps methodology, but that doesn’t mean GitOps cannot be applied without Kubernetes. There are a few challenges when using GitOps outside of Kubernetes, such as handling idempotency, the deletion/creation of assets, secrets management, etc., but the GitOps principles can be applied without Kubernetes (with a little bit of creativity).

GitOps & Security

Let’s talk about the security aspects now. Most security tools detect potential vulnerabilities and issues at runtime, which is too late. In order to fix them, either a reactive manual process needs to be performed (e.g., directly modifying a parameter in your k8s object with kubectl edit) or, ideally, the fix happens at the source and is propagated all along your supply chain. This is what is called “Shift Security Left”: moving from fixing the problem when it is too late to fixing it before it happens.

This doesn’t mean every security issue can be fixed at the source, but adding a security layer directly at the source can prevent some issues.

First of all, the general security recommendations apply.

  • Reduce the attack surface
  • Encrypt the secrets (using External Secrets or Sealed Secrets; see the sealing sketch after this list)
  • Network segmentation
  • RBAC
  • Keep software up to date
  • Enforce least privilege
  • Monitor & measure
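To illustrate the secrets point, this is roughly what sealing a secret with Sealed Secrets looks like, so that only the in-cluster controller can decrypt it. A sketch assuming kubeseal and the controller are already installed, with an illustrative secret name:

# Create the Secret manifest locally (never commit this file as-is).
kubectl create secret generic db-creds \
  --from-literal=password=hunter2 \
  --dry-run=client -o yaml > secret.yaml

# Seal it; the result is safe to store in the Git repository.
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
rm secret.yaml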

Let’s see a few scenarios where the GitOps methodology can improve your security in general:

  • Avoid/Refuse manual changes (avoiding Drift). The Git repository is the source of truth. If you try to modify the application definition, the GitOps tool will revert those changes by applying the version stored in the Git repository.
GitOps UI showing the differences between the object running on the cluster and the definition stored in the Git repository
GitOps UI showing the remediation performed because of the differences
  • Rollback changes. Imagine you introduced a potential security issue in a specific commit by modifying some parameter in your application deployment. Leveraging the Git capabilities, you can roll back the change directly at the source (a single git revert, as sketched after this list) and the GitOps tool will redeploy your application without user interaction.
GitOps UI showing the reverted changes
  • Fast response. If you find you are using a vulnerable container image in your application (e.g., MariaDB), you just need to create a PR to update the tag in the deployment file, and the GitOps tool will use the new tag in a new deployment.
Detail of the deployment performed by the GitOps tool shown in the GitOps UI
  • Traceability. Using Git capabilities, you can easily check when a file was changed, the changes themselves and the user that promoted the changes. You’ve got an audit log for free.
GitHub UI showing a log of the committed changes
  • Disaster recovery. Again, the Git repository is the source of truth. If you need to redeploy your application because something happened, the definition is there (of course you need to have a disaster recovery plan for other stuff such as the data itself).
  • Access Control. You can apply different permissions to different users on your Git repositories and even policies such as “only merge a change after two positive reviews”.
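The rollback mentioned above is plain Git. A sketch, with a placeholder commit hash:

# Revert the commit that introduced the issue; on the next sync, the
# GitOps agent redeploys the previous (safe) version automatically.
git revert <offending-commit-sha>
git push origin main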

Those benefits are good enough to justify using GitOps methodologies to improve your security posture, and they come out of the box, but GitOps is a combination of a few more things, so we can do much more. GitHub, GitLab, and other Git repository providers allow you to run actions or pipelines based on the changes you perform in your Git repository, including from a pull request, so the possibilities are endless. A few examples:

  • Linting. The definition of the application is code, so why not check it for wrong syntax, missing parameters, and more? There are tools (such as the megalinter) that can be executed against the changes you’ve made, so you avoid surprises later on.
Megalinter output showing the tests executed, alerts and details
GitHub UI showing a GitHub action log deploying a Kind Kubernetes cluster
  • Vulnerability scanning. By checking the container images you are using for vulnerabilities before they are deployed in your environment.
GitHub UI showing a detailed view of policies checking the code directly in the pull-request
  • Policy-as-code. Leveraging OPA, you can even run built-in or custom policies against your manifests to check for potential issues.
GitHub UI showing the output of a GitHub action running OPA policies against the change performed

Final thoughts

The GitOps methodology brings a few improvements to the deployment model and security benefits to the table without having to add another tool. It brings you one step closer to a unified Cloud Native Application Protection Platform (CNAPP).

It improves the security posture by adding a “shift left” layer directly to the source code and thanks to the flexibility of the pull-request model, you can easily add extra security checks without affecting or modifying the runtime.
