In this blog post you’ll learn how to set up image vulnerability scanning for AWS CodePipeline and AWS CodeBuild using the Sysdig Platform.
AWS provides several tools for DevOps teams: CodeCommit for version control, CodeBuild for building and testing code, and CodeDeploy for automatic code deployment. On top of all of these sits CodePipeline, which allows teams to visualize and automate the different stages.
Image Scanning for AWS CodePipeline raises the confidence that DevOps teams have in the security of their deployments by detecting known vulnerabilities and validating container build configuration early in their pipelines. By detecting those issues before the images are published to a container registry (like Amazon Elastic Container Registry) or deployed in production, fixes can be applied faster, improving time to production.
The process of scanning images with Sysdig includes:
Analyzing the Dockerfile and image metadata to detect security-sensitive configurations (see the example Dockerfile sketch after this list), such as:
- Running as a privileged (root) user, without the `USER` command.
- Using images tagged as `latest`, rather than specific versions with full traceability.
- Exposing insecure ports.
- Not using the `HEALTHCHECK` instruction.
- Many other configurable Dockerfile checks.
Checking software packages against well-known vulnerability databases:
- OS packages.
- Third-party libraries that our application makes use of, such as Python pip, Ruby gems, Node.js npm packages, Java jars, etc.
And user-defined policies or compliance requirements that you want to check for every image, such as:
- Software package blacklists.
- Base image whitelists.
- Image scanning parameters that match the NIST 800-190 or PCI compliance lists.
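To make the Dockerfile checks above concrete, here is a minimal, hypothetical sketch of a Dockerfile that would pass them; the base image, port, and user name are placeholders rather than the demo application's real values, and it is wrapped in a shell heredoc so you can recreate it in a scratch directory:

```bash
# Hypothetical example: a minimal Dockerfile avoiding the configurations flagged above.
cat > Dockerfile <<'EOF'
# Pin a specific tag instead of "latest" for full traceability
FROM python:3.9-slim

# Expose only the port the application needs (not insecure ports such as 22)
EXPOSE 8080

# Declare a HEALTHCHECK so a broken container can be detected
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/')" || exit 1

# Create and switch to a non-root user instead of running as root
RUN useradd --create-home appuser
USER appuser

CMD ["python", "-m", "http.server", "8080"]
EOF

docker build -t my-app:1.0.0 .
```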
Using Sysdig’s local scanning, you won’t lose control over your images as they don’t need to be sent to the backend or exposed to a staging repository. After the image is built, it will be scanned in the same node where you run your pipeline, and only the scanning results will be sent to the Sysdig Secure backend, whether it be our SaaS or your self-hosted instance.
Planning AWS CodePipeline and CodeBuild for vulnerability scanning
AWS provides a CI/CD DevOps platform where you can create pipelines that execute automatically when an event of your choosing occurs; the most common trigger is a new commit pushed to a code repository.
The AWS CodePipeline in this article will be triggered after a commit is pushed to a GitHub repository. The pipeline will then use AWS CodeBuild to build the repository code into a container image and push it to a registry, like ECR, where it can be deployed into Kubernetes.
Building the AWS CodePipeline and CodeBuild infrastructure
For simplicity, all the infrastructure needed for this example is defined as code and can be recreated using the Terraform manifests available in our inline-scan-aws-infra repository.
The Terraform manifests define one AWS CodeBuild project that we’ll later set up for building and scanning our image.
They also define one AWS CodePipeline configured to:
- Retrieve the code from GitHub and save it in an AWS S3 Bucket.
- Invoke the execution of the AWS CodeBuild project.
And finally, they’ll store the credentials for the image registry and the Sysdig Secure API token in the AWS Parameter Store, which is the recommended way to handle secrets to avoid leaks.
Create those by running:

```
terraform apply
```
Check further configuration details in the repository documentation.
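As a rough sketch, a full run from the root of that repository looks like the following; the `-var` names below are illustrative placeholders, so check the repository documentation for the actual input variables it expects:

```bash
# Run from the root of the inline-scan-aws-infra repository.
# The -var names are hypothetical; see the repository docs for the real inputs.
terraform init
terraform plan
terraform apply \
  -var="docker_login_user=<your DockerHub user>" \
  -var="docker_login_password=<your DockerHub password>" \
  -var="sysdig_secure_api_token=<your Sysdig Secure API token>"
```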
AWS CodeBuild YAML definition for Inline Scanning with Sysdig Secure
Once the infrastructure is created, we can start setting up the building process. The steps for our build are:
- Install and start Docker.
- Log in to DockerHub and pull Sysdig's image for inline scanning.
- Build the image from the downloaded code.
- Scan the new image and send the results to Sysdig Secure.
- Publish the image to the registry if it passes the scan.
AWS CodeBuild will perform the steps defined in a buildspec.yml file located at the root of our code repository:

```yaml
version: 0.2

env:
  variables:
    IMAGE_NAME: "sysdiglabs/dummy-vuln-app:latest"
    SCAN_IMAGE_NAME: "sysdiglabs/secure-inline-scan:latest"
  parameter-store:
    DOCKER_LOGIN_USER: dockerLoginUser
    DOCKER_LOGIN_PASSWORD: dockerLoginPassword
    SYSDIG_SECURE_TOKEN: sysdigApiToken

phases:
  install:
    runtime-versions:
      docker: 18
    commands:
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
      - docker login -u $DOCKER_LOGIN_USER -p $DOCKER_LOGIN_PASSWORD
      - docker pull $SCAN_IMAGE_NAME
  build:
    commands:
      - docker build . -t $IMAGE_NAME
  post_build:
    commands:
      - docker run --rm -v /var/run/docker.sock:/var/run/docker.sock $SCAN_IMAGE_NAME /bin/inline_scan analyze -s https://secure.sysdig.com -k $SYSDIG_SECURE_TOKEN $IMAGE_NAME
      - docker push $IMAGE_NAME
```
Let’s break it down.
```yaml
env:
  variables:
    IMAGE_NAME: "sysdiglabs/dummy-vuln-app:latest"
    SCAN_IMAGE_NAME: "sysdiglabs/secure-inline-scan:latest"
  parameter-store:
    DOCKER_LOGIN_USER: dockerLoginUser
    DOCKER_LOGIN_PASSWORD: dockerLoginPassword
    SYSDIG_SECURE_TOKEN: sysdigApiToken
```
First, we define the environment variables for our pipeline:
- `IMAGE_NAME`: the name of the image we are going to build and scan.
- `SCAN_IMAGE_NAME`: the name of the image that contains the inline scanning script.
- `DOCKER_LOGIN_USER` and `DOCKER_LOGIN_PASSWORD`: the DockerHub credentials we need to push the image to the repository. As these credentials must not be publicly available, we save them in the AWS Parameter Store; these parameters were created by Terraform in the previous section.
- `SYSDIG_SECURE_TOKEN`: the Sysdig Secure API token used to send the scan results. This token must be stored as a credential as well, and it is referenced from the Parameter Store; the parameter was created by the Terraform manifests.
We save the sensitive credentials in the AWS Systems Manager > Parameter Store using the Terraform manifests.
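Terraform already creates these entries, but for reference, the equivalent manual step with the AWS CLI would look roughly like this (the parameter names match the ones referenced from the buildspec above):

```bash
# Shown for illustration only: Terraform already creates these parameters.
# SecureString keeps the values encrypted at rest in the Parameter Store.
aws ssm put-parameter --name dockerLoginUser     --type SecureString --value "<your DockerHub user>"
aws ssm put-parameter --name dockerLoginPassword --type SecureString --value "<your DockerHub password>"
aws ssm put-parameter --name sysdigApiToken      --type SecureString --value "<your Sysdig Secure API token>"
```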
Now we define the actual build, the steps to execute when there’s a new commit:
```yaml
install:
  runtime-versions:
    docker: 18
  commands:
    - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
    - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
```
The pipeline will ensure we have Docker 18 installed in the runner. The runner will start Docker as a daemon and wait until it’s running.
```yaml
pre_build:
  commands:
    - docker login -u $DOCKER_LOGIN_USER -p $DOCKER_LOGIN_PASSWORD
    - docker pull $SCAN_IMAGE_NAME
```
Before building the image, the runner will log into the Docker registry and pull the image containing the inline scanning script.
```yaml
build:
  commands:
    - docker build . -t $IMAGE_NAME
```
Then, the runner will build the image using the Dockerfile in the git repository.
```yaml
post_build:
  commands:
    - docker run --rm -v /var/run/docker.sock:/var/run/docker.sock $SCAN_IMAGE_NAME /bin/inline_scan analyze -s https://secure.sysdig.com -k $SYSDIG_SECURE_TOKEN $IMAGE_NAME
    - docker push $IMAGE_NAME
```
The scanning tool will perform the scan within the runner in the same machine where the image was built, all without the container image ever leaving your AWS account.
Within Sysdig Secure, we can configure multiple scanning policies with different checks to define what is considered dangerous and should be warned or blocked.
Assigning scanning policies for our image is as easy as linking the registry, repository, or tag to a policy.
If the `inline_scan` script finds any `STOP` condition from our scanning policy, it will abort with a return code different from `0`, the pipeline execution will be considered FAILED, and the pipeline will stop at that point.
However, if everything is correct and the image passes the scan, the pipeline will continue and we will push the image to the Docker Registry.
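You can reproduce this behavior outside the pipeline by running the same scan command locally and checking its exit code; a non-zero code is what makes the CodeBuild phase, and therefore the whole pipeline, fail:

```bash
# Run the same inline scan locally and inspect the exit code.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  sysdiglabs/secure-inline-scan:latest \
  /bin/inline_scan analyze -s https://secure.sysdig.com -k "$SYSDIG_SECURE_TOKEN" \
  sysdiglabs/dummy-vuln-app:latest

# 0 means the image passed the policy evaluation; any other value means a STOP condition was found.
echo "Scan exit code: $?"
```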
Let’s execute the pipeline and see it in action!
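Pushing a new commit to the GitHub repository is enough to trigger it. Alternatively, you can start an execution by hand with the AWS CLI; the pipeline name below is a placeholder, use the one created by the Terraform manifests:

```bash
# Manually start a pipeline execution (replace the name with your pipeline's).
aws codepipeline start-pipeline-execution --name <your-pipeline-name>
```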
Here are the results of the pipeline. Uh-oh! Something went wrong:
If we navigate into Details and open the Execution details, we will see the CodeBuild execution that failed.
Inside the Phase details tab we can see how it failed in the `POST_BUILD` stage. We can also identify the last command executed, which is the one that generated the error.
Let’s check the full information in Sysdig Secure.
Image scanning results in Sysdig Secure
The image is scanned on the local machine, but the results are sent to Sysdig Secure for further analysis, and we can download the full report:
We can see that:
- The Dockerfile exposes port 22.
- There are some non-OS packages for Python that contain HIGH and CRITICAL-rated vulnerabilities.
- The OS package linux-libc-dev contains some HIGH-rated vulnerabilities.
Exposed port 22 is a STOP condition in the scan results.
Operating system vulnerabilities found in the image.
Non-operating system vulnerabilities, from libraries that our software makes use of.
Sysdig Secure provides more features that enhance the security of your DevOps workflow:
- Trigger alerts when a new image is pushed to the repository, either with a new tag or by overwriting an existing one.
- Re-evaluate existing scan results when policies change and notify you if the outcome changes.
- Notify you when a new CVE affects any existing image.
Conclusions
Image Scanning for AWS CodePipeline allows DevOps teams to detect problems earlier in the CI/CD pipeline, while the context is still fresh; this improves security, speeds up delivery, and raises confidence in running their images in production.
Using Sysdig, we can scan the images we build in AWS CodePipeline without them leaving our infrastructure and without needing a staging registry; this also makes it possible to run multiple scans in parallel and improve throughput.