KubeArmor is a runtime security enforcement system for containers and nodes. It uses security policies (defined as Kubernetes Custom Resources like KSP, HSP, and CSP) to define allowed, audited, or blocked actions for workloads. The system monitors system activity using kernel technologies such as eBPF and enforces the defined policies by integrating with the underlying operating system's security modules like AppArmor, SELinux, or BPF-LSM, sending security alerts and telemetry through a log feeder.
KubeArmor leverages Linux Security Modules (LSMs) such as AppArmor, SELinux, or BPF-LSM to enforce the user-specified policies. KubeArmor generates rich alerts/telemetry events with container/pod/namespace identities by leveraging eBPF.
| Harden Infrastructure | Least Permissive Access | Application Behavior | Deployment Models |
|---|---|---|---|
| Protect critical paths such as cert bundles | Process Whitelisting | Process execs, File System accesses | Kubernetes Deployment |
| MITRE, STIGs, CIS based rules | Network Whitelisting | Service binds, Ingress, Egress connections | Containerized Deployment |
| Restrict access to raw DB table | Control access to sensitive assets | Sensitive system call profiling | VM/Bare-Metal Deployment |
- Security Policy for Pods/Containers (KSP)
- Cluster-level Security Policy for Pods/Containers (CSP)
- Security Policy for Hosts/Nodes (HSP)
Minutes:
Calendar invite:
KubeArmor uses Tracee's system call utility functions.
KubeArmor is a project of the Cloud Native Computing Foundation.
The KubeArmor roadmap is tracked via the KubeArmor Projects board.
This recipe explains how to use KubeArmor directly on a VM/Bare-Metal machine; we tested the following steps on Ubuntu hosts. The recipe installs kubearmor as a systemd service and the karmor CLI tool to manage policies and show alerts/telemetry.
Install KubeArmor (VER is the kubearmor release version)
Note that the above command doesn't install the recommended packages, as we ship object files along with the package. If your kernel doesn't have BTF support, consider removing the --no-install-recommends flag.
Check the status of KubeArmor using sudo systemctl status kubearmor, or use sudo journalctl -u kubearmor -f to continuously monitor KubeArmor logs.
The following policy denies execution of the sleep binary on the host:
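The policy YAML is not reproduced on this page, but a minimal KubeArmorHostPolicy along these lines might look like the sketch below. The policy name, severity, and the kubearmor.io/hostname node selector are illustrative assumptions; adjust them to your host.

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-block-sleep          # illustrative name
spec:
  nodeSelector:
    matchLabels:
      kubearmor.io/hostname: "*" # assumed selector; match your host's label
  severity: 5
  process:
    matchPaths:
    - path: /usr/bin/sleep       # deny executing the sleep binary
  action: Block
```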
Save the above policy to hostpolicy.yaml and apply it:
Now if you run the sleep command, the process will be denied execution.
Note that sleep may not be blocked if you run it in the same terminal where you applied the above policy. In that case, open a new terminal and run sleep again to see whether the command is blocked.
KubeArmor supports the following types of workloads:
The following distributions are tested for VM/Bare-Metal based installations:
Note: Full = supports both enforcement and observability; Partial = supports only observability.
It would be very much appreciated if you could test KubeArmor on a platform not listed above that you have access to. Once tested, you can update this document and raise a PR.
Download the latest release of KubeArmor.
K8s orchestrated: Workloads deployed as k8s orchestrated containers. In this case, KubeArmor is deployed as a k8s DaemonSet. Note that KubeArmor supports policy enforcement on both k8s-pods (KSP) as well as k8s-nodes (HSP).
Containerized: Workloads that are containerized but not k8s orchestrated are supported. KubeArmor installed in systemd mode can be used to protect such workloads.
VM/Bare-Metals: Workloads deployed on Virtual Machines or Bare Metal, i.e., workloads directly operating as host/system processes. In this case, KubeArmor is deployed in systemd mode.
Please approach the KubeArmor community on Slack or raise a GitHub issue to express interest in adding the support.
Provider
K8s engine
OS Image
Arch
Audit Rules
Blocking Rules
LSM Enforcer
Remarks
Onprem
x86_64, ARM
x86_64
Ubuntu >= 16.04
x86_64
Microsoft
Ubuntu >= 18.04
x86_64
Oracle
x86_64
IBM
Ubuntu
x86_64
Talos
Talos
x86_64
AWS
Amazon Linux 2 (kernel >=5.8)
x86_64
AWS
Ubuntu
x86_64
AppArmor
AWS
x86_64
AWS
x86_64
AWS
Ubuntu
ARM
AppArmor
AWS
Amazon Linux 2
ARM
SELinux
RedHat
x86_64
SELinux
RedHat
x86_64
RedHat
x86_64
Rancher
x86_64
Rancher
x86_64
Oracle
ARM
SELinux
VMware
TBD
x86_64
Mirantis
Ubuntu>=20.04
x86_64
AppArmor
Digital Ocean
Debian GNU/Linux 11 (bullseye)
x86_64
Alibaba Cloud
Alibaba Cloud Linux 3.2104 LTS
x86_64
- SUSE, SUSE Enterprise 15: Full / Full
- Debian: Full / Full
- Ubuntu, 18.04 / 16.04 / 20.04: Full / Full
- RedHat / CentOS, RHEL / CentOS <= 8.4: Full / Partial
- RedHat / CentOS, RHEL / CentOS >= 8.5: Full / Full
- Fedora, Fedora 34 / 35: Full / Full
- Rocky Linux, Rocky Linux >= 8.5: Full / Full
- AWS, Amazon Linux 2022: Full / Full
- AWS, Amazon Linux 2023: Full / Full
- RaspberryPi (ARM), Debian: Full / Full
- ArchLinux, ArchLinux-6.2.1: Full / Full
- Alibaba, Alibaba Cloud Linux 3.2104 LTS 64 bit: Full / Full
Welcome to the KubeArmor tutorial! In this first chapter, we'll dive into one of the most fundamental concepts in KubeArmor: Security Policies. Think of these policies as the instruction manuals or rulebooks you give to KubeArmor, telling it exactly how applications and system processes should behave.
In any secure system, you need rules that define what is allowed and what isn't. In Kubernetes and Linux, these rules can get complicated, dealing with things like which files a program can access, which network connections it can make, or which powerful system features (capabilities) it's allowed to use.
KubeArmor simplifies this by letting you define these rules using clear, easy-to-understand Security Policies. You write these policies in a standard format that Kubernetes understands (YAML files, using something called Custom Resource Definitions or CRDs), and KubeArmor takes care of translating them into the low-level security configurations needed by the underlying system.
These policies are powerful because they allow you to specify security rules for different parts of your system:
KubeArmorPolicy (KSP): For individual Containers or Pods running in your Kubernetes cluster.
KubeArmorHostPolicy (HSP): For the Nodes (the underlying Linux servers) where your containers are running. This is useful for protecting the host system itself, or even applications running directly on the node outside of Kubernetes.
KubeArmorClusterPolicy (CSP): For applying policies across multiple Containers/Pods based on namespaces or labels cluster-wide.
Imagine you have a web server application running in a container. This application should only serve web pages and access its configuration files. It shouldn't be trying to access sensitive system files like /etc/shadow
or connecting to unusual network addresses.
Without security policies, if your web server container gets compromised, an attacker might use it to access or modify sensitive data, or even try to attack other parts of your cluster or network.
KubeArmor policies help prevent this by enforcing the principle of least privilege. This means you only grant your applications and host processes the minimum permissions they need to function correctly.
Use Case Example: Let's say you have a simple application container that should never be allowed to read the /etc/passwd
file inside the container. We can use a KubeArmor Policy (KSP) to enforce this rule.
KubeArmor policies are defined as YAML files that follow a specific structure. This structure includes:
Metadata: Basic information about the policy, like its name. For KSPs, you also specify the namespace it belongs to. HSPs and CSPs are cluster-scoped, meaning they don't belong to a specific namespace.
Selector: This is how you tell KubeArmor which containers, pods, or nodes the policy should apply to. You typically use Kubernetes labels for this.
Spec (Specification): This is the core of the policy where you define the actual security rules (what actions are restricted) and the desired outcome (Allow, Audit, or Block).
Let's look at a simplified structure:
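The original YAML isn't reproduced on this page, but based on the explanation that follows, a minimal KSP for this use case might look like the sketch below (the apiVersion shown is the standard KubeArmor CRD group; everything else comes from the explanation):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-etc-passwd-read
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-web-app        # apply to pods with this label
  file:
    matchPaths:
    - path: /etc/passwd      # the file we want to protect
  action: Block              # deny any access attempt
```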
Explanation:
- `apiVersion` and `kind`: Identify this document as a KubeArmor Policy object.
- `metadata`: Gives the policy a name (`block-etc-passwd-read`) and specifies the namespace (`default`) it lives in (for KSP).
- `spec`: Contains the security rules.
- `selector`: Uses `matchLabels` to say "apply this policy to any Pod in the `default` namespace that has the label `app: my-web-app`".
- `file`: This section defines rules related to file access.
- `matchPaths`: We want to match a specific file path.
- `- path: /etc/passwd`: The specific file we are interested in.
- `action: Block`: If any process inside the selected containers tries to access `/etc/passwd`, the action should be to `Block` that attempt.

This simple policy directly addresses our use case: preventing the web server (`app: my-web-app`) from reading `/etc/passwd`.
Let's break down the three types:
| Policy Kind | Abbreviation | Targets | Selector |
|---|---|---|---|
| KubeArmorPolicy | KSP | Containers / Pods (scoped by namespace) | `matchLabels`, `matchExpressions` |
| KubeArmorHostPolicy | HSP | Nodes / Host OS | `nodeSelector` (`matchLabels`) |
| KubeArmorClusterPolicy | CSP | Containers / Pods (cluster-wide) | `selector` (`matchExpressions` on `namespace` or `label`) |
KubeArmorPolicy (KSP)
- Applies to pods within a specific Kubernetes namespace.
- Uses `selector.matchLabels` or `selector.matchExpressions` to pick which pods the policy applies to, based on their labels.
- Example: Block `/bin/bash` execution in all pods within the `dev` namespace labeled `role=frontend` (sketched below).
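A hedged sketch of such a KSP; the policy name is chosen for illustration and the rest follows the example above:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-dev-block-bash     # illustrative name
  namespace: dev
spec:
  selector:
    matchLabels:
      role: frontend
  process:
    matchPaths:
    - path: /bin/bash
  action: Block
```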
KubeArmorHostPolicy (HSP)
- Applies to the host operating system of the nodes in your cluster.
- Uses `nodeSelector.matchLabels` to pick which nodes the policy applies to, based on node labels.
- Example: Prevent the `/usr/bin/ssh` process on nodes labeled `node-role.kubernetes.io/worker` from accessing `/etc/shadow` (sketched below).
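A hedged sketch of such an HSP using a `fromSource` file rule; the policy name and the empty label value are illustrative assumptions:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-worker-block-shadow-from-ssh   # illustrative name
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""   # assumed label value
  file:
    matchPaths:
    - path: /etc/shadow
      fromSource:
      - path: /usr/bin/ssh                 # only restrict access originating from ssh
  action: Block
```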
KubeArmorClusterPolicy (CSP)
- Applies to pods across multiple namespaces or even the entire cluster.
- Uses `selector.matchExpressions`, which can target namespaces (key: `namespace`) or labels (key: `label`) cluster-wide.
- Example: Audit all network connections made by pods in the `default` or `staging` namespaces. Or, block `/usr/bin/curl` execution in all pods across the cluster except those labeled `app=allowed-tools` (a related sketch follows below).
These policies become Kubernetes Custom Resources when KubeArmor is installed. You can see their definitions in the KubeArmor source code under the deployments/CRD directory.
You've written a policy YAML file. What happens when you apply it to your Kubernetes cluster using kubectl apply -f your-policy.yaml?
Policy Creation: You create the policy object in the Kubernetes API Server.
KubeArmor Watches: The KubeArmor DaemonSet (a component running on each node) is constantly watching the Kubernetes API Server for KubeArmor policy objects (KSP, HSP, CSP).
Policy Discovery: KubeArmor finds your new policy.
Target Identification: KubeArmor evaluates the policy's selector
(or nodeSelector
) to figure out exactly which pods/containers or nodes this policy applies to.
Translation: For each targeted container or node, KubeArmor translates the high-level rules defined in the policy's spec
(like "Block access to /etc/passwd
") into configurations for the underlying security enforcer (which could be AppArmor, SELinux, or BPF, depending on your setup and KubeArmor's configuration - we'll talk more about these later).
Enforcement: The security enforcer on that specific node is updated with the new low-level rules. Now, if a targeted process tries to do something forbidden by the policy, the enforcer steps in to Allow
, Audit
, or Block
the action as specified.
Here's a simplified sequence:
This flow shows how KubeArmor acts as the bridge between your easy-to-write YAML policies and the complex, low-level security mechanisms of the operating system.
Every rule in a KubeArmor policy (within the spec
section) specifies an action
. This tells KubeArmor what to do if the rule's condition is met.
Allow: Explicitly permits the action. This is useful for creating "whitelist" policies where you only allow specific behaviors and implicitly block everything else.
Audit: Does not prevent the action but generates a security alert or log message when it happens. This is great for testing policies before enforcing them or for monitoring potentially suspicious activity without disrupting applications.
Block: Prevents the action from happening and generates a security alert. This is for enforcing strict "blacklist" rules where you explicitly forbid certain dangerous behaviors.
Remember the "Note" mentioned in the provided policy specifications: for system call monitoring (syscalls), KubeArmor currently only supports the Audit action, regardless of what is specified in the policy YAML (see the sketch below).
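To make the syscall note concrete, here is a hedged sketch of a syscall-monitoring rule; the policy name, the target label, and the exact matchSyscalls layout are assumptions, so check the policy specification before relying on it:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-unlink-syscall     # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-web-app            # assumed target workload
  syscalls:
    matchSyscalls:
    - syscall:
      - unlink                   # audit every unlink() call
  action: Audit                  # syscall rules are audit-only today
```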
In this chapter, you learned that KubeArmor Security Policies (KSP, HSP, CSP) are your rulebooks for defining security posture in your Kubernetes environment. You saw how they use Kubernetes concepts like labels and namespaces to target specific containers, pods, or nodes. You also got a peek at the basic structure of these policies, including the selector for targeting and the spec for defining rules and actions.
Understanding policies is the first step to using KubeArmor effectively to protect your workloads and infrastructure. In the next chapter, we'll explore how KubeArmor identifies the containers and nodes it is protecting, which is crucial for the policy engine to work correctly.
Welcome back to the KubeArmor tutorial! In the previous chapter, we learned about KubeArmor's Security Policies (KSP, HSP, CSP) and how they define rules for what applications and processes are allowed or forbidden to do. We saw that these policies use selectors (like labels and namespaces) to tell KubeArmor which containers, pods, or nodes they should apply to.
But how does KubeArmor know which policy to apply when something actually happens, like a process trying to access a file? When an event occurs deep within the operating system (like a process accessing /etc/shadow
), the system doesn't just say "a pod with label app=my-web-app
did this". It provides low-level details like Process IDs (PID), Parent Process IDs (PPID), and Namespace IDs (like PID Namespace and Mount Namespace).
This is where the concept of Container/Node Identity comes in.
Think of Container/Node Identity as KubeArmor's way of answering the question: "Who is doing this?".
When a system event happens on a node – maybe a process starts, a file is opened, or a network connection is attempted – KubeArmor intercepts this event. The event data includes technical details about the process that triggered it. KubeArmor needs to take these technical details and figure out if the process belongs to:
A specific Container (which might be part of a Kubernetes Pod or a standalone Docker container).
Or, the Node itself (the underlying Linux operating system, potentially running processes outside of containers).
Once KubeArmor knows who is performing the action (the specific container or node), it can then look up the relevant security policies that apply to that identity and decide whether to allow, audit, or block the action.
Imagine you have a KubeArmorPolicy (KSP) that says: "Block any attempt by containers with the label app: sensitive-data to read the file /sensitive/config":
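The policy itself isn't shown on this page, but based on that description (and the policy name block-sensitive-file-read mentioned later in this chapter), a minimal sketch might be:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-sensitive-file-read
  namespace: default             # assumed; the example pod lives in "default"
spec:
  selector:
    matchLabels:
      app: sensitive-data
  file:
    matchPaths:
    - path: /sensitive/config
  action: Block
```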
Now, suppose a process inside one of your containers tries to open /sensitive/config
.
Without Identity: KubeArmor might see an event like "Process with PID 1234 and Mount Namespace ID 5678 tried to read /sensitive/config". Without knowing which container PID 1234 and MNT NS 5678 belong to, KubeArmor can't tell if this process is running in a container labeled app: sensitive-data
. It wouldn't know which policy applies!
With Identity: KubeArmor sees the event, looks up PID 1234 and MNT NS 5678 in its internal identity map, and discovers "Ah, that PID and Namespace belong to Container ID abc123def456...
which is part of Pod my-sensitive-pod-xyz
in namespace default
, and that pod has the label app: sensitive-data
." Now it knows this event originated from a workload targeted by the block-sensitive-file-read
policy. It can then apply the Block
action.
So, identifying the workload responsible for a system event is fundamental to enforcing policies correctly.
KubeArmor runs as a DaemonSet on each node in your Kubernetes cluster (or directly on a standalone Linux server). This daemon is responsible for monitoring system activity on that specific node. To connect these low-level events to higher-level workload identities (like Pods or Nodes), KubeArmor does a few things:
Watching Kubernetes (for K8s environments): The KubeArmor daemon watches the Kubernetes API Server for events related to Pods and Nodes. When a new Pod starts, KubeArmor gets its details:
Pod Name
Namespace Name
Labels (this is key for policy selectors!)
Container details (Container IDs, Image names)
Node Name where the Pod is scheduled. KubeArmor stores this information.
Interacting with Container Runtimes: KubeArmor talks to the container runtime (like Docker or containerd) running on the node. It uses the Container ID (obtained from Kubernetes or by watching runtime events) to get more low-level details:
Container PID (the process ID of the main process inside the container as seen from the host OS).
Container Namespace IDs (specifically the PID Namespace ID and Mount Namespace ID). These IDs are crucial because system events are often reported with these namespace identifiers.
Monitoring Host Processes: KubeArmor also monitors processes running directly on the host node (outside of containers).
KubeArmor builds and maintains an internal map that links these low-level identifiers (like PID Namespace ID + Mount Namespace ID) to the corresponding higher-level identities (Container ID, Pod Name, Namespace, Node Name, Labels).
Let's visualize how this identity mapping happens and is used:
This diagram shows the two main phases:
Identity Discovery: KubeArmor actively gathers information from Kubernetes and the container runtime to build its understanding of which system identifiers belong to which workloads.
Event Correlation: When a system event occurs, KubeArmor uses the identifiers from the event (like Namespace IDs) to quickly look up the corresponding workload identity in its map.
The KubeArmor code interacts with Kubernetes and Docker/containerd to get this identity information.
For Kubernetes environments, KubeArmor's k8sHandler
watches for Pod and Node events:
This snippet shows that KubeArmor isn't passively waiting; it actively watches the Kubernetes API for changes using standard Kubernetes watch mechanisms. When a Pod is added, updated, or deleted, KubeArmor receives an event and updates its internal state.
For Docker (and similar logic exists for containerd), KubeArmor's dockerHandler
can inspect running containers to get detailed information:
This function is critical. It takes a containerID
and retrieves its associated Namespace IDs (PidNS
, MntNS
) by reading special files in the /proc
filesystem on the host, which link the host PID to the namespaces it belongs to. It also retrieves labels and other useful information directly from the container runtime's inspection data.
This collected identity information is stored internally. For example, the SystemMonitor
component maintains a map (NsMap
) to quickly look up a workload based on Namespace IDs:
These functions from processTree.go
show how KubeArmor builds and uses the core identity mapping: it stores the relationship between Namespace IDs (found in system events) and the Container ID, allowing it to quickly identify which container generated an event.
KubeArmor primarily identifies workloads using the following:
This allows KubeArmor to apply the correct security policies, whether they are KSPs (targeting Containers/Pods based on labels/namespaces) or HSPs (targeting Nodes based on node labels).
Understanding Container/Node Identity is key to grasping how KubeArmor works. It's the crucial step where KubeArmor translates low-level system events into the context of your application workloads (containers in pods) or your infrastructure (nodes). By maintaining a map of system identifiers to workload identities, KubeArmor can accurately determine which policies apply to a given event and enforce your desired security posture.
In the next chapter, we'll look at the component that takes this identified event and the relevant policy and makes the decision to allow, audit, or block the action.
KubeArmor supports attack prevention, not just observability and monitoring. More importantly, the prevention is handled inline: even before a process is spawned, a rule can deny execution of a process. Most other systems typically employ "post-attack mitigation" that kills a process/pod after malicious intent is observed, allowing an attacker to execute code on the target environment. Essentially KubeArmor uses inline mitigation to reduce the attack surface of a pod/container/VM. KubeArmor leverages best of breed Linux Security Modules (LSMs) such as AppArmor, BPF-LSM, and SELinux (only for host protection) for inline mitigation. LSMs have several advantages over other techniques:
KubeArmor does not change anything with the pod/container.
KubeArmor does not require any changes at the host level or at the CRI (Container Runtime Interface) level to enforce blocking rules. KubeArmor deploys as a non-privileged DaemonSet with certain capabilities that allow it to monitor other pods/containers and the host.
A given cluster can have multiple nodes utilizing different LSMs. KubeArmor abstracts away complexities of LSMs and provides an easy way to enforce policies. KubeArmor manages complexity of LSMs under-the-hood.
Post-exploit Mitigation works by killing a suspicious process in response to an alert indicating malicious intent.
The attacker is allowed to execute a binary. The attacker could disable security controls, access logs, etc., to circumvent attack detection.
By the time a malicious process is killed, sensitive contents could have already been deleted, encrypted, or transmitted.
Natively configuring LSMs (for example, via Pod Security Context) has multiple problems:
It is often difficult to predict which LSM (AppArmor or SELinux) would be available on the target node.
BPF-LSM is not supported by Pod Security Context.
It is difficult to manually specify an AppArmor or SELinux policy. Changing default AppArmor or SELinux policies might result in more security holes since it is difficult to decipher the implications of the changes and can be counter-productive.
Different managed cloud providers use different default distributions. Google GKE COS uses AppArmor by default, AWS Bottlerocket uses BPF-LSM and SELinux, and AWS Amazon Linux 2 uses only SELinux by default. Thus it is challenging to use Pod Security Context in multi-cloud deployments.
References:
The CRD definitions for KubeArmorPolicy (KSP), KubeArmorHostPolicy (HSP), and KubeArmorClusterPolicy (CSP), along with their corresponding Go type definitions, live in the KubeArmor source tree. You don't need to understand Go or CRD internals right now, just know that these files formally define the structure and rules for creating KubeArmor policies that Kubernetes understands.
As one reference puts it, "post-exploitation detection/mitigation is at the mercy of an exploit writer putting little to no effort into avoiding tripping these detection mechanisms."
Kubernetes Pod Security Context allows one to specify AppArmor or SELinux policies.
This guide assumes you have access to a Kubernetes cluster. If you want to try non-k8s mode, for instance systemd mode to protect/audit containers or processes on VMs/bare-metal, check the VM/Bare-Metal installation guide.
Check the support matrix to verify if your platform is supported.
You can find more details about Helm-related values and configurations in the KubeArmor Helm chart documentation.
[!NOTE] kArmor CLI provides a developer-friendly way to interact with KubeArmor telemetry. You can stream KubeArmor telemetry independently of the kArmor CLI tool and integrate it with your chosen SIEM (Security Information and Event Management) solution. See the documentation on how to achieve this integration. This guide assumes you have the kArmor CLI to access KubeArmor telemetry, but you can view it in your SIEM tool once integrated.
If you don't see "Permission denied", please refer to the troubleshooting guide to debug this issue.
The default security posture defines what happens to operations that are not in the allowed list: should they be audited (allowed but alerted on), or denied (blocked and alerted on)?
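As a hedged illustration (assuming the standard KubeArmor namespace annotations; verify the annotation names against your KubeArmor version), the default posture can typically be tuned per namespace like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    kubearmor-file-posture: block      # deny file operations not explicitly allowed
    kubearmor-network-posture: audit   # only alert on network operations not allowed
```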
| Identity | Key Identifiers | Source |
|---|---|---|
| Container | Container ID, PID Namespace ID, Mount Namespace ID, Pod Name, Namespace, Labels | Kubernetes API, Container Runtime |
| Node | Node Name, Node Labels, Operating System Info | Kubernetes API, Host OS APIs |
Welcome back! In the previous chapter, we learned how KubeArmor figures out who is performing an action on your system by understanding Container/Node Identity. We saw how it maps low-level system details like Namespace IDs to higher-level concepts like Pods, containers, and nodes, using information from the Kubernetes API and the container runtime.
Now that KubeArmor knows who is doing something, it needs to decide if that action is allowed. This is the job of the Runtime Enforcer.
Think of the Runtime Enforcer as the actual security guard positioned at the gates and doors of your system. It receives the security rules you defined in your Security Policies (KSP, HSP, CSP). But applications and the operating system don't directly understand KubeArmor policy YAML!
The Runtime Enforcer's main task is to translate these high-level KubeArmor rules into instructions that the underlying operating system's built-in security features can understand and enforce. These OS security features are powerful mechanisms within the Linux kernel designed to control what processes can and cannot do. Common examples include:
AppArmor: Used by distributions like Ubuntu, Debian, and SLES. It uses security profiles that define access controls for individual programs (processes).
SELinux: Used by distributions like Fedora, CentOS/RHEL, and Alpine Linux. It uses a system of labels and rules to control interactions between processes and system resources.
BPF-LSM: A newer mechanism using eBPF programs attached to Linux Security Module (LSM) hooks to enforce security policies directly within the kernel.
When an application or process on your node or inside a container attempts to do something (like open a file, start a new process, or make a network connection), the Runtime Enforcer (via the configured OS security feature) steps in. It checks the translated rules that apply to the identified workload and tells the operating system whether to Allow, Audit, or Block the action.
Let's go back to our example: preventing a web server container (with label app: my-web-app
) from reading /etc/passwd
.
In Chapter 1, we wrote a KubeArmor Policy for this:
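That Chapter 1 policy isn't repeated on this page; as a reminder, a minimal version of it (reconstructed from the earlier explanation, with the standard KubeArmor apiVersion assumed) looks like:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-etc-passwd-read
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-web-app
  file:
    matchPaths:
    - path: /etc/passwd
  action: Block
```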
In Chapter 2, we saw how KubeArmor's Container/Node Identity component identifies that a specific process trying to read /etc/passwd
belongs to a container running a Pod with the label app: my-web-app
.
Now, the Runtime Enforcer takes over:
It knows the action is "read file /etc/passwd
".
It knows the actor is the container identified as having the label app: my-web-app
.
It looks up the applicable policies for this actor and action.
It finds the block-etc-passwd-read
policy, which says action: Block
for /etc/passwd
.
The Runtime Enforcer, using the underlying OS security module, tells the Linux kernel to Block the read attempt.
The application trying to read the file will receive a "Permission denied" error, and the attempt will be stopped before it can succeed.
KubeArmor is designed to be flexible and work on different Linux systems. It doesn't assume a specific OS security module is available. When KubeArmor starts on a node, it checks which security modules are enabled and supported on that particular system.
You can configure KubeArmor to prefer one enforcer over another using the lsm.lsmOrder
configuration option. KubeArmor will try to initialize the enforcers in the specified order (bpf
, selinux
, apparmor
) and use the first one that is available and successfully initialized. If none of the preferred ones are available, it falls back to any other supported, available LSM. If no supported enforcer can be initialized, KubeArmor will run in a limited capacity (primarily for monitoring, not enforcement).
You can see KubeArmor selecting the LSM in the NewRuntimeEnforcer
function (from KubeArmor/enforcer/runtimeEnforcer.go
):
This snippet shows that KubeArmor checks for available LSMs (lsms
) and attempts to initialize its corresponding enforcer module (be.NewBPFEnforcer
, NewAppArmorEnforcer
, NewSELinuxEnforcer
) based on configuration and availability. The first one that succeeds becomes the active EnforcerType
.
Once an enforcer is selected and initialized, the KubeArmor Daemon on the node loads the relevant policies for the workloads it is protecting and translates them into the specific rules required by the chosen enforcer.
When KubeArmor needs to enforce a policy on a specific container or node, here's a simplified flow:
Policy Change/Discovery: A KubeArmor Policy (KSP, HSP, or CSP) is applied or changed via the Kubernetes API. The KubeArmor Daemon on the relevant node detects this.
Identify Affected Workloads: The daemon determines which specific containers or the host node are targeted by this policy change using the selectors and its internal Container/Node Identity mapping.
Translate Rules: For each affected workload, the daemon takes the high-level policy rules (e.g., Block access to /etc/passwd
) and translates them into the low-level format required by the active Runtime Enforcer (AppArmor, SELinux, or BPF-LSM).
Load Rules into OS: The daemon interacts with the operating system to load or update these translated rules. This might involve writing files, calling system utilities (apparmor_parser
, chcon
), or interacting with BPF system calls and maps.
OS Enforcer Takes Over: The OS kernel's security module (now configured by KubeArmor) is now active.
Action Attempt: A process within the protected workload attempts a specific action (e.g., opening /etc/passwd
).
Interception: The OS kernel intercepts this action using hooks provided by its security module.
Decision: The security module checks the rules previously loaded by KubeArmor that apply to the process and resource involved. Based on the action
(Allow, Audit, Block) defined in the KubeArmor policy (and translated into the module's format), the security module makes a decision.
Enforcement:
If Block
, the OS prevents the action and returns an error to the process.
If Allow
, the OS permits the action.
If Audit
, the OS permits the action but generates a log event.
Event Notification (for Audit/Block): (As we'll see in the next chapter), the OS kernel generates an event notification for blocked or audited actions, which KubeArmor then collects for logging and alerting.
Here's a simplified sequence diagram for the enforcement path after policies are loaded:
This diagram shows that the actual enforcement decision happens deep within the OS kernel, powered by the rules that KubeArmor translated and loaded. KubeArmor isn't in the critical path for every action attempt; it pre-configures the kernel's security features to handle the enforcement directly.
Let's see how KubeArmor interacts with the different OS enforcers.
AppArmor Enforcer:
AppArmor uses text-based profile files stored typically in /etc/apparmor.d/
. KubeArmor translates its policies into rules written in AppArmor's profile language, saves them to a file, and then uses the apparmor_parser
command-line tool to load or update these profiles in the kernel.
This snippet shows the key steps: generating the profile content, writing it to a file path based on the container/profile name, and then executing the apparmor_parser
command with the -r
(reload) and -W
(wait) flags to apply the profile to the kernel.
SELinux Enforcer:
SELinux policy management is complex, often involving compiling policy modules and managing file contexts. KubeArmor's SELinux enforcer focuses primarily on basic host policy enforcement (in standalone mode, not typically in Kubernetes clusters using the default SELinux integration). It interacts with tools like chcon
to set file security contexts based on policies.
This snippet shows KubeArmor executing the chcon
command to modify the SELinux security context (label) of files, which is a key way SELinux enforces access control.
BPF-LSM Enforcer:
The BPF-LSM enforcer works differently. Instead of writing text files and using external tools, it loads eBPF programs directly into the kernel and populates eBPF maps with rule data. When an event occurs, the eBPF program attached to the relevant LSM hook checks the rules stored in the map to make the enforcement decision.
This heavily simplified snippet shows how the BPF enforcer loads BPF programs and attaches them to kernel LSM hooks. It also hints at how container identity (Container/Node Identity) is used (via pidns
, mntns
) as a key to organize rules within BPF maps (BPFContainerMap
), allowing the kernel's BPF program to quickly look up the relevant policy when an event occurs. The AddContainerIDToMap
function, although simplified, demonstrates how KubeArmor populates these maps.
Each enforcer type requires specific logic within KubeArmor to translate policies and interact with the OS. The Runtime Enforcer component provides this abstraction layer, allowing KubeArmor policies to be enforced regardless of the underlying Linux security module, as long as it's supported.
The action
specified in your KubeArmor policy (Security Policies) directly maps to how the Runtime Enforcer instructs the OS:
Allow: The translated rule explicitly permits the action. The OS security module will let the action proceed.
Audit: The translated rule allows the action but is configured to generate a log event. The OS security module lets the action proceed and notifies the kernel's logging system.
Block: The translated rule denies the action. The OS security module intercepts the action and prevents it from completing, typically returning an error to the application.
This allows you to use KubeArmor policies not just for strict enforcement but also for visibility and testing (Audit
).
The Runtime Enforcer is the critical piece that translates your human-readable KubeArmor policies into the low-level language understood by the operating system's security features (AppArmor, SELinux, BPF-LSM). It's responsible for loading these translated rules into the kernel, enabling the OS to intercept and enforce your desired security posture for containers and host processes based on their identity.
By selecting the appropriate enforcer for your system and dynamically updating its rules, KubeArmor ensures that your security policies are actively enforced at runtime. In the next chapter, we'll look at the other side of runtime security: observing system events, including those that were audited or blocked by the Runtime Enforcer.
Welcome back to the KubeArmor tutorial! In the previous chapter, we explored the System Monitor, KubeArmor's eyes and ears inside the operating system, responsible for observing runtime events like file accesses, process executions, and network connections. We learned that the System Monitor uses a powerful kernel technology called eBPF to achieve this deep visibility with low overhead.
In this chapter, we'll take a closer look at BPF (Extended Berkeley Packet Filter), or eBPF as it's more commonly known today. This technology isn't just used by the System Monitor; it's also a key enforcer type available to the Runtime Enforcer component in the form of BPF-LSM. Understanding eBPF is crucial to appreciating how KubeArmor works at a fundamental level within the Linux kernel.
Imagine the Linux kernel as the central operating system managing everything on your computer or server. Traditionally, if you wanted to add new monitoring, security, or networking features deep inside the kernel, you had to write C code, compile it as a kernel module, and load it. This is risky because bugs in kernel modules can crash the entire system.
eBPF provides a safer, more flexible way to extend kernel functionality. Think of it as a miniature, highly efficient virtual machine running inside the kernel. It allows you to write small programs that can be loaded into the kernel and attached to specific "hooks" (points where interesting events happen).
Here's the magic:
Safe: eBPF programs are verified by a kernel component called the "verifier" before they are loaded. The verifier ensures the program won't crash the kernel, hang, or access unauthorized memory.
Performant: eBPF programs run directly in the kernel's execution context when an event hits their hook. They are compiled into native machine code for the processor using a "Just-In-Time" (JIT) compiler, making them very fast.
Flexible: They can be attached to various hooks for monitoring or enforcement, including system calls, network events, tracepoints, and even Linux Security Module (LSM) hooks.
Data Sharing: eBPF programs can interact with user-space programs (like the KubeArmor Daemon) and other eBPF programs using shared data structures called BPF Maps.
KubeArmor needs to operate deep within the operating system to provide effective runtime security for containers and nodes. It needs to:
See Everything: Monitor low-level system calls and kernel events across different container namespaces (Container/Node Identity).
Act Decisively: Enforce security policies by blocking forbidden actions before they can harm the system.
Do it Efficiently: Minimize the performance impact on your applications.
eBPF is the perfect technology for this:
Deep Visibility: By attaching eBPF programs to kernel hooks, KubeArmor's System Monitor gets high-fidelity data about system activities as they happen.
High-Performance Enforcement: When used as a Runtime Enforcer via BPF-LSM, eBPF programs can quickly check policies against events directly within the kernel, blocking actions instantly without the need to switch back and forth between kernel and user space for every decision.
Low Overhead: eBPF's efficiency means it adds minimal latency to system calls compared to older kernel security mechanisms or relying purely on user-space monitoring.
Kernel Safety: KubeArmor can extend kernel behavior for security without the risks associated with traditional kernel modules.
Let's look at how BPF powers both sides of KubeArmor's runtime protection:
As we saw in Chapter 4, the System Monitor observes events. This is primarily done using eBPF.
How it works: Small eBPF programs are attached to kernel hooks related to file, process, network, etc., events. When an event triggers a hook, the eBPF program runs. It collects relevant data (like the path, process ID, Namespace IDs) and writes this data into a special shared memory area called a BPF Ring Buffer.
Getting Data to KubeArmor: The KubeArmor Daemon (KubeArmor Daemon) in user space continuously reads events from this BPF Ring Buffer.
Context: The daemon uses the Namespace IDs from the event data to correlate it with the specific container or node (Container/Node Identity) before processing and sending the alert via the Log Feeder.
Simplified view of monitoring data flow:
This shows the efficient flow: the kernel triggers a BPF program, which quickly logs data to a buffer that KubeArmor reads asynchronously.
Let's revisit a simplified code concept for the BPF monitoring program side (C code compiled to BPF):
Explanation:
- `struct event`: Defines the structure of the data sent for each event.
- `kubearmor_events`: Defines a BPF map of type `RINGBUF`. This is the channel for kernel -> user space communication.
- `SEC("kprobe/sys_enter_openat")`: Specifies where this program attaches, at the entry of the `openat` system call.
- `bpf_ringbuf_reserve`: Allocates space in the ring buffer for a new event.
- `bpf_ktime_get_ns`, `bpf_get_current_task`, `bpf_get_current_comm`, `bpf_probe_read_str`: BPF helper functions used to get data from the kernel context (timestamp, task info, command name, string from user space).
- `bpf_ringbuf_submit`: Sends the prepared event data to the ring buffer.
On the Go side, KubeArmor's System Monitor uses the cilium/ebpf
library to load this BPF object file and read from the kubearmor_events
map (the ring buffer).
Explanation:
- `loadMonitorObjects`: Loads the compiled BPF program and map definitions from the `.o` file.
- `perf.NewReader(objs.KubearmorEvents, ...)`: Opens a reader for the specific BPF map named `kubearmor_events` defined in the BPF code. This map is configured as a ring buffer.
- `mon.SyscallPerfMap.Read()`: Blocks until an event is available in the ring buffer, then reads the raw bytes sent by the BPF program.
- The rest of the `readEvents` function (simplified out, but hinted at in the Chapter 4 context) involves parsing these bytes back into a struct, looking up the container/node identity, and processing the event.
This demonstrates how BPF allows a low-overhead kernel component (the BPF program writing to the ring buffer) and a user-space component (KubeArmor Daemon reading from the buffer) to communicate efficiently.
When KubeArmor is configured to use the BPF-LSM Runtime Enforcer, BPF programs are used not just for monitoring, but for making enforcement decisions in the kernel.
How it works: BPF programs are attached to Linux Security Module (LSM) hooks. These hooks are specifically designed points in the kernel where security decisions are made (e.g., before a file is opened, before a program is executed, before a capability is used).
Policy Rules in BPF Maps: KubeArmor translates its Security Policies into a format optimized for quick lookup and stores these rules in BPF Maps. There might be nested maps where an outer map is keyed by Namespace IDs (Container/Node Identity) and inner maps store rules specific to paths, processes, etc., for that workload.
Decision Making: When an event triggers a BPF-LSM hook, the attached eBPF program runs. It uses the current process's Namespace IDs to look up the relevant policy rules in the BPF maps. Based on the rule found (or the default posture if no specific rule matches), the BPF program returns a value to the kernel indicating whether the action should be allowed (0) or blocked (-EPERM
, which is kernel speak for "Permission denied").
Event Reporting: Even when an action is blocked, the BPF-LSM program (or a separate monitoring BPF program) will often still send an event to the ring buffer so KubeArmor can log the blocked attempt.
Simplified view of BPF-LSM enforcement flow:
This diagram shows the pre-configuration step (KubeArmor loading the program and rules) and then the fast, kernel-internal decision path when an event occurs.
Let's revisit a simplified BPF C code concept for enforcement (part of enforcer.bpf.c):
Explanation:
- `struct outer_key`: Defines the key structure for the outer map (`kubearmor_containers`), using `pid_ns` and `mnt_ns` from the process's identity.
- `kubearmor_containers`: A BPF map storing references to other maps (or rule data directly in simpler cases), allowing rules to be organized per container/namespace.
- `SEC("lsm/bprm_check_security")`: Attaches this program to the LSM hook that is called before a new program is executed.
- `BPF_PROG(...)`: Macro defining the BPF program function.
- `get_outer_key`: Helper function to get the Namespace IDs for the current task.
- `bpf_map_lookup_elem(&kubearmor_containers, &okey)`: Looks up the map (or data) associated with the current process's namespace IDs.
- The core logic involves reading event data (like the program path), looking up the corresponding rule in the BPF maps, and returning `0` to allow or `-EPERM` to block, based on the rule's `action` flag (`RULE_DENY`).
- Events are also reported to the ring buffer (`kubearmor_events`) for logging, similar to the monitoring path.
On the Go side, the BPF-LSM Runtime Enforcer component loads these programs and, crucially, populates the BPF Maps with the translated policies.
Explanation:
- `loadEnforcerObjects`: Loads the compiled BPF enforcement code.
- `link.AttachLSM`: Attaches a specific BPF program (`objs.EnforceProc`) to a named kernel LSM hook (`lsm/bprm_check_security`).
- `be.BPFContainerMap = objs.KubearmorContainers`: Gets a handle (reference) to the BPF map defined in the C code. This handle allows the Go program to interact with the map in the kernel.
- `AddContainerPolicies`: This conceptual function shows how KubeArmor translates high-level policies into a kernel-friendly format (e.g., flags like `RULE_DENY`, `RULE_EXEC`) and uses `BPFContainerMap.Update` to populate the maps. The Namespace IDs (`pidns`, `mntns`) are used as keys to ensure policies are applied to the correct container context.
This illustrates how KubeArmor uses user-space code to set up the BPF environment in the kernel, loading programs and populating maps. Once this is done, the BPF programs handle enforcement decisions directly within the kernel when events occur.
BPF technology involves several key components:
| Component | Description | Where It Runs | Role in KubeArmor |
|---|---|---|---|
| BPF Programs | Small, safe programs written in a C-like language, compiled to BPF bytecode | Kernel | Monitor events, enforce policies at hooks |
| BPF Hooks | Specific points in the kernel where BPF programs can be attached | Kernel | Entry/exit of syscalls, tracepoints, LSM hooks |
| BPF Maps | Efficient key-value data structures for sharing data | Kernel (accessed by both kernel BPF and user space) | Store policy rules, store event data (ring buffer), store identity info |
| BPF Verifier | Kernel component that checks BPF programs for safety before loading | Kernel | Ensures KubeArmor's BPF programs are safe |
| BPF JIT | Compiles BPF bytecode to native machine code for performance | Kernel | Makes KubeArmor's BPF operations fast |
| BPF Loader | User-space library/tool to compile C code, load programs/maps into kernel | User Space | KubeArmor Daemon uses the `cilium/ebpf` library as loader |
In this chapter, you've taken a deeper dive into BPF (eBPF), the powerful kernel technology that forms the backbone of KubeArmor's runtime security capabilities. You learned how eBPF enables KubeArmor to run small, safe, high-performance programs inside the kernel for both observing system events (System Monitor) and actively enforcing security policies at low level hooks (Runtime Enforcer via BPF-LSM). You saw how BPF Maps are used to share data and store policy rules efficiently in the kernel.
Understanding BPF highlights KubeArmor's modern, efficient approach to container and node security. In the next chapter, we'll bring together all the components we've discussed by looking at the central orchestrator on each node: the KubeArmor Daemon.
Welcome back to the KubeArmor tutorial! In the previous chapters, we've built up our understanding of how KubeArmor defines security rules using Security Policies, how it figures out who is performing actions using Container/Node Identity, and how it configures the underlying OS to actively enforce those rules using the Runtime Enforcer.
But even with policies and enforcement set up, KubeArmor needs to constantly know what's happening inside your system. When a process starts, a file is accessed, or a network connection is attempted, KubeArmor needs to be aware of these events to either enforce a policy (via the Runtime Enforcer) or simply record the activity for auditing and visibility.
This is where the System Monitor comes in.
Think of the System Monitor as KubeArmor's eyes and ears inside the operating system on each node. While the Runtime Enforcer acts as the security guard making decisions based on loaded rules, the System Monitor is the surveillance system and log recorder that detects all the relevant activity.
Its main job is to:
Observe: Watch for specific actions happening deep within the Linux kernel, like:
Processes starting or ending.
Files being opened, read, or written.
Network connections being made or accepted.
Changes to system privileges (capabilities).
Collect Data: Gather detailed information about these events (which process, what file path, what network address, etc.).
Add Context: Crucially, it correlates the low-level event data with the higher-level Container/Node Identity information KubeArmor maintains (like which container, pod, or node the event originated from).
Prepare for Logging and Processing: Format this enriched event data so it can be sent for logging (via the Log Feeder) or used by other KubeArmor components.
The System Monitor uses advanced kernel technology, primarily eBPF, to achieve this low-overhead, deep visibility into system activities without requiring modifications to the applications or the kernel itself.
Let's revisit our web server example. We have a policy to Block the web server container (app: my-web-app
) from reading /etc/passwd
.
You apply the Security Policy.
KubeArmor's Runtime Enforcer translates this policy and loads a rule into the kernel's security module (say, BPF-LSM).
An attacker compromises your web server and tries to read /etc/passwd
.
The OS kernel intercepts this attempt (via the BPF-LSM hook configured by the Runtime Enforcer).
Based on the loaded rule, the Runtime Enforcer's BPF program blocks the action.
So, the enforcement worked! The read was prevented. But how do you know this happened? How do you know someone tried to access /etc/passwd
?
This is where the System Monitor is essential. Even when an action is blocked by the Runtime Enforcer, the System Monitor is still observing that activity.
When the web server attempts to read /etc/passwd
:
The System Monitor's eBPF programs, also attached to kernel hooks, detect the file access attempt.
It collects data: the process ID, the file path (/etc/passwd
), the type of access (read).
It adds context: it uses the process ID and Namespace IDs to look up in KubeArmor's internal map and identifies that this process belongs to the container with label app: my-web-app
.
It also sees that the Runtime Enforcer returned an error code indicating the action was blocked.
The System Monitor bundles all this information (who, what, where, when, and the outcome - Blocked) and sends it to KubeArmor for logging.
Without the System Monitor, you would just have a failed system call ("Permission denied") from the application's perspective, but you wouldn't have the centralized, context-rich security alert generated by KubeArmor that tells you which container specifically tried to read /etc/passwd
and that it was blocked by policy.
The System Monitor provides the crucial visibility layer, even for actions that are successfully prevented by enforcement. It also provides visibility for actions that are simply Audited by policy, or even for actions that are Allowed but that you want to monitor.
The System Monitor relies heavily on eBPF programs loaded into the Linux kernel. Here's a simplified flow:
Initialization: When the KubeArmor Daemon starts on a node, its System Monitor component loads various eBPF programs into the kernel.
Hooking: These eBPF programs attach to specific points (called "hooks") within the kernel where system events occur (e.g., just before a file open is processed, or when a new process is created).
Event Detection: When a user application or system process performs an action (like open("/etc/passwd")
), the kernel reaches the attached eBPF hook.
Data Collection (in Kernel): The eBPF program at the hook executes. It can access information about the event directly from the kernel's memory (like the process structure, file path, network socket details). It also gets the process's Namespace IDs Container/Node Identity.
Event Reporting (Kernel to User Space): The eBPF program packages the collected data (raw event + Namespace IDs) into a structure and sends it to the KubeArmor Daemon in user space using a highly efficient kernel mechanism, typically an eBPF ring buffer.
Data Reception (in KubeArmor Daemon): The System Monitor component in the KubeArmor Daemon continuously reads from this ring buffer.
Context Enrichment: For each incoming event, the System Monitor uses the Namespace IDs provided by the eBPF program to look up the corresponding Container ID, Pod Name, Namespace, and Labels in its internal identity map (the one built by the Container/Node Identity component). It also adds other relevant details like the process's current working directory and parent process.
Log/Alert Generation: The System Monitor formats all this enriched information into a structured log or alert message.
Forwarding: The formatted log is then sent to the Log Feeder component, which is responsible for sending it to your configured logging or alerting systems.
Here's a simple sequence diagram illustrating this:
This diagram shows how the eBPF programs in the kernel are the first point of contact for system events, collecting the initial data before sending it up to the KubeArmor Daemon for further processing, context addition, and logging.
Let's look at tiny snippets from the KubeArmor source code to see hints of how this works.
The eBPF programs (written in C, compiled to BPF bytecode) define the structure of the event data they send to user space. In KubeArmor/BPF/shared.h
, you can find structures like event
:
This shows the event
structure containing key fields like timestamps, Namespace IDs (pid_id
, mnt_id
), the type of event (event_id
), the syscall result (retval
), the command name, and potentially file paths (data
). It also defines the kubearmor_events
map as a BPF_MAP_TYPE_RINGBUF
, which is the mechanism used by eBPF programs in the kernel to efficiently send these event
structures to the KubeArmor Daemon in user space.
On the KubeArmor Daemon side (in Go), the System Monitor component (KubeArmor/monitor/systemMonitor.go
) reads from this ring buffer and processes the events.
This Go code shows:
- The `SyscallPerfMap` reading from the eBPF ring buffer in the kernel.
- Raw event data being sent to the `SyscallChannel`.
- A loop reading from `SyscallChannel`, parsing the raw bytes into a `SyscallContext` struct.
- Using `ctx.PidID` and `ctx.MntID` (Namespace IDs) to call `LookupContainerID` and get the `containerID`.
- Packaging the raw context (`ContextSys`), parsed arguments (`ContextArgs`), and the looked-up `ContainerID` into a `ContextCombined` struct.
- Sending the enriched `ContextCombined` event to the `ContextChan`.

This `ContextCombined` structure is the output of the System Monitor: it's the rich event data with identity context ready for the Log Feeder and other components.
The System Monitor uses different eBPF programs attached to various kernel hooks to monitor different types of activities:
| Event Type | Monitored Operations | Kernel Hooks Used |
|---|---|---|
| Process | Process execution (`execve`, `execveat`), process exit (`do_exit`), privilege changes (`setuid`, `setgid`) | Tracepoints, Kprobes, BPF-LSM |
| File | File open (`open`, `openat`), delete (`unlink`, `unlinkat`, `rmdir`), change owner (`chown`, `fchownat`) | Kprobes, Tracepoints, BPF-LSM |
| Network | Socket creation (`socket`), connection attempts (`connect`), accepting connections (`accept`), binding addresses (`bind`), listening on sockets (`listen`) | Kprobes, Tracepoints, BPF-LSM |
| Capability | Use of privileged kernel features (capabilities) | BPF-LSM, Kprobes |
| Syscall | General system call entry/exit for various calls | Kprobes, Tracepoints |
The specific hooks used might vary slightly depending on the kernel version and the chosen Runtime Enforcer configuration (AppArmor/SELinux use different integration points than pure BPF-LSM), but the goal is the same: intercept and report relevant system calls and kernel security hooks.
The System Monitor acts as a fundamental data source:
It provides the event data that the Runtime Enforcer's BPF programs might check against loaded policies in the kernel (BPF-LSM case). Note that enforcement happens at the hook via the rules loaded by the Enforcer, but the Monitor still observes the event and its outcome.
It uses the mappings maintained by the Container/Node Identity component to add context to raw events.
It prepares and forwards structured event logs to the Log Feeder.
Essentially, the Monitor is the "observer" part of KubeArmor's runtime security. It sees everything, correlates it to your workloads, and reports it, enabling both enforcement (via the Enforcer's rules acting on these observed events) and visibility.
In this chapter, you learned that the KubeArmor System Monitor is the component responsible for observing system events happening within the kernel. Using eBPF technology, it detects file access, process execution, network activity, and other critical operations. It enriches this raw data with Container/Node Identity context and prepares it for logging and analysis, providing essential visibility into your system's runtime behavior, regardless of whether an action was allowed, audited, or blocked by policy.
Understanding the System Monitor and its reliance on eBPF is key to appreciating KubeArmor's low-overhead, high-fidelity approach to runtime security. In the next chapter, we'll take a deeper dive into the technology that powers this monitoring (and the BPF-LSM enforcer): BPF (eBPF).
Welcome back to the KubeArmor tutorial! In our journey so far, we've explored the key components that make KubeArmor work:
Security Policies: Your rulebooks for security.
Container/Node Identity: How KubeArmor knows who is doing something.
Runtime Enforcer: The component that translates policies into kernel rules and blocks forbidden actions.
System Monitor: KubeArmor's eyes and ears, observing system events.
BPF (eBPF): The powerful kernel technology powering much of the monitoring and enforcement.
In this chapter, we'll look at the KubeArmor Daemon. If the other components are like specialized tools or senses, the KubeArmor Daemon is the central brain and orchestrator that lives on each node. It brings all these pieces together, allowing KubeArmor to function as a unified security system.
The KubeArmor Daemon is the main program that runs on every node (Linux server) where you want KubeArmor to provide security. When you install KubeArmor, you typically deploy it as a DaemonSet in Kubernetes, ensuring one KubeArmor Daemon pod runs on each of your worker nodes. If you're using KubeArmor outside of Kubernetes (on a standalone Linux server or VM), the daemon runs directly as a system service.
Think of the KubeArmor Daemon as the manager for that specific node. Its responsibilities include:
Starting and stopping all the other KubeArmor components (System Monitor, Runtime Enforcer, Log Feeder).
Communicating with external systems like the Kubernetes API server or the container runtime (Docker, containerd, CRI-O) to get information about running workloads and policies.
Building and maintaining the internal mapping for Container/Node Identity.
Fetching and processing Security Policies (KSP, HSP, CSP) that apply to the workloads on its node.
Instructing the Runtime Enforcer on which policies to load and enforce for specific containers and the host.
Receiving security events and raw data from the System Monitor.
Adding context (like identity) to raw events received from the monitor.
Forwarding processed logs and alerts to the Log Feeder for external consumption.
Handling configuration changes and responding to shutdown signals.
Without the Daemon, the individual components couldn't work together effectively to provide end-to-end security.
Let's trace the journey of a security policy and a system event, highlighting the Daemon's role.
Imagine you want to protect a specific container, say a database pod with the label app: my-database, by blocking it from executing the /bin/bash command. You create a KubeArmor Policy (KSP) like this:
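The policy manifest itself is not shown above; a minimal sketch of what such a KSP could look like is below (apiVersion and kind follow the KubeArmor CRDs, while the namespace and severity are illustrative assumptions):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-bash-in-db      # the policy name referenced in the walkthrough below
  namespace: default          # assumed namespace of the database pod
spec:
  severity: 5                 # illustrative severity
  selector:
    matchLabels:
      app: my-database
  process:
    matchPaths:
    - path: /bin/bash
  action: Block
```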
And let's say later, a process inside that database container actually attempts to run /bin/bash.
Here's how the KubeArmor Daemon on the node hosting that database pod orchestrates the process:
1. Policy Discovery: The KubeArmor Daemon, which is watching the Kubernetes API server, detects your new block-bash-in-db policy.
2. Identify Targets: The Daemon processes the policy's selector (app: my-database). It checks its internal state (built by talking to the Kubernetes API and container runtime) to find which running containers/pods on its node match this label. It identifies the specific database container.
3. Prepare Enforcement: The Daemon takes the policy rule (Block /bin/bash) and tells its Runtime Enforcer component to load this rule specifically for the identified database container. The Enforcer translates this into the format needed by the underlying OS security module (AppArmor, SELinux, or BPF-LSM) and loads it into the kernel.
4. System Event: A process inside the database container tries to execute /bin/bash.
5. Event Detection & Enforcement: The OS kernel intercepts this action. If using BPF-LSM, the Runtime Enforcer's BPF program checks the loaded policy rules (which the Daemon put there). It sees the rule to Block /bin/bash for this container's identity. The action is immediately blocked by the kernel.
6. Event Monitoring & Context: Simultaneously, the System Monitor's BPF programs also detect the exec attempt on /bin/bash. It collects details like the process ID, the attempted command, and the process's Namespace IDs. It sends this raw data to the Daemon (via a BPF ring buffer).
7. Event Processing: The Daemon receives the raw event from the Monitor. It uses the Namespace IDs to look up the Container/Node Identity in its internal map, identifying that this event came from the database container (app: my-database). It sees the event includes an error code indicating it was blocked by the security module.
8. Log Generation: The Daemon formats a detailed log/alert message containing all the information: the event type (process execution), the command (/bin/bash), the outcome (Blocked), and the workload identity (container ID, Pod Name, Namespace, Labels).
9. Log Forwarding: The Daemon sends this formatted log message to its Log Feeder component, which then forwards it to your configured logging/monitoring system.
This diagram illustrates how the Daemon acts as the central point, integrating information flow and control between external systems (K8s, CRI), the low-level kernel components (Monitor, Enforcer), and the logging/alerting system.
Let's look at the core structure representing the KubeArmor Daemon in the code. It holds references to all the components it manages and the data it needs.
Referencing KubeArmor/core/kubeArmor.go
:
Explanation:
- The KubeArmorDaemon struct contains fields like Node (details about the node it runs on), K8sEnabled (whether it's in a K8s cluster), and maps/slices to store information about K8sPods, Containers, EndPoints, and parsed SecurityPolicies. Locks (*sync.RWMutex) are used to safely access this shared data from multiple parts of the Daemon's logic.
- Crucially, it has pointers to the other main components: Logger, SystemMonitor, and RuntimeEnforcer. This shows that the Daemon owns and interacts with instances of these components.
- WgDaemon is a sync.WaitGroup used to track background processes (goroutines) started by the Daemon, allowing for a clean shutdown.
When KubeArmor starts on a node, the KubeArmor()
function in KubeArmor/main.go
(which calls into KubeArmor/core/kubeArmor.go
) initializes and runs the Daemon.
Here's a simplified look at the initialization steps within the KubeArmor()
function:
Explanation:
- NewKubeArmorDaemon is like the constructor; it creates the Daemon object and initializes its basic fields and locks. Pointers to components like Logger, SystemMonitor, and RuntimeEnforcer are initially zeroed.
- The main KubeArmor() function then calls dedicated Init... methods on the dm object (like dm.InitLogger(), dm.InitSystemMonitor(), dm.InitRuntimeEnforcer()).
- These Init... methods are responsible for creating the actual instances of the other components using their respective New... functions (e.g., mon.NewSystemMonitor()) and assigning the returned object to the Daemon's pointer field (dm.SystemMonitor = ...). They pass necessary configuration and references (like the Logger) to the components they initialize.
- After initializing components, the Daemon starts goroutines (using go dm.SomeFunction()) for tasks that need to run continuously in the background, like serving logs, monitoring system events, or watching external APIs.
- The main flow then typically waits for a shutdown signal (<-sigChan).
- When a signal is received, dm.DestroyKubeArmorDaemon() is called, which in turn calls Close... methods on the components to shut them down gracefully.
This demonstrates the Daemon's role in the lifecycle: it's the entity that brings the other parts to life, wires them together by passing references, starts their operations, and orchestrates their shutdown.
The Daemon isn't just starting components; it's managing the flow of information:
- Policies In: The Daemon actively watches the Kubernetes API (or receives updates in non-K8s mode) for changes to KubeArmor policies. When it gets a policy, it stores it in its SecurityPolicies or HostSecurityPolicies lists and notifies the Runtime Enforcer to update the kernel rules for affected workloads.
- Identity Management: The Daemon watches Pod/Container/Node events from Kubernetes and the container runtime. It populates internal structures (like the Containers map) which are then used by the System Monitor to correlate raw kernel events with workload identity (Container/Node Identity). While the NsMap itself might live in the Monitor (as seen in Chapter 4 context), the Daemon is responsible for gathering the initial K8s/CRI data needed to populate that map.
- Events Up: The System Monitor constantly reads raw event data from the kernel (via a BPF ring buffer). It performs the initial lookup using the Namespace IDs and passes the enriched events (likely via Go channels, as hinted in Chapter 4 code) back to the Daemon or a component managed by the Daemon (like the logging pipeline within the Feeder).
- Logs Out: The Daemon (or its logging pipeline) takes these enriched events and passes them to the Log Feeder component. The Log Feeder is then responsible for sending these logs/alerts to the configured output destinations.
The Daemon acts as the central switchboard, ensuring that policies are delivered to the enforcement layer, that kernel events are enriched with workload context, and that meaningful security logs and alerts are generated and sent out.
| Responsibility | Description | Interacts with |
|---|---|---|
| Component Management | Starts, stops, and manages the lifecycle of Monitor, Enforcer, Logger. | System Monitor, Runtime Enforcer, Log Feeder |
| External Comm. | Watches K8s API for policies & workload info; interacts with CRI. | Kubernetes API Server, Container Runtimes (Docker, containerd, CRI-O) |
| Identity Building | Gathers data (Labels, Namespaces, Container IDs, PIDs, NS IDs) to map low-level events to workloads. | Kubernetes API Server, Container Runtimes, OS Kernel (/proc) |
| Policy Processing | Fetches policies, identifies targeted workloads on its node. | Kubernetes API Server, Internal state (Identity) |
| Enforcement Orchest. | Tells the Runtime Enforcer which policies to load for which workload. | Runtime Enforcer, Internal state (Identity, Policies) |
| Event Reception | Receives raw or partially processed events from the Monitor. | System Monitor (via channels/buffers) |
| Event Enrichment | Adds full workload identity and policy context to incoming events. | System Monitor, Internal state (Identity, Policies) |
| Logging/Alerting | Formats events into structured logs/alerts and passes them to the Log Feeder. | Log Feeder, Internal state (Enriched Events) |
| Configuration/Signal | Reads configuration, handles graceful shutdown requests. | Configuration files/API, OS Signals |
This table reinforces that the Daemon is the crucial integration layer on each node.
In this chapter, you learned that the KubeArmor Daemon is the core process running on each node, serving as the central orchestrator for all other KubeArmor components. It's responsible for initializing, managing, and coordinating the System Monitor (eyes/ears), Runtime Enforcer (security guard), and Log Feeder (reporter). You saw how it interacts with Kubernetes and container runtimes to understand Container/Node Identity and fetch Security Policies, bringing all the pieces together to enforce your security posture and report violations.
Understanding the Daemon's central role is key to seeing how KubeArmor operates as a cohesive system on each node. In the final chapter, we'll focus on where all the security events observed by the Daemon and its components end up
KubeArmor is a security solution for Kubernetes and cloud-native platforms that helps protect your workloads from attacks and threats. It does this by providing a set of hardening policies that are based on industry-leading compliance and attack frameworks such as CIS, MITRE, NIST-800-53, and STIGs. These policies are designed to help you secure your workloads in a way that is compliant with these frameworks and recommended best practices.
One of the key features of KubeArmor is that it provides these hardening policies out-of-the-box, meaning that you don't have to spend time researching and configuring them yourself. Instead, you can simply apply the policies to your workloads and immediately start benefiting from the added security that they provide.
Additionally, KubeArmor presents these hardening policies in the context of your workload, so you can see how they will be applied and what impact they will have on your system. This allows you to make informed decisions about which policies to apply, and helps you understand the trade-offs between security and functionality.
Overall, KubeArmor is a powerful tool for securing your Kubernetes workloads, and its out-of-the-box hardening policies based on industry-leading compliance and attack frameworks make it easy to get started and ensure that your system is as secure as possible.
The rules in hardening policies are based on inputs from:
Several others...
Pre-requisites:
Install KubeArmor
curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin && karmor install
Get the hardening policies in context of all the deployment in namespace NAMESPACE:
karmor recommend -n NAMESPACE
The recommended policies would be available in the out
folder.
Key highlights:
The hardening policies are available by default in the out
folder separated out in directories based on deployment names.
Get an HTML report by using the option --report report.html
with karmor recommend
.
Get hardening policies in context to specific compliance by specifying --tag <CIS/MITRE/...>
option.
KubeArmor has visibility into systems and application behavior. KubeArmor summarizes/aggregates the information and provides a user-friendly view to figure out the application behavior.
Process data:
What are the processes executing in the pods?
What processes are executing through which parent processes?
File data:
What are the file system accesses made by different processes?
Network Accesses:
What are the Ingress/Egress connections from the pod?
What server binds are done in the pod?
Get visibility into process executions in default
namespace.
Welcome back to the KubeArmor tutorial! In the previous chapters, we've learned how KubeArmor defines security rules using Security Policies, identifies workloads using Container/Node Identity, enforces policies with the Runtime Enforcer, and observes system activity with the System Monitor, all powered by the underlying BPF (eBPF) technology and orchestrated by the KubeArmor Daemon on each node.
We've discussed how KubeArmor can audit or block actions based on policies. But where do you actually see the results of this monitoring and enforcement? How do you know when a policy was violated or when suspicious activity was detected?
This is where the Log Feeder comes in.
Think of the Log Feeder as KubeArmor's reporting and alerting system. Its primary job is to collect all the security-relevant events and telemetry that KubeArmor detects and make them available to you and other systems.
It receives structured information, including:
Security Alerts: Notifications about actions that were audited or blocked because they violated a Security Policy.
System Logs: Telemetry about system activities that KubeArmor is monitoring, even if no specific policy applies (e.g., process executions, file accesses, network connections, depending on visibility settings).
KubeArmor Messages: Internal messages from the KubeArmor Daemon itself (useful for debugging and monitoring KubeArmor's status).
The Log Feeder formats this information into standardized messages (using Protobuf, a language-neutral, platform-neutral, extensible mechanism for serializing structured data) and sends it out over a gRPC interface. gRPC is a high-performance framework for inter-process communication.
This gRPC interface allows various clients to connect to the KubeArmor Daemon on each node and subscribe to streams of these security events in real-time. Tools like karmor log
(part of the KubeArmor client tools) connect to this feeder to display events. External systems like Security Information and Event Management (SIEM) platforms can also integrate by writing clients that understand the KubeArmor gRPC format.
You've deployed KubeArmor and applied policies. Now you need to answer questions like:
Was that attempt to read /etc/passwd
from the web server container actually blocked?
Is any process on my host nodes trying to access sensitive files like /root/.ssh
?
Are my applications spawning unexpected shell processes, even if they aren't explicitly blocked by policy?
Did KubeArmor successfully apply the policies I created?
The Log Feeder provides the answers by giving you a stream of events directly from KubeArmor:
It reports when an action was Blocked by a specific policy, providing details about the workload and the attempted action.
It reports when an action was Audited, showing you potentially suspicious behavior even if it wasn't severe enough to block.
It reports general System Events (logs), giving you visibility into the normal or unusual behavior of processes, file accesses, and network connections on your nodes and within containers.
Without the Log Feeder, KubeArmor would be enforcing policies blindly from a monitoring perspective. You wouldn't have the necessary visibility to understand your security posture, detect attacks (even failed ones), or troubleshoot policy issues.
Use Case Example: You want to see every time someone tries to execute a shell (/bin/sh
, /bin/bash
) inside any of your containers. You might create an Audit Policy for this. The Log Feeder is how you'll receive the notifications for these audited events.
Event Source: The System Monitor observes kernel events (process execution, file access, etc.). It enriches these events with Container/Node Identity and sends them to the KubeArmor Daemon. The Runtime Enforcer also contributes by confirming if an event was blocked or audited by policy.
Reception by Daemon: The KubeArmor Daemon receives these enriched events.
Formatting (by Feeder): The Daemon passes the event data to the Log Feeder component. The Feeder takes the structured event data and converts it into the predefined Protobuf message format (e.g., the Alert or Log message types defined in protobuf/kubearmor.proto).
Queueing: The Feeder manages internal queues or channels for different types of messages (Alerts, Logs, general KubeArmor Messages). It puts the newly formatted Protobuf message onto the appropriate queue/channel.
gRPC Server: The Feeder runs a gRPC server on a specific port (default 32767).
Client Subscription: External clients connect to this gRPC port and call specific gRPC methods (like WatchAlerts or WatchLogs) to subscribe to event streams.
Event Streaming: When a client subscribes, the Feeder gets a handle to the client's connection. It then continuously reads messages from its internal queues/channels and streams them over the gRPC connection to the connected client.
Here's a simple sequence diagram showing the flow:
This shows how events flow from the kernel, up through the System Monitor and Daemon, are formatted by the Log Feeder, and then streamed out to any connected clients.
The Log Feeder is implemented primarily in KubeArmor/feeder/feeder.go
and KubeArmor/feeder/logServer.go
, using definitions from protobuf/kubearmor.proto
and the generated protobuf/kubearmor_grpc.pb.go
.
First, let's look at the Protobuf message structures. These define the schema for the data that gets sent out.
Referencing protobuf/kubearmor.proto
:
These Protobuf definitions specify the exact structure and data types for the messages KubeArmor will send, ensuring that clients know exactly what data to expect. The .pb.go
and _grpc.pb.go
files are automatically generated from this .proto
file and provide the Go code for serializing/deserializing these messages and implementing the gRPC service.
Now, let's look at the Log Feeder implementation in Go.
Referencing KubeArmor/feeder/feeder.go
:
Explanation:
- NewFeeder: This function, called during Daemon initialization, sets up the data structures (EventStructs) to manage client connections, creates a network listener for the configured gRPC port, and creates and registers the gRPC server (LogServer). It passes a reference to EventStructs and other data to the LogService implementation.
- ServeLogFeeds: This function is run as a goroutine by the KubeArmor Daemon. It calls LogServer.Serve(), which makes the gRPC server start listening for incoming client connections and handling gRPC requests.
- PushLog: This method is called by the KubeArmor Daemon (specifically, the part that processes events from the System Monitor) whenever a new security event or log needs to be reported. It takes KubeArmor's internal tp.Log structure, converts it into the appropriate Protobuf message (pb.Alert or pb.Log), and then iterates through all registered client connections (stored in EventStructs), broadcasting the message to their respective Go channels (Broadcast). If a client isn't reading fast enough, the message might be dropped due to the channel buffer being full.
Now let's see the client-side handling logic within the Log Feeder's gRPC service implementation.
Referencing KubeArmor/feeder/logServer.go
:
Explanation:
- LogService: This struct is the concrete implementation of the gRPC service defined in protobuf/kubearmor.proto. It holds references to the feeder's state.
- WatchAlerts: This method is a gRPC streaming RPC handler. When a client initiates a WatchAlerts call, this function is executed. It creates a dedicated Go channel (conn) for that client using AddAlertStruct. Then, it enters a for loop. Inside the loop, it waits for either the client to disconnect (<-svr.Context().Done()) or for a new pb.Alert message to appear on the client's dedicated channel (<-conn). When a message arrives, it sends it over the gRPC stream back to the client using svr.Send(resp). This creates the real-time streaming behavior.
- WatchLogs: This method is similar to WatchAlerts but handles subscriptions for general system logs (pb.Log messages).
This shows how the Log Feeder's gRPC server manages multiple concurrent client connections, each with its own channel, ensuring that events pushed by PushLog are delivered to all interested subscribers efficiently.
The most common way to connect to the Log Feeder is using the karmor
command-line tool provided with KubeArmor.
To watch security alerts:
To watch system logs:
To watch both alerts and logs:
These commands are simply gRPC clients that connect to the KubeArmor Daemon's Log Feeder port on your nodes (or via the KubeArmor Relay service if configured) and call the WatchAlerts
and WatchLogs
gRPC methods.
You can also specify filters (e.g., by namespace or policy name) using karmor log
options, which the Log Feeder's gRPC handlers can process (although the code snippets above show a simplified filter handling).
For integration with other systems, you would write a custom gRPC client application in your preferred language (Go, Python, Java, etc.) using the KubeArmor Protobuf definitions to connect to the feeder and consume the streams.
The Log Feeder is your essential window into KubeArmor's activity. By collecting enriched security events and telemetry from the System Monitor and Runtime Enforcer, formatting them using Protobuf, and streaming them over a gRPC interface, it provides real-time visibility into policy violations (alerts) and system behavior (logs). Tools like karmor log
and integrations with SIEM systems rely on the Log Feeder to deliver crucial security insights from your KubeArmor-protected environment.
This chapter concludes our detailed look into the core components of KubeArmor! You now have a foundational understanding of how KubeArmor defines policies, identifies workloads, enforces rules, monitors system activity using eBPF, orchestrates these actions with the Daemon, and reports everything via the Log Feeder.
Thank you for following this tutorial series! We hope it has provided a clear and beginner-friendly introduction to the fascinating world of KubeArmor.
ModelArmor uses KubeArmor as a sandboxing engine to ensure that untrusted model execution is constrained and kept within the required checks. AI/ML models are essentially processes, and allowing untrusted models to execute in AI environments carries significant risks, such as the possibility of cryptomining attacks leveraging GPUs, remote command injections, and more. KubeArmor's preemptive mitigation mechanism provides a suitable framework for constraining the execution environment of models.
ModelArmor can be used to enforce security policies on the model execution environment.
KubeArmor helps organizations enforce a zero trust posture within their Kubernetes clusters. It allows users to define an allow-based policy that allows specific operations, and denies or audits all other operations. This helps to ensure that only authorized activities are allowed within the cluster, and that any deviations from the expected behavior are denied and flagged for further investigation.
By implementing a zero trust posture with KubeArmor, organizations can increase their security posture and reduce the risk of unauthorized access or activity within their Kubernetes clusters. This can help to protect sensitive data, prevent system breaches, and maintain the integrity of the cluster.
Install the nginx deployment using
kubectl create deployment nginx --image=nginx
.
Set the default security posture to default-deny.
kubectl annotate ns default kubearmor-file-posture=block --overwrite
Apply the following policy:
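The policy itself is not reproduced here. As a rough sketch (the binary and directory paths are assumptions based on the official nginx image, not taken from this document), an Allow policy of the kind described could look like:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: allow-nginx-only      # illustrative name
  namespace: default
spec:
  severity: 3
  selector:
    matchLabels:
      app: nginx
  process:
    matchPaths:
    - path: /usr/sbin/nginx   # assumed path of the nginx binary
  file:
    matchDirectories:         # assumed directories nginx needs to serve content
    - dir: /etc/nginx/
      recursive: true
    - dir: /usr/share/nginx/
      recursive: true
    - dir: /var/run/
      recursive: true
  action: Allow
```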
Observe that the policy contains the Allow action. Once any KubeArmor policy with an Allow action applies to a pod, that pod enters least-permissive mode, allowing only explicitly allowed operations.
Note: Use kubectl port-forward $POD --address 0.0.0.0 8080:80 to access nginx; you can see that nginx web access still works normally.
Let's try to execute some other processes:
These will be denied with a permission error.
Achieving Zero Trust Security Posture is difficult. However, the more difficult part is to maintain the Zero Trust posture across application updates. There is also a risk of application downtime if the security posture is not correctly identified. While KubeArmor provides a way to enforce Zero Trust Security Posture, identifying the policies/rules for achieving this is non-trivial and requires that you keep the policies in dry-run mode (or default audit mode) before using the default-deny mode.
Hardening policies are derived from industry leading compliance standards and attack frameworks such as CIS, MITRE, NIST, STIGs, and several others. contains the latest hardening policies.
KubeArmor client tool (karmor) provides a way (karmor recommend
) to fetch the policies in the context of the kubernetes workloads or specific container using command line.
The output is a set of or that can be applied using k8s native tools (such as kubectl apply
).
KubeArmor supports allow-based policies, which result in specific actions being allowed while everything else is denied or audited. For example, a specific pod/container might only invoke a set of binaries at runtime. As part of allow-based rules you can specify the set of processes that are allowed, and everything else is either audited or denied based on the default security posture.
KubeArmor provides a framework to smooth the journey to a Zero Trust posture. For example, it is possible to set dry-run/audit mode at the namespace level by annotating the namespace. Thus, you can have different namespaces in different default security posture modes (default-deny vs default-audit). Users can switch to default-deny mode once they are comfortable (i.e., they do not see any alerts) with the settings.
| Component | Description | Location | Purpose |
|---|---|---|---|
| gRPC Server | Listens for incoming client connections and handles RPC calls. | feeder/feeder.go | Exposes event streams to external clients. |
| LogService | Implementation of the gRPC service methods (WatchAlerts, WatchLogs). | feeder/logServer.go | Manages client connections and streams events. |
| EventStructs | Internal data structure (maps of channels) holding connections for each client type. | feeder/feeder.go | Enables broadcasting events to multiple clients. |
| Protobuf Defs | Define the structure of Alert and Log messages. | protobuf/kubearmor.proto | Standardizes the output format. |
| PushLog method | Method on the Feeder called by the Daemon to send new events. | feeder/feeder.go | Point of entry for events into the feeder. |
There are two default modes of operation available: block and audit. block mode blocks all the operations that are not allowed in the policy. audit generates telemetry events for operations that would otherwise have been blocked.
KubeArmor has 4 types of resources: Process, File, Network, and Capabilities. The default posture is configurable for each of these resources separately, except Process. Process-based operations are treated under the File resource only.
Note: By default, KubeArmor sets the global default posture to audit.
The global default posture is configured using configuration options passed to KubeArmor in a configuration file, or using command-line flags with the KubeArmor binary.
We use namespace annotations to configure the default posture per namespace. Supported annotation keys are kubearmor-file-posture, kubearmor-network-posture, and kubearmor-capabilities-posture, with values block or audit. If a namespace is annotated with a supported key and an invalid value (like kubearmor-file-posture=invalid), KubeArmor will update the value with the global default posture (i.e., to kubearmor-file-posture=block).
Let's start KubeArmor with configuring default network posture to audit in the following YAML.
Contents of kubearmor.yaml
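The file contents were not carried over into this page; a minimal sketch, assuming KubeArmor's default*Posture configuration keys, would be:

```yaml
# kubearmor.yaml (fragment) – only posture-related options shown
defaultFilePosture: block
defaultNetworkPosture: audit        # the network default posture discussed here
defaultCapabilitiesPosture: block
```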
Here's a sample policy to allow tcp connections from the curl binary.
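The manifest is not shown here; a sketch of such a policy (the policy name, namespace, and label are assumptions based on the multiubuntu example used below) could be:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: allow-tcp-from-curl
  namespace: multiubuntu
spec:
  severity: 4
  selector:
    matchLabels:
      container: ubuntu-5
  network:
    matchProtocols:
    - protocol: tcp
      fromSource:
      - path: /usr/bin/curl
  action: Allow
```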
Inside the ubuntu-5-deployment
, if we try to access tcp
using curl
. It works as expected with no telemetry generated.
If we try to access udp
using curl
, a bunch of telemetry is generated for the udp
access.
curl google.com
requires UDP for DNS resolution.
Generated alert has Policy Name DefaultPosture
and Action as Audit
Now let's update the default network posture to block for multiubuntu
namespace.
Now if we try to access udp
using curl
, the action is blocked and related alerts are generated.
Here curl couldn't resolve google.com due to blocked access to UDP.
Generated alert has Policy Name DefaultPosture
and Action as Block
Let's try to set the annotation value to something invalid.
We can see that the annotation value was automatically updated to audit, since that was the global mode of operation for network in the KubeArmor configuration.
KubeArmor currently supports enabling visibility for containers and hosts.
Visibility for hosts is not enabled by default, whereas it is enabled by default for containers.
The karmor tool provides access to both using karmor logs.
Now we need to deploy some sample policies
This sample policy blocks execution of the apt
and apt-get
commands in wordpress pods with label selector app: wordpress
.
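A sketch of such a policy (the namespace and exact binary paths are assumptions based on the sample WordPress deployment) might look like:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-pkg-mgmt-tools-exec
  namespace: wordpress-mysql      # assumed namespace of the sample app
spec:
  severity: 3
  selector:
    matchLabels:
      app: wordpress
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
  action: Block
```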
Checking default visibility
Container visibility is enabled by default. We can check it using kubectl describe
and grep kubearmor-visibility
For pre-existing workloads : Enable visibility using kubectl annotate
. Currently KubeArmor supports process
, file
, network
, capabilities
Open up a terminal, and watch logs using the karmor
cli
In another terminal, simulate a policy violation . Try sleep
inside a pod
In the terminal running karmor logs
, the policy violation along with container visibility is shown, in this case for example
The logs can also be generated in JSON format using karmor logs --json
Host Visibility is not enabled by default . To enable Host Visibility we need to annotate the node using kubectl annotate node
To confirm it use kubectl describe
and grep kubearmor-visibility
Now we can get general telemetry events in the context of the host using karmor logs
.The logs related to Host Visibility will have type Type: HostLog
and Operation: File | Process | Network
KubeArmor lets the user select which kinds of events should be traced by changing the annotation kubearmor-visibility at the namespace level.
Checking Namespace visibility
Namespace visibility can be checked using kubectl describe
.
To update the visibility of a namespace: let's update the KubeArmor visibility using kubectl annotate. Currently KubeArmor supports process, file, network, and capabilities.
Let's try to update the visibility for the namespace wordpress-mysql.
Note: To turn off visibility across all aspects, use kubearmor-visibility=none. Note that any policy violations or events that result in non-success returns will still be reported in the logs.
Open up a terminal, and watch logs using the karmor
cli
In another terminal, let's exec into the pod and run some process commands . Try ls
inside the pod
Now, we can notice that no logs have been generated for the above command and logs with only Operation: Network
are shown.
Note: If telemetry is disabled, the user won't get audit events even if there is an audit rule.
Note: Only the logs are affected by changing the visibility; we still get all the alerts that are generated.
Let's simulate a sample policy violation, and see whether we still get alerts or not.
Policy violation :
Here, note that the alert with Operation: Process
is reported.
Adversarial attacks exploit vulnerabilities in AI systems by subtly altering input data to mislead the model into incorrect predictions or decisions. These perturbations are often imperceptible to humans but can significantly degrade the system's performance.
By Model Access:
White-box Attacks: Complete knowledge of the model, including architecture and training data.
Black-box Attacks: No information about the model; the attacker probes responses to craft inputs.
By Target Objective:
Non-targeted Attacks: Push input to any incorrect class.
Targeted Attacks: Force input into a specific class.
Training Phase Attacks:
Data Poisoning: Injects malicious data into the training set, altering model behavior.
Backdoor Attacks: Embeds triggers in training data that activate specific responses during inference.
Inference Phase Attacks:
Model Evasion: Gradually perturbs input to skew predictions (e.g., targeted misclassification).
Membership Inference: Exploits model outputs to infer sensitive training data (e.g., credit card numbers).
Highly accurate models often exhibit reduced robustness against adversarial perturbations, creating a tradeoff between accuracy and security. For instance, Chen et al. found that better-performing models tend to be more sensitive to adversarial inputs.
Pre-analysis: Test models for prompt injection vulnerabilities using techniques like fuzzing.
Input Sanitation:
Validation: Enforce strict input rules (e.g., character and data type checks).
Filtering: Strip malicious scripts or fragments.
Encoding: Convert special characters to safe representations.
Secure Practices for Model Deployment:
Restrict model permissions.
Regularly update libraries to patch vulnerabilities.
Detect injection attempts with specialized tooling.
Python's pickle
module allows serialization and deserialization but lacks security checks. Attackers can exploit this to execute arbitrary code using crafted payloads. The module’s inherent insecurity makes it risky to use with untrusted inputs.
Mitigation:
Avoid using pickle
with untrusted sources.
Use secure serialization libraries like json
or protobuf
.
The Pickle Code Injection Proof of Concept (PoC) demonstrates the security vulnerabilities in Python's pickle
module, which can be exploited to execute arbitrary code during deserialization. This method is inherently insecure because it allows execution of arbitrary functions without restrictions or security checks.
Custom Pickle Injector:
Print Injection:
Install Packages:
Adversarial Command Execution: Upon loading the tampered model:
Output:
Installs the package or executes the payload.
Alters model behavior: changes predictions, losses, etc.
Spreading Malware: The injected code can download and install malware on the target machine, which can then be used to infect other systems in the network or create a botnet.
Backdoor Installation: An attacker can use pickle injection to install a backdoor that allows persistent access to the system, even if the original vulnerability is patched.
Data Exfiltration: An attacker can use pickle injection to read sensitive files or data from the system and send it to a remote server. This can include configuration files, database credentials, or any other sensitive information stored on the machine.
The pickle
module is inherently insecure for handling untrusted input due to its ability to execute arbitrary code.
Native Json format (this document)
KubeArmor CEF Format (coming soon...)
Container alerts are generated when there is a policy violation or audit event that is raised due to a policy action. For example, a policy might block execution of a process. When the execution is blocked by KubeArmor enforcer, KubeArmor generates an alert event implying policy action. In the case of an Audit action, the KubeArmor will only generate an alert without actually blocking the action.
The primary difference between the container alert events and the telemetry events (showcased above) is that the alert events contain certain additional fields, such as the name of the policy that caused the alert and other metadata such as "Tags", "Message", and "Severity" associated with the policy rule.
The fields are self-explanatory and have similar meaning as in the context of container based events (explained above).
Here is the specification of a security policy.
Note Please note that for system calls monitoring we only support audit action no matter what the value of action is
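The full specification block did not survive formatting here; the overall shape of a KubeArmorPolicy, as a condensed sketch (bracketed values are placeholders, and only a subset of fields is shown), is:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: [policy name]
  namespace: [namespace name]
spec:
  severity: [1-10]                     # optional
  tags: ["tag1", "tag2"]               # optional
  message: [alert message]             # optional
  selector:
    matchLabels:
      [key1]: [value1]
  process:
    matchPaths:
    - path: [absolute executable path]
  file:
    matchDirectories:
    - dir: [absolute directory path]
      recursive: [true|false]
  network:
    matchProtocols:
    - protocol: [tcp|udp|icmp]
  syscalls:
    matchSyscalls:
    - syscall: [list of syscall names]
  action: [Allow|Audit|Block]
```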
Now, we will briefly explain how to define a security policy.
A security policy starts with the base information such as apiVersion, kind, and metadata. The apiVersion and kind would be the same in any security policies. In the case of metadata, you need to specify the names of a policy and a namespace where you want to apply the policy.
The severity part is somewhat important. You can specify the severity of a given policy from 1 to 10. This severity will appear in alerts when policy violations happen.
The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.
The message part is optional. You can add an alert message, and then the message will be presented in alert logs.
The selector part is relatively straightforward. Similar to other Kubernetes configurations, you can specify (a group of) pods based on labels.
Further, in the selector we can use matchExpressions to define labels to select/deselect the workloads. Currently, only labels can be matched, so the key should be 'label'. The operator determines whether the policy applies to the workloads specified in the values field or not.
Operator: In — when the operator is set to In, the policy is applied only to the workloads that match the labels in the values field.
Operator: NotIn — when the operator is set to NotIn, the policy is applied to all workloads except those that match the labels in the values field.
NOTE: Both matchExpressions and matchLabels are ANDed.
In each match, there are three options.
ownerOnly (static action: allow owner only; otherwise block all)
If this is enabled, the owners of the executable(s) defined with matchPaths and matchDirectories will be only allowed to execute.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.
fromSource
If a path is specified in fromSource, the executable at the path will be allowed/blocked to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be only allowed (blocked) to execute /bin/sleep. Otherwise, the execution of /bin/sleep will be blocked (allowed).
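The example policy referenced above is not shown; a sketch of it (metadata and selector are illustrative) could be:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: allow-sleep-from-bash
  namespace: multiubuntu
spec:
  selector:
    matchLabels:
      container: ubuntu-1
  process:
    matchPaths:
    - path: /bin/sleep
      fromSource:
      - path: /bin/bash
  action: Allow    # with action: Block, /bin/bash would instead be blocked from running /bin/sleep
```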
The file section is quite similar to the process section.
The only difference between 'process' and 'file' is the readOnly option.
readOnly (static action: allow to read only; otherwise block all)
If this is enabled, the read operation will be only allowed, and any other operations (e.g., write) will be blocked.
In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.
In the case of syscalls, there are two types of matches, matchSyscalls and matchPaths. matchPaths can be used to target system calls targeting specific binary path or anything under a specific directory, additionally you can slice based on syscalls generated by a binary or a group of binaries in a directory. You can use matchSyscall as a more general rule to match syscalls from all sources or from specific binaries.
There are two options in each match.
fromSource
If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For a better understanding, let's take the example sketched after this list: only unlink system calls generated by /bin/bash will be matched.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory.
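A sketch of the unlink-from-bash example mentioned under the fromSource option (metadata and selector are illustrative):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-unlink-from-bash
  namespace: default
spec:
  selector:
    matchLabels:
      app: ubuntu
  syscalls:
    matchSyscalls:
    - syscall:
      - unlink
      fromSource:
      - path: /bin/bash
  action: Audit    # syscall rules are audit-only regardless of the action
```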
Action
KubeArmor supports a configurable default security posture. The security posture could be allow/audit/deny. The default posture is used when there is at least one Allow policy for the given deployment, i.e., KubeArmor is handling policies in a whitelisting manner (more about this in ).
Note: This example is in the environment.
If you don't have access to a K8s cluster, please follow to set one up.
karmor CLI tool:
To deploy app follow
Ref:
For better understanding, you can check .
In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to determine particular patterns for executables by using regular expressions. However, the coverage of regular expressions is highly dependent on AppArmor (). Thus, we generally do not recommend using this match.
In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in .
The action could be Allow, Audit, or Block. Security policies would be handled in a blacklist manner or a whitelist manner according to the action. Thus, you need to define the action carefully. You can refer to for more details. In the case of the Audit action, we can use this action for policy verification before applying a security policy with the Block action. For System calls monitoring, we only support audit mode no matter what the action is set to.
| Field | Description | Example |
|---|---|---|
| ClusterName | gives information about the cluster for which the log was generated | default |
| Operation | gives details about what type of operation happened in the pod | File/Process/Network |
| ContainerID | information about the container ID from where the log was generated | 7aca8d52d35ab7872df6a454ca32339386be |
| ContainerImage | shows the image that was used to spin up the container | docker.io/accuknox/knoxautopolicy:v0.9@sha256:bb83b5c6d41e0d0aa3b5d6621188c284ea |
| ContainerName | specifies the container name where the log got generated | discovery-engine |
| Data | shows the system call that was invoked for this operation | syscall=SYS_OPENAT fd=-100 flags=O_RDWR\|O_CREAT\|O_NOFOLLOW\|O_CLOEXEC |
| HostName | shows the node name where the log got generated | aks-agentpool-16128849-vmss000001 |
| HostPID | gives the host process ID | 967872 |
| HostPPID | lists the details of the host parent process ID | 967496 |
| Labels | shows the pod label from where the log was generated | app=discovery-engine |
| Message | gives the message specified in the policy | Alert! Execution of package management process inside container is denied |
| NamespaceName | lists the namespace where the pod is running | accuknox-agents |
| PID | lists the process ID running in the container | 1 |
| PPID | lists the parent process ID running in the container | 967496 |
| ParentProcessName | gives the parent process name from where the operation happened | /usr/bin/containerd-shim-runc-v2 |
| PodName | lists the pod name where the log got generated | mysql-76ddc6ddc4-h47hv |
| ProcessName | specifies the operation that happened inside the pod for this log | /knoxAutoPolicy |
| Resource | lists the resources that were requested | //accuknox-obs.db |
| Result | shows whether the event was allowed or denied | Passed |
| Source | lists the source from where the operation request came | /knoxAutoPolicy |
| Type | specifies it as a container log | ContainerLog |
| Field | Description | Example |
|---|---|---|
| Action | specifies the action of the policy it has matched | Audit/Block |
| ClusterName | gives information about the cluster for which the alert was generated | aks-test-cluster |
| Operation | gives details about what type of operation happened in the pod | File/Process/Network |
| ContainerID | information about the container ID where the policy violation or alert got generated | e10d5edb62ac2daa4eb9a2146e2f2cfa87b6a5f30bd3a |
| ContainerImage | shows the image that was used to spin up the container | docker.io/library/mysql:5.6@sha256:20575ecebe6216036d25dab5903808211f |
| ContainerName | specifies the container name where the alert got generated | mysql |
| Data | shows the system call that was invoked for this operation | syscall=SYS_EXECVE |
| Enforcer | specifies the name of the LSM that has enforced the policy | AppArmor/BPFLSM |
| HostName | shows the node name where the alert got generated | aks-agentpool-16128849-vmss000001 |
| HostPID | gives the host process ID | 3647533 |
| HostPPID | lists the details of the host parent process ID | 3642706 |
| Labels | shows the pod label from where the alert was generated | app=mysql |
| Message | gives the message specified in the policy | Alert! Execution of package management process inside container is denied |
| NamespaceName | lists the namespace where the pod is running | wordpress-mysql |
| PID | lists the process ID running in the container | 266 |
| PPID | lists the parent process ID running in the container | 251 |
| ParentProcessName | gives the parent process name from where the operation happened | /bin/bash |
| PodName | lists the pod name where the alert got generated | mysql-76ddc6ddc4-h47hv |
| PolicyName | gives the policy that was matched for this alert generation | harden-mysql-pkg-mngr-exec |
| ProcessName | specifies the operation that happened inside the pod for this alert | /usr/bin/apt |
| Resource | lists the resources that were requested | /usr/bin/apt |
| Result | shows whether the event was allowed or denied | Permission denied |
| Severity | gives the severity level of the operation | 5 |
| Source | lists the source from where the operation request came | /bin/bash |
| Tags | specifies the list of benchmarks this policy satisfies | NIST,NIST_800-53_CM-7(4),SI-4,process,NIST_800-53_SI-4 |
| Timestamp | gives the details of the time this event tried to happen | 1687868507 |
| Type | shows whether a policy matched or a default posture alert was raised | MatchedPolicy |
| UpdatedTime | gives the time of this alert | 2023-06-27T12:21:47.932526 |
| cluster_id | specifies the cluster id where the alert was generated | 596 |
| component_name | gives the component which generated this log/alert | kubearmor |
| tenant_id | specifies the tenant id where this cluster is onboarded in AccuKnox SaaS | 11 |
Here is the specification of a Cluster security policy.
Note Please note that for system calls monitoring we only support audit action no matter what the value of action is
Now, we will briefly explain how to define a cluster security policy.
A cluster security policy starts with base information such as apiVersion, kind, and metadata. The apiVersion is the same as in other security policies, while the kind is KubeArmorClusterPolicy. In the case of metadata, you need to specify the name of the policy and the namespace where you want to apply it.
The severity part is somewhat important. You can specify the severity of a given policy from 1 to 10. This severity will appear in alerts when policy violations happen.
The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.
The message part is optional. You can add an alert message, and then the message will be presented in alert logs.
In the selector section for cluster-based policies, we use matchExpressions to define the namespaces where the policy should be applied and labels to select/deselect the workloads in those namespaces. Currently, only namespaces and labels can be matched, so the key should be 'namespace' or 'label'. The operator determines whether the policy should apply to the namespaces and their workloads specified in the values field or not. Both matchExpressions keys, namespace and label, are ANDed.
Operator: In
When the operator is set to In, the policy is applied only to the namespaces listed, and if a label matchExpressions is defined, the policy is applied only to the workloads that match the labels in the values field.
Operator: NotIn
When the operator is set to NotIn, the policy is applied to all other namespaces except those listed in the values field, and if a label matchExpressions is defined, the policy is applied to all workloads except those that match the labels in the values field.
TIP If the selector operator is omitted in the policy, it will be applied across all namespaces.
In each match, there are three options.
ownerOnly (static action: allow owner only; otherwise block all)
If this is enabled, the owners of the executable(s) defined with matchPaths and matchDirectories will be only allowed to execute.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.
fromSource
If a path is specified in fromSource, the executable at the path will be allowed/blocked to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be only allowed (blocked) to execute /bin/sleep. Otherwise, the execution of /bin/sleep will be blocked (allowed).
The file section is quite similar to the process section.
The only difference between 'process' and 'file' is the readOnly option.
readOnly (static action: allow to read only; otherwise block all)
If this is enabled, the read operation will be only allowed, and any other operations (e.g., write) will be blocked.
In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.
In the case of syscalls, there are two types of matches, matchSyscalls and matchPaths. matchPaths can be used to target system calls targeting specific binary path or anything under a specific directory, additionally you can slice based on syscalls generated by a binary or a group of binaries in a directory. You can use matchSyscall as a more general rule to match syscalls from all sources or from specific binaries.
There are two options in each match.
fromSource
If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For a better understanding, let's take the example below: only unlink system calls generated by /bin/bash will be matched.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory.
Action
Here, we demonstrate how to define cluster security policies.
Process Execution Restriction
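A sketch of the cluster policy explained below (the policy name and severity are illustrative):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: csp-block-apt-in-nginx1
spec:
  severity: 5
  selector:
    matchExpressions:
    - key: namespace
      operator: In
      values:
      - nginx1
  process:
    matchPaths:
    - path: /usr/bin/apt
  action: Block
```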
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in the containers present in the namespace nginx1. For this, we define the 'nginx1' value and operator as 'In' in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is blocked.
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in all containers present in the cluster except that are in the namespace nginx1. For this, we define the 'nginx1' value and operator as 'NotIn' in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is not blocked. Now try running same command in container inside 'nginx2' namespace and it should not be blocked.
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in the workloads who match the labels app=nginx
OR app=nginx-dev
present in the namespace nginx1
. For this, we define the 'nginx1' as value and operator as 'In' for key namespace
AND app=nginx
& app=nginx-dev
value and operator as 'In' for key label
in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers in the namespace 'nginx1' (using "kubectl -n nginx1 exec -it nginx-X-... -- bash") and run '/usr/bin/apt'. You will see that /usr/bin/apt is blocked. apt
won't be blocked in a workload that doesn't have labels app=nginx
OR app=nginx-dev
in namespace nginx1
and all the workloads across other namespaces.
Explanation: The purpose of this policy is to block the execution of '/usr/bin/apt' in all the workloads who doesn't match the labels app=nginx
AND not present in the namespace nginx2
. For this, we define the 'nginx2' as value and operator as 'NotIn' for key namespace
AND app=nginx
value and operator as 'NotIn' for key label
in selector -> matchExpressions and the specific path ('/usr/bin/apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please exec into any container within the namespace 'nginx2' and run '/usr/bin/apt'. You can see the operation is blocked. Then try to do same in other workloads present in different namespace and if they don't have label app=nginx
, the operation will be blocked, in case container have label app=nginx
, operation won't be blocked.
File Access Restriction
Explanation: The purpose of this policy is to block read access for '/etc/host.conf' in all the containers except those in the namespace 'nginx2'.
Verification: After applying this policy, please get into the container within the namespace 'nginx2' and run 'cat /etc/host.conf'. You can see the operation is not blocked and can see the content of the file. Now try to run 'cat /etc/host.conf' in container of 'nginx1' namespace, this operation should be blocked.
Note Other operations like Network, Capabilities, Syscalls also behave in same way as in security policy. The difference only lies in how we match the cluster policy with the namespaces.
Here, we demonstrate how to define security policies using our example microservice (multiubuntu).
Process Execution Restriction
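A sketch of the policy explained below (the policy name, severity, and exact label key are illustrative):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-group-1-proc-path-block
  namespace: multiubuntu
spec:
  severity: 5
  selector:
    matchLabels:
      group: group-1
  process:
    matchPaths:
    - path: /bin/sleep
  action: Block
```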
Explanation: The purpose of this policy is to block the execution of '/bin/sleep' in the containers with the 'group-1' label. For this, we define the 'group-1' label in selector -> matchLabels and the specific path ('/bin/sleep') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please get into one of the containers with the 'group-1' (using "kubectl -n multiubuntu exec -it ubuntu-X-deployment-... -- bash") and run '/bin/sleep'. You will see that /bin/sleep is blocked.
Explanation: The purpose of this policy is to block the execution of the 'apt' binary in all the workloads in the namespace multiubuntu that carry the label container=ubuntu-1. For this, we define 'container=ubuntu-1' as the value and 'In' as the operator for the key label in selector -> matchExpressions, and the specific execname ('apt') in process -> matchPaths. The other expression, with value container=ubuntu-3 and operator 'NotIn' for the key label, is not mandatory, because anything not mentioned with the 'In' operator is simply not selected for matching. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please exec into any container that carries the label container=ubuntu-1 within the namespace 'multiubuntu' and run 'apt'. You can see the binary is blocked. Then try the same in other workloads that don't carry the label container=ubuntu-1; the binary won't be blocked.
Explanation: The purpose of this policy is to block the execution of the 'apt' binary in all the workloads in the namespace multiubuntu that do not carry the label container=ubuntu-1. For this, we define 'container=ubuntu-1' as the value and 'NotIn' as the operator for the key label in selector -> matchExpressions, and the specific execname ('apt') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please exec into any container that carries the label container=ubuntu-1 within the namespace 'multiubuntu' and run 'apt'. You can see the binary is not blocked. Then try the same in other workloads that don't carry the label container=ubuntu-1; there the binary will be blocked.
Explanation: The purpose of this policy is to block all executables in the '/sbin' directory. Since we want to block all executables rather than a specific executable, we use matchDirectories to specify the executables in the '/sbin' directory at once.
Verification: After applying this policy, please get into the container with the 'ubuntu-1' label and run '/sbin/route' to see if this command is allowed (this command will be blocked).
Explanation: As an extension of the previous policy, we want to block all executables in the '/usr' directory and its subdirectories (e.g., '/usr/bin', '/usr/sbin', and '/usr/local/bin'). Thus, we add 'recursive: true' to extend the scope of the policy.
Verification: After applying this policy, please get into the container with the 'ubuntu-2' label and run '/usr/bin/env' or '/usr/bin/whoami'. You will see that those commands are blocked.
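A minimal sketch for this case, assuming the pod carries the label container: ubuntu-2 as in the example microservice:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-2-proc-dir-recursive-block   # illustrative name
  namespace: multiubuntu
spec:
  severity: 5
  selector:
    matchLabels:
      container: ubuntu-2
  process:
    matchDirectories:
      - dir: /usr/               # directories must end with a trailing slash
        recursive: true          # also cover subdirectories such as /usr/bin, /usr/sbin
  action: Block
```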
Explanation: Here, we want the container with the 'ubuntu-3' label only to access certain files by specific executables. Otherwise, we want to block any other file accesses. To achieve this goal, we define the scope of this policy using matchDirectories with fromSource and use the 'Allow' action.
Verification: In this policy, we allow /bin/cat to access the files in /credentials only. After applying this policy, please get into the container with the 'ubuntu-3' label and run 'cat /credentials/password'. This command will be allowed with no errors. Now, please run 'cat /etc/hostname'. Then, this command will be blocked since /bin/cat is only allowed to access /credentials/*.
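A sketch of this Allow-with-fromSource pattern is shown below (policy name is a placeholder):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-3-file-dir-allow-from-source   # illustrative name
  namespace: multiubuntu
spec:
  severity: 5
  selector:
    matchLabels:
      container: ubuntu-3
  file:
    matchDirectories:
      - dir: /credentials/
        fromSource:
          - path: /bin/cat       # only this executable is whitelisted for /credentials/
  action: Allow
```

With the Allow action plus fromSource, /bin/cat is restricted to accessing /credentials/ only; its accesses to any other files are denied.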
Explanation: This policy aims to allow only a specific user (i.e., user1) to launch its own executable (i.e., hello), which means that we do not want even the root user to launch /home/user1/hello. For this, we define a security policy with matchPaths and 'ownerOnly: true'.
Verification: For verification, we also allow several directories and files to change users (from 'root' to 'user1') in the policy. After applying this policy, please get into the container with the 'ubuntu-3' label and run '/home/user1/hello' first. This command will be blocked even though you are the 'root' user. Then, please run 'su - user1'. Now, you are the 'user1' user. Please run '/home/user1/hello' again. You will see that it works now.
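A reduced sketch of the ownerOnly idea, using the Block action; the example referenced above is more elaborate, additionally allowing the files needed to switch users (policy name is a placeholder):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-3-proc-path-owner-block   # hypothetical name
  namespace: multiubuntu
spec:
  severity: 7
  selector:
    matchLabels:
      container: ubuntu-3
  process:
    matchPaths:
      - path: /home/user1/hello
        ownerOnly: true          # only the file owner (user1) may execute it
  action: Block
```

Here, only the owner of /home/user1/hello (user1) may execute it; any other user, including root, is blocked.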
File Access Restriction
Explanation: The purpose of this policy is to allow the container with the 'ubuntu-4' label to read '/credentials/password' only (the write operation is blocked).
Verification: After applying this policy, please get into the container with the 'ubuntu-4' label and run 'cat /credentials/password'. You can see the contents in the file. Now, please run 'echo "test" >> /credentials/password'. You will see that the write operation will be blocked.
Explanation: In this policy, we do not want the container with the 'ubuntu-5' label to access any files in the '/credentials' directory and its subdirectories. Thus, we use 'matchDirectories' and 'recursive: true' to define all files in the '/credentials' directory and its subdirectories.
Verification: After applying this policy, please get into the container with the 'ubuntu-5' label and run 'cat /secret.txt'. You will see the contents of /secret.txt. Then, please run 'cat /credentials/password'. This command will be blocked due to the security policy.
Network Operation Restriction
Explanation: We want to audit sending ICMP packets from the containers with the 'ubuntu-5' label while allowing packets for the other protocols (e.g., TCP and UDP). For this, we use 'matchProtocols' to define the protocol (i.e., ICMP) that we want to audit.
Verification: After applying this policy, please get into the container with the 'ubuntu-5' label and run 'curl https://kubernetes.io/'. This will work fine with no alerts. Then, run 'ping 8.8.8.8' and check the alert logs of KubeArmor; you will see alerts generated for this command since 'ping' internally uses the ICMP protocol.
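A minimal sketch of such a network policy (policy name is a placeholder):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-5-net-icmp-audit   # illustrative name
  namespace: multiubuntu
spec:
  severity: 8
  selector:
    matchLabels:
      container: ubuntu-5
  network:
    matchProtocols:
      - protocol: icmp           # audit ICMP traffic; TCP/UDP remain unaffected
  action: Audit
```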
Capabilities Restriction
Explanation: We want to block any network operations using raw sockets from the containers with the 'ubuntu-1' label, meaning that containers cannot send non-TCP/UDP packets (e.g., ICMP echo request or reply) to other containers. To achieve this, we use matchCapabilities and specify the 'CAP_NET_RAW' capability to block raw socket creation inside the containers. Since stream and datagram sockets are used to send TCP and UDP packets respectively, those packets can still be sent to others.
Verification: After applying this policy, please get into the container with the 'ubuntu-1' label and run 'curl https://kubernetes.io/'. This will work fine. Then, run 'ping 8.8.8.8'. You will see 'Operation not permitted' since the 'ping' command internally requires a raw socket to send ICMP packets.
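A minimal sketch of such a capabilities policy; note that capability names are written in lowercase without the CAP_ prefix (policy name is a placeholder):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-ubuntu-1-cap-net-raw-block   # illustrative name
  namespace: multiubuntu
spec:
  severity: 1
  selector:
    matchLabels:
      container: ubuntu-1
  capabilities:
    matchCapabilities:
      - capability: net_raw      # CAP_NET_RAW, written without the prefix
  action: Block
```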
System calls alerting
Alert for all unlink syscalls
Alert on all rmdir syscalls targeting anything in the /home/ directory and its sub-directories
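Hedged sketches for these two alerts are shown below; the selector labels and policy names are placeholders, and syscall rules only support the Audit action, as noted in the specification:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-all-unlink         # hypothetical name
  namespace: multiubuntu
spec:
  severity: 3
  selector:
    matchLabels:
      container: ubuntu-1        # placeholder label
  syscalls:
    matchSyscalls:
      - syscall:
          - unlink               # alert on every unlink syscall
  action: Audit
---
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-home-rmdir         # hypothetical name
  namespace: multiubuntu
spec:
  severity: 3
  selector:
    matchLabels:
      container: ubuntu-1        # placeholder label
  syscalls:
    matchPaths:
      - path: /home/
        recursive: true          # include sub-directories of /home/
        syscall:
          - rmdir
  action: Audit
```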
KubeArmor maintainers welcome individuals and organizations from across the cloud security landscape (creators and implementers alike) to contribute to the project. We equally value technical contributions and documentation enhancements that help us grow the community and strengthen the value of KubeArmor. We invite members of the community to contribute to the project!
To make a contribution, please follow the steps below.
Fork this repository (KubeArmor)
First, fork this repository by clicking on the Fork button (top right).
Then, click your ID on the pop-up screen.
This will create a copy of KubeArmor in your account.
Clone the repository
Now clone KubeArmor locally into your dev environment.
This will create a local copy of KubeArmor in your dev environment.
Make changes
First, go into the repository directory and make some changes.
Check the changes
If you have changed the core code of KubeArmor, please run the tests before committing the changes.
If you see any warnings or errors, please fix them first.
Commit changes
Please see your changes using "git status" and add them to the branch using "git add".
Then, commit the changes using the "git commit" command.
Please make sure that your changes are properly tested on your machine.
Push changes to your forked repository
Push your changes using the "git push" command.
Create a pull request with your changes with the following steps
First, go to your repository on GitHub.
Then, click "Pull request" button.
After checking your changes, click 'Create pull request'.
A pull request should describe the included commits as specifically as possible, including "Fixes: #(issue number)".
Finally, click the "Create pull request" button.
The changes will be merged after a review by the respective module owners. Once the changes are merged, you will get a notification, and the corresponding issue will be closed.
DCO Signoffs
To ensure that contributors are only submitting work that they have rights to, we are requiring everyone to acknowledge this by signing their work. Any copyright notices in this repo should specify the authors as "KubeArmor authors".
To sign your work, just add a line like this at the end of your commit message:
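The signoff uses the standard DCO trailer format; for example (name and email are placeholders):

```
Signed-off-by: Your Name <your.email@example.com>
```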
This can easily be done with the -s or --signoff option to git commit.
By doing this, you state that the source code being submitted originated from you (see https://developercertificate.org).
There are two ways to check the functionalities of KubeArmor: 1) testing KubeArmor manually and 2) using the testing framework.
Beforehand, check if the KubeArmorPolicy and KubeArmorHostPolicy CRDs are already applied.
If they are still not applied, do so.
Now you can apply specific policies.
flags:
Note that you will see alerts and logs generated right after karmor logs starts running; thus, we recommend running the above command in another terminal to see the logs live.
The case that KubeArmor is directly running in a host
Compile KubeArmor
Run the auto-testing framework
Check the test report
The case that KubeArmor is running as a daemonset in Kubernetes
Run the testing framework
Check the test report
To run a specific suite of tests, move to the directory of that test and run
Here is the specification of a host security policy.
Note Please note that for system call monitoring, we only support the Audit action, no matter what the value of action is.
Now, we will briefly explain how to define a host security policy.
Common
A security policy starts with the base information such as apiVersion, kind, and metadata. The apiVersion and kind would be the same in any security policies. In the case of metadata, you need to specify the name of a policy.
Make sure to use KubeArmorHostPolicy, not KubeArmorPolicy.
Severity
You can specify the severity of a given policy from 1 to 10. This severity will appear in alerts when policy violations happen.
Tags
The tags part is optional. You can define multiple tags (e.g., WARNING, SENSITIVE, MITRE, STIG, etc.) to categorize security policies.
Message
The message part is optional. You can add an alert message, and then the message will be presented in alert logs.
NodeSelector
The node selector part is relatively straightforward. Similar to other Kubernetes configurations, you can specify (a group of) nodes based on labels.
If you do not have any custom labels, you can use system labels as well.
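Putting the fields described so far together, a host policy skeleton might start like the sketch below; the name, severity, tags, message, and hostname label are placeholders to adapt to your environment:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-example                            # placeholder name
spec:
  severity: 5                                  # 1 (lowest) to 10 (highest)
  tags: ["WARNING"]                            # optional
  message: "an example alert message"          # optional
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev    # system label; replace with your node's label
  # process / file / network / capabilities / syscalls rules go here
  action: Block
```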
Process
In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to determine particular patterns for executables by using regular expressions. However, we generally do not recommend using this match.
In each match, there are three options.
ownerOnly (static action: allow owner only; otherwise block all)
If this is enabled, only the owners of the executable(s) defined with matchPaths and matchDirectories will be allowed to execute them.
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory defined with matchDirectories.
fromSource
If a path is specified in fromSource, the executable at the path will be allowed/blocked to execute the executables defined with matchPaths or matchDirectories. For better understanding, let us say that an operator defines a policy as follows. Then, /bin/bash will be only allowed (blocked) to execute /bin/sleep. Otherwise, the execution of /bin/sleep will be blocked (allowed).
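A sketch of the policy described in that scenario; the nodeSelector label and policy name are placeholders:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-sleep-from-bash-block              # hypothetical name
spec:
  severity: 5
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev    # placeholder
  process:
    matchPaths:
      - path: /bin/sleep
        fromSource:
          - path: /bin/bash                    # rule applies only to executions from /bin/bash
  action: Block
```

With the Block action, /bin/bash is blocked from executing /bin/sleep; with Allow, /bin/bash would only be allowed to execute /bin/sleep.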
File
The file section is quite similar to the process section.
The only difference between 'process' and 'file' is the readOnly option.
readOnly (static action: allow to read only; otherwise block all)
If this is enabled, the read operation will be only allowed, and any other operations (e.g., write) will be blocked.
Network
In the case of network, there is currently one match type: matchProtocols. You can define specific protocols among TCP, UDP, and ICMP.
Capabilities
Syscalls
In the case of syscalls, there are two types of matches: matchSyscalls and matchPaths. matchPaths can be used to target system calls aimed at a specific binary path or at anything under a specific directory; additionally, you can slice based on syscalls generated by a binary or a group of binaries in a directory. You can use matchSyscalls as a more general rule to match syscalls from all sources or from specific binaries.
The following options are available for each match.
fromSource
If a path is specified in fromSource, KubeArmor will match only syscalls generated by the defined source. For better understanding, let's take the example below: only unlink system calls generated by /bin/bash will be matched.
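A minimal sketch of that example; the nodeSelector label and policy name are placeholders:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-audit-unlink-from-bash             # hypothetical name
spec:
  severity: 3
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev    # placeholder
  syscalls:
    matchSyscalls:
      - syscall:
          - unlink
        fromSource:
          - path: /bin/bash                    # only unlink calls issued by /bin/bash are matched
  action: Audit
```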
recursive
If this is enabled, the coverage will extend to the subdirectories of the directory.
Action
The action could be Audit or Block in general. In order to use the Allow action, you should define 'fromSource'; otherwise, all Allow actions will be ignored by default.
If 'fromSource' is defined, we can use all actions for specific rules.
For System calls monitoring, we only support audit mode no matter what the action is set to.
Here, we demonstrate how to define host security policies.
Process Execution Restriction
Explanation: The purpose of this policy is to block the execution of '/usr/bin/diff' in a host whose host name is 'kubearmor-dev'. For this, we define 'kubernetes.io/hostname: kubearmor-dev' in nodeSelector -> matchLabels and the specific path ('/usr/bin/diff') in process -> matchPaths. Also, we put 'Block' as the action of this policy.
Verification: After applying this policy, please open a new terminal (or connect to the host with a new session) and run '/usr/bin/diff'. You will see that /usr/bin/diff is blocked.
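The policy would look roughly like the sketch below; replace the hostname label with your node's actual label:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-proc-path-block      # illustrative name
spec:
  severity: 5
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev
  process:
    matchPaths:
      - path: /usr/bin/diff
  action: Block
```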
NOTE
The given policy works with almost every Linux distribution. If it is not working in your case, check the process location. The following shows the location of the sleep binary in different Ubuntu distributions:
In case of Ubuntu 20.04 : /usr/bin/sleep
In case of Ubuntu 18.04 : /bin/sleep
File Access Restriction
Explanation: The purpose of this policy is to audit any accesses to a critical file (i.e., '/etc/passwd'). Since we want to audit one critical file, we use matchPaths to specify the path of '/etc/passwd'.
Verification: After applying this policy, please open a new terminal (or connect to the host with a new session) and run 'sudo cat /etc/passwd'. Then, check the alert logs of KubeArmor.
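A minimal sketch of this audit policy; replace the hostname label with your node's actual label:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-kubearmor-dev-file-path-audit      # illustrative name
spec:
  severity: 5
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: kubearmor-dev
  file:
    matchPaths:
      - path: /etc/passwd
  action: Audit
```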
System calls alerting
Alert for all unlink syscalls
Alert on all rmdir syscalls targeting anything in the /home/ directory and its sub-directories
Requirements
Here is the list of requirements for a Vagrant environment
Clone the KubeArmor github repository in your system
Install Vagrant and VirtualBox in your environment, then go to the vagrant path and run the setup.sh file.
VM Setup using Vagrant
Now, it is time to prepare a VM for development.
To create a vagrant VM
Output will show up as ...
To get into the vagrant VM
Output will show up as ...
To destroy the vagrant VM
VM Setup using Vagrant with Ubuntu 21.10 (v5.13)
To use the recent Linux kernel v5.13 for the dev environment, you can run make with the NETNEXT flag set to 1 for the respective make option.
You can also make the setting static by changing NETNEXT=0 to NETNEXT=1 in the Makefile.
Requirements
Here is the list of minimum requirements for self-managed Kubernetes.
Alternative Setup
You can try the following alternative if you face any difficulty in the above Kubernetes (kubeadm) setup.
Note Please make sure to set up the alternative k8s environment on the same host where the KubeArmor development environment is running.
K3s
MicroK8s
No Support - Docker Desktops
KubeArmor does not work with Docker Desktop on Windows and macOS because KubeArmor integrates with Linux-kernel-native primitives (including LSMs).
Development Setup
In order to install all dependencies, please run the following command.
Now, you are ready to develop any code for KubeArmor. Enjoy your journey with KubeArmor.
Compilation
Check if KubeArmor can be compiled on your environment without any problems.
If you see any error messages, please let us know the issue with the full error messages through #kubearmor-development channel on CNCF slack.
Execution
In order to directly run KubeArmor in a host (not as a container), you need to run a local proxy in advance.
Then, run KubeArmor on your environment.
Note If you have followed all the above steps and are still getting the warning The node information is not available, this could be due to a case-sensitivity discrepancy between the actual hostname (obtained by running hostname) and the hostname used by Kubernetes (under kubectl get nodes -o wide). K8s converts the hostname to lowercase, which results in a mismatch with the actual hostname. To resolve this, change the hostname to lowercase using the command hostnamectl set-hostname <lowercase-hostname>.
KubeArmor Controller
Starting from KubeArmor v0.11, annotations, container policies, and host policies are handled via the KubeArmor controller; the controller code can be found under pkg/KubeArmorController.
To install the controller from KubeArmor docker repository run
To install the controller (local version) to your cluster run
If you need to set up a local registry to push your image, use the docker-registry.sh script under the ~/KubeArmor/contribution/local-registry directory.
Here, we briefly give you an overview of KubeArmor's directories.
Source code for KubeArmor (/KubeArmor)
Source code for KubeArmor Controller (CRD)
Deployment tools and files for KubeArmor
Files for testing
In the process section, there are three types of matches: matchPaths, matchDirectories, and matchPatterns. You can define specific executables using matchPaths or all executables in specific directories using matchDirectories. In the case of matchPatterns, advanced operators may be able to determine particular patterns for executables by using regular expressions. However, the coverage of regular expressions is highly dependent on AppArmor (). Thus, we generally do not recommend using this match.
In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in .
The action could be Allow, Audit, or Block. Security policies would be handled in a blacklist manner or a whitelist manner according to the action. Thus, you need to define the action carefully. You can refer to for more details. In the case of the Audit action, we can use this action for policy verification before applying a security policy with the Block action. For System calls monitoring, we only support audit mode no matter what the action is set to.
Block a specific executable - In operator ()
Block a specific executable - NotIn operator ()
Block a specific executable matching labels - In operator ()
Block accessing specific executable matching labels, NotIn operator ()
Block accessing specific file ()
Block a specific executable ()
Block accessing specific executable matching labels, In & NotIn operator ()
Block accessing specific executable matching labels, NotIn operator ()
Block all executables in a specific directory ()
Block all executables in a specific directory and its subdirectories ()
Allow specific executables to access certain files only ()
Allow a specific executable to be launched by its owner only ()
Allow accessing specific files only ()
Block all file accesses in a specific directory and its subdirectories ()
Audit ICMP packets ()
Block Raw Sockets (i.e., non-TCP/UDP packets) ()
Please refer to to set up your environment for KubeArmor contribution.
If some tests are failing, then fix them by following
If you have made changes in Operator or Controller, then follow
UEK R7 can be installed on OL 8.6 by following the easy-to-follow instructions provided here in this .
Note: KubeArmor now supports upgrading the nodes to BPF-LSM using . The following text is just an FYI but need not be used manually for k8s env.
The KubeArmor team has brought this to the attention of the on StackOverflow and await their response.
For more such differences checkout .
After this, exit out of the node shell and follow the .
Although there are many ways to run a Kubernetes cluster (like minikube or kind), they will not work with locally developed KubeArmor. KubeArmor needs to run on the same node where the Kubernetes node exists; minikube and kind use virtualized nodes, so KubeArmor running on your host will not identify them. You would either need to build your images and deploy them into these clusters, or you can simply use k3s or kubeadm for development purposes. If you are new to these terms, then the easiest way to do this is by following this guide:
You can refer to security policies defined for example microservices in .
Watch alerts using cli tool
For better understanding, you can check .
In the case of capabilities, there is currently one match type: matchCapabilities. You can define specific capability names to allow or block using matchCapabilities. You can check available capabilities in .
Block a specific executable ()
Audit a critical file access ()
Note Skip the steps for the vagrant setup if you're directly compiling KubeArmor on the Linux host. Proceed to setup K8s on the same host by resolving any dependencies.
KubeArmor is designed for Kubernetes environments. If Kubernetes is not set up yet, please refer to . KubeArmor leverages CRI (Container Runtime Interface) APIs and works with Docker, Containerd, or CRI-O based container runtimes. KubeArmor uses LSMs for policy enforcement; thus, please make sure that your environment supports LSMs (either AppArmor or BPF-LSM). Otherwise, KubeArmor will operate in Audit mode with no policy "enforcement" support.
You can also develop and test KubeArmor on K3s instead of the self-managed Kubernetes. Please follow the instructions in .
You can also develop and test KubeArmor on MicroK8s instead of the self-managed Kubernetes. Please follow the instructions in .
will automatically install , , , and some other dependencies.