# Control Telemetry/Visibility
KubeArmor currently supports enabling visibility (telemetry) for containers and hosts. Visibility is enabled by default for containers, but not for hosts. The `karmor` CLI tool provides access to both through `karmor logs`.
## Prerequisites
* K8s cluster: if you don't have access to a K8s cluster, please follow this to set one up.
* karmor CLI tool: download and install karmor-cli (see the install sketch below).
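A typical way to install the `karmor` CLI is via the KubeArmor install script (a sketch; the script URL follows the KubeArmor docs, so verify it against the karmor-cli link above):

```
# Download and install the karmor CLI into /usr/local/bin
# (install-script URL per the KubeArmor docs; verify before running)
curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin
```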
## Example: wordpress-mysql
To deploy the wordpress-mysql app, follow this.
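If you prefer applying the example manifest directly, a sketch is shown below (the raw manifest path under `examples/wordpress-mysql` is an assumption; verify it in the KubeArmor repository):

```
# Deploy the sample wordpress-mysql workloads
# (manifest path assumed; confirm it under examples/wordpress-mysql in the KubeArmor repo)
kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/examples/wordpress-mysql/wordpress-mysql-deployment.yaml
```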
Now we need to deploy a sample policy:
```
kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/examples/wordpress-mysql/security-policies/ksp-wordpress-block-process.yaml
```

This sample policy blocks execution of the `apt` and `apt-get` commands in WordPress pods with the label selector `app: wordpress`.
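For reference, the applied policy has roughly the following shape (a sketch of the KubeArmorPolicy format; the authoritative manifest is at the URL above):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-wordpress-block-process
  namespace: wordpress-mysql
spec:
  severity: 3                     # illustrative value
  selector:
    matchLabels:
      app: wordpress              # target the WordPress pods
  process:
    matchPaths:
      - path: /usr/bin/apt        # block apt
      - path: /usr/bin/apt-get    # block apt-get
  action: Block
```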
## Getting Container Visibility

### Checking default visibility
Container visibility is enabled by default. We can check it using `kubectl describe` and grep `kubearmor-visibility`:

```
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl describe -n wordpress-mysql pod $POD_NAME | grep kubearmor-visibility

kubearmor-visibility: process, file, network, capabilities
```

For pre-existing workloads: enable visibility using `kubectl annotate`. Currently KubeArmor supports `process`, `file`, `network`, and `capabilities`.

```
kubectl annotate pods <pod-name> -n wordpress-mysql "kubearmor-visibility=process,file,network,capabilities"
```
Open up a terminal and watch logs using the `karmor` CLI:

```
karmor logs
```

In another terminal, simulate a policy violation. Try running `apt` inside a pod:

```
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
# apt update
```

In the terminal running `karmor logs`, the policy violation is shown along with container visibility.

The logs can also be generated in JSON format using `karmor logs --json`.
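For programmatic filtering, the JSON output can be piped to a tool such as `jq` (a sketch; the field names `.Operation`, `.Resource`, and `.Result` are assumptions about the log schema, so check a sample event from your cluster for the exact keys):

```
# Show only process-related events from the JSON log stream
# (field names are assumed; verify them against your own karmor output)
karmor logs --json | jq 'select(.Operation == "Process") | {Operation, Resource, Result}'
```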
## Getting Host Visibility
Host visibility is not enabled by default. To enable it, annotate the node using `kubectl annotate node`:

```
kubectl annotate node <node-name> "kubearmor-visibility=process,file,network,capabilities"
```

To confirm it, use `kubectl describe` and grep `kubearmor-visibility`:

```
kubectl describe node <node-name> | grep kubearmor-visibility
```

Now we can get general telemetry events in the context of the host using `karmor logs`. The logs related to host visibility have `Type: HostLog` and `Operation: File | Process | Network`:

```
karmor logs --logFilter=all
```

## Updating Namespace Visibility
KubeArmor lets the user select which kinds of events are traced by changing the `kubearmor-visibility` annotation at the namespace level.
### Checking Namespace visibility
Namespace visibility can be checked using `kubectl describe`:

```
kubectl describe ns wordpress-mysql | grep kubearmor-visibility

kubearmor-visibility: process, file, network, capabilities
```

To update the visibility of a namespace, use `kubectl annotate`. Currently KubeArmor supports `process`, `file`, `network`, and `capabilities`. Let's update the visibility for the namespace `wordpress-mysql`:

```
kubectl annotate ns wordpress-mysql kubearmor-visibility=network --overwrite

namespace/wordpress-mysql annotated
```

Note: To turn off visibility across all aspects, use `kubearmor-visibility=none`. Any policy violations or events that result in non-success returns will still be reported in the logs.
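For example, disabling all telemetry for the namespace uses the same annotation mechanism (this simply applies the `none` value mentioned in the note above):

```
# Turn off all KubeArmor telemetry for the wordpress-mysql namespace
kubectl annotate ns wordpress-mysql kubearmor-visibility=none --overwrite
```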
Open up a terminal and watch logs using the `karmor` CLI:

```
karmor logs --logFilter=all -n wordpress-mysql
```

In another terminal, exec into the pod and run some process commands. Try `ls` inside the pod:

```
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
# ls
```

Notice that no logs are generated for the above command; only logs with `Operation: Network` are shown.

Note: If telemetry is disabled, the user won't get an audit event even if there is an audit rule.

Note: Only the logs are affected by changing the visibility; we still get all the alerts that are generated.

Let's simulate a sample policy violation and see whether we still get alerts.

Policy violation:

```
POD_NAME=$(kubectl get pods -n wordpress-mysql -l app=wordpress -o jsonpath='{.items[0].metadata.name}') && kubectl -n wordpress-mysql exec -it $POD_NAME -- bash
# apt
```

Here, note that the alert with `Operation: Process` is reported.