# Development Guide
> **Note** Skip the Vagrant setup steps if you are compiling KubeArmor directly on a Linux host. In that case, proceed to the Kubernetes setup below and resolve any dependencies on the same host.
## Requirements

Here is the list of requirements for a Vagrant environment:

- Vagrant - v2.2.9
- VirtualBox - v6.1

Clone the KubeArmor GitHub repository in your system:

```
$ git clone https://github.com/kubearmor/KubeArmor.git
```

Install Vagrant and VirtualBox in your environment, then go to the vagrant path and run the `setup.sh` file:

```
$ cd KubeArmor/contribution/vagrant
~/KubeArmor/contribution/vagrant$ ./setup.sh
~/KubeArmor/contribution/vagrant$ sudo reboot
```
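Before running `setup.sh`, you can sanity-check the installed tool versions against the minimums above. This is an optional sketch, not part of the repository; `version_ge` is a hypothetical helper based on `sort -V`:

```shell
#!/bin/sh
# version_ge A B -> success if dotted version A >= B
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the installed Vagrant version against the v2.2.9 minimum from this guide.
if command -v vagrant >/dev/null 2>&1; then
  vagrant_ver="$(vagrant --version | awk '{print $2}')"
  if version_ge "$vagrant_ver" "2.2.9"; then
    echo "Vagrant $vagrant_ver meets the minimum (2.2.9)"
  else
    echo "Vagrant $vagrant_ver is older than 2.2.9"
  fi
else
  echo "vagrant not found in PATH"
fi
```

The same `version_ge` helper can be reused for the VirtualBox v6.1 check (`VBoxManage --version`).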
## VM Setup using Vagrant

Now, it is time to prepare a VM for development.

To create a vagrant VM:

```
~/KubeArmor/KubeArmor$ make vagrant-up
```

Output will show up as ...

To get into the vagrant VM:

```
~/KubeArmor/KubeArmor$ make vagrant-ssh
```

Output will show up as ...

To destroy the vagrant VM:

```
~/KubeArmor/KubeArmor$ make vagrant-destroy
```
## VM Setup using Vagrant with Ubuntu 21.10 (v5.13)

To use the recent Linux kernel v5.13 for the dev environment, run `make` with the `NETNEXT` flag set to `1` for the respective make option:

```
~/KubeArmor/KubeArmor$ make vagrant-up NETNEXT=1
```

You can also make the setting static by changing `NETNEXT=0` to `NETNEXT=1` in the Makefile:

```
~/KubeArmor/KubeArmor$ vi Makefile
```
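Once inside the VM, you can confirm the kernel is at least v5.13 before testing. A minimal sketch, assuming a standard `uname -r` string; the `kernel_ge_513` helper is not part of the repository:

```shell
#!/bin/sh
# kernel_ge_513 VERSION -> success if VERSION (e.g. "5.13.0-generic") is >= 5.13
kernel_ge_513() {
  k="${1%%-*}"                       # strip "-generic"-style suffixes
  major="${k%%.*}"
  rest="${k#*.}"; minor="${rest%%.*}"
  [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 13 ]; }
}

if kernel_ge_513 "$(uname -r)"; then
  echo "kernel $(uname -r) is new enough for the NETNEXT dev setup"
else
  echo "kernel $(uname -r) is older than 5.13"
fi
```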
## Requirements

Here is the list of minimum requirements for self-managed Kubernetes:

- OS - Ubuntu 18.04
- Kubernetes - v1.19
- Docker - 18.09 or Containerd - 1.3.7
- Linux Kernel - v4.15
- LSM - AppArmor

KubeArmor is designed for Kubernetes environments. If Kubernetes is not set up yet, please refer to the Kubernetes installation guide. KubeArmor leverages CRI (Container Runtime Interface) APIs and works with Docker, Containerd, or CRI-O based container runtimes. KubeArmor uses LSMs for policy enforcement; thus, please make sure that your environment supports LSMs (either AppArmor or bpf-lsm). Otherwise, KubeArmor will operate in Audit-Mode with no policy "enforcement" support.

### Alternative Setup

You can try the following alternatives if you face any difficulty with the above Kubernetes (kubeadm) setup.

> **Note** Please make sure to set up the alternative K8s environment on the same host where the KubeArmor development environment is running.
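As a quick sanity check for the LSM requirement above, you can inspect the active LSM list the kernel exposes in `/sys/kernel/security/lsm`. This is an optional sketch; `lsm_supported` is a hypothetical helper, not a KubeArmor tool:

```shell
#!/bin/sh
# lsm_supported LIST -> success if the comma-separated active-LSM list
# (the format of /sys/kernel/security/lsm) contains apparmor or bpf.
lsm_supported() {
  case ",$1," in
    *,apparmor,*|*,bpf,*) return 0 ;;
    *) return 1 ;;
  esac
}

if [ -r /sys/kernel/security/lsm ]; then
  lsms="$(cat /sys/kernel/security/lsm)"
  if lsm_supported "$lsms"; then
    echo "policy enforcement supported (active LSMs: $lsms)"
  else
    echo "no AppArmor/bpf-lsm: KubeArmor will run in Audit-Mode"
  fi
else
  echo "/sys/kernel/security/lsm is not readable on this host"
fi
```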
### K3s

You can also develop and test KubeArmor on K3s instead of the self-managed Kubernetes. Please follow the instructions in the K3s installation guide.

### MicroK8s

You can also develop and test KubeArmor on MicroK8s instead of the self-managed Kubernetes. Please follow the instructions in the MicroK8s installation guide.

### No Support - Docker Desktops

KubeArmor does not work with Docker Desktop on Windows and macOS because KubeArmor integrates with Linux-kernel native primitives (including LSMs).
## Development Setup

In order to install all dependencies, please run the following commands:

```
$ cd KubeArmor/contribution/self-managed-k8s
~/KubeArmor/contribution/self-managed-k8s$ ./setup.sh
```

Now, you are ready to develop any code for KubeArmor. Enjoy your journey with KubeArmor.
## Compilation

Check that KubeArmor compiles on your environment without any problems:

```
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make
```

If you see any error messages, please report the issue with the full error messages through KubeArmor's Slack.
## Execution

In order to run KubeArmor directly on a host (not as a container), you need to run a local proxy in advance:

```
$ kubectl proxy &
```

Then, run KubeArmor on your environment:

```
$ cd KubeArmor/KubeArmor
~/KubeArmor/KubeArmor$ make run
```

> **Note** If you have followed all the above steps and still get the warning `The node information is not available`, this could be due to a case-sensitivity discrepancy between the actual hostname (obtained by running `hostname`) and the hostname used by Kubernetes (shown by `kubectl get nodes -o wide`). K8s converts the hostname to lowercase, which results in a mismatch with the actual hostname. To resolve this, change the hostname to lowercase using the command `hostnamectl set-hostname <lowercase-hostname>`.
## Annotation Controller

Starting from KubeArmor v0.5, annotations are applied via an annotation controller; the controller code can be found under `pkg/KubeArmorAnnotation`.

To install the controller from the KubeArmor Docker repository:

```
$ cd KubeArmor/pkg/KubeArmorAnnotation
~/KubeArmor/pkg/KubeArmorAnnotation$ make deploy
```

To install the controller (local version) to your cluster:

```
$ cd KubeArmor/pkg/KubeArmorAnnotation
~/KubeArmor/pkg/KubeArmorAnnotation$ make docker-build deploy
```

If you need to set up a local registry to push your image, use the `docker-registry.sh` script under the `~/KubeArmor/contribution/local-registry` directory.
Here, we briefly give you an overview of KubeArmor's directories.
### Source code for KubeArmor (/KubeArmor)

- `KubeArmor/`
  - `BPF` - eBPF code for the system monitor
  - `common` - Libraries used internally
  - `config` - Configuration loader
  - `core` - The main body (start point) of KubeArmor
  - `enforcer` - Runtime policy enforcer (enforcing security policies into LSMs)
  - `feeder` - gRPC-based feeder (sending audit/system logs to a log server)
  - `kvmAgent` - KubeArmor VM agent
  - `log` - Message logger (stdout)
  - `monitor` - eBPF-based system monitor (mapping process IDs to container IDs)
  - `policy` - gRPC service to manage Host Policies for VM environments
  - `types` - Type definitions
- `protobuf/` - Protocol buffer

### Source code for KubeArmor's custom resource definition (CRD)

- `pkg/KubeArmorPolicy/` - KubeArmorPolicy CRD generated by Kube-Builder
- `pkg/KubeArmorHostPolicy/` - KubeArmorHostPolicy CRD generated by Kube-Builder
- `pkg/KubeArmorAnnotation/` - KubeArmorAnnotation annotation controller/webhook generated by Kube-Builder

### Files for testing

- `examples/` - Example microservices for testing
- `tests/` - Automated test framework for KubeArmor