ModelArmor Overview

ModelArmor uses KubeArmor as a sandboxing engine to ensure that the execution of untrusted models is constrained and subject to the required checks. AI/ML models are essentially processes, and allowing untrusted models to execute in AI environments carries significant risks, such as cryptomining attacks that leverage GPUs, remote command injection, and more. KubeArmor's preemptive mitigation mechanism provides a suitable framework for constraining the execution environment of models.

ModelArmor can be used to enforce security policies on the model execution environment.
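As a minimal sketch of what such a policy can look like, the KubeArmor manifest below blocks common download-and-execute binaries inside a model-serving pod. The policy name, the app: model-serving label, and the blocked paths are illustrative assumptions, not part of any shipped ModelArmor configuration; adjust them to your workload.

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  # Hypothetical policy name for illustration
  name: block-untrusted-exec-in-model-pod
  namespace: default
spec:
  # Assumed label on the model-serving workload
  selector:
    matchLabels:
      app: model-serving
  process:
    matchPaths:
      # Deny tools an injected payload would typically use to fetch and run code
      - path: /usr/bin/wget
      - path: /usr/bin/curl
      - path: /bin/sh
  action: Block
```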

TensorFlow Based Use Cases

FGSM Attack on a TensorFlow Model

▶️ Watch FGSM Attack Video
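For reference, the Fast Gradient Sign Method (FGSM) perturbs an input $x$ one step in the direction of the sign of the loss gradient:

$$x_{\text{adv}} = x + \epsilon \cdot \operatorname{sign}\big(\nabla_x J(\theta, x, y)\big)$$

where $J$ is the training loss, $\theta$ the model parameters, $y$ the true label, and $\epsilon$ the perturbation budget.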

Keras Inject Attack and Applying Policies

▶️ Watch Keras Inject Video
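A payload injected into a Keras model file typically runs when the model is loaded, often by spawning a shell from the serving interpreter. The sketch below shows the kind of KubeArmor policy that can be applied against this: the workload label and the interpreter path in fromSource are assumptions for illustration, so substitute the actual binary path used in your serving image.

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  # Hypothetical policy name for illustration
  name: block-shell-from-keras-runtime
spec:
  selector:
    matchLabels:
      app: keras-serving   # assumed workload label
  process:
    matchPaths:
      # Deny shells spawned by the (assumed) Python interpreter path
      - path: /bin/sh
        fromSource:
          - path: /usr/local/bin/python3
      - path: /bin/bash
        fromSource:
          - path: /usr/local/bin/python3
  action: Block
```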


Securing NVIDIA NIM

📄 View PDF: Securing_NVIDIA_NIM.pdf
