ModelArmor Overview
ModelArmor uses KubeArmor as a sandboxing engine to ensure that untrusted model execution is constrained and subject to the required checks. AI/ML models are essentially processes, and allowing untrusted models to execute in AI environments poses significant risks, such as cryptomining attacks leveraging GPUs, remote command injection, and similar threats. KubeArmor's preemptive mitigation mechanism provides a suitable framework for constraining the execution environment of models.
ModelArmor can be used to enforce security policies on the model execution environment.
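As a minimal sketch of what such a policy might look like, the KubeArmorPolicy below blocks network-download binaries inside pods labeled `app: model-server` (the policy name, namespace, label selector, and blocked paths are illustrative assumptions, not part of the original text):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  # Hypothetical policy name and namespace for illustration
  name: block-model-downloads
  namespace: default
spec:
  selector:
    # Assumed label on the model-serving workload
    matchLabels:
      app: model-server
  process:
    # Deny execution of common download tools that a
    # compromised model process could use to fetch payloads
    matchPaths:
      - path: /usr/bin/wget
      - path: /usr/bin/curl
  action: Block
```

Applying a policy like this with `kubectl apply -f` causes KubeArmor to block the listed process executions in matching pods, following its default enforcement behavior.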
TensorFlow Based Use Cases
FGSM Attack on a TensorFlow Model
Keras Inject Attack and Apply Policies
Securing NVIDIA NIM