ModelArmor Overview

ModelArmor uses KubeArmor as a sandboxing engine to ensure that untrusted model execution is constrained and subject to the required checks. AI/ML models are essentially processes, and allowing untrusted models to execute in AI environments carries significant risks, such as cryptomining attacks that leverage GPUs, remote command injection, and more. KubeArmor's preemptive mitigation mechanism provides a suitable framework for constraining the execution environment of models.
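
As an illustration, such a constraint can be expressed as an ordinary KubeArmorPolicy. The following is a minimal sketch, assuming a model-serving workload labeled app: model-server in a model-serving namespace (the policy name, namespace, labels, and binary paths are placeholders, not part of ModelArmor itself); it preemptively blocks shells and downloaders that a hijacked model process could use for remote command injection or to fetch a cryptominer:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-untrusted-exec          # hypothetical policy name
  namespace: model-serving            # assumed namespace of the model workload
spec:
  severity: 8
  message: "untrusted binary execution blocked in model-serving pod"
  selector:
    matchLabels:
      app: model-server               # assumed label on the model-serving pod
  process:
    matchPaths:
      - path: /bin/sh                 # shells commonly abused for command injection
      - path: /bin/bash
      - path: /usr/bin/wget           # downloaders used to fetch miners or payloads
      - path: /usr/bin/curl
  action: Block
```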

ModelArmor can be used to enforce security policies on the model execution environment.
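
For example, the policy below (again a sketch, assuming the model weights are mounted under /models/, a placeholder path) marks that directory read-only for the serving pod, so the model can be loaded but not tampered with at runtime:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: protect-model-artifacts       # hypothetical policy name
  namespace: model-serving            # assumed namespace of the model workload
spec:
  severity: 5
  message: "write access to model artifacts denied"
  selector:
    matchLabels:
      app: model-server               # assumed label on the model-serving pod
  file:
    matchDirectories:
      - dir: /models/                 # assumed mount path of the model weights
        recursive: true
        readOnly: true                # reads stay allowed, write attempts are denied
  action: Block
```

Once applied with kubectl, KubeArmor enforces such policies at the kernel level (via AppArmor, SELinux, or BPF-LSM), so violations are denied preemptively rather than only reported after the fact.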

TensorFlow-Based Use Cases

FGSM Attack on a TensorFlow Model

Keras Inject Attack and Apply Policies

Securing NVIDIA NIM
