What You'll Build
The AIRS MLOps Lab is a hands-on workshop where you build, deploy, and secure a real ML pipeline. You fine-tune an open-source language model into a cybersecurity advisor, deploy it on Google Cloud, and then systematically secure the pipeline using Palo Alto Networks AI Runtime Security (AIRS).
This is not a passive tutorial. You work with Claude Code as your development partner and mentor. Claude has been configured specifically for this lab -- it knows the codebase, paces its explanations, and uses Socratic questioning to build your understanding.
Prerequisites
Before starting the lab, ensure you have:
- GCP Project -- with owner rights, Vertex AI and Cloud Run APIs enabled
- AIRS License -- access to a Prisma AIRS tenant with Strata Cloud Manager credentials
- Claude Code -- installed and configured
- GitHub Account -- with the `gh` CLI authenticated
- Python 3.12+ -- with `uv` for dependency management
- HuggingFace Account -- for exploring models and datasets
Your instructor will provide GCP project details and AIRS credentials during the setup session.
The Pipeline
You'll build a 3-gate CI/CD pipeline that scans ML models at every stage:
Gate 1 scans the base model before training, fine-tunes it with LoRA on Vertex AI, and creates a provenance manifest.
Gate 2 merges the LoRA adapter with the base model, scans the merged artifact, and publishes to GCS.
Gate 3 verifies the manifest chain, performs a final scan, deploys the model to a Vertex AI GPU endpoint with vLLM, and deploys the FastAPI app to Cloud Run.
The Architecture
The deployed application uses a decoupled architecture -- the model runs on GPU infrastructure, the app runs on lightweight infrastructure:
No model weights in the Cloud Run container. No GPU. No ML dependencies at runtime. Cloud Run handles the web layer; Vertex AI handles inference on dedicated GPU hardware.
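Because the Cloud Run container holds no weights, its only ML-related job is to build a request and forward it to the Vertex AI endpoint. A minimal sketch of that thin-client step is below; Vertex AI's predict API does wrap inputs in an `instances` list, but the inner field names (`prompt`, `max_tokens`) are assumptions about the vLLM container's schema, not the lab's actual contract in `src/airs_mlops_lab/serving/`:

```python
import json

def build_predict_request(prompt: str, max_tokens: int = 256) -> bytes:
    """Build the JSON body a thin client would POST to a Vertex AI
    endpoint. No model, no GPU, no ML dependencies -- just JSON."""
    body = {"instances": [{"prompt": prompt, "max_tokens": max_tokens}]}
    return json.dumps(body).encode("utf-8")

payload = build_predict_request("What is lateral movement?")
print(json.loads(payload)["instances"][0]["prompt"])  # What is lateral movement?
```

Everything heavy -- loading weights, batching, GPU scheduling -- happens on the Vertex AI side, which is why the Cloud Run image stays small and cold-starts fast.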
The Three-Act Structure
The workshop is organized into three acts with a presentation break between Acts 1 and 2.
Act 1: Build It (Modules 0-3)
Build a complete ML pipeline from scratch. By the end, you'll have a live cybersecurity advisor -- trained, merged, published, and deployed. No security scans in place yet. That's intentional.
| Module | Focus | Time |
|---|---|---|
| Module 0: Setup | Environment, GCP, GitHub, AIRS credentials | ~30 min |
| Module 1: ML Fundamentals | HuggingFace, formats, datasets, platforms | ~45 min |
| Module 2: Train Your Model | Gate 1, LoRA fine-tuning, Vertex AI | ~30 min + wait |
| Module 3: Deploy & Serve | Gate 2+3, merge, publish, deploy, live app | ~30 min + wait |
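The merge step between Modules 2 and 3 deserves a quick illustration. LoRA trains two small low-rank matrices instead of the full weight matrix, and merging folds them back into the base weights so serving needs no adapter at all. The sketch below shows the math with toy numpy matrices; the lab's `merge_adapter.py` presumably relies on a library routine rather than doing this by hand:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2       # hidden size and LoRA rank (toy values)
alpha = 4         # LoRA scaling hyperparameter

W = rng.normal(size=(d, d))   # frozen base weight
A = rng.normal(size=(r, d))   # LoRA down-projection (trained)
B = rng.normal(size=(d, r))   # LoRA up-projection (trained)

# Merging folds the low-rank update into the base weight:
# W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * B @ A

x = rng.normal(size=d)
# The merged matrix produces the same output as base + adapter
# applied separately, so the adapter can be discarded after merging.
y_adapter = W @ x + (alpha / r) * B @ (A @ x)
y_merged = W_merged @ x
print(np.allclose(y_adapter, y_merged))  # True
```

This is also why Gate 2 rescans the merged artifact: after merging, the published model is a single new set of weights, not base-plus-adapter.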
Presentation Break
Instructor-led session: AIRS value proposition, real-world attacks, customer scenarios.
Act 2: Understand Security (Module 4)
Deep dive into AIRS Model Security. Set up access, run scans, explore policies.
| Module | Focus | Time |
|---|---|---|
| Module 4: AIRS Deep Dive | SCM, SDK, scanning, security groups, HF integration | ~1-1.5 hr |
Act 3: Secure It (Modules 5-7)
Secure the pipeline you built. Explore what AIRS catches and what it doesn't.
| Module | Focus | Time |
|---|---|---|
| Module 5: Integrate AIRS | Pipeline scanning, manifest verification, labeling | ~1-1.5 hr |
| Module 6: The Threat Zoo | Pickle bombs, Keras traps, format risks | ~1 hr |
| Module 7: Gaps & Poisoning | Data poisoning, behavioral backdoors, defense in depth | ~45 min-1 hr |
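To preview why Module 6 calls pickle-based model files "bombs": pickle is not a data format but a small program, and `__reduce__` lets a serialized object name any callable for pickle to invoke at *load* time. The benign sketch below uses `len` as a stand-in payload; a real attack would return something like `os.system` with a shell command -- this is the conceptual risk only, not the lab's `poisoning_demo` code:

```python
import pickle

class PickleBomb:
    """Benign stand-in for a malicious model file. pickle calls whatever
    callable __reduce__ names during deserialization -- here harmless
    len(), but an attacker could name os.system instead."""
    def __reduce__(self):
        return (len, ("this string is evaluated during unpickling",))

blob = pickle.dumps(PickleBomb())
result = pickle.loads(blob)  # merely loading the bytes runs the callable
print(result)                # 42 -- the length of the payload string
```

Note that `pickle.loads` returns the callable's result, not a `PickleBomb` instance: the attacker's code has already run before any type check you might write.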
Project Structure
```
prisma-airs-mlops-lab/
├── .github/
│   ├── workflows/             # CI/CD pipeline (3 gates + app deploy)
│   └── pipeline-config.yaml   # Your GCP project and bucket config
├── src/airs_mlops_lab/
│   └── serving/               # FastAPI app + Vertex AI inference client
├── airs/
│   ├── scan_model.py          # AIRS scanning CLI
│   └── poisoning_demo/        # Data poisoning proof-of-concept
├── model-tuning/
│   ├── train_advisor.py       # LoRA fine-tuning script
│   └── merge_adapter.py       # Adapter merge for deployment
├── scripts/
│   └── manifest.py            # Model provenance tracking CLI
├── lab/
│   └── .progress.json         # Your progress (tracked automatically)
├── CLAUDE.md                  # Claude Code mentor configuration
└── Dockerfile                 # Cloud Run app (thin client, no model)
```

Next Step
Ready to set up? Head to the Student Setup Guide to create your repo and launch Claude Code.