Hands-on adversarial testing of GenAI systems (prompt injection/jailbreaks, input–output evals, data-exfil testing) with actionable remediation
Cybersecurity red-team / penetration testing background and strong Python/scripting for automation and test harnesses
ML/GenAI fundamentals (LLMs, embeddings, diffusion models) and adversarial ML techniques (model extraction, poisoning, prompt injection).
Familiarity with AI security frameworks: NIST AI RMF or MITRE ATLAS or OWASP Top 10 for LLMs
Experience with AI/MLOps platforms & integration frameworks (Azure AI or AWS SageMaker; OpenAI API/Hugging Face; LangChain or equivalent) in an enterprise setting
Nice-to-Haves:
Exposure to governance/risk for AI (model risk, policy alignment)
SIEM/SOAR & threat-intel integration and monitoring
Experience with building reusable adversarial test repos, scripts, and automation