Description

Key Responsibilities
GenAI Security Strategy & Architecture

  • Define and implement security architecture for GenAI platforms including LLMs, RAG pipelines, agentic workflows, and inference services.
  • Lead threat modeling and risk assessments for GenAI components such as LangChain, LangGraph, Langfuse, and vector databases (e.g., OpenSearch, Milvus).
  • Establish governance and guardrails for prompt engineering, model evaluation (RAGAs, G-Eval), and responsible AI practices.


Data Protection & Compliance

  • Collaborate with the Data Protection team to integrate GenAI security controls into existing DLP, encryption, and classification frameworks.
  • Ensure alignment with regulatory standards (e.g., NIST CSF, ISO 27001, CIS) and support audit readiness for GenAI deployments.
  • Evaluate and enforce IAM, data residency, and privacy controls across AWS, Azure, and hybrid environments.


Operational Readiness & Monitoring

  • Build observability and monitoring frameworks for GenAI systems using tools like Prometheus, Grafana, ELK, and Langfuse.
  • Develop automated pipelines for model deployment, evaluation, and rollback using Kubernetes, Helm, and CI/CD tools (e.g., ArgoCD, CircleCI).
  • Lead incident response planning and tabletop exercises for GenAI-related security events.


Cross-Functional Enablement

  • Partner with engineering, product, and business units to embed security into GenAI use cases such as chatbots, knowledge assistants, and document summarization.
  • Deliver training and awareness programs on GenAI security risks and best practices.
  • Represent the Data Protection team in enterprise AI steering committees and working groups.


Minimum Qualifications

  • Bachelor’s degree in Computer Science, Cybersecurity, or a related field.
  • 10+ years of experience in cybersecurity, with 1+ years in AI/ML or GenAI security.
  • Hands-on experience with GenAI frameworks (LangChain, LangGraph, Langfuse), LLMs (GPT-4o, Claude, Llama 3), and vector databases.
  • Proficiency in Python, Kubernetes, AWS (Bedrock, SageMaker), Azure Databricks, and MLOps tools (MLflow, Argo Workflows).
  • Strong understanding of data protection principles, encryption, and cloud security.


Preferred Qualifications

  • Master’s degree in Cybersecurity, AI/ML, or a related discipline.
  • Certifications: CISSP, CCSP, AWS Machine Learning Specialty, or equivalent.
  • Experience in highly regulated industries (e.g., financial services, insurance).
  • Familiarity with data governance tools (e.g., Collibra, Alation) and GenAI evaluation frameworks (RAGAs, Guardrails).
  • Exposure to agentic AI design patterns and multi-agent orchestration.

Education

Bachelor’s degree