Description

You will use an existing batch inference model to establish a secure, automated deployment pipeline. The role spans engineering and change management, including architecture and training, with a focus on educating data scientists and other members of the Data Science Enablement team on MLOps. Once the foundational deployment framework is in place, you will enable additional MLOps capabilities such as MLflow, A/B testing, real-time endpoints, and further automation with Model Risk Management (MRM).


Key Responsibilities:

  • Develop and implement a secure, automated deployment pipeline.
  • Educate and mentor team members on MLOps practices.
  • Balance engineering tasks with change management and training.
  • Enhance MLOps capabilities with advanced tools and techniques such as MLflow, A/B testing, and real-time endpoints.

Preferred Experience:

  • Experience in highly regulated industries like banking, finance, or healthcare.

Qualifications:

  • Experience:
      • 5+ years of experience in machine learning and MLOps.
      • Proven experience with AWS SageMaker and building end-to-end machine learning models.
      • Experience with data integration and management using IBM DB2 and Snowflake (or similar databases).
      • Strong understanding of CI/CD pipelines and automation tools.
  • Technical Skills:
      • Proficiency in programming languages such as Python, R, SQL, and/or Java.
      • Experience with DevOps tools such as Jira, Terraform, GitHub, and Jenkins.
      • Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).

Education:

Bachelor's degree in any discipline.