Description

  • Work with a team of data scientists and data, MLOps, and software engineers on AI engineering tasks and the deployment of pipelines at all stages of development: data engineering, model training, model fine-tuning and optimization, production deployment, testing, and monitoring.
  • Contribute to infrastructure maintenance and to building reusable, enterprise-grade software components that enable fast delivery of production-ready ML models, increased process automation, and accelerated adoption of MLOps best practices across the AI lifecycle.
  • Support a high-visibility project alongside experienced data scientists and AI developers.
  • Learn to apply data-centric AI development practices.
  • Work with a modern cloud stack (SAP BTP, hyperscaler offerings).


Role Requirements

Must have:

  • Bachelor’s Degree in Computer Science, Data Science, Software Engineering, or a related field
  • Proficiency in Python and at least one other programming/scripting language (e.g., Java, SQL, Scala)
  • Experience with DevOps practices and tools such as Jenkins, GitHub, and XMake
  • Experience with development tools such as Jupyter, Docker, Kubernetes, Terraform, and GitHub
  • Strong oral and written communication skills in English
  • Good understanding of AI/ML concepts, including deep learning, GenAI, MLOps, and the AI lifecycle
  • Basic understanding of a variety of Python libraries and machine learning frameworks such as NumPy, pandas, Keras, scikit-learn, TensorFlow, and PyTorch
  • General interest in applying machine learning to solve business problems
  • Availability to work from the Walldorf office
  • At least three semesters remaining in your studies


Nice to have:

  • Experience in Python backend development with tools such as FastAPI, PostgreSQL, and OpenAPI
  • Experience with MLOps tools (e.g., MLflow)
  • Experience in writing tests for ML models
  • Experience with hyperscaler platforms



Education

Any Graduate