Description

What you'll do:

• Design and develop data processing pipelines and analytics solutions using Databricks.

• Architect scalable and efficient data models and storage solutions on the Databricks platform.

• Collaborate with architects and other teams to migrate existing solutions to Databricks.

• Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.

• Apply best practices for data governance, security, and compliance on the Databricks platform.

• Mentor junior engineers and provide technical guidance.

• Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.


You'll be expected to have:

• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

• 8+ years of overall experience and 3+ years of experience designing and implementing data solutions on the Databricks platform.

• Proficiency in programming languages such as Python, Scala, or SQL.

• Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.

• Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.

• Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.

• Excellent problem-solving skills and attention to detail.

• Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.

• Experience with containerization technologies such as Docker and Kubernetes is a plus.

• Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Education

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.