Description

Job Responsibilities:
Develop and enhance data-processing systems, orchestration, and monitoring using open-source software, AWS, and GitLab automation.
Collaborate with product and technology teams to design and validate the capabilities of the data platform.
Identify, design, and implement process improvements, including automating manual tasks, optimizing usability, and scaling processes.
Provide technical support and guidance to users of the platform’s services.
Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to ensure visibility into production services.
Required Qualifications:
6-8 years of experience building and optimizing data pipelines in distributed environments.
Experience supporting and working with cross-functional teams.
Proficiency in working in a Linux environment.
4+ years of advanced, hands-on experience with SQL, Python, and PySpark.
Knowledge of Palantir.
Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
Experience with platform monitoring and alerting tools.