Job Description:
Key Responsibilities:
- Design, develop, and deploy data pipelines and applications on Palantir Foundry.
- Work closely with clients to understand their business requirements and translate them into scalable solutions.
- Implement data integration, transformation, and modeling within the Palantir ecosystem.
- Develop and optimize workflows, ontologies, and data connectors.
- Troubleshoot and resolve issues related to Palantir Foundry implementations.
- Collaborate with cross-functional teams to ensure seamless integration with existing systems.
Required Skills & Experience:
- Hands-on development experience with Palantir Foundry (must-have).
- Strong expertise in data integration, ETL processes, and pipeline development within Palantir Foundry.
- Proficiency in Python, SQL, and Spark for data processing (an illustrative sketch follows this list).
- Experience with Palantir’s Ontology, Contour, and Workshop tools.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.
- Strong problem-solving skills and ability to work in a client-facing role.
- Excellent communication and collaboration skills.
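For context, day-to-day pipeline work in this role is typically authored as PySpark transforms in Foundry Code Repositories. The following is a minimal, illustrative sketch only: the dataset paths and column names are hypothetical, and it assumes the standard transforms-python API (transforms.api) provided by Foundry.

    # Minimal, illustrative Foundry transforms-python sketch.
    # Dataset paths and column names below are hypothetical placeholders.
    from pyspark.sql import functions as F
    from transforms.api import transform_df, Input, Output


    @transform_df(
        Output("/Company/pipelines/clean_orders"),   # hypothetical output dataset
        raw_orders=Input("/Company/raw/orders"),     # hypothetical input dataset
    )
    def compute(raw_orders):
        # A basic cleaning step typical of an ETL transform: drop rows without
        # an order id, normalise the date column, and cast the amount to double.
        return (
            raw_orders
            .filter(F.col("order_id").isNotNull())
            .withColumn("order_date", F.to_date(F.col("order_date")))
            .withColumn("amount", F.col("amount").cast("double"))
        )

In practice, transforms like this are built, scheduled, and monitored inside Foundry, with repository structure and dataset paths following the client's project conventions.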
Preferred Qualifications:
- Previous experience working with IBM or other enterprise clients.
- Knowledge of big data technologies and distributed computing.
- Experience with CI/CD pipelines and DevOps practices.