Work with high-volume, heterogeneous data, preferably on distributed, cloud-agnostic systems.
Build large-scale batch and real-time data pipelines leveraging technologies such as Microsoft Azure, Apache Kafka (Confluent), Databricks, and Snowflake Data Cloud.
Integrate data from multiple sources, including on-premises systems, cloud platforms (Salesforce and Azure), and external providers.
Serve as a resource for data integration implementations across other technology teams, and collaborate with data domain owners, business owners, and leaders.
Drive adoption and automation of data integration services and tools.
Required Skills
Senior-level knowledge of Python.
Strong expertise in SQL and relational databases.
Working knowledge of Docker.
Proficient in data handling and data pipelines.
Basic knowledge of AWS.
Required Experience
5 years of experience driving the adoption and automation of data integration management services and tools.
Experience with common API tooling: REST, HTTP, GraphQL, JSON, XML, Postman, etc.
Some ETL experience.
Experience designing and building public and internal APIs.
Experience with Git, Continuous Integration, and Continuous Delivery mechanisms.
Education Requirements
Bachelor’s Degree in Computer Science, Computer Engineering, or a closely related field.