Responsibilities:
Design, develop, and implement data pipelines and ETL processes using Azure services, with a focus on Azure Databricks, Azure Data Factory (ADF), Azure Functions, and Azure AI Search.
Leverage Python SDKs to build and optimize data processing and analysis tasks, ensuring high performance and scalability in data workflows.
Write and deploy Python applications that can run on Kubernetes, ensuring efficient containerization and orchestration of data processing tasks.
Implement and integrate Azure AI Search capabilities to enhance data accessibility and retrieval for analytics and business intelligence.
Utilize version control tools such as GitHub for code management and collaboration, and manage multiple activities in a fast-paced environment while adhering to best practices in coding and documentation.
Qualifications:
3+ years of experience in a data engineering role.
Strong experience with Azure services, particularly Azure Databricks, ADF, and Azure AI Search.
Proficiency in Python and its relevant libraries, as well as Spark, with a focus on leveraging Python SDKs for data engineering tasks.
Experience writing and deploying Python applications, including familiarity with containerization and orchestration tools.
Experience working with diverse data formats and data storage solutions.
Familiarity with version control software (e.g., GitHub) and agile methodologies.
Strong problem-solving and analytical skills, with keen attention to detail.
Excellent communication and collaboration abilities, with a proven track record of working effectively in teams.