Description

Required Skills: Azure, Data Lake, Databricks, ETL, Python, PySpark
Job Description:

  • Design, develop, and maintain scalable data pipelines using Azure Databricks and Azure Data Lake.
  • Integrate data from various sources into the Databricks platform.
  • Implement data integration and ETL processes using Azure Data Factory.
  • Develop and optimize data processing workflows and pipelines using PySpark.
  • Support business use cases involving the acquisition and transformation of Bloomberg data.
  • Collaborate with data scientists and analysts to support data-driven decision-making.
  • Ensure data quality and integrity across various data sources and storage solutions.
  • Monitor and troubleshoot data pipeline performance and reliability.
  • Assist with dashboarding and data visualization using Power BI.

Education

Any Graduate