Job Description:


Required Skills:

  • Expert-level skills in writing and optimizing complex SQL
  • Experience with complex data modelling, ETL design, and using large databases in a business environment
  • Experience with building data pipelines and applications to stream and process datasets at low latencies
  • Fluency with Big Data technologies such as Spark, Kafka, and Hive
  • Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
  • Experience designing and building data pipelines using API ingestion and streaming ingestion methods
  • Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
  • Experience developing NoSQL solutions using Azure Cosmos DB is essential
  • Thorough understanding of Azure and AWS Cloud Infrastructure offerings
  • Working knowledge of Python is desirable

Responsibilities:

  • Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
  • Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
  • Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
  • Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
  • Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
  • Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
  • Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
  • Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
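To illustrate the kind of ETL/pipeline work described above, here is a minimal extract-transform-load sketch in plain Python. This is an illustrative example only, not part of the role's actual stack: the `sales` table, the helper function names, and the sample records are all hypothetical, and the standard-library `sqlite3` module stands in for a cloud warehouse such as Azure SQL.

```python
import sqlite3

def extract(rows):
    # Extract: simulate reading raw records from a source system,
    # dropping records that are missing the required "amount" field
    return [r for r in rows if r.get("amount") is not None]

def transform(records):
    # Transform: normalize currency amounts to integer cents and
    # default a missing region to "unknown"
    return [
        (r["id"], int(round(r["amount"] * 100)), r.get("region", "unknown"))
        for r in records
    ]

def load(conn, rows):
    # Load: idempotent upsert into the target table, so re-running
    # the pipeline does not duplicate rows
    conn.executemany(
        "INSERT OR REPLACE INTO sales (id, amount_cents, region) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

def run_pipeline(source_rows):
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE sales (id INTEGER PRIMARY KEY, amount_cents INTEGER, region TEXT)"
    )
    load(conn, transform(extract(source_rows)))
    return conn

raw = [
    {"id": 1, "amount": 12.50, "region": "EMEA"},
    {"id": 2, "amount": None},   # dropped during extract
    {"id": 3, "amount": 7.25},   # region defaults to "unknown"
]
conn = run_pipeline(raw)
print(conn.execute("SELECT id, amount_cents, region FROM sales ORDER BY id").fetchall())
# → [(1, 1250, 'EMEA'), (3, 725, 'unknown')]
```

In a production Azure setting the same extract/transform/load stages would typically map onto Data Factory or Databricks activities writing into Synapse or Cosmos DB, with the upsert keeping ingestion idempotent across retries.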



Education:

Any Graduate