Description

  • Develop Python/PySpark-based automation scripts to optimize workflows and processes.
  • Design and deliver solutions for web applications and data pipelines.
  • Stay updated with emerging technologies and quickly adapt to new tools and frameworks.
  • Work with Jenkins for basic DevOps processes, including CI/CD pipelines.
  • Ensure scalability, security, and performance of deployed solutions.
  • Collaborate with business teams, data engineers, and developers to align solutions with business goals.
  • Present technical solutions in a clear and concise manner to both technical and non-technical stakeholders.
  • Document architectural designs and best practices for future reference.

Qualifications:

  • 3 to 5 years of overall experience in the IT industry, including a minimum of 2 years working on data engineering solutions.
  • Experience with cloud-based or on-premises data platforms and data warehousing solutions for efficient data storage and processing.

Mandatory Skills:

  • Python/PySpark: Hands-on experience with automation and scripting.
  • Azure: Strong knowledge of Azure SQL Database, Data Lakes, Data Warehouses, and cloud architecture.
  • Databricks: Experience implementing Python/PySpark solutions in Databricks to build data pipelines that ingest data from multiple source systems.
  • DevOps Basics: Familiarity with Azure DevOps and Jenkins for CI/CD pipelines.
  • Communication: Excellent verbal and written communication skills.
  • Fast Learner: Ability to quickly grasp new technologies and adapt to changing requirements.
  • Education: BE, M.Tech, or MCA.
  • Certifications: Azure Big Data, Databricks Certified Associate.

Good-to-Have Skills:

  • Power BI: Expertise in report and dashboard development and DAX calculations, following data visualization best practices.

Education

Any Graduate