Description

Are you an experienced Data Engineer with expertise in Databricks, SQL, and Python? Do you have hands-on experience designing and maintaining scalable data pipelines? If you have a Databricks certification and a passion for working with cutting-edge data platforms, this opportunity is for you!

What We’re Looking For:
Expert-level experience with Databricks and Apache Spark, including designing and optimizing data pipelines.
Advanced SQL skills to query, transform, and manipulate large datasets from relational and non-relational databases (e.g., PostgreSQL, MySQL, SQL Server).
Proficiency in Python for scripting and automation in data engineering tasks.
Experience working with cloud platforms such as AWS, Azure, or GCP for implementing data engineering solutions.
Hands-on experience with big data technologies (Apache Spark, Hadoop, Flink) and data warehousing concepts.

Key Responsibilities:
Build & maintain data pipelines using Databricks and Apache Spark.
Implement data processing workflows for efficient transformation and aggregation of data.
Write and optimize complex SQL queries for data extraction and analysis.
Automate pipeline jobs and implement monitoring and logging to keep them running reliably.
Leverage cloud platforms (AWS, Azure, GCP) for cloud-based data solutions.

Requirements:
Databricks Certification (required).
Advanced knowledge of SQL for large dataset manipulation and transformation.
Python scripting experience for automation and integration with Databricks.
Experience with cloud data platforms (AWS, Azure, GCP).
Knowledge of big data technologies (Apache Spark, Hadoop, Flink).
Strong problem-solving and analytical skills.

Education

Any Graduate