Description

Responsibilities:
Design, build, and maintain scalable data pipelines and ETL processes to support analytics and business intelligence.
Develop and optimize data models, databases, and data warehouses.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
Ensure data quality, integrity, and security across all systems.
Troubleshoot and resolve data-related issues and performance bottlenecks.
Implement and manage data governance practices and standards.
Stay current with industry trends and emerging technologies to drive innovation.

Requirements:
Minimum of 11 years of experience in data engineering or a related field.
Expertise in SQL and experience with various database technologies (e.g., PostgreSQL, MySQL, NoSQL databases).
Proficiency in data processing frameworks such as Apache Spark or Hadoop.
Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and data warehousing solutions (e.g., Redshift, BigQuery).
Strong programming skills in languages such as Python, Java, or Scala.
Knowledge of data integration tools and ETL processes.
Excellent problem-solving skills and attention to detail.
Proven ability to work collaboratively in a fast-paced environment.
Education

Any Graduate