Key Responsibilities:
Design, build, and maintain scalable data pipelines and architectures to support business intelligence and analytics.
Develop and manage ETL processes to ensure data accuracy, completeness, and consistency.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
Optimize and troubleshoot data processing workflows and ensure efficient data storage and retrieval.
Work with MongoDB, MariaDB, and Snowflake to manage and analyze data, applying best practices in database design and performance tuning.
Implement data security and privacy measures to protect sensitive information.
Document data processes, workflows, and systems for future reference and compliance purposes.
Stay updated with the latest industry trends and technologies to continuously improve data engineering practices.
Required Skills and Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field.
Minimum of 6 years of experience in data engineering, including 3-4 years of hands-on experience with each of MongoDB, MariaDB, and Snowflake.
Proficiency in SQL and NoSQL databases, data modeling, and ETL processes.
Experience with data warehousing and cloud-based data solutions.
Strong analytical and problem-solving skills with the ability to work on complex data problems.
Familiarity with data pipeline orchestration tools (e.g., Apache Airflow, Luigi) is a plus.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.