Position overview
We are looking for a Senior Data Engineer with deep expertise in Python and SQL. You will design, implement, and optimize data pipelines and ETL processes, ensuring high performance, scalability, and reliability. This is a fully remote position with flexible working conditions.
Responsibilities
- Develop and optimize ETL workflows using Python and SQL.
- Design and implement scalable data pipelines for processing various data sources.
- Work with relational databases (e.g., PostgreSQL, Oracle) to ensure efficient data storage and retrieval.
- Optimize SQL queries and Python scripts for performance and maintainability.
- Implement robust error handling, logging, and monitoring for data processing pipelines.
- Collaborate with cross-functional teams to integrate and manage data sources.
Requirements
- 5+ years of experience in Data Engineering.
- Expert-level Python and SQL skills: ability to write efficient, scalable, and maintainable code.
- Strong experience in designing ETL processes and working with large datasets.
- Solid understanding of database performance tuning and query optimization.
- Experience with data modeling and schema design.
- Familiarity with cloud storage solutions (Azure, AWS, or GCP).
- Strong problem-solving skills and ability to work independently.
- Nice to have: experience with PySpark and workflow orchestrators, plus knowledge of CI/CD for deployment automation.