Responsibilities:
- Build data systems and pipelines
- Combine raw information from different sources
- Explore ways to enhance data quality and reliability
- Develop analytical tools and programs, and build dashboards
- Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Build ETL jobs for optimal extraction, transformation, and loading of data from various sources using AWS, Airflow, Databricks, and SQL
- Build analytical tools on top of the data pipeline that provide actionable insights into key infrastructure metrics and performance
Requirements:
- Previous experience as a software engineer or in a similar role
- Technical expertise with data models, data mining, and segmentation techniques
- Fluent in Python
- Hands-on experience with SQL database design and with data warehouses such as Hive or Databricks
- Experience building dashboards in Tableau, Grafana, or Databricks
- Experience building data pipelines in Airflow is a plus
- Strong numerical and analytical skills
- Degree in Computer Science, IT, or similar field