Description

Responsibilities:

As a Data Engineer, you will work in one of the world's largest and most complex data warehouse environments.
You will develop and support the analytic technologies that give our customers timely, flexible, and structured access to their data.
You will be responsible for designing and implementing complex data models to scale research and simulations.
You will work with business customers to understand business requirements and implement solutions that support analytical and reporting needs.
Required Skills & Experience

3+ years of data engineering experience
Experience with data modeling, warehousing and building ETL pipelines
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or JavaScript (Node.js)
Master's degree in computer science, engineering, analytics, mathematics, statistics, IT, or an equivalent field
Experience with SQL
Experience working on and delivering end-to-end projects independently
Preferred Qualifications

Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Kinesis Data Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
Experience with Apache Spark / Elastic MapReduce (EMR)
Familiarity and comfort with Python, SQL, Docker, and shell scripting; Java is preferred but not required
Experience with continuous delivery, infrastructure as code, and microservices, as well as designing and implementing automated data solutions using Apache Airflow, AWS Step Functions, or equivalent tools

Education

Bachelor's degree in Computer Science