Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
Required Skills
Expert-level proficiency in SQL.
Experience querying and manipulating large data sets for analytical purposes using SQL and SQL-like languages (Hive/Presto strongly preferred).
Good attention to detail and the ability to QA multiple data sources.
Excellent verbal and written communication skills.
Required Experience
5+ years (senior-level) of strong programming experience with object-oriented/functional scripting languages such as Scala.
5+ years (mid-level) of experience with big data tools: Hadoop, Apache Spark, etc.
1+ years of strong technical experience with AWS cloud services and DevOps engineering: S3, IAM, EC2, EMR, RDS, Redshift, and CloudWatch, along with Docker, Kubernetes, GitHub, Jenkins, and CI/CD.
1+ years of experience with relational SQL databases (e.g., Postgres, Snowflake) and NoSQL databases (e.g., Cassandra).
Experience with stream-processing systems such as Spark Streaming in Python (nice to have).
Education Requirements
Bachelor’s degree in Computer Science, Computer Engineering, or a closely related field.