Description

Responsibilities:

Build and document automated data pipelines from a wide range of data sources with an emphasis on automation and scale: Handled large, high-volume streaming data sets and performed initial data pattern identification with AWS Kinesis.
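For illustration only, a minimal Python (boto3) sketch of sampling a Kinesis stream to profile incoming data patterns appears below; the stream name and the simple field-counting logic are hypothetical and not taken from the original pipeline.

    # Minimal sketch (not the original pipeline): sample records from a Kinesis
    # stream and count field names to get a first view of the data patterns.
    # The stream name "vehicle-telemetry" below is a hypothetical placeholder.
    import json
    from collections import Counter

    import boto3

    kinesis = boto3.client("kinesis")

    def sample_field_patterns(stream_name: str, sample_size: int = 500) -> Counter:
        """Read a sample of records from the first shard and count field names."""
        shards = kinesis.list_shards(StreamName=stream_name)["Shards"]
        iterator = kinesis.get_shard_iterator(
            StreamName=stream_name,
            ShardId=shards[0]["ShardId"],
            ShardIteratorType="TRIM_HORIZON",
        )["ShardIterator"]

        field_counts: Counter = Counter()
        while sample_size > 0 and iterator:
            batch = kinesis.get_records(ShardIterator=iterator, Limit=min(sample_size, 100))
            for record in batch["Records"]:
                payload = json.loads(record["Data"])  # assumes JSON object payloads
                field_counts.update(payload.keys())
            sample_size -= len(batch["Records"])
            iterator = batch.get("NextShardIterator")
            if not batch["Records"]:
                break
        return field_counts

    # Example usage: print(sample_field_patterns("vehicle-telemetry"))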

Develop highly available applications and APIs to support near-real-time integrations using an AWS-based technology stack: Designed and developed prototype REST APIs to support near-real-time {vehicle / any other domain} data using Swagger (OpenAPI) and presented the design to business stakeholders. Converted the prototypes to full-scale implementations using the AWS technology stack, including Amazon EventBridge, Amazon SQS, Amazon SNS, and Confluent Kafka.
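As a hedged illustration of this integration style, the Python sketch below publishes a near-real-time event to a custom EventBridge bus and an SQS queue; the bus name, queue URL, event source, and event shape are assumptions made for the example, not details from the original system.

    # Illustrative sketch only: forward a near-real-time event to EventBridge
    # and SQS. All names and identifiers here are hypothetical placeholders.
    import json

    import boto3

    events = boto3.client("events")
    sqs = boto3.client("sqs")

    def publish_vehicle_event(detail: dict) -> None:
        # Fan the event out on a custom EventBridge bus...
        events.put_events(
            Entries=[{
                "Source": "demo.vehicle.telemetry",       # hypothetical source
                "DetailType": "VehicleStatusChanged",     # hypothetical detail type
                "Detail": json.dumps(detail),
                "EventBusName": "vehicle-events",         # hypothetical bus name
            }]
        )
        # ...and push it to a worker queue for asynchronous processing.
        sqs.send_message(
            QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/vehicle-events",
            MessageBody=json.dumps(detail),
        )

    # Example usage: publish_vehicle_event({"vin": "TEST123", "status": "parked"})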

Ensure product and technical features are delivered to spec and on time in a DevOps fashion: Authored software design specifications based on product feature requirements and user stories, and implemented them within the program's agile sprint cadence.

Contribute to overall architecture, framework, and design patterns to store and process high data volumes: Worked collaboratively with the architect and contributed to handling large datasets by decomposing unstructured data into structured and semi-structured forms, applied design patterns {Circuit Breaker / MVC, etc.}, and designed the storage framework with AWS S3, Amazon DynamoDB, and Amazon RDS.
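A minimal sketch of this storage split follows, assuming a hypothetical S3 bucket and DynamoDB table: the raw, semi-structured payload lands in S3 while the extracted structured attributes go to DynamoDB.

    # Minimal sketch, assuming a hypothetical bucket and table: keep the full
    # semi-structured payload in S3 and the structured attributes in DynamoDB.
    import json
    import uuid

    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("vehicle-records")  # hypothetical table

    def store_record(raw_payload: dict) -> str:
        record_id = str(uuid.uuid4())
        # Full semi-structured payload to S3 as the system of record.
        s3.put_object(
            Bucket="example-raw-data-lake",               # hypothetical bucket
            Key=f"raw/{record_id}.json",
            Body=json.dumps(raw_payload).encode("utf-8"),
        )
        # Only the structured, query-friendly attributes go to DynamoDB.
        table.put_item(Item={
            "record_id": record_id,
            "vin": raw_payload.get("vin", "unknown"),
            "ingested_source": raw_payload.get("source", "unknown"),
        })
        return record_id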

Skills:

  1. Bachelor's degree in Computer Science, Informatics, or a related field required
  2. 3+ years of experience in a data engineering role
  3. 2+ years of experience with AWS and related services (e.g., EC2, S3, SNS, Lambda, IAM, Snowflake)
  4. Hands-on experience with ETL tools and techniques (Desirable)
  5. Basic proficiency with a dialect of ANSI SQL, APIs, and Python
  6. Knowledge of and experience with RDBMS platforms such as MS SQL Server, MySQL, and Postgres, as well as NoSQL databases

Education

Bachelor's degree in Computer Science or Informatics