Responsibilities:
Build and document automated data pipelines from a wide range of data sources with an emphasis on automation and scale: Handled large streaming data sets (high volume and record size) and performed initial data-pattern identification with Amazon Kinesis.
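A minimal sketch of the kind of Kinesis consumer used for this initial pattern identification (the stream name, region, and payload shape are hypothetical, and boto3 is assumed to have AWS credentials configured):

    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")
    STREAM = "vehicle-telemetry"  # hypothetical stream name

    # Read from the first shard, starting at the oldest available record.
    shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]

    # Sample a batch and tally top-level keys as a first pass at pattern identification.
    for record in kinesis.get_records(ShardIterator=iterator, Limit=100)["Records"]:
        payload = json.loads(record["Data"])
        print(sorted(payload.keys()))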
Develop highly available applications and APIs to support near-real-time integrations using an AWS-based technology stack: Designed and developed prototype REST APIs to support near-real-time {vehicle / any other domain-based} data using Swagger (OpenAPI) and presented the design to business stakeholders. Converted the prototypes to full-scale implementations using the AWS technology stack, including Amazon EventBridge, Amazon SQS, Amazon SNS, and Confluent Kafka.
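A minimal sketch of publishing an event into such an integration (the bus name, event source, and detail fields are hypothetical; it assumes an EventBridge rule routes matching events to SQS/SNS targets for near-real-time consumers):

    import json
    import boto3

    events = boto3.client("events", region_name="us-east-1")

    response = events.put_events(
        Entries=[{
            "EventBusName": "vehicle-data-bus",    # hypothetical bus
            "Source": "api.vehicle-ingest",        # hypothetical source
            "DetailType": "TelemetryReceived",
            "Detail": json.dumps({"vin": "TEST123", "speed_kph": 88}),
        }]
    )
    # A non-zero FailedEntryCount means at least one event was rejected.
    assert response["FailedEntryCount"] == 0, response["Entries"]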
Ensure product and technical features are delivered to spec and on time in a DevOps fashion: Authored the software design specification based on product feature requirements and user stories, and implemented it within the program's agile sprint cadence.
Contribute to overall architecture, framework, and design patterns to store and process high data volumes: Worked collaboratively with the architect on handling large datasets by decomposing unstructured data into structured and semi-structured forms, applied design patterns {Circuit Breaker / MVC, etc.}, and designed the storage framework on Amazon S3, Amazon DynamoDB, and Amazon RDS.
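A minimal sketch of the Circuit Breaker pattern named above (thresholds and names are illustrative, not taken from the project):

    import time

    class CircuitBreaker:
        """Open the circuit after max_failures consecutive errors; reject
        calls until reset_after seconds pass, then allow one trial call."""

        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open; call rejected")
                self.opened_at = None  # half-open: permit one trial call
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0  # a success closes the circuit
            return result

In practice the breaker would wrap calls to downstream stores such as DynamoDB or RDS, e.g. breaker.call(table.put_item, Item=item).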
Skills:
Bachelor's degree in Computer Science or Informatics