Must have a minimum of 5 years of experience as an AWS Data Engineer
Nice to have: AWS Certified Data Analytics - Specialty certification
Experience designing and implementing data pipelines using AWS Glue, with orchestration in Airflow, covering batch, streaming (Kafka/Kinesis), and API ingestion patterns (a minimal Airflow DAG sketch follows this list)
Work closely with DevOps engineers to support deployment of data engineering code through CI/CD pipelines such as AWS CodePipeline and CodeDeploy, using enterprise Git
Experience designing and implementing streaming data pipelines using Amazon MSK, Amazon SQS, and Amazon Kinesis (a Kinesis consumer sketch follows this list)
Experience with a scripting language, preferably Python, using Cargill standard tools such as AWS Glue (Spark); a minimal Glue PySpark job sketch follows this list
Experience designing and implementing API ingestion pipelines using Amazon API Gateway (a Lambda handler sketch follows this list)
Work with the Solution Architect to enhance the data ingestion framework using AWS native services, making it metadata-driven so it supports multiple ingestion patterns such as batch processing, streaming data, and API integration.
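
To illustrate the batch orchestration pattern referenced above, here is a minimal sketch of an Airflow DAG that triggers a nightly AWS Glue job. It assumes Airflow 2.4+ with the Amazon provider package installed; the DAG id and Glue job name (daily_batch_ingest) are illustrative assumptions, not part of the role.

```python
# Minimal sketch: an Airflow DAG that triggers a nightly AWS Glue batch job.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_batch_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # one batch run per day
    catchup=False,
) as dag:
    run_glue_job = GlueJobOperator(
        task_id="run_glue_job",
        job_name="daily_batch_ingest",  # assumed pre-existing Glue job
        wait_for_completion=True,       # fail the task if the Glue run fails
    )
```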
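For the streaming requirement, the following is a minimal boto3 sketch that polls records from a single-shard Amazon Kinesis stream. The stream name (ingest-events) and region are assumptions; a production pipeline would more likely use a Glue streaming job, the KCL, or a Lambda event source mapping than a raw polling loop.

```python
# Minimal sketch: read records from an Amazon Kinesis data stream with boto3.
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # assumed region

stream = "ingest-events"  # assumed stream name
shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream, ShardId=shard_id, ShardIteratorType="LATEST"
)["ShardIterator"]

while True:
    resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in resp["Records"]:
        print(json.loads(record["Data"]))  # hand off to the downstream sink
    iterator = resp["NextShardIterator"]
    time.sleep(1)                          # respect per-shard read limits
```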
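For the Python / AWS Glue (Spark) requirement, here is a minimal sketch of a Glue job script that reads raw CSV from S3 and writes partitioned Parquet. The bucket paths and the order_date partition column are illustrative assumptions.

```python
# Minimal sketch of an AWS Glue (Spark) job written in Python.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw zone, drop fully empty rows, and land curated Parquet.
df = glue_context.spark_session.read.option("header", "true").csv(
    "s3://example-raw-bucket/orders/"        # assumed source path
)
df = df.dropna(how="all")
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"    # assumed target path
)

job.commit()
```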
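For the API ingestion requirement, the sketch below shows the Lambda side of an Amazon API Gateway proxy integration: the handler accepts POSTed JSON and lands it on a Kinesis stream so API traffic flows into the same downstream pipeline as the other sources. The stream name is again an assumption.

```python
# Minimal sketch: Lambda handler behind API Gateway (proxy integration)
# that forwards incoming JSON payloads to a Kinesis stream.
import json

import boto3

kinesis = boto3.client("kinesis")


def handler(event, context):
    payload = json.loads(event.get("body") or "{}")
    kinesis.put_record(
        StreamName="ingest-events",              # assumed stream name
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=str(payload.get("id", "default")),
    )
    return {"statusCode": 202, "body": json.dumps({"accepted": True})}
```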