Description:

• Design and develop data ingestion solutions for big data.

• Build efficient and reliable data processing solutions.

• Design and implement data storage solutions.

• Develop scalable data pipelines for ingestion, transformation, and storage of large datasets.

• Optimize data pipelines for real-time and batch processing.

• Ensure data quality and integrity throughout the pipeline by implementing effective data validation and monitoring strategies (a minimal sketch follows this list).
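To illustrate the kind of validation work this role involves, below is a minimal PySpark sketch of a data-quality gate. The dataset path, table, and rules ("order_id" must be present, "amount" must be non-negative) are hypothetical examples, not requirements of the role.

```python
# Minimal sketch of a data-quality gate in PySpark; the S3 path and the
# column names ("order_id", "amount") are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()

# Read the batch that is about to be loaded downstream.
orders = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical path

# Count rows that violate basic integrity rules.
violations = orders.filter(
    F.col("order_id").isNull() | (F.col("amount") < 0)
).count()

if violations > 0:
    # In a real pipeline this would alert on-call or fail the orchestrator task.
    raise ValueError(f"{violations} rows failed validation; aborting load")
```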

 

Additional Job Details:             

• Minimum of 5-8 years of experience designing and implementing ETL solutions.

• Bachelor's degree or higher in Computer Science, Engineering, or a related field.

• Familiarity with AWS data ingestion and processing tools such as FluentBit, Kinesis, and Glue.

• Strong expertise in big data technologies such as Apache Spark.

• Experience with AWS data storage solutions, including S3, Redshift, Iceberg, and Aurora.

• Proficiency in programming languages including Python, Scala, and Java (see the Spark-and-SQL sketch after this list).

• Certification in and/or hands-on experience with AWS data services preferred.
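As a rough illustration of the Spark, S3, and SQL skills listed above, here is a minimal batch-ETL sketch. Every bucket, view, and column name is hypothetical.

```python
# Minimal batch-ETL sketch combining Spark and SQL; all paths and names
# are hypothetical, not taken from the posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

# Extract: read raw JSON events landed in S3.
events = spark.read.json("s3://example-bucket/raw/events/dt=2024-01-01/")
events.createOrReplaceTempView("events")

# Transform: aggregate with SQL, the posting's primary skill.
daily = spark.sql("""
    SELECT user_id, DATE(event_time) AS event_date, COUNT(*) AS event_count
    FROM events
    GROUP BY user_id, DATE(event_time)
""")

# Load: write partitioned Parquet back to S3 for downstream consumers.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_events/"
)
```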

 

Professional Skills:

• Attention to detail and a strong commitment to delivering high-quality solutions.

• Strong problem-solving skills and the ability to work effectively in a fast-paced environment.

• Ability to work well in a team.

• Excellent communication and interpersonal skills.

 

Primary Skill: SQL (Structured Query Language)

 

Top 3 Skills:

• Real-time data streaming (see the sketch after this list)

• Big Data ETL

• Programming and scripting
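As a rough illustration of real-time stream consumption on AWS, here is a minimal boto3 Kinesis polling sketch. The stream name, shard ID, and region are hypothetical, and a production consumer would typically use the Kinesis Client Library or a managed integration rather than polling a single shard.

```python
# Minimal sketch of consuming a Kinesis stream with boto3; the stream
# name "events-stream" and the region are hypothetical.
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Start reading from the tip of a single shard.
shard_iterator = kinesis.get_shard_iterator(
    StreamName="events-stream",
    ShardId="shardId-000000000000",
    ShardIteratorType="LATEST",
)["ShardIterator"]

while True:
    resp = kinesis.get_records(ShardIterator=shard_iterator, Limit=100)
    for record in resp["Records"]:
        # Each record's Data field is the raw bytes produced upstream.
        print(record["Data"])
    shard_iterator = resp["NextShardIterator"]
    time.sleep(1)  # stay under the per-shard read throughput limits
```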

Education:

Bachelor's degree or higher in Computer Science, Engineering, or a related field.