"Responsibilities
Develop and optimize Spark-based data processing pipelines. - Collaborate with data engineers and data scientists to design data solutions. - Write efficient and scalable code for processing large data sets. - Monitor and troubleshoot performance issues in Spark applications. - Ensure data quality and integrity in the processing pipelines. - Implement and enforce best practices in Spark development. "Required Skills" - Apache Spark - Hadoop - Java - Scala - Hive - ETL - Data Integration - Distributed Systems - Performance Tuning "Desirable Skills" - Experience in Banking and Financial Services industry - Knowledge of capital management, liquidity management, and payments processes "Education Qualification" - Bachelor's degree in Computer Science or related field Team Structure - 2 developers with 2 to 3 years of experience - 2 developers with 5 to 6 years of experience - 1 senior developer (lead) with 10+ years of experience
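For illustration, day-to-day work on these pipelines resembles the following minimal Spark batch job in Scala. This is a sketch only; the input path, output path, and column names (account_id, amount) are hypothetical and stand in for whatever the actual datasets use.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal Spark batch pipeline: read raw transaction records,
// apply a data-quality gate, aggregate, and write curated output.
// Paths and column names are hypothetical placeholders.
object TransactionPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("transaction-pipeline")
      .getOrCreate()

    // Read raw CSV input; inferSchema is convenient for a sketch,
    // though an explicit schema is preferable at production scale.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/transactions") // hypothetical path

    // Data-quality gate: drop rows missing key fields, reflecting
    // the "ensure data quality and integrity" responsibility above.
    val clean = raw
      .filter(col("account_id").isNotNull && col("amount").isNotNull)

    // Aggregate per account; repartitioning on the grouping key
    // ahead of the wide shuffle is a common performance-tuning step.
    val totals = clean
      .repartition(col("account_id"))
      .groupBy(col("account_id"))
      .agg(
        sum(col("amount")).as("total_amount"),
        count(lit(1)).as("txn_count")
      )

    totals.write
      .mode("overwrite")
      .parquet("hdfs:///data/curated/account_totals") // hypothetical path

    spark.stop()
  }
}
```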