Description

Roles & Responsibilities

- Design, develop, and maintain data processing workflows using PySpark and Python.
- Collaborate with data engineers and data scientists to implement scalable and efficient data solutions.
- Optimize and fine-tune Spark applications for performance and resource efficiency.
- Develop ETL processes for data extraction, transformation, and loading.
- Work with large-scale distributed systems to process and analyze big data sets.
- Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
- Implement data quality and validation measures to ensure accuracy and reliability.


 

Location

Bangalore (preferred)

Alternate: Chennai & Pune

No. of Contractors required: 1

Years of Experience: 6+

Any project-specific prerequisite skills (Primary & Secondary key skills)

Data Engineer - (JavaSpark + SQL) OR (PySpark + SQL)

 

Detailed JD

Same as the Roles & Responsibilities section above.

Education

Any Graduate