Hybrid position in NJ – 2 days onsite required
Rate: DOE
Duration: 6-12 Months
Only W2 candidates may apply
Spark Scala Developer:
We are seeking a highly skilled Spark Scala Developer to join our team.
As a Spark Scala Developer, you will be responsible for developing and implementing scalable data processing solutions using Apache Spark.
The ideal candidate has experience in both batch and real-time data processing, along with a strong background in technologies such as Databricks, Snowflake, Kafka, SQL, and Unix.
You will work closely with our data engineering and data science teams to design and optimize data pipelines and ensure efficient and reliable data processing.
• 6+ years of experience designing, developing, and maintaining data processing solutions using Apache Spark (Scala).
• Experience working in both on-prem Hadoop clusters and Databricks environments.
• Strong understanding of functional programming and RESTful APIs.
• Implementing both batch and real-time data processing frameworks.
• Collaborating with data engineering and data science teams to understand data requirements and design optimal data pipelines.
• Writing efficient Spark jobs and optimizing Spark code for performance and scalability.
• Working with Kafka for real-time data streaming and integration.
• Developing and optimizing SQL queries for data extraction, transformation, and loading.
• Writing unit tests for Spark transformations and helper methods (a brief sketch appears at the end of this posting).
• Ensuring data quality and integrity throughout the data processing pipelines.
• Monitoring and troubleshooting data processing issues and implementing necessary fixes.
• Collaborating with cross-functional teams to integrate Spark solutions with other systems.
Education: Any graduate (any discipline)
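
For illustration only (not an additional requirement), the sketch below shows the kind of work described above: a small Spark Scala batch job with the transformation factored into a helper method so it can be unit tested against a local SparkSession. The table and column names (raw_events, event_type, amount, daily_event_summary) are hypothetical.

```scala
// Illustrative sketch: a batch job whose transformation is a pure
// DataFrame => DataFrame method, making it straightforward to unit test.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object DailyEventAggregator {

  // Pure transformation: filter out null/non-positive amounts and
  // aggregate counts and totals per event type.
  def aggregateByEventType(events: DataFrame): DataFrame =
    events
      .filter(col("amount").isNotNull && col("amount") > 0)
      .groupBy(col("event_type"))
      .agg(count("*").as("event_count"), sum("amount").as("total_amount"))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-event-aggregator")
      .getOrCreate()

    // Hypothetical source and target tables.
    val events = spark.table("raw_events")
    val aggregated = aggregateByEventType(events)

    aggregated.write.mode("overwrite").saveAsTable("daily_event_summary")

    spark.stop()
  }
}
```

A unit test for aggregateByEventType would build a small input DataFrame in a local SparkSession and assert on the collected output, with no cluster or external tables involved.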