Description

Job Title: Senior Data Engineer
Company: RBM Software Pvt. Ltd.
Location: Pune
Experience: 5-7 Years
Job Type: Full-Time, Work from Office
Compensation: As per industry standards (based on experience and interview)

Job Summary:
RBM Software Pvt. Ltd. is seeking an experienced Senior Data Engineer to design, develop, and manage advanced data pipelines and data infrastructure solutions. You will work closely with cross-functional teams to solve complex business problems using state-of-the-art data engineering techniques. The ideal candidate has strong expertise in AWS cloud services, SQL, Python, PySpark, Flink, Kafka, and modern debugging tools such as DebuggItJ. This role requires dedication, ownership of complex systems, and the ability to support global operations, including working during PST hours when necessary.

Key Responsibilities:

Data Engineering & Development:
- Design, develop, and deploy advanced data pipelines and algorithms to solve complex business problems.
- Build scalable, secure, and high-performance data pipelines using AWS cloud services.
- Write, optimize, and maintain SQL queries to extract, transform, and load data for analysis.
- Use Apache Spark, Apache Flink, and related technologies to process and analyze large datasets efficiently.
- Implement real-time data processing solutions using Kafka.

Collaboration & Project Delivery:
- Collaborate with cross-functional teams to implement data-driven solutions and ensure successful project delivery.
- Mentor and guide junior team members, fostering a collaborative and innovative work environment.
- Support team operations during PST hours as required for global collaboration.

Ownership of the Data Science Workflow:
- Take ownership of the entire data science workflow, including data collection, processing, development, and deployment.

Primary Skills:
- Proficiency in AWS cloud services (e.g., S3, EC2, Redshift, Lambda).
- Strong hands-on experience with SQL for data manipulation and analysis.
- Expertise in Python and data science libraries such as Pandas, NumPy, and scikit-learn.
- Proficiency in Apache Spark and Apache Flink for big data processing.
- Experience with real-time data processing and messaging systems using Kafka.
- Familiarity with modern debugging tools such as DebuggItJ for troubleshooting and optimizing code.

Secondary Skills:
- Knowledge of Solr for enterprise search and indexing solutions.
- Experience working in fast-paced, agile environments.
- Familiarity with DevOps practices and tools such as Docker, Kubernetes, and CI/CD pipelines.
- Knowledge of real-time data processing and streaming technologies.

Education

Any Graduate