What You Will Do:
Develop, maintain, and optimize data pipelines using Java and SQL.
Design and implement data storage solutions to support analytical and transactional systems.
Collaborate with cross-functional teams to define, design, and implement data solutions.
Ensure data quality, security, and governance across systems.
Monitor and troubleshoot data workflows to ensure reliability and scalability.
Document processes and workflows to maintain system transparency and support knowledge sharing.
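As a rough illustration of the day-to-day work above, the transform step of a pipeline might look like the following minimal Java sketch. The `Event` record, field names, and aggregation are hypothetical, chosen only to show the kind of batch processing involved, not an actual system in use here.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Hypothetical input record for illustration only
record Event(String userId, String country, double amount) {}

public class PipelineSketch {
    // Transform step: drop invalid rows, then aggregate spend per country
    static Map<String, Double> totalByCountry(List<Event> events) {
        return events.stream()
            .filter(e -> e.amount() > 0) // data-quality check: skip bad amounts
            .collect(Collectors.groupingBy(
                Event::country,
                TreeMap::new, // sorted keys for deterministic output
                Collectors.summingDouble(Event::amount)));
    }

    public static void main(String[] args) {
        List<Event> batch = List.of(
            new Event("u1", "US", 10.0),
            new Event("u2", "US", 5.0),
            new Event("u3", "DE", -1.0), // invalid, filtered out
            new Event("u4", "DE", 7.5));
        System.out.println(totalByCountry(batch));
    }
}
```

In practice the extract and load steps would read from and write to a database via SQL (for example through JDBC), but the filter-and-aggregate pattern shown is representative of the transform logic.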
What Experience You Need:
1 to 3 years of relevant technology experience.
Proficient in Java for backend development and data processing.
Strong understanding and hands-on experience with SQL for querying and managing databases.
Basic knowledge of data engineering principles, including ETL pipelines and data integration.
Familiarity with cloud-based data storage and processing solutions (e.g., AWS, Azure, GCP).
Knowledge of version control tools like Git.
Strong problem-solving skills and the ability to work independently and in a team environment.
What Could Set You Apart:
Knowledge of or experience with Apache Beam for stream and batch data processing.
Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Exposure to data visualization tools or platforms.
Education: Any graduate.