Leading the discovery phase of medium to large projects
Data Modeling: Influencing the overarching data strategy and vision for data pipelines and data products. Overseeing and governing the expansion of existing data architecture and the optimization of data query performance via best practices. Presenting and socializing data models to business and information technology stakeholders.
Data Architecture: Designing, implementing, and improving processes in data management.
Code development for medium- to large-scale, complex, cross-functional projects.
Enhancing designs to prevent recurrence of defects, ensuring on-time delivery and hand-offs. Interacting daily with the project manager to provide input on the project plan and providing leadership to the project team.
Developing innovation strategies, processes, and best practices by leading internal technical teams.
What you’ll bring:
You have expert-level technical skills and a proven track record of building highly scalable, performant, and reliable data pipelines using Spark, Hive, and Kafka.
You have expert-level coding skills in Python, Java, Spring, API design, Kafka, and databases (SQL, NoSQL)
You are skilled in data modeling and data migration protocols
You have experience with Trino, BigQuery, and Apache Pinot
You have experience with workflow orchestration and data integration tools such as Automic or Airflow
You are experienced in public cloud computing platforms (e.g., GCP)
You are experienced with the Gen AI ecosystem, including agentic frameworks
Any Graduate