· Bachelor’s degree in Engineering, Computer Science, or a related field; a Master’s degree is a plus
· 8 years of relevant industry and functional experience in database and cloud-based technologies
· Experience building data ingestion pipelines for structured and unstructured data, for both storage and optimal retrieval
· Experience working with cloud data stores (specifically AWS) as well as NoSQL, graph, and vector databases
· Proficiency in Python, SQL, and PySpark
· Experience working with Databricks and Snowflake technologies
· Exposure to machine learning and AI concepts related to RAG architectures, LLMs, and vector data stores is a plus
· Experience with code repository and project management tools such as GitHub, JIRA, and Confluence
· Working experience with Continuous Integration and Continuous Deployment (CI/CD), with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace
· Highly innovative, with an aptitude for foresight, systems thinking, and design thinking, and a bias toward simplifying processes
· Detail-oriented individual with strong analytical, problem-solving, and organizational skills
· Ability to communicate clearly with both technical and business teams