You and your teammates will have the opportunity to work collaboratively with other developers to design and deliver the data infrastructure (warehouse, schema, pipelines, OLTP, ETL) that supports the client's portfolio of applications and analytics tools, and we need your help building and maintaining the supporting services for that infrastructure.
You will use your software development skills in an agile environment to develop new data pipelines and APIs, and your experience with continuous delivery will support the end-to-end lifecycle: design, deployment, testing, operations, monitoring, and support. You will also have the chance to interact directly with the people who use your applications.
Languages and Frameworks:
You should be proficient in Python, with at least three years of experience delivering Python data projects to production. We also welcome candidates with experience in other languages such as Ruby or PHP; JavaScript in particular is a plus. Experience with the Azure data stack (Azure Data Factory, Azure Data Lake, Tabular Model / DAX, Microsoft Fabric) would be great, but if you have strong experience with complementary open source or cloud data technologies such as Hadoop, AWS Glue, Spark, or Airflow, we would love to talk to you.
Databases:
We’d like to see real SQL and NoSQL production experience. We’re particularly interested in your expertise with SQL Server, Azure SQL, Azure Synapse, Cosmos DB, MongoDB, and Elasticsearch. Experience talking to customers and modeling data would be a plus. If you bring specific skills in Kafka, Cassandra, or any cloud-based data warehouse, let’s talk.
Tools:
You should have experience maintaining projects with a distributed version control system such as Git and GitHub. Experience with automated deployment of pipelines and schemas into production environments would be great.
Any Graduate