Job Description:
· Deep hands-on knowledge of big data engineering in a cloud environment, including:
· Excellent knowledge of SQL in a large-scale data warehouse or data lakehouse environment such as Spark, Databricks, or Presto/Athena/Trino
· Experience working in a cloud environment, preferably AWS, using tools such as EMR/Spark, Kafka, Databricks, S3, and EC2
· Strong knowledge of database and dimensional modeling, as well as data integration tools
· Experience writing scripts in languages such as Python, as well as shell scripts in a Linux environment
· Can-do attitude, hands-on approach, and a passion for data
· Experience in defining big data architecture
· Bachelor’s degree in Information Systems, Industrial Engineering, or Computer Science
Bonus Points:
· Experience with risk/fraud/cyber systems
· Experience working in a multi-national high-tech environment
· Some knowledge of Data Science/Machine Learning
· Knowledge of or experience with Scala and Java
· Some experience with stream processing or near real-time data ingestion
· Some knowledge of graph databases