Job Description:
-Experience leveraging open-source tools, predictive analytics, machine learning, advanced statistics, and other data techniques to perform basic analyses.
-Demonstrated basic knowledge of statistical analytical techniques, coding, and data engineering.
-Experience developing and configuring dashboards is a plus.
-Demonstrated sound judgement in escalating issues to the project team.
-High proficiency in Python/Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi), and SQL.
-Curiosity, creativity, and excitement for technology and innovation.
-Demonstrated quantitative and problem-solving abilities.
-Expert proficiency in Python/Scala, Spark (including job tuning), SQL, and Hadoop platforms to build Big Data products & platforms.
-At least 5 years leading collaborative work on complex engineering projects in an Agile setting, e.g. Scrum.
-Extensive data warehousing/data lake development experience, with strong data modelling and data integration skills.
-Good SQL and higher-level programming language skills, with solid knowledge of data mining, machine learning algorithms, and tools.
-Strong hands-on experience in Analytics & Computer Science.
-Experience building and deploying production-level data-driven applications, data processing workflows/pipelines, and/or machine learning systems at scale in Java, Scala, or Python, delivering analytics across all phases: data ingestion, feature engineering, modelling, tuning, evaluation, monitoring, and presentation.
Any Graduate