Description

Mandatory skills: Big Data, Python, Spark, Databricks, SQL, Azure, CI/CD, Data Warehouse

 

Coding in Python is mandatory during the interview.

 

Job Description:
  • Build transformation and load processes that ingest Kafka-streamed data into a Snowflake database (see the sketch after this list).
  • Analyze layouts and SQL design requirements.
  • Define metadata for identifying and ingesting source files.
  • Create and update source-to-target mapping lineage, transformation rules, and data definitions.
  • Identify PII details and conform to standard naming conventions.
  • Coordinate data integration, conformity, quality, integrity, and consolidation efforts.
  • Analyze source-to-target field-level mapping for data sources.
  • Design and implement transformation rules within the transformation framework.
  • Provide support for resolving data quality issues.
  • Coordinate with technical teams, SMEs, and architects on enhancements and change requests.
  • Provide training and create detailed documentation, implementation plans, and project trackers.
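
For orientation only, the sketch below shows one way the first responsibility might look in PySpark, assuming Spark Structured Streaming (e.g., on Databricks) and the Snowflake Spark connector. The topic name, event schema, target table, and all connection options are hypothetical placeholders, not details from this posting; the actual layouts come from the SQL design and source-to-target mapping work described above.

    # Illustrative sketch only: stream a Kafka topic into a Snowflake table.
    # Topic, schema, table, and connection options are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("kafka-to-snowflake").getOrCreate()

    # Assumed layout of the source events (hypothetical).
    event_schema = StructType([
        StructField("customer_id", StringType()),
        StructField("event_type", StringType()),
        StructField("event_ts", TimestampType()),
    ])

    # Placeholder options for the Snowflake Spark connector.
    sf_options = {
        "sfURL": "<account>.snowflakecomputing.com",
        "sfUser": "<user>",
        "sfPassword": "<password>",
        "sfDatabase": "<database>",
        "sfSchema": "<schema>",
        "sfWarehouse": "<warehouse>",
    }

    # Read the raw Kafka stream and apply a simple field-level transformation.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "<broker>:9092")
        .option("subscribe", "<topic>")
        .load()
    )
    events = (
        raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    def write_to_snowflake(batch_df, batch_id):
        # The Snowflake connector writes in batch mode, so the stream is
        # delivered micro-batch by micro-batch via foreachBatch.
        (
            batch_df.write.format("net.snowflake.spark.snowflake")
            .options(**sf_options)
            .option("dbtable", "<target_table>")
            .mode("append")
            .save()
        )

    query = (
        events.writeStream.foreachBatch(write_to_snowflake)
        .option("checkpointLocation", "/tmp/checkpoints/kafka_to_snowflake")
        .start()
    )
    query.awaitTermination()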


Prior experience must include:

  • 4+ years conducting metadata modeling, data transformation processes, and data profiling
  • 4 years of experience in data quality, integrity, and data consolidation
  • 4 years working with Snowflake, Oracle, MS SQL, SparkSQL, Hive, or Impala
  • 3 years designing and optimizing data transformation processes
  • 3 years of experience with software development life cycle (SDLC) processes, including Agile methodologies
  • 3 years working with technical teams, stakeholders, and project managers to provide instructions and demonstrations on software delivery
  • 2 years of project management and documentation using GitHub, JIRA, and Confluence

Education

Any Graduate