Description

Roles & Responsibilities

 

  • Lead GCP solution design and scoping.
  • Create detailed target-state technical architecture and design blueprints.
  • Conduct full technical discovery, identifying pain points, business and technical requirements, and "as is" and "to be" scenarios.
  • Migrate data and data pipelines to GCP, establishing best practices, guidelines, and the right set of architectures.

 

Requirements

 

Role: Technology Analyst
Qualification: B.Tech
Job description
  • Minimum of 5 years of experience designing and building production data pipelines, from data ingestion to consumption, within a hybrid big data architecture.
  • Expertise in at least one programming language: Scala, Java, or Python.
  • Experience with data lake and data warehouse ETL design and build, and with data migration from legacy systems such as Hadoop, Exadata, Oracle, Teradata, or Netezza.
  • Deep understanding of GCP services, including:
    • Dataproc, Composer, Airflow, and WireSafe
    • GCP data transfer tools
    • Strong working knowledge of BigQuery
  • Knowledge of GCP CI/CD and SDKs.
  • Strong in PySpark, including structured streaming and batch processing patterns (see the sketch after this list).
  • Basic knowledge of Java.
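For illustration only, the following is a minimal PySpark sketch (not part of this posting) contrasting the batch and structured streaming patterns mentioned above; the bucket paths, Kafka broker, and topic name are hypothetical placeholders.

```python
# Minimal sketch of batch vs. structured streaming patterns in PySpark.
# Paths, broker address, and topic name below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-patterns-sketch").getOrCreate()

# --- Batch pattern: read a bounded dataset, transform, write once ---
batch_df = spark.read.parquet("gs://example-bucket/raw/orders/")  # hypothetical path
daily_counts = batch_df.groupBy("order_date").agg(F.count("*").alias("orders"))
daily_counts.write.mode("overwrite").parquet("gs://example-bucket/curated/daily_orders/")

# --- Structured streaming pattern: read an unbounded source, aggregate, write continuously ---
stream_df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
)
events = stream_df.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
windowed = (
    events.withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "5 minutes"))
    .count()
)
query = (
    windowed.writeStream.outputMode("update")
    .format("console")  # in practice this could target BigQuery or Cloud Storage
    .option("checkpointLocation", "gs://example-bucket/checkpoints/orders/")
    .start()
)
# query.awaitTermination()  # blocks until the stream stops; omitted in this sketch
```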
Work Location (with zip code): Any location in India (preferably Bhubaneswar, Pune, or Hyderabad)
3 Must-have skills
  • GCP: BigQuery, Cloud Storage, Bigtable, Dataflow, Dataproc, Cloud Composer, Cloud Pub/Sub, Data Fusion, etc. (a minimal BigQuery sketch follows this list).
  • Expertise in at least one programming language: Scala, Java, or Python.
  • Hadoop, Hive, HDFS, HBase, Spark
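For illustration only, a minimal sketch (not part of this posting) of running a BigQuery query from Python with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical placeholders.

```python
# Minimal sketch of querying BigQuery from Python.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project ID

sql = """
    SELECT order_date, COUNT(*) AS orders
    FROM `example-project.sales.orders`   -- hypothetical table
    GROUP BY order_date
    ORDER BY order_date
"""

query_job = client.query(sql)     # submits the query job
for row in query_job.result():    # waits for completion and iterates result rows
    print(row["order_date"], row["orders"])
```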
3 Responsibilities
  • Lead GCP solution design and scoping.
  • Create detailed target-state technical architecture and design blueprints.
  • Conduct full technical discovery, identifying pain points, business and technical requirements, and "as is" and "to be" scenarios.
  • Migrate data and data pipelines to GCP, establishing best practices, guidelines, and the right set of architectures.

Education

Any Graduate