Description

Responsibilities

  • Responsible for writing SQL, Python, and PySpark scripts used in API calls to pull data from multiple disparate systems and databases.
  • Gather and analyze requirements; convert functional requirements into concrete technical tasks and provide reasonable effort estimates.
  • Directly impact the business by ensuring the quality of work delivered by self and others; work closely with other teams as a partner.
  • Work proactively, independently, and with global teams to address project requirements, and articulate issues and challenges with enough lead time to address project delivery risks.
  • Provide expertise in technical analysis and solve technical issues during project delivery.
  • Contribute expertise, embrace emerging trends, and provide overall guidance on best practices across all of the customer's business and technology groups.
  • Assist with cleaning up the data so it is in a readily accessible format for the BI systems.

Required Skills

  • Strong background in Big Data, especially Hadoop, Spark, Java, and related technologies.
  • Ability to express business concepts effectively, both verbally and in writing.
  • Excellent interpersonal, organizational, and communication skills, with strong attention to detail.
  • Ability to work in a fast-paced environment where continuous innovation is desired, and ambiguity is the norm.
  • Ability to communicate effectively via multiple channels (verbal, written, etc.) with technical and non-technical staff.
  • Senior-level knowledge of Java, Apache Spark, Kafka, Hive/Impala, Elasticsearch, and related Big Data technologies.
  • Expert knowledge of Agile Methodologies.
  • Advanced training in project management methodologies, data modeling or other related IT disciplines.
  • Knowledge of cloud platforms is preferred.

Required Experience

  • 7+ years of IT, Business, or Data Analyst experience.
  • 5+ years of hands-on experience with Big Data, including Hadoop/Hive, and visualization tools such as Tableau, Power BI, and Business Objects.
  • 5+ years of experience writing complex SQL statements, pulling and combining data, and performing analytics using query development tools such as Toad, SQL Developer, and HQL.
  • Demonstrated experience and competency in eliciting requirements and producing requirement artifacts (functional specifications, use cases, user stories, business process flows, data flows, storyboards, etc.).
  • Must have 4+ years of hands-on experience in the Big Data/Hadoop ecosystem, with expertise in high-volume data processing (real-time and batch) and performance optimization.
  • Experience working with Spring Boot and microservices-based architecture frameworks.
  • Experience with SCMs such as Git and tools such as JIRA.
  • Strong fundamentals in systems analysis, design, and architecture, as well as unit testing and other SDLC activities.

Education Requirements

  • Bachelor’s Degree in Computer Science, Computer Engineering or a closely related field.
