Description

Essential Responsibilities

  • Develop big data applications for Synchrony in the Hadoop ecosystem
  • Participate in the agile development process, including backlog grooming, coding, code reviews, testing, and deployment, as a Product Owner
  • Work with SonarQube for code quality analysis and build dashboards for all stakeholders
  • Work with Jenkins and CloudBees for CI/CD
  • Identify repeatable engineering processes and come up with common, reusable tools and frameworks
  • Introduce and drive best practices in the DevOps and data engineering space
  • Work with team members to achieve business results in a fast-paced and quickly changing environment
  • Work independently to develop analytic applications leveraging technologies such as Hadoop, NoSQL, in-memory data grids, Kafka, Spark, and Ab Initio
  • Test current processes and identify deficiencies
  • Plan, create and manage the test cases and test scripts
  • Identify process bottlenecks and suggest actions for improvement
  • Present test cases, test results, reports and metrics as required by the Office of Agile
  • Build and support Jenkins pipelines for CI/CD
  • Ensure the lineage of all data assets is properly documented in the appropriate enterprise metadata repositories
  • Assist with the creation and implementation of data quality rules
  • Investigate program quality and make improvements to achieve better data accuracy
  • Understand functional and non-functional requirements and prepare test data accordingly
  • Perform other duties as needed to ensure the success of the team and application, and ensure the team’s compliance with the applicable Data Sourcing, Data Quality, and Data Governance standards

Qualifications/Requirements

  • Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and 2 to 3 years of experience; in lieu of degree, High School Diploma/GED and minimum 4 to 5 years of Information Technology experience
  • Minimum of 2 years of hands-on experience writing shell scripts, complex SQL queries, Hive scripts, and Hadoop commands, and working with Git
  • Ability to write abstracted, reusable code components
  • Programming experience in at least one of the following languages: Scala, Java or Python
  • Analytical mindset
  • Willingness and aptitude to learn new technologies quickly
  • Superior oral and written communication skills
  • Ability to collaborate across teams of internal and external technical staff, business analysts, software support, and operations staff

Desired Characteristics

  • Familiar with Bitbucket, various IDEs, Jenkins, CloudBees, and SonarQube
  • Performance tuning experience
  • Exposure to the following Ab Initio tools: GDE (Graphical Development Environment); Co>Operating System; Control Center; Metadata Hub; Enterprise Meta>Environment; Enterprise Meta>Environment Portal; Acquire>It; Express>It; Conduct>It; Data Quality Environment; Query>It
  • Familiar with Ab Initio, Hortonworks/Cloudera, Zookeeper, Oozie and Kafka
  • Familiar with public cloud (AWS) data analytics services
  • Familiar with data management tools (e.g., Collibra)
  • Background in ETL, data warehousing, or data lakes
  • Strong business acumen including a broad understanding of Synchrony business processes and practices
  • Demonstrated ability to work effectively in an agile team environment
  • Financial industry or credit processing experience
  • Experience working on a geographically distributed team, managing onshore/offshore resources with shifting priorities
  • Previous experience working in a client-facing environment
  • Proficient in the maintenance of data dictionaries and other information in Collibra
  • Excellent analytical, organizational and influencing skills with a proven track record of successfully executing on assignments

Education

Bachelor's degree