Job Description:
Responsibilities
- Data Integration - Provides data integration services for all batch/real-time data movement in the data warehouse, data platforms (data lake), and dependent data marts (cloud/on-prem).
- Data Analysis - Sources, compiles, and interprets data. Performs analysis for accuracy and efficiency, and effectively communicates analysis output.
- Technical Proficiency - Provides technical support by performing coding, ensuring processes run smoothly, and working to continuously improve processing capabilities. Works closely with technical and operations teams to support their business objectives.
- System Testing - Thoroughly tests data integration, data warehouse, and data mart processes to ensure accuracy, completeness, and overall efficiency. Evaluates and conveys testing results. Performs coding and assists in implementing system modifications and enhancements.
- Ensures that new solutions meet defined technical, functional, and service-level requirements and standards, and manages the end-to-end lifecycle for new data migration, data integration, and data services-oriented solutions.
- Additionally, the Senior Data Software Engineer will engage with partners in testing, release management, and operations to ensure the quality of code development, deployment, and post-production support.
Preferred areas of experience:
- 5+ years of experience designing and building data pipelines/ETL frameworks with cloud-based data technologies (AWS cloud services preferred)
- 8+ years of experience creating and maintaining ETL processes and architecting complex data pipelines, including knowledge of data modeling techniques and high-volume ETL/ELT design
- 3+ years of experience with data warehouse/data lake solutions using Snowflake and AWS Redshift
- 5+ years of experience with ETL and data ingestion tools such as Informatica IICS, AWS Glue, and SAP BODS
- Experience with programming languages such as ANSI SQL, Python, Scala, and/or Java
- Extensive knowledge of data warehouse principles, design, and data modeling concepts
- 1+ years of experience designing data privacy constructs (data tokenization, encryption, and data access controls) on a public cloud data lake
- 3+ years of experience with data engineering-related AWS services such as Lake Formation, Glue, S3, Lambda, DynamoDB, and other AWS accelerators
- 3+ years of experience with big data technologies such as Hadoop, Hive, Spark, NoSQL databases, Kafka, and APIs
- Experience leading teams of engineers across geographies
- Uses agile engineering practices and various data development technologies to rapidly develop creative and efficient data products
- Identifies inefficiencies, optimizes processes and data flows, and makes recommendations for improvements
- Communicates with developers across teams, both for ad hoc problem solving and for check-ins/discussions with other initiatives