About the Role:
We are seeking a highly skilled Software Engineer with deep expertise in PySpark, ETL development, and Data Warehousing/Business Intelligence (DW/BI) to support complex, large-scale data initiatives. This contingent role involves close collaboration with client teams to design and implement robust data solutions that drive financial insight and operational efficiency.
Responsibilities:
- Lead the design, development, and optimization of scalable ETL pipelines using PySpark, AWS S3, and Dremio (see the illustrative sketch after this list).
- Support modernization efforts for ProfitView and other financial attribution systems.
- Engineer solutions for data ingestion, transformation, and loading into data lakes and data warehouses.
- Work with structured and unstructured data from diverse sources to enable advanced analytics.
- Collaborate cross-functionally with BI developers, data analysts, and business stakeholders to gather and translate data requirements.
- Ensure high standards of data quality, integrity, and governance across all pipelines.
- Monitor, troubleshoot, and resolve performance issues in data workflows.
- Participate in code reviews, testing, and deployment processes.
- Document technical designs, data flows, and architectural decisions.
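To give candidates a concrete sense of the stack, the following is a minimal, illustrative PySpark sketch of the extract-transform-load pattern described above. The bucket paths, app name, and column names are hypothetical placeholders, not actual client resources.

```python
# Illustrative only: all paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw records from an S3 landing zone
# (assumes the cluster is configured with the hadoop-aws S3A connector).
raw = spark.read.parquet("s3a://example-landing-zone/trades/")

# Transform: basic cleansing and enrichment.
curated = (
    raw.dropDuplicates(["trade_id"])
       .filter(F.col("trade_date").isNotNull())
       .withColumn("notional_usd", F.col("quantity") * F.col("price"))
)

# Load: write partitioned Parquet to a curated zone, where query
# engines such as Dremio can pick it up.
(curated.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3a://example-curated-zone/trades/"))
```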
Minimum Qualifications:
- 5+ years of software engineering experience, with a focus on data engineering and analytics platforms.
- Proven experience in PySpark, ETL development, and DW/BI projects.
- Strong understanding of financial attribution, slowly changing dimensions (SCD), booking/referring agreements, and source-of-record (SOR) onboarding (an illustrative SCD sketch follows this list).
- Demonstrated ability to work on complex, multi-faceted projects with strategic impact.
- Excellent communication and collaboration skills.
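For candidates less familiar with the SCD terminology above, this minimal sketch illustrates the Type 2 pattern: expiring a changed dimension row and appending a new current version. All table, path, and column names are hypothetical, and a full pipeline would also handle unchanged rows and brand-new keys.

```python
# Illustrative SCD Type 2 sketch: all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-example").getOrCreate()

dim = spark.read.parquet("s3a://example-warehouse/dim_account/")       # current dimension
updates = spark.read.parquet("s3a://example-landing-zone/accounts/")   # latest snapshot

# Identify current dimension rows whose tracked attribute changed.
changed = (dim.filter(F.col("is_current"))
              .join(updates, "account_id")
              .filter(dim["segment"] != updates["segment"]))

# Expire the outgoing version of each changed row...
expired = (changed.select(dim["*"])
                  .withColumn("valid_to", F.current_date())
                  .withColumn("is_current", F.lit(False)))

# ...and append a new current version carrying the updated attribute.
new_versions = (changed.select(updates["*"])
                       .withColumn("valid_from", F.current_date())
                       .withColumn("valid_to", F.lit(None).cast("date"))
                       .withColumn("is_current", F.lit(True)))

# A complete job would union expired and new_versions with the
# untouched rows and rewrite the dimension table.
```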
Preferred Qualifications:
- Experience with cloud platforms (e.g., AWS), data lake architectures, and modern BI tools.
- Familiarity with Dremio or similar query acceleration platforms.
- Background in financial services or enterprise-scale data environments.