Key Responsibilities
● Analyze engineering, operational, and productivity data to uncover trends, risks, and opportunities.
● Design and implement data models that improve accessibility, structure, and long-term maintainability of engineering metrics.
● Build or enhance ETL pipelines to collect, transform, and export data from various systems (e.g., GitHub, Jira, security scanning tools, cost-management tools).
● Partner with stakeholders to define meaningful KPIs across engineering domains (e.g., reliability, security, velocity).
● Explore and implement GenAI tooling to support automation, summarization, and pattern detection in engineering workflows.
● Maintain data hygiene and enforce best practices in data governance and lineage within the API Engineering environment.
Required Skills and Experience
● Proven experience as a Data Analyst or Data Engineer, preferably in a software engineering or DevOps context.
● Strong SQL skills and experience with Python or another scripting language for data transformation and analysis.
● Hands-on experience working with APIs and integrating data across SaaS tools (e.g., Jira, GitHub, Datadog).
● Familiarity with dashboarding/visualization platforms such as Looker, Grafana, or Tableau.
● Demonstrated experience structuring unorganized or siloed data into actionable reporting models.
Desirable
● Experience designing and building ETL pipelines and data lakes or warehouses (e.g., Snowflake).
● Exposure to GenAI tooling and experience applying AI to engineering or operational workflows.
● Knowledge of modern data orchestration tools (e.g., Airflow, dbt).
Education: Any Graduate.