What You’ll Do:
Design and manage scalable GCP data systems (BigQuery, Pub/Sub, Vertex AI, Composer)
Automate infrastructure provisioning with Terraform and CI/CD pipelines
Build and optimize data pipelines using Python and SQL
Implement monitoring and alerting with Grafana, Prometheus, and Datadog
Ensure high availability and drive performance tuning
Support AI/ML deployments and incident management
Requirements:
3+ years of experience in SRE, DevOps, or Data Engineering
Strong experience with GCP services (BigQuery, Pub/Sub, Vertex AI)
Proficiency in Python, SQL, Terraform, and CI/CD tooling
Knowledge of distributed systems and reliability engineering best practices
Bachelor's degree in any discipline