Job Description:
Employment Type: W2 only
Must-Have Qualifications:
8+ years of hands-on experience in data engineering, backend services, or large-scale data systems
Deep expertise in GCP technologies (BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Functions, IAM, Google Kubernetes Engine)
Strong programming skills in Python, SQL, and JavaScript (additional languages are a plus)
Experience with distributed computing, data processing frameworks (Apache Beam, Spark, etc.), and workflow orchestration (Cloud Composer, Airflow)
Proven ability to solve complex technical challenges and build scalable, cloud-native solutions
Nice-to-Have Skills:
Experience with LLMs, AI/ML model deployment, and advanced analytics in a cloud environment
Knowledge of CI/CD pipelines and Infrastructure-as-Code (IaC) tools such as Terraform
Familiarity with event-driven architectures and real-time data streaming solutions
Key Responsibilities:
Design, develop, and optimize data-intensive applications, APIs, and backend services on GCP
Build scalable data pipelines using BigQuery, Dataflow, Pub/Sub, and Cloud Storage
Implement LLM-powered solutions, machine learning pipelines, and real-time data processing workflows
Ensure high availability, performance, and security of cloud-based data platforms
Collaborate with cross-functional teams to drive data architecture decisions and engineering best practices
Lead technical discussions, mentor junior engineers, and contribute to engineering excellence
Education: Any Graduate