Description

Key Responsibilities:

  • Develop and maintain ETL pipelines using Cloud Dataflow, Cloud Dataproc, and Apache Beam.
  • Design and optimize BigQuery data models for efficient querying and analytics.
  • Implement data ingestion from various sources using Cloud Pub/Sub and Cloud Storage.
  • Ensure data quality, integrity, and security across all systems.
  • Monitor and troubleshoot data pipeline performance issues.
  • Automate workflows and data processes for scalability.
  • Collaborate with cross-functional teams to support business intelligence and analytics needs.

Qualifications:

  • 5+ years of experience in data engineering or related roles.
  • Strong proficiency in SQL, Python, or Java for data processing.
  • Hands-on experience with GCP services such as BigQuery, Dataflow, and Pub/Sub.
  • Knowledge of data modeling, warehousing, and ETL best practices.
  • Experience with CI/CD pipelines and version control (Git).
  • Strong problem-solving skills and ability to work in a fast-paced environment.

Education

Any graduate (bachelor's degree in any discipline)