Description

Skills Required:

Google Kubernetes Engine (GKE); GCP; ETL - Big Data / Data Warehousing; Cloud Networking; Machine Learning; Kubernetes; OpenShift; Postgres / PostgreSQL; Git (GitHub, GitLab, BitBucket, SVN); Agile Champion; JIRA; Rally; API; Big Data; Python; Firewall; Functional Testing; Kafka


Job Description:

  • Strong knowledge of searching logs for troubleshooting and anomaly detection; ability to implement comprehensive monitoring and logging solutions with GCP Cloud Logging and Cloud Monitoring
  • Strong scripting skills, with proficiency in Shell or Python
  • Experience with configuration management and Infrastructure as Code (IaC) using Terraform
  • Experience with Git and GitHub, JavaScript, and regular expressions (RegEx)
  • Experience with large-scale distributed systems and architecture (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment
  • Understanding of core cloud concepts (storage, compute, networking)
  • Proficiency in developing and maintaining technical documentation and runbooks
  • Automating GCP operations using scripting languages like Python or Bash and GCP SDKs or APIs.
  • Setting up and administering GCP databases such as Bigtable, Cloud SQL, and Cloud Spanner.
  • Developing and maintaining CI/CD pipelines for automated code deployment using Google Cloud Build, Source Repositories, and Container Registry.
  • Working with containers and orchestrating them using Google Kubernetes Engine (GKE), including cluster management and deployment strategies.
  • Optimizing GCP costs and resource usage, providing cost estimates and reports, and implementing cost-saving strategies.
  • Collaborating with development teams to architect and support application deployment strategies that leverage GCP services like App Engine, Cloud Functions, and Cloud Run.
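The log-searching and anomaly-detection skill in the first bullet can be illustrated with a minimal standard-library Python sketch. The log format, service names, and error threshold below are assumptions for illustration only; in practice this role would query Cloud Logging directly rather than scan an in-memory list:

```python
import re
from collections import Counter

# Hypothetical log lines in a syslog-like shape (timestamp, level,
# service, message). These names and values are illustrative, not
# taken from any real system.
LOG_LINES = [
    "2024-05-01T10:00:01Z INFO  payment-svc request completed",
    "2024-05-01T10:00:02Z ERROR payment-svc upstream timeout",
    "2024-05-01T10:00:03Z ERROR payment-svc upstream timeout",
    "2024-05-01T10:00:04Z INFO  auth-svc token issued",
    "2024-05-01T10:00:05Z ERROR auth-svc invalid credential",
]

# One regex captures each field so entries can be filtered by level
# and grouped by service.
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)\s+(?P<msg>.*)$"
)

def error_counts(lines):
    """Count ERROR entries per service."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

def flag_anomalies(counts, threshold=2):
    """Flag services whose error count meets an assumed threshold."""
    return sorted(s for s, n in counts.items() if n >= threshold)
```

With the sample data above, `error_counts` tallies two errors for `payment-svc` and one for `auth-svc`, and `flag_anomalies` surfaces only `payment-svc` at the default threshold.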

Education

Any Graduate