Key Skills: Kafka, Azure, Linux
Roles and Responsibilities:
- Design, build, and manage Kafka clusters using the Confluent Platform.
- Develop and deploy Kafka producers, consumers, connectors, and streaming applications.
- Implement Kafka Connect, Kafka Streams, ksqlDB, and other Confluent components as needed.
- Monitor, tune, and troubleshoot Kafka clusters to ensure high availability and optimal performance.
- Collaborate with developers and data engineers to integrate Kafka with upstream and downstream systems.
- Develop automation scripts for Kafka cluster deployment and configuration using tools like Ansible, Terraform, or custom scripts.
- Implement data security, authentication, and authorization using Confluent RBAC and Kafka ACLs.
- Ensure data governance, monitoring, and compliance for Kafka topics and data pipelines.
- Participate in capacity planning and disaster recovery strategies for Kafka infrastructure.
- Maintain clear and up-to-date documentation for Kafka topics, configurations, and operational procedures.
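Several of the responsibilities above (tuning consumers, capacity planning) hinge on understanding how Kafka divides a topic's partitions among the members of a consumer group. The sketch below is a simplified, stdlib-only model of Kafka's "range" assignor for a single topic; it is illustrative only and does not use the Confluent client API, and the function name `range_assign` is invented for this example.

```python
def range_assign(partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Simplified model of Kafka's "range" partition assignor for one topic.

    Partitions are handed out in contiguous blocks to consumers sorted by
    member id; the first (partitions % consumers) members get one extra
    partition. Real Kafka computes this broker/client-side per topic.
    """
    members = sorted(consumers)
    per = partitions // len(members)       # base partitions per consumer
    extra = partitions % len(members)      # leftover partitions to spread
    assignment, start = {}, 0
    for i, member in enumerate(members):
        count = per + (1 if i < extra else 0)
        assignment[member] = list(range(start, start + count))
        start += count
    return assignment

# 6 partitions across 4 consumers: the first two members get 2 each.
print(range_assign(6, ["c1", "c2", "c3", "c4"]))
# → {'c1': [0, 1], 'c2': [2, 3], 'c3': [4], 'c4': [5]}
```

This is why adding consumers beyond the partition count leaves some members idle: there are no partitions left to assign them.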
Skills Required:
- Strong hands-on experience with Apache Kafka and Confluent Platform in production environments.
- In-depth understanding of Kafka internals, including partitions, brokers, replication, consumer groups, and retention policies.
- Experience deploying, configuring, and tuning Kafka clusters for high throughput and low latency.
- Proficiency in Kafka Connect, Kafka Streams, and ksqlDB for building real-time data pipelines.
- Good working knowledge of Azure cloud services, especially for integrating Kafka with cloud-native tools.
- Familiarity with Linux/Unix environments for deploying and managing Kafka services and scripts.
- Experience using automation tools such as Ansible, Terraform, or Bash/Python scripts for infrastructure provisioning and configuration.
- Strong understanding of Kafka security, including RBAC, SSL encryption, SASL authentication, and ACL management.
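The "partitions" and "retention policies" internals called out above rest on one core idea: a producer maps each record key deterministically to a partition, which is what preserves per-key ordering. A minimal sketch of that mapping, using stdlib CRC32 as a stand-in for the murmur2 hash real Kafka producers use (the function `partition_for_key` is invented for this illustration):

```python
import zlib

def partition_for_key(key: bytes, num_partitions: int) -> int:
    """Sketch of Kafka's key-based partitioning.

    Real producers hash the serialized key with murmur2; CRC32 is
    substituted here only because it is in the stdlib and deterministic.
    The point illustrated: the same key always maps to the same partition.
    """
    return zlib.crc32(key) % num_partitions

# Same key -> same partition, so ordering per key is preserved.
assert partition_for_key(b"order-42", 6) == partition_for_key(b"order-42", 6)

# Different keys spread across the available partitions.
spread = {partition_for_key(f"user-{i}".encode(), 6) for i in range(100)}
print(sorted(spread))
```

Because the mapping depends on `num_partitions`, increasing a topic's partition count remaps keys, which is why partition counts are part of capacity planning rather than something changed casually.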
Education: Bachelor's degree in Computer Science, Information Technology, or a related field