- Design and implement Kafka-based event streaming solutions: topic architecture, partitioning strategy, consumer group patterns, ordering semantics, retention, and replay.
- Build and operate Confluent Platform components:
  - Schema Registry for schema management, compatibility, and evolution (Avro/JSON Schema/Protobuf).
  - Kafka Connect connectors (source and sink), including performance tuning, error handling, and dead-letter-queue (DLQ) patterns.
  - ksqlDB for streaming transformations, aggregations, and joins where appropriate.
  - Control Center for monitoring clusters, topics, throughput, and consumer lag.
- Implement security and governance: configure Role-Based Access Control (RBAC) for Kafka, Connect, Schema Registry, and ksqlDB resources; integrate with enterprise identity and least-privilege practices.
- Build resilient multi-cluster/multi-region patterns: use Cluster Linking for replication, hybrid/cloud migration, and disaster recovery scenarios (offset-preserving replication).
- Optimize cost and retention: apply Tiered Storage strategies to offload older data to object storage while keeping hot data on local disks.
- If using Confluent Cloud (managed Kafka):
  - Leverage managed services (connectors, managed Schema Registry, managed ksqlDB) and cloud-native scaling approaches.
  - Apply organization-, environment-, and cluster-level RBAC, plus enterprise security and networking patterns (encryption, BYOK where applicable).
  - Collaborate on advanced stream processing options (e.g., Confluent Cloud for Apache Flink, where in scope).
- Establish operational excellence: define SLOs/SLIs, alerting, runbooks, capacity planning, and incident response for streaming workloads; automate provisioning and configuration (Infrastructure as Code), CI/CD for connectors and stream processing artifacts, and standardized onboarding for producer and consumer teams.
- Mentor engineers and set platform standards: naming conventions, event contracts, versioning strategy, testing approach, and reference architectures.
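On the Connect side, the error handling and DLQ behavior mentioned above is driven by sink connector configuration. A sketch of the relevant `errors.*` properties (the connector name, class, and topic names are placeholders; DLQ routing applies to sink connectors only):

```json
{
  "name": "jdbc-sink-orders",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "errors.tolerance": "all",
    "errors.retry.timeout": "300000",
    "errors.retry.delay.max.ms": "30000",
    "errors.deadletterqueue.topic.name": "dlq.orders",
    "errors.deadletterqueue.topic.replication.factor": "3",
    "errors.deadletterqueue.context.headers.enable": "true",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true"
  }
}
```

With `errors.tolerance=all`, records that fail conversion or transformation are routed to the DLQ topic instead of killing the task, and the context headers record why each record was rejected.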
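The ordering semantics above follow from key-hash partitioning: records with the same key always map to the same partition, and each partition is consumed in order. A minimal sketch of that property (Kafka's real default partitioner uses murmur2; MD5 is only a stand-in here, and the key and partition count are hypothetical):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition, mimicking key-hash partitioning.

    Kafka's default partitioner uses murmur2; MD5 is a stand-in here
    to show the property that matters: the mapping is deterministic.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records for one key always land on one partition, so a consumer
# sees that key's events in production order.
p1 = partition_for(b"customer-42", 6)
p2 = partition_for(b"customer-42", 6)
assert p1 == p2
```

Note that changing the partition count remaps keys, which is why partition counts for keyed topics are usually sized up front rather than grown later.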
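Schema Registry's BACKWARD compatibility mode, for instance, requires that a new reader schema can decode data written with the previous schema; for Avro records that largely reduces to "every newly added field carries a default". A toy checker over simplified dict-shaped field lists (real Avro compatibility checking also covers type promotion, unions, and aliases):

```python
def backward_compatible(old_fields, new_fields):
    """Toy check: the new schema can read old data if every field it
    adds beyond the old schema has a default value. Real Avro rules
    also cover type promotion, unions, aliases, and more."""
    old_names = {f["name"] for f in old_fields}
    return all(
        "default" in f
        for f in new_fields
        if f["name"] not in old_names
    )

old = [{"name": "id", "type": "string"}]
# Adding an optional field with a default keeps old data readable.
ok = old + [{"name": "email", "type": ["null", "string"], "default": None}]
# Adding a required field without a default breaks the new reader
# on old records, so it would be rejected under BACKWARD mode.
bad = old + [{"name": "email", "type": "string"}]
```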
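Consumer lag, the core SLI for the monitoring and SLO work above, is simply the gap between each partition's log end offset and the group's committed offset. A minimal sketch with hypothetical offset maps (in practice these values come from the admin API or Control Center):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag = log end offset - committed offset.

    Offsets are keyed by (topic, partition) tuples; a partition with
    no committed offset is treated as fully lagging from offset 0.
    """
    return {
        tp: end - committed_offsets.get(tp, 0)
        for tp, end in end_offsets.items()
    }

end = {("orders", 0): 1500, ("orders", 1): 900}
committed = {("orders", 0): 1480, ("orders", 1): 900}
lag = consumer_lag(end, committed)
total = sum(lag.values())  # aggregate lag, a typical alerting input
```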