Sr. Middleware Engineer – Kafka
Client - Broadridge, NJ/NY
Primary Responsibilities
Architecture & Design
Architect, design, and implement Kafka-based solutions using Confluent Cloud and Confluent Platform, ensuring they are highly scalable, resilient, and future-proof.
Provide technical leadership in designing event-driven architectures that integrate with on-prem systems and multiple cloud environments (AWS, Azure, or Google Cloud Platform).
Platform Management
Oversee administration and operational management of Confluent Platform components: Kafka brokers, Schema Registry, Kafka Connect, ksqlDB, and REST Proxy.
Develop and maintain Kafka producers, consumers, and streams applications to support real-time data streaming use cases.
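As a hedged illustration of the producer-side work above, the sketch below builds a baseline configuration for durable delivery (the property names are standard Kafka producer settings; the function name and default choices are illustrative, not a prescribed Broadridge standard):

```python
def reliable_producer_config(bootstrap_servers: str) -> dict:
    """Baseline Kafka producer settings for durable, de-duplicated delivery."""
    return {
        "bootstrap.servers": bootstrap_servers,  # broker endpoints
        "acks": "all",                  # wait for all in-sync replicas
        "enable.idempotence": "true",   # broker de-duplicates retried sends
        "retries": "2147483647",        # retry transient errors indefinitely
        "compression.type": "lz4",      # trade a little CPU for bandwidth
    }

if __name__ == "__main__":
    cfg = reliable_producer_config("broker1:9092,broker2:9092")
    print(cfg["acks"])
```

In practice these key/value pairs would be passed to the Java, Python, or Scala client constructor; the appropriate trade-offs (e.g. `acks=1` for latency-sensitive, loss-tolerant streams) vary by use case.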
Deployment & Automation
Lead the deployment and configuration of Kafka topics, partitions, and replication strategies in both on-prem and cloud setups.
Automate provisioning, deployment, and maintenance tasks with Terraform, Chef, Ansible, Jenkins, or similar CI/CD tools.
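Partition counts are a recurring deployment decision. One common rule of thumb, sketched below under assumed inputs (the per-partition throughput must come from benchmarking the actual cluster; the `headroom` factor is an illustrative default):

```python
import math

def required_partitions(target_mb_per_s: float,
                        per_partition_mb_per_s: float,
                        headroom: float = 1.5) -> int:
    """Estimate partition count: target throughput divided by the
    benchmarked per-partition throughput, padded for growth."""
    return math.ceil(target_mb_per_s * headroom / per_partition_mb_per_s)

if __name__ == "__main__":
    # 100 MB/s target, 10 MB/s measured per partition, 1.5x headroom
    print(required_partitions(100, 10))
```

A calculation like this can feed directly into the Terraform or Ansible variables that drive automated topic provisioning, keeping sizing decisions reviewable in version control.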
Monitoring & Troubleshooting
Implement robust monitoring, alerting, and observability frameworks using Splunk, Datadog, Prometheus, or similar tools for both Confluent Cloud and on-prem clusters.
Proactively troubleshoot Kafka clusters, diagnose performance issues, and conduct root cause analysis for complex, distributed environments.
Performance & Capacity Planning
Conduct capacity planning and performance tuning to optimize Kafka clusters; ensure they can handle current and future data volumes.
Define and maintain SLA/SLI metrics to track latency, throughput, and downtime.
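The capacity and SLA work above boils down to a few recurring calculations. A hedged sketch (the 10% overhead factor and 30-day window are illustrative assumptions, not fixed policy):

```python
def retained_storage_mb(mb_per_s: float, retention_hours: float,
                        replication_factor: int,
                        overhead: float = 1.1) -> float:
    """Disk needed to retain a topic: ingest rate x retention window
    x replication, plus ~10% for indexes and segment slack."""
    return mb_per_s * 3600 * retention_hours * replication_factor * overhead

def downtime_budget_minutes(slo_pct: float, days: int = 30) -> float:
    """Minutes of allowed downtime per window for a given availability SLO."""
    return days * 24 * 60 * (1 - slo_pct / 100.0)

if __name__ == "__main__":
    # 10 MB/s, 7-day retention, RF=3 -> roughly 19 TB across the cluster
    print(retained_storage_mb(10, 168, 3))
    print(downtime_budget_minutes(99.9))  # monthly error budget at 99.9%
```

Numbers like these make SLA conversations concrete: a 99.9% monthly target allows roughly 43 minutes of downtime, which directly constrains how upgrades and broker restarts are scheduled.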
Security & Compliance
Ensure secure configuration of all Kafka and Confluent components, implementing best practices for authentication (Kerberos/OAuth), encryption (SSL/TLS), and access control (RBAC).
Collaborate with InfoSec teams to ensure compliance with internal policies and industry regulations (GDPR, SOC 2, PCI DSS, etc.).
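As a sketch of the client-side security setup described above, the helper below assembles the standard Kafka properties for a Kerberos-over-TLS connection (the property names are real Kafka client settings; the function, paths, and mechanism choice are illustrative assumptions, since the exact values depend on the cluster's listener configuration):

```python
def secure_client_config(bootstrap_servers: str,
                         truststore_path: str,
                         keystore_path: str = None) -> dict:
    """Client properties for SASL over TLS (Kerberos via GSSAPI)."""
    cfg = {
        "bootstrap.servers": bootstrap_servers,
        "security.protocol": "SASL_SSL",          # TLS transport + SASL auth
        "sasl.mechanism": "GSSAPI",               # Kerberos
        "ssl.truststore.location": truststore_path,  # trusted broker CAs
    }
    if keystore_path:
        # Optional client certificate for mutual TLS
        cfg["ssl.keystore.location"] = keystore_path
    return cfg

if __name__ == "__main__":
    cfg = secure_client_config("broker1:9093", "/etc/kafka/truststore.jks")
    print(cfg["security.protocol"])
```

With RBAC enabled on Confluent Platform, these transport settings are only half the picture; role bindings on topics and consumer groups supply the access-control layer.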
Cross-Functional Collaboration
Work with DevOps, Cloud, Application, and Infrastructure teams to define business requirements and keep data streaming solutions aligned with them.
Provide guidance and support during platform upgrades, expansions, and new feature rollouts.
Continuous Improvement
Stay current with Confluent Platform releases and Kafka community innovations.
Drive continuous improvement by recommending new tools, frameworks, and processes to enhance reliability and developer productivity.
Qualifications
Technical Expertise
5+ years of hands-on experience with Kafka, including 2+ years focused on Confluent Cloud and Confluent Platform.
Deep knowledge of Kafka Connect, Schema Registry, Control Center, ksqlDB, and other Confluent components.
Experience architecting and managing hybrid Kafka solutions in on-prem and cloud (AWS, Azure, Google Cloud Platform).
Advanced understanding of event-driven architecture and the real-time data integration ecosystem.
Strong programming/scripting skills (Java, Python, Scala) for Kafka-based application development and automation tasks.
DevOps & Automation
Hands-on experience with Infrastructure as Code (Terraform, CloudFormation) for Kafka resource management in both cloud and on-prem.
Familiarity with Chef, Ansible, or similar configuration management tools to automate deployments.
Skilled in CI/CD pipelines (e.g., Jenkins) and version control (Git) for distributed systems.
Monitoring & Reliability
Proven ability to monitor and troubleshoot large-scale, distributed Kafka environments using Splunk, Datadog, Prometheus, or similar tools.
Experience with performance tuning and incident management to minimize downtime and data loss.
Security & Compliance
Expertise in securing Kafka deployments, including Kerberos and SSL configurations.
Understanding of IAM best practices, network security, encryption, and governance in hybrid environments.
Leadership & Collaboration
Demonstrated experience leading platform upgrades, migrations, and architecture reviews.
Excellent communication skills, with the ability to articulate complex technical concepts to diverse audiences (developers, architects, executives).
Comfortable collaborating with cross-functional teams—product owners, system engineers, security, and business stakeholders.
Education & Preferred Experience
Bachelor’s or Master’s degree in Computer Science, Information Systems, or related field (or equivalent experience).
Experience with container orchestration (Docker/Kubernetes) is a plus.