Sr. Middleware Engineer – Kafka

New York, NY, US • Posted 4 hours ago • Updated 4 hours ago
Contract (W2 or Corp-to-Corp) • No Travel Required • On-site • Depends on Experience

Job Details

Skills

  • Access Control
  • Amazon Web Services
  • Apache Kafka
  • Cloud Computing
  • Data Integration
  • DevOps
  • Google Cloud Platform
  • Middleware
  • Network Security
  • OAuth
  • Migration
  • Python

Summary

Role: Sr. Middleware Engineer – Kafka

Location: NJ/NY (Onsite)

Duration: Long term

 

Requires candidates with 10+ years of experience.

Primary Responsibilities:

  • Architecture & Design
    • Architect, design, and implement Kafka-based solutions using Confluent Cloud and Confluent Platform, ensuring they are highly scalable, resilient, and future-proof.
    • Provide technical leadership in designing event-driven architectures that integrate with on-prem systems and multiple cloud environments (AWS, Azure, or Google Cloud Platform).
  • Platform Management
    • Oversee administration and operational management of Confluent Platform components: Kafka brokers, Schema Registry, Kafka Connect, ksqlDB, and REST Proxy.
    • Develop and maintain Kafka producers, consumers, and streams applications to support real-time data streaming use cases.
  • Deployment & Automation
    • Lead the deployment and configuration of Kafka topics, partitions, and replication strategies in both on-prem and cloud setups.
    • Automate provisioning, deployment, and maintenance tasks with Terraform, Chef, Ansible, Jenkins, or similar CI/CD tools.
  • Monitoring & Troubleshooting
    • Implement robust monitoring, alerting, and observability frameworks using Splunk, Datadog, Prometheus, or similar tools for both Confluent Cloud and on-prem clusters.
    • Proactively troubleshoot Kafka clusters, diagnose performance issues, and conduct root cause analysis for complex, distributed environments.
  • Performance & Capacity Planning
    • Conduct capacity planning and performance tuning to optimize Kafka clusters; ensure they can handle current and future data volumes.
    • Define and maintain SLA/SLI metrics to track latency, throughput, and downtime.
  • Security & Compliance
    • Ensure secure configuration of all Kafka and Confluent components, implementing best practices for authentication (Kerberos/OAuth), encryption (SSL/TLS), and access control (RBAC).
    • Collaborate with InfoSec teams to stay compliant with internal and industry regulations (GDPR, SOC, PCI, etc.).
  • Cross-Functional Collaboration
    • Work with DevOps, Cloud, Application, and Infrastructure teams to define and align business requirements for data streaming solutions.
    • Provide guidance and support during platform upgrades, expansions, and new feature rollouts.
  • Continuous Improvement
    • Stay current with Confluent Platform releases and Kafka community innovations.
    • Drive continuous improvement by recommending new tools, frameworks, and processes to enhance reliability and developer productivity.

Qualifications

Technical Expertise

  • 5+ years of hands-on experience with Kafka, including 2+ years focused on Confluent Cloud and Confluent Platform.
  • Deep knowledge of Kafka Connect, Schema Registry, Control Center, ksqlDB, and other Confluent components.
  • Experience architecting and managing hybrid Kafka solutions in on-prem and cloud (AWS, Azure, Google Cloud Platform).
  • Advanced understanding of event-driven architecture and the real-time data integration ecosystem.
  • Strong programming/scripting skills (Java, Python, Scala) for Kafka-based application development and automation tasks.

DevOps & Automation

  • Hands-on experience with Infrastructure as Code (Terraform, CloudFormation) for Kafka resource management in both cloud and on-prem.
  • Familiarity with Chef, Ansible, or similar configuration management tools to automate deployments.
  • Skilled in CI/CD pipelines (e.g., Jenkins) and version control (Git) for distributed systems.

Monitoring & Reliability

  • Proven ability to monitor and troubleshoot large-scale, distributed Kafka environments using Splunk, Datadog, Prometheus, or similar tools.
  • Experience with performance tuning and incident management to minimize downtime and data loss.

Security & Compliance

  • Expertise in securing Kafka deployments, including Kerberos and SSL configurations.
  • Understanding of IAM best practices, network security, encryption, and governance in hybrid environments.

Leadership & Collaboration

  • Demonstrated experience leading platform upgrades, migrations, and architecture reviews.
  • Excellent communication skills, with ability to articulate complex technical concepts to diverse audiences (developers, architects, executives).
  • Comfortable collaborating with cross-functional teams—product owners, system engineers, security, and business stakeholders.

Education & Preferred Experience

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field (or equivalent experience).
  • Experience with container orchestration (Docker/Kubernetes) is a plus.

 

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 91172920
  • Position Id: 8907773
