Kafka Admin

Philadelphia, PA, US • Posted 19 hours ago • Updated 1 hour ago
Contract (Independent, Corp-To-Corp, or W2) • On-site

Job Details

Skills

  • Data Integration
  • Governance
  • Requirements Analysis
  • Git
  • Linux
  • Metrics
  • Azure Data Lake
  • Terraform
  • Ansible
  • Grafana
  • Prometheus
  • Microsoft Azure
  • Apache Kafka
  • Change Management
  • Infrastructure Management
  • Scalability
  • Testing Skills
  • Automation
  • DevOps
  • Performance Testing
  • Software Version Control
  • Employee Onboarding
  • Problem Solving
  • Administrative Operations
  • Safety Principles
  • System Availability
  • Data Streaming
  • Dashboards
  • Oracle Applications
  • Access Controls
  • Red Hat Enterprise Linux
  • Perseverance
  • Software Design Patterns
  • Ecosystems
  • Stream Processing
  • Apache Flink
  • Disaster Recovery
  • Incident Response
  • Network Performance
  • Capacity Planning
  • Cost Modelling
  • Role-Based Access Control
  • Software Exception Handling
  • Operationalisation
  • Confluent
  • Stakeholder Communications
  • New Relic (SaaS)
  • Development Support
  • Employee Retention
  • Recruitment Process Outsourcing
  • SUSE Linux Enterprise Servers
  • Calculations
  • Chargeback
  • Failover
  • Multi Tenant Architecture
  • Performance Engineering
  • Resource Utilisation

Summary

Job Role: Kafka Admin

Job Location: Philadelphia, PA (100% Onsite)

Job Type: Contract

Roles and Responsibilities

  • 8+ years of hands-on experience with Kafka/Confluent in production.
  • Strong expertise with:
      • SSL/TLS end-to-end configuration in Kafka ecosystems
      • RBAC authorization configuration and operational administration
      • Designing for HA/redundancy and scaling for growth
      • Monitoring/alerting with Prometheus & Grafana, plus operational tooling such as New Relic
      • Performance testing and tuning (producers/consumers, brokers, Connect, infrastructure)
  • Demonstrated experience implementing:
      • Confluent Oracle Premium CDC Connector
      • Confluent sink to ADLS Gen2 (ADLS2)
  • Proficiency with Azure DevOps, Git, and building CI/CD pipelines.
  • Working knowledge of Apache Flink and hands-on experience writing Kafka Streams applications.
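As context for the SSL/TLS expertise listed above, the following is a minimal sketch of the kind of broker-side TLS listener configuration this role would manage. Hostnames, keystore paths, and passwords are placeholders, not values from this posting:

```properties
# Illustrative Kafka broker TLS listener config (server.properties).
# All paths, hostnames, and passwords below are placeholders.
listeners=SSL://broker1.example.com:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.broker.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.broker.truststore.jks
ssl.truststore.password=changeit
# Require client certificates for mutual TLS
ssl.client.auth=required
```

End-to-end TLS also means distributing matching truststore/keystore settings to clients, Connect workers, and Schema Registry, which is the scope implied by "across Kafka/Confluent components and client integrations" below.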

Key Responsibilities:

  • Security & Access Control
      • Configure end-to-end SSL/TLS across Kafka/Confluent components and client integrations.
      • Implement and manage RBAC for authorizations, service accounts, and least-privilege access.
  • High Availability, Redundancy & Failover
      • Configure core components for redundancy and failover resilience (brokers/controllers, Connect, Schema Registry, etc.).
      • Design and implement a Kafka disaster recovery (DR) cluster, including replication strategy, failover testing, and runbooks aligned to RPO/RTO.
  • Scale & Future Growth
      • Plan and implement platform scalability for future growth (topic/partition strategy, retention, throughput, capacity planning).
      • Establish sustainable operational practices for multi-team usage and governance.
  • Monitoring, Alerting & Operations
      • Set up monitoring and alerts for streaming messages and platform health using Prometheus & Grafana.
      • Integrate New Relic dashboards/alerts to support operational visibility, incident response, and service health metrics.
  • Performance Engineering
      • Perform performance testing and tune Kafka/Confluent components for optimal throughput, latency, and stability.
      • Troubleshoot complex production issues across brokers, networking, storage, Connect, and client workloads.
  • Connectors & Data Integration
      • Implement and support the Confluent Oracle Premium CDC Connector (configuration, offsets, schema evolution, error handling, operations).
      • Implement and support the Confluent Sink Connector to ADLS2 (Azure Data Lake Storage Gen2) with reliable delivery and partitioning strategies.
  • Streaming Development
      • Build and support stream processing using Apache Flink (job configuration, deployment patterns, operationalization).
      • Develop Kafka Streams applications (topology design, state stores, exactly-once processing guarantees as needed).
  • DevOps & Automation
      • Use Azure DevOps with Git integration for version control, reviews, and change management.
      • Deploy and manage cloud resources using Terraform and Ansible.
      • Build and maintain CI/CD pipelines for platform configuration, connectors, and streaming jobs across environments.
  • Cost Allocation
      • Support chargeback/showback calculations for Kafka usage (e.g., throughput, storage, partitions, connector/resource utilization) and related reporting.
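The chargeback/showback duty above can be illustrated with a small sketch. The unit rates, tenant names, and usage figures here are hypothetical placeholders, not anything specified in this posting:

```python
# Hypothetical chargeback sketch: allocate a share of platform cost to each
# tenant from throughput, storage, and partition counts. All rates below are
# illustrative placeholders, not real pricing.
from dataclasses import dataclass

RATE_PER_MB_S = 2.00       # monthly rate per MB/s of sustained throughput
RATE_PER_GB_STORED = 0.05  # monthly rate per GB retained on brokers
RATE_PER_PARTITION = 0.10  # monthly rate per topic partition

@dataclass
class TenantUsage:
    name: str
    throughput_mb_s: float  # average produced + consumed MB/s
    storage_gb: float       # retained log size in GB
    partitions: int         # partitions owned by the tenant

def monthly_charge(u: TenantUsage) -> float:
    """Return the tenant's monthly chargeback amount, rounded to cents."""
    return round(
        u.throughput_mb_s * RATE_PER_MB_S
        + u.storage_gb * RATE_PER_GB_STORED
        + u.partitions * RATE_PER_PARTITION,
        2,
    )

if __name__ == "__main__":
    usage = TenantUsage("payments", throughput_mb_s=12.5, storage_gb=400, partitions=60)
    # 12.5*2.00 + 400*0.05 + 60*0.10 = 25 + 20 + 6
    print(monthly_charge(usage))  # 51.0
```

In practice the usage inputs would come from the same Prometheus/Grafana metrics pipeline described under Monitoring, Alerting & Operations.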

Preferred Qualifications

  • Experience implementing and testing Kafka DR cluster architectures and operational runbooks.
  • Familiarity with enterprise governance patterns (multi-tenancy, naming standards, quotas, schema governance).
  • Experience defining usage metering to enable reliable chargeback/showback.

Additional Expectations

  • Proficiency with the Linux CLI (preferably RHEL/SLES).
  • Ability to participate in an on-call rotation and provide timely incident support (if required).
  • Strong documentation and stakeholder communication skills across engineering, operations, and product teams.
  • Ability to create design patterns/templates, provide development support, and conduct knowledge transfer sessions for onboarding new use cases and modernizing existing ones.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 91135853
  • Position Id: 2026-399/7275

Company Info

About Tror

TROR is an artificial intelligence consultancy specializing in developing powerful, customized AI solutions for business. With top AI experts, we take pride in providing cutting-edge AI consultancy. Our years of experience across industries help us develop and implement bespoke AI solutions for businesses. Our on-demand AI products have helped over 100 companies drive transformational results.

The solutions we bring to the table meet the highest industry standards for quality, effectively and efficiently resolving your issues and optimizing the way you want to move forward in the market. Through our customer-centric approach, we ensure that we are always there for our valued customers, offering solutions that deliver results.

 
