Kafka Admin

Overview

Hybrid
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)
No Travel Required
Unable to Provide Sponsorship

Skills

Apache Kafka
Amazon Web Services
Zookeeper
Kafka Connect
Kafka Streams
Hazelcast
Flink

Job Details

Role #1: Kafka Admin

Locations: Atlanta, GA / Chicago, IL (Hybrid Onsite)

Duration: 12+ Months Contract

Note: Candidate must be in the office 3–4 days every week. Local candidates or candidates from adjacent states only.

Required Qualifications:

  • 10+ years of experience in software engineering.
  • 3–5 years of experience in Kafka administration in enterprise environments.
  • Strong understanding of Kafka internals, Zookeeper, and Kafka Connect.
  • Experience with Kafka in cloud-native environments (AWS MSK, Confluent Cloud, or self-managed on Kubernetes).
  • Proficiency in Linux, shell scripting, and monitoring/logging tools.
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.

 

Role #2: Senior Software Engineer (Kafka)

Locations: Atlanta, GA / Chicago, IL (Hybrid Onsite)

Duration: 12+ Months Contract

Note: Candidate must be in the office 3–4 days every week. Local candidates or candidates from adjacent states only.

Required Qualifications:

  • 7+ years in software engineering, with 3+ years focused on real-time streaming or event-driven systems.
  • Strong hands-on experience with Kafka (topics, partitions, consumer groups), Schema Registry, Kafka Connect, and at least one of Flink, Kafka Streams, or Hazelcast.
  • Solid understanding of ETL/ELT concepts, event time vs. processing time, checkpointing, state management, and exactly-once/at-least-once semantics.
  • Proficiency with microservices (Java/Python), APIs (REST/gRPC), Avro/JSON/Protobuf, and contract testing.
  • Experience with Docker, Kubernetes, and CI/CD tools (GitHub Actions, Azure DevOps, Jenkins, or similar).
  • Familiarity with distributed caching (Redis, Hazelcast) and in-memory data grids.
  • Cloud experience in at least one cloud platform (Azure/AWS/Google Cloud Platform).
  • Knowledge of observability (metrics, logs, traces) and resilience (retries, timeouts, DLQs, circuit breakers).
  • Exposure to data governance, metadata catalogs, and lineage tooling; schema evolution and compatibility (backward/forward/full).
  • Core competencies include problem solving, ownership, code quality, operational mindset, collaboration, and continuous improvement.