Kafka Engineer (Java)

Overview

Location Type: Hybrid
Pay: Depends on Experience
Contract: Independent or W2
Duration: 12 Month(s)

Skills

API
Java
Confluent Kafka
Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK)
AWS, GCP, Azure
Docker, Kubernetes, CI/CD pipelines
Kafka Connect, Kafka Streams, KSQL
Schema Registry, REST Proxy, Confluent Control Center

Job Details

Job Title - Confluent Kafka Engineer (Java)

Location - Dallas, TX (Hybrid)

Duration - 12+ Months

Job Type - Contract (USC: U.S. Citizens)

Must Have Skills
API
Java
Confluent Kafka

Job Description:

As a Confluent Consulting Engineer, you will be responsible for designing, developing, and maintaining scalable real-time data pipelines and integrations using Kafka and Confluent components. You will collaborate with data engineers, architects, and DevOps teams to deliver robust streaming solutions.
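To give a flavor of the pipeline work described above, here is a minimal Java producer sketch using the standard kafka-clients API. The class name, topic ("orders"), record payload, and broker address are invented placeholders for illustration, not details from this posting.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and topic name below are placeholders for this sketch.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotence plus acks=all is a common reliability baseline for production pipelines.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", "order-123", "{\"status\":\"CREATED\"}");
            // send() is asynchronous; the callback reports the write position or an error.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Delivered to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush();
        }
    }
}
```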

10+ years of hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.)
Strong proficiency in Java, Python, or Scala
Solid understanding of event-driven architecture and data streaming patterns (see the consumer sketch after this list)
Experience deploying Kafka on cloud platforms such as AWS, Google Cloud Platform, or Azure
Familiarity with Docker, Kubernetes, and CI/CD pipelines
Excellent problem-solving and communication abilities
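As a counterpart to the producer above, this is a minimal consumer-group sketch of the event-driven pattern referenced in the requirements, again with the standard kafka-clients API. The group id, topic, and broker address are assumptions made for this example.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address, group id, and topic name are placeholders for this sketch.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Start from the earliest offset when the group has no committed position yet.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // poll() drives the consumer-group protocol and returns available records.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("key=%s value=%s partition=%d offset=%d%n",
                            record.key(), record.value(), record.partition(), record.offset());
                }
            }
        }
    }
}
```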
Desired Skills
Candidates with experience in Confluent Kafka and its ecosystem will be given preference:
Experience with Kafka Connect, Kafka Streams, KSQL, Schema Registry, REST Proxy, and Confluent Control Center (see the Kafka Streams sketch after this list)
Hands-on experience with Confluent Cloud services, including ksqlDB Cloud and Apache Flink
Familiarity with Stream Governance, Data Lineage, Stream Catalog, Audit Logs, and RBAC
Confluent certifications (Developer, Administrator, or Flink Developer)
Experience with Confluent Platform, Confluent Cloud managed services, multi-cloud deployments, and Confluent for Kubernetes
Knowledge of data mesh architectures, KRaft migration, and modern event streaming patterns
Exposure to monitoring tools (Prometheus, Grafana, Splunk)
Experience with data lakes, data warehouses, or big data ecosystems
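For the Kafka Streams experience called out above, a minimal topology sketch using the standard kafka-streams API looks like the following. The application id, topic names, and filter predicate are invented for illustration only.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderFilterTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Application id and broker address are placeholders for this sketch.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-filter-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from an input topic, keep only high-priority events, write to an output topic.
        KStream<String, String> orders = builder.stream("orders");
        orders.filter((key, value) -> value != null && value.contains("\"priority\":\"HIGH\""))
              .to("orders-high-priority");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the topology cleanly on JVM shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```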
