Senior Confluent/Kafka Engineer

Overview

Remote
Depends on Experience
Contract - W2

Skills

Amazon Web Services
Apache Avro
Apache Flink
Apache Kafka
Cloud Computing
Continuous Improvement
Continuous Integration
Documentation
Collaboration
Communication
Conflict Resolution
Continuous Delivery
Debugging
Google Cloud Platform
IT Management
Kotlin
Management
Mentorship
Microsoft Azure
Optimization
Performance Tuning
Problem Solving
Real-time
Root Cause Analysis
Scala
Scalability
Streaming
Technical Drafting
Java

Job Details

Responsibilities and Duties:
Architect and design scalable, fault-tolerant, and high-performance Kafka-based data streaming solutions.
Lead technical design sessions and provide architectural guidance to junior engineers.
Develop and maintain Confluent Platform components, including Kafka Connect, Kafka Streams, Flink, TableFlow, and ksqlDB.
Implement and manage monitoring and alerting systems for the Kafka cluster.
Lead troubleshooting and resolution of complex issues within the data streaming platform.
Perform root cause analysis and implement corrective actions to prevent future occurrences.
Mentor and guide junior engineers.
Collaborate with other engineering teams to integrate data streams into their applications.
Develop and maintain comprehensive documentation for the data streaming platform.
Provide technical leadership and guidance on best practices.
Participate in code reviews and ensure adherence to coding standards and best practices.
Proactively identify and address potential performance bottlenecks and scalability issues.
Contribute to continuous improvement initiatives.
Stay up to date on the latest Confluent Platform and Kafka technologies.
Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred.
5+ years of experience with Confluent Platform and Kafka, including experience in designing and implementing large-scale data streaming solutions.
Hands-on experience as a developer and operator of Kafka (NOT interested in candidates who have only set up clusters for others to use).
Must have real-world experience setting up Kafka real-time data streaming.
Proficient in Java or other JVM languages (e.g., Scala, Kotlin).
Experience with Kafka Connect, Kafka Streams, and ksqlDB.
Strong understanding of Kafka architecture and concepts (topics, partitions, consumers, producers).
Experience with message queuing systems.
Familiarity with cloud-based environments (AWS, Azure, Google Cloud Platform).
Excellent problem-solving and debugging skills.
Experience with CI/CD.
Experience leading and mentoring engineering teams.
Strong architectural skills.
Experience with performance tuning and optimization.
Experience with schema registries and serialization formats (e.g., Confluent Schema Registry, Avro).
Ability to work independently and as part of a team.
Excellent communication and collaboration skills.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.