Senior Java Kafka Developer (Local Candidates Only; Face-to-Face Interview Required)

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)
Able to Provide Sponsorship

Skills

Java
Kafka

Job Details

About the Role:

We are building Polaris, a next-generation front-to-back platform for post-trade processing, real-time risk monitoring, and cloud-native orchestration. We are seeking a hands-on, high-impact Java developer with a proven track record in designing and operating large-scale distributed systems. You will contribute to the architecture, implementation, and operations of a low-latency, high-throughput trading data path, ensuring scalability, resiliency, recoverability, observability, and developer productivity.

Key Responsibilities:

  • Design, implement, and maintain high-performance, low-latency messaging and processing components (Java-based) for real-time trading data.
  • Develop and operate resilient microservices and streaming pipelines using Kafka and related technologies.
  • Build and optimize storage and access patterns with modern databases (e.g., RocksDB, MongoDB, MemSQL/SingleStore) for real-time analytics.
  • Lead efforts to improve SDLC, testing (shift-left), configuration, and developer experience (CI/CD, automation, tooling).
  • Architect and implement observability frameworks: metrics, tracing (Tempo/Jaeger), logging, and alerting for highly available systems.
  • Collaborate with Platform/Cloud-native teams to design Kubernetes-based deployments, scalability, and disaster recovery strategies.
  • Implement backpressure, idempotency, retries with exponential backoff and jitter, circuit breakers, and fault-tolerant patterns.
  • Provide technical leadership, mentorship, and code reviews; communicate effectively with stakeholders across teams.
  • Participate in on-call rotations and incident response, driving root-cause analysis and post-mortems.
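
The fault-tolerant patterns listed above (retries with exponential backoff and jitter in particular) can be sketched in plain Java. This is an illustrative example only, not code from the Polaris platform; the class and method names are hypothetical.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// Retry helper using exponential backoff with "full jitter":
// the sleep before each retry is a random duration in [0, cap),
// where cap doubles per attempt up to a maximum.
public final class RetryWithBackoff {

    public static <T> T call(Supplier<T> action, int maxAttempts,
                             long baseDelayMillis, long maxDelayMillis)
            throws InterruptedException {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                // Exponential backoff capped at maxDelayMillis...
                long cap = Math.min(maxDelayMillis, baseDelayMillis << attempt);
                // ...with full jitter to avoid synchronized retry storms.
                Thread.sleep(ThreadLocalRandom.current().nextLong(cap + 1));
            }
        }
        throw last; // all attempts exhausted
    }
}
```

Randomizing the full delay (rather than adding a small jitter on top of a fixed backoff) spreads retry load evenly when many clients fail at once.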

Required Qualifications:

  • 7+ years of professional software development, with a strong emphasis on Java and distributed systems.
  • Deep hands-on experience building large-scale, low-latency, high-throughput systems.
  • Expertise with messaging and streaming technologies, especially Kafka (producers/consumers, transactions, exactly-once processing).
  • Proficiency with data stores used in real-time contexts (RocksDB, MongoDB, MemSQL/SingleStore) and time-series considerations.
  • Solid understanding of concurrency, memory models, locking vs. lock-free approaches, and performance tuning in Java.
  • Experience designing and operating in Kubernetes-based environments; familiarity with cloud-native patterns, DevOps, and observability tooling.
  • Strong problem-solving, debugging, and performance profiling skills.
  • Excellent communication, collaboration, and leadership capabilities.
  • Bonus: Python scripting experience; knowledge of Prometheus/Grafana, Tempo/Jaeger, and cloud platforms (AWS/Google Cloud Platform/Azure).
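
As context for the Kafka exactly-once qualification above, the sketch below shows the client configuration typically paired with Kafka transactions: an idempotent, transactional producer and a read_committed consumer. The broker address and transactional.id are placeholders, not values from this role's environment.

```java
import java.util.Properties;

// Minimal Kafka client configuration sketch for exactly-once processing.
// Property keys are standard Kafka client configs; values are placeholders.
public final class ExactlyOnceConfig {

    public static Properties producerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");     // placeholder broker
        p.put("enable.idempotence", "true");              // dedupe broker-side on retry
        p.put("acks", "all");                             // wait for the full ISR
        p.put("transactional.id", "polaris-producer-1");  // placeholder, must be unique
        return p;
    }

    public static Properties consumerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");     // placeholder broker
        p.put("isolation.level", "read_committed");       // skip aborted transactions
        p.put("enable.auto.commit", "false");             // commit offsets inside the txn
        return p;
    }
}
```

In a transactional consume-transform-produce loop, offsets are committed via the producer's `sendOffsetsToTransaction`, which is why auto-commit is disabled on the consumer.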

Nice-to-Have:

  • Experience with front-to-back architectural patterns in trading or financial services.
  • Knowledge of, or experience with, RocksDB, distributed caches, and columnar stores.
  • Familiarity with time-series data modeling and windowed computations.
  • Exposure to risk monitoring, market data feeds, and post-trade workflows.
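
The windowed computations mentioned above can be illustrated with the simplest case, a tumbling-window count: each event timestamp is bucketed into a fixed-size, non-overlapping window keyed by the window's start time. This is a hypothetical standalone sketch, not part of any framework named in this posting.

```java
import java.util.Map;
import java.util.TreeMap;

// Tumbling-window counter: windows are fixed-size and non-overlapping,
// so an event's window is fully determined by integer division of its
// timestamp by the window size.
public final class TumblingWindowCount {

    private final long windowSizeMillis;
    private final Map<Long, Long> counts = new TreeMap<>();

    public TumblingWindowCount(long windowSizeMillis) {
        this.windowSizeMillis = windowSizeMillis;
    }

    public void accept(long eventTimeMillis) {
        // Align the timestamp down to its window's start boundary.
        long windowStart = (eventTimeMillis / windowSizeMillis) * windowSizeMillis;
        counts.merge(windowStart, 1L, Long::sum);
    }

    public long countFor(long windowStart) {
        return counts.getOrDefault(windowStart, 0L);
    }
}
```

Hopping and sliding windows extend the same idea, with each event contributing to multiple overlapping buckets instead of exactly one.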

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.