Overview
On Site
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - 1 Month(s)
Skills
API
Access Control
Amazon Web Services
Apache Avro
Apache HTTP Server
Apache Kafka
Automated Testing
Job Details
We are looking for a Lead Data Engineer for our client in Atlanta, GA.
Job Title: Lead Data Engineer
Job Location: Atlanta, GA
Job Type: Contract
Job Description:
Responsibilities:
- Own design and delivery of high-throughput, low-latency streaming solutions using technologies like Confluent Kafka, Apache Flink, Hazelcast, Kafka Streams, Kafka Connect, and Schema Registry.
- Design and implement microservices and event-driven systems with robust ETL/ELT pipelines for real-time ingestion, enrichment, and delivery.
- Establish distributed caching and in-memory data grid patterns (e.g., Redis, Hazelcast) to optimize read/write performance and session/state management.
- Define and operationalize event gateways/event grids for event routing, fan-out, and reliable delivery.
- Lead data governance initiatives and standards for metadata, lineage, classification, retention, access controls, and compliance (PII/PCI/SOX/GDPR as applicable).
- Drive CI/CD best practices (pipelines, automated testing, progressive delivery) to enable safe, frequent releases; champion DevSecOps and shift-left testing.
- Set SLOs/SLAs, track observability (tracing, metrics, logs), and optimize performance at scale (throughput, backpressure, state, checkpointing).
- Work with Security, Platform, and Cloud teams on networking, IAM, secrets, certificates, and cost optimization.
- Mentor engineers, conduct design reviews, and enforce coding standards and reliability patterns.
- Guide the platform and delivery roadmap.
Requirements:
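The distributed-caching responsibility above can be illustrated with a minimal sketch. Assuming a simple key/value workload, a TTL-plus-LRU cache captures the eviction and state-management concerns; in production this would be Redis or Hazelcast, and all names here are illustrative:

```python
# Illustrative sketch only: a minimal in-memory cache with LRU eviction and
# TTL expiry. Production systems would use Redis or Hazelcast instead.
import time
from collections import OrderedDict

class TTLLRUCache:
    def __init__(self, max_size=1024, ttl_seconds=60.0):
        self.max_size = max_size
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # key -> (value, expires_at)

    def put(self, key, value):
        if key in self._store:
            self._store.pop(key)
        elif len(self._store) >= self.max_size:
            self._store.popitem(last=False)  # evict least recently used
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires_at = item
        if time.monotonic() > expires_at:
            self._store.pop(key)  # lazily expire stale entries
            return default
        self._store.move_to_end(key)  # refresh recency on read
        return value

cache = TTLLRUCache(max_size=2, ttl_seconds=60.0)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touching "a" makes "b" the least recently used entry
cache.put("c", 3)  # at capacity, so "b" is evicted
```

Lazy expiry on read keeps writes cheap; a background sweep would be the alternative trade-off.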
- 10+ years in software engineering; 5+ years designing large-scale, real-time or event-driven platforms.
- Expert with Confluent Kafka (brokers, partitions, consumer groups, Schema Registry, Kafka Connect), Flink (DataStream/Table API, stateful ops, checkpointing), Hazelcast, and/or Kafka Streams.
- Strong in ETL/ELT design, streaming joins/windows, exactly-once semantics, and idempotent processing.
- Experience with microservices (Java/Python), REST/gRPC, protobuf/Avro, and contract-first development.
- Hands-on with distributed caching and in-memory data grids; performance tuning and eviction strategies.
- Cloud experience with one or more platforms (Azure, AWS, Google Cloud Platform); containers (Docker, Kubernetes).
- Experience with production-grade CI/CD (Jenkins, Bamboo, Harness, or similar) and Infrastructure as Code (Terraform/Helm).
- Robust observability (Prometheus/Grafana/OpenTelemetry, Splunk/ELK, or similar) and resilience patterns (circuit breakers, retries, DLQs).
- Practical data governance: metadata catalogs, lineage, encryption, RBAC.
- Excellent communication; ability to lead design, influence stakeholders, and guide cross-functional delivery.
- Core competencies include Architectural Thinking, Systems Design, Operational Excellence, Security & Compliance, Team Leadership, and Stakeholder Management.
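The exactly-once and idempotent-processing requirements above can be sketched in plain Python. The event shape, id field, and window size are assumptions for illustration; a real deployment would rely on Flink or Kafka Streams checkpointed state rather than in-process sets:

```python
# Illustrative sketch: idempotent event processing with de-duplication by
# event id plus a tumbling-window count, approximating the semantics that
# Flink or Kafka Streams provide. The event shape is an assumption.
WINDOW_SECONDS = 60

def process(events):
    """Count events per (key, window), skipping duplicate event ids."""
    seen_ids = set()  # in Flink this would live in checkpointed state
    counts = {}       # (key, window_start) -> count
    for event in events:
        if event["id"] in seen_ids:
            continue  # redelivered duplicate: idempotent skip
        seen_ids.add(event["id"])
        window_start = event["ts"] - event["ts"] % WINDOW_SECONDS
        bucket = (event["key"], window_start)
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

events = [
    {"id": "e1", "key": "user-1", "ts": 10},
    {"id": "e2", "key": "user-1", "ts": 65},
    {"id": "e1", "key": "user-1", "ts": 10},  # redelivered duplicate
]
result = process(events)
# result == {("user-1", 0): 1, ("user-1", 60): 1}
```

De-duplicating before aggregation is what makes at-least-once delivery behave like exactly-once at the application layer.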
Nice to Have:
- Experience with CDC, Kafka Connect custom connectors, Flink SQL, Beam.
- Streaming ML or feature stores integration (online/offline consistency).
- Multi-region / disaster recovery for streaming platforms.
- Experience with zero-downtime migrations, blue/green, and canary deployments.
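Canary deployments from the list above hinge on deterministic traffic splitting. A minimal sketch, assuming a stable per-request key; the function name and thresholds are hypothetical:

```python
# Illustrative sketch: deterministic canary routing by hashing a stable
# request key, so roughly canary_percent of traffic lands on the new
# version and a given user routes consistently during the rollout.
import zlib

def route(request_key: str, canary_percent: int) -> str:
    """Return 'canary' for roughly canary_percent of keys, else 'stable'."""
    bucket = zlib.crc32(request_key.encode()) % 100
    return "canary" if bucket < canary_percent else "stable"

# The same key always routes the same way, which keeps sessions sticky.
assert route("user-42", 10) == route("user-42", 10)
```

Hashing rather than random sampling is what makes the rollout observable per cohort and trivially reversible.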