Overview
Full Time
Part Time
Accepts corp to corp applications
Contract - Independent
Contract - W2
Skills
Data Processing
Analytics
Data Quality
GRID
Redis
Management
Routing
Access Control
Payment Card Industry
Sarbanes-Oxley
Automated Testing
SAFE
DevSecOps
Testing
Computer Networking
Optimization
Mentorship
Design Review
Roadmaps
Software Engineering
Real-time
API
Extract
Transform
Load
ELT
Microsoft Windows
Semantics
Microservices
Java
Python
Apache Avro
Caching
Performance Tuning
Cloud Computing
Microsoft Azure
Amazon Web Services
Google Cloud Platform
Google Cloud
Docker
Kubernetes
Continuous Integration
Continuous Delivery
Jenkins
Bamboo
Terraform
Splunk
Data Governance
Metadata Management
Encryption
RBAC
Communication
Systems Design
Operational Excellence
Regulatory Compliance
Team Leadership
Stakeholder Management
Change Data Capture
Apache Kafka
Apache Flink
SQL
Machine Learning (ML)
Disaster Recovery
Streaming
Migration
Job Details
JOB DESCRIPTION:
"ead Data Engineer - Real Time Data Processing
Location:
Atlanta, GA
Chicago, IL
Grade Level: D1
Role Summary:
We're seeking a seasoned Lead Software Engineer to architect, build, and scale real-time data processing platforms that power event-driven applications and analytics. You'll lead the design of streaming microservices, govern data quality and lineage, and mentor engineers while partnering with product, platform, and security stakeholders to deliver resilient, low-latency systems.
Responsibilities:
Own design and delivery of high-throughput, low-latency streaming solutions using technologies like Confluent Kafka, Apache Flink, Hazelcast, Kafka Streams, Kafka Connect, and Schema Registry.
Design and implement microservices and event-driven systems with robust ETL/ELT pipelines for real-time ingestion, enrichment, and delivery.
Establish distributed caching and in-memory data grid patterns (e.g., Redis, Hazelcast) to optimize read/write performance and session/state management.
Define and operationalize event gateways / event grids for event routing, fan-out, and reliable delivery.
Lead data governance initiatives: define standards for metadata, lineage, classification, retention, access controls, and compliance (PII/PCI/SOX/GDPR as applicable).
Drive CI/CD best practices (pipelines, automated testing, progressive delivery) to enable safe, frequent releases; champion DevSecOps and shift-left testing.
Set SLOs/SLAs, track observability (tracing, metrics, logs), and optimize performance at scale (throughput, backpressure, state, checkpointing).
Work with Security, Platform, and Cloud teams on networking, IAM, secrets, certificates, and cost optimization.
Mentor engineers, conduct design reviews, and enforce coding standards and reliability patterns.
Guide the platform and delivery roadmap.
Required Qualifications:
10+ years in software engineering; 5+ years designing large-scale real-time or event-driven platforms.
Expert with Confluent Kafka (brokers, partitions, consumer groups, Schema Registry, Kafka Connect), Flink (DataStream/Table API, stateful ops, checkpointing), Hazelcast, and/or Kafka Streams.
Strong in ETL/ELT design, streaming joins/windows, exactly-once semantics, and idempotent processing.
Experience with microservices (Java/Python), REST/gRPC, protobuf/Avro, and contract-first development.
Hands-on with distributed caching and in-memory data grids; performance tuning and eviction strategies.
Cloud experience with one or more platforms (Azure/AWS/Google Cloud Platform); containers, Docker, Kubernetes.
Experience with production-grade CI/CD (Jenkins, Bamboo, Harness, or similar) and Infrastructure as Code (Terraform/Helm).
Robust observability (Prometheus/Grafana/OpenTelemetry, Splunk/ELK, or similar) and resilience patterns (circuit breakers, retries, DLQs).
Practical data governance: metadata catalogs, lineage, encryption, RBAC.
Excellent communication; ability to lead design, influence stakeholders, and guide cross-functional delivery.
Core competencies include Architectural Thinking, Systems Design, Operational Excellence, Security & Compliance, Team Leadership, and Stakeholder Management.
Nice to Have:
Experience with CDC, Kafka Connect custom connectors, Flink SQL, Beam.
Streaming ML or feature stores integration (online/offline consistency).
Multi-region / disaster-recovery design for streaming platforms.
Experience with zero-downtime migrations, blue/green, and canary deployments.
"ead Data Engineer - Real Time Data Processing
Location:
Atlanta, GA
Chicago, IL
Grade Level: D1
Role Summary:
We re seeking a seasoned Lead Software Engineer to architect, build, and scale real time data processing platforms that power event driven applications and analytics. You ll lead the design of streaming microservices, govern data quality and lineage, and mentor engineers while partnering with product, platform, and security stakeholders to deliver resilient, low latency systems.
Responsibilities:
Own design & delivery of high throughput, low latency streaming solutions using technologies like Confluent Kafka, Apache Flink, Hazelcast, Kafka Streams, Kafka Connect, and Schema Registry.
Design and implement microservices and event driven systems with robust ETL/ELT pipelines for real time ingestion, enrichment, and delivery.
Establish distributed caching and in memory data grid patterns (e.g., Redis, Hazelcast) to optimize read/write performance and session/state management.
Define and operationalize event gateways / event grids for event routing, fan out, and reliable delivery.
Lead data governance initiatives standards for metadata, lineage, classifications, retention, access controls, and compliance (PII/PCI/SOX/GDPR as applicable).
Drive CI/CD best practices (pipelines, automated testing, progressive delivery) to enable safe, frequent releases; champion DevSecOps and shift left testing.
Set SLOs/SLAs, track observability (tracing, metrics, logs), and optimize performance at scale (throughput, backpressure, state, checkpointing).
Work with Security, Platform, and Cloud teams on networking, IAM, secrets, certificates, and cost optimization.
Mentor engineers, conduct design reviews, and enforce coding standards and reliability patterns.
Guide platform and delivery roadmap
Required Qualifications:
10+ years in software engineering; 5+ years designing large-scale real time or event driven platforms.
Expert with Confluent Kafka (brokers, partitions, consumer groups, Schema Registry, Kafka Connect), Flink (DataStream/Table API, stateful ops, checkpointing), Hazelcast, and/or Kafka Streams.
Strong in ETL/ELT design, streaming joins/windows, exactly once semantics, and idempotent processing.
Experience with microservices (Java/Python), REST/gRPC, protobuf/Avro, and contract-first development.
Hands-on with distributed caching and in memory data grids; performance tuning and eviction strategies.
Cloud experience in any one or more cloud platforms Azure/AWS/Google Cloud Platform; containers, Docker, Kubernetes.
Experience in production-grade CI/CD (Jenkins, Bamboo, Harness or similar), Infrastructure as Code (Terraform/Helm).
Robust observability (PrometheGrafana/OpenTelemetry, Splunk/ELK or similar), and resilience patterns (circuit breakers, retries, DLQs).
Practical data governance: metadata catalogs, lineage, encryption, RBAC.
Excellent communication; ability to lead design, influence stakeholders, and guide cross-functional delivery.
Core competencies to include Architectural Thinking, Systems Design, Operational Excellence, Security & Compliance, Team Leadership, Stakeholder Management.
Nice to Have:
Experience with CDC, Kafka Connect custom connectors, Flink SQL, Beam.
Streaming ML or feature stores integration (online/offline consistency).
Multi region / disaster recovery for streaming platforms.
Experience with Zero downtime migrations, blue/green, and canary deployments."
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.