AWS Cloud Engineer / Kafka Platform Engineer

Cincinnati, OH, US • Posted 21 hours ago • Updated 21 hours ago
Full Time
On-site
Up to $70/hr

Job Details

Skills

  • Amazon EC2
  • Amazon S3
  • Amazon SageMaker
  • Amazon Web Services
  • Analytical Skill
  • Analytics
  • Apache Kafka
  • Bash
  • Cloud Computing
  • Collaboration
  • Communication
  • Computer Science
  • Continuous Delivery
  • Continuous Integration
  • Data Engineering
  • Data Warehouse
  • DevOps
  • Disaster Recovery
  • Dynatrace
  • Grafana
  • High Availability
  • IaaS
  • Information Security Governance
  • Information Systems
  • Kubernetes
  • Machine Learning (ML)
  • Management
  • Mentorship
  • OLAP
  • Performance Tuning
  • ProVision
  • Python
  • Real-time
  • Regulatory Compliance
  • Replication
  • Roadmaps
  • Scalability
  • Scripting
  • Snowflake Schema
  • Streaming
  • Supervision
  • Technical Writing
  • Terraform
  • Virtual Private Cloud
  • Workflow

Summary

Only W2, No C2C

Job Title: AWS Cloud Engineer V
Location: Cincinnati, OH (Onsite)
Years of Experience: 15-20
Contract Length: Through 2026, likely extension

What You'll Do
Kafka Platform Engineer V (AWS Cloud & Streaming Platforms)

The Kafka Platform Engineer V will play a key role in defining, building, and operating our real-time data streaming platform on AWS.
This is a hands-on engineering role responsible for Kafka platform strategy, architecture, and operations, while also contributing across broader cloud platforms including machine learning (SageMaker) and data warehousing (Snowflake).
The ideal candidate has deep experience with event-driven architectures, strong AWS platform knowledge, and a proven ability to design, build, and operate scalable, production-grade data platforms.

DUTIES AND RESPONSIBILITIES:
- Hands-on platform engineering role, leading the design, build, and operation of Kafka-based streaming platforms (Amazon MSK and Kafka ecosystem).
- Define and execute Kafka platform roadmap, architecture, and best practices aligned with enterprise data strategy.
- Build and manage real-time data pipelines using Kafka, Kafka Connect, and Schema Registry.
- Implement Infrastructure as Code (IaC) using Terraform to provision and manage Kafka clusters and supporting AWS infrastructure.
- Ensure high availability, scalability, and multi-region resiliency of Kafka environments.
- Monitor and optimize platform performance using tools such as Lenses, Prometheus, and cloud-native observability solutions.
- Provide hands-on support and troubleshooting for Kafka clusters, streaming pipelines, and integrations.
- Collaborate with application, data engineering, and analytics teams to enable event-driven integration patterns.
- Support and contribute to AWS-based platforms, including:
  - SageMaker for ML workflows and model lifecycle support
  - Snowflake for data ingestion and streaming integrations
- Drive security, governance, and compliance across streaming and cloud platforms.
- Create and maintain technical documentation, standards, and runbooks.
- Mentor junior engineers and promote engineering best practices across the team.
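The availability and resiliency duties above usually come down to a handful of Kafka settings. The values below are commonly recommended starting points for a three-AZ MSK cluster, offered as an illustrative sketch rather than anything specified in this posting:

```properties
# Illustrative broker/topic settings for high availability (not from the posting)
default.replication.factor=3            # one replica per AZ in a three-AZ cluster
min.insync.replicas=2                   # a write survives the loss of one broker
unclean.leader.election.enable=false    # prefer consistency over availability

# Illustrative producer settings
acks=all                                # wait for all in-sync replicas to ack
enable.idempotence=true                 # avoid duplicate records on retry
```

With replication factor 3 and `min.insync.replicas=2`, the cluster tolerates one broker failure per partition without refusing `acks=all` writes.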

SUPERVISORY RESPONSIBILITIES: None

MINIMUM KNOWLEDGE, SKILLS AND ABILITIES REQUIRED:
- Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience)
- 8+ years of experience in data engineering, platform engineering, or cloud infrastructure
- Strong hands-on experience with:
  - Apache Kafka (cluster administration, topics, partitions, replication)
  - Amazon MSK (Managed Streaming for Apache Kafka)
  - Kafka Connect and Schema Registry
- Proven experience with event-driven architecture and real-time streaming systems
- Strong experience with Terraform (Infrastructure as Code) for AWS
- Hands-on experience with AWS services (EC2, VPC, IAM, S3, CloudWatch, etc.)
- Experience supporting production-grade distributed systems with high availability requirements
- Strong troubleshooting and performance tuning skills
- Experience with CI/CD pipelines and DevOps practices
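As a concrete example of the Kafka internals referenced above (topics, partitions, replication): the Java client's default partitioner routes a non-null record key to a partition using a 32-bit murmur2 hash. The sketch below is a pure-Python port for illustration only; the `partition_for` helper name is ours, and sticky partitioning for null keys is omitted.

```python
def murmur2(data: bytes) -> int:
    """Pure-Python port of the 32-bit murmur2 hash used by Kafka's Java client."""
    length = len(data)
    seed = 0x9747B28C
    m = 0x5BD1E995
    mask = 0xFFFFFFFF

    h = (seed ^ length) & mask
    # Process the body four bytes at a time, little-endian.
    for i in range(0, length - (length % 4), 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & mask
        k ^= k >> 24
        k = (k * m) & mask
        h = (h * m) & mask
        h ^= k

    # Fold in the remaining 1-3 tail bytes (mirrors the Java fall-through switch).
    tail = length & ~3
    rem = length % 4
    if rem == 3:
        h ^= data[tail + 2] << 16
    if rem >= 2:
        h ^= data[tail + 1] << 8
    if rem >= 1:
        h ^= data[tail]
        h = (h * m) & mask

    # Final avalanche.
    h ^= h >> 13
    h = (h * m) & mask
    h ^= h >> 15
    return h


def partition_for(key: bytes, num_partitions: int) -> int:
    # Kafka masks the hash to a non-negative value, then takes it
    # modulo the topic's partition count.
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions
```

Because the mapping is a pure function of the key bytes and partition count, all records with the same key land on the same partition, which is what preserves per-key ordering in event-driven designs.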

PREFERRED SKILLS:
- Experience with multi-region Kafka deployments and disaster recovery strategies
- Exposure to SageMaker (ML pipelines, model deployment)
- Experience integrating Kafka with Snowflake or other OLAP/analytical platforms
- Experience with container platforms (Kubernetes/EKS)
- Familiarity with monitoring tools (Prometheus, Grafana, Dynatrace, etc.)
- Scripting experience (Python, Bash)
- Experience with Lenses or similar Kafka management tools
- AWS certifications (Solutions Architect, DevOps Engineer)

KEY CHARACTERISTICS:
- Strong hands-on engineer, not just architect
- Ability to operate independently and take ownership of platform
- Comfortable working in fast-paced, evolving cloud environments
- Strong communication and collaboration skills

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 10113363
  • Position Id: 8937796