Senior Infrastructure Kafka Engineer

Phoenix, AZ, US • Posted 15 hours ago • Updated 15 hours ago
Contract Independent
Contract W2
On-site
Depends on Experience

Job Details

Skills

  • Kafka
  • SQL/NoSQL

Summary

Role: Senior Infrastructure Kafka Engineer

Location: Phoenix, AZ (on-site)

Employment type: contract-to-hire

Role Overview

The client is seeking a Senior Infrastructure Kafka Engineer to join the Enterprise Data Engineering team. This role is ideal for a seasoned engineer with deep experience in Apache Kafka / Confluent Kafka, messaging (MQ), SQL/NoSQL databases, and cloud infrastructure, who can lead operations and engineering for a large-scale, event-driven data platform.

You will lead Kafka platform operations and automation, integrate Kafka with core banking systems, and provide senior-level support across a broad technology stack to enable real-time data and analytics use cases across the bank.

Key Responsibilities

Platform & Infrastructure Engineering

  • Administer, configure, and troubleshoot Kafka clusters (on-prem and cloud), including broker/cluster configuration, partitioning, and performance tuning.
  • Design and implement scalable, highly available Kafka infrastructure, including disaster recovery and multi-environment strategies.
  • Integrate Kafka with upstream/downstream systems via Kafka Connect and other connectors (e.g., MQ, MongoDB, Oracle, SQL Server, PostgreSQL, MySQL).
  • Build and support real-time data pipelines using Kafka producers and streaming consumers (e.g., Spark Streaming, Kafka Streams).
  • Automate infrastructure provisioning and configuration across environments using Terraform and modern DevOps practices.
  • Deploy and manage Kafka components and clients in production and DR environments, ensuring resilience and recoverability.
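The per-key ordering that the partitioning work above protects can be sketched in a few lines. This is an illustrative simplification, not Kafka's actual implementation (Kafka's default partitioner uses murmur2 hashing; the key names and partition count here are hypothetical):

```python
# Illustrative sketch of keyed partition assignment. NOT Kafka's real
# partitioner: Kafka's default uses murmur2; CRC32 is used here for brevity.
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition deterministically.

    Messages with the same key always land in the same partition,
    which is what preserves per-key ordering within a Kafka topic.
    """
    return zlib.crc32(key) % num_partitions

# All events for one (hypothetical) account id go to one partition.
p1 = partition_for(b"account-42", 6)
p2 = partition_for(b"account-42", 6)
assert p1 == p2      # same key -> same partition, ordering preserved
assert 0 <= p1 < 6   # always a valid partition index
```

Because assignment is a pure function of the key, repartitioning a topic (changing `num_partitions`) remaps keys, which is one reason partition counts are chosen carefully up front.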

Operations, Observability & Support

  • Lead a small team of engineers/technicians in event-based monitoring, diagnosis, and remediation of infrastructure issues.
  • Implement and maintain comprehensive monitoring, logging, and alerting using tools such as Splunk, Datadog, Grafana, and related observability platforms.
  • Perform proactive health checks and capacity planning to identify issues before they impact service.
  • Serve as a primary point of contact for daily operations, major incidents, and escalations related to Kafka and related infrastructure.
  • Develop, maintain, and continuously improve runbooks and playbooks for incident response, maintenance, and common operational tasks.
  • Analyze and audit support tickets to identify patterns, reduce downtime, and drive problem management and root-cause fixes.
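One proactive health check of the kind described above is flagging under-replicated partitions before they impact service. The helper below is an illustrative sketch only: the input shape is hypothetical, and a real deployment would pull this data from the Kafka admin API or an observability pipeline (e.g., Datadog or Grafana):

```python
# Illustrative health-check helper: flag under-replicated partitions.
# The snapshot shape is hypothetical, not an actual Kafka admin API.

def under_replicated(partitions: dict[str, dict[str, int]]) -> list[str]:
    """Return names of partitions whose in-sync replica (ISR) count is
    below the configured replica count, i.e. candidates for an alert."""
    return [
        name
        for name, p in partitions.items()
        if p["isr"] < p["replicas"]
    ]

snapshot = {
    "payments-0": {"replicas": 3, "isr": 3},  # healthy
    "payments-1": {"replicas": 3, "isr": 2},  # one replica lagging
}
assert under_replicated(snapshot) == ["payments-1"]
```

In practice this check would feed an alerting rule rather than an assertion, with thresholds and paging behavior documented in the team's runbooks.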

Governance, Compliance & Collaboration

  • Ensure infrastructure and platform changes comply with internal standards, regulatory requirements, and security policies.
  • Collaborate with security, networking, application, and data engineering teams to design and operate secure, compliant, event-driven architectures.
  • Contribute to standards, best practices, and documentation for Kafka, messaging, and integration patterns across Enterprise Data Engineering.
  • Participate in and help drive agile ceremonies; influence product/technical direction for streaming and integration platforms.

Required Qualifications

  • 7+ years of experience in infrastructure engineering with a strong focus on:
    • Kafka administration (on-prem and cloud) and Kafka ecosystem (brokers, topics, consumer groups, replication, failover).
    • Messaging systems (e.g., MQ) and database integration (SQL and NoSQL).
  • Proven experience designing, deploying, and scaling Kafka clusters and connector infrastructure in production and DR environments.
  • Hands-on experience building real-time data pipelines using Kafka producers and streaming consumers (e.g., Spark Streaming).
  • Strong proficiency with at least one major cloud platform (AWS, Google Cloud Platform, or Azure) and event-driven architectures, including containerization and DevOps practices.
  • Experience with monitoring/observability tools such as Splunk, Datadog, Grafana.
  • Solid understanding of networking, operating systems (Linux/Windows), and core diagnostic tools.
  • Proficiency with source control (SVN, Git) and scripting/programming (e.g., PowerShell, Bash, Python, Perl).
  • Demonstrated ability to analyze complex issues, make sound decisions with limited information, and drive issues to resolution.
  • Strong communication, customer service, and collaboration skills; comfortable working with cross-functional technical teams.

Desired Qualifications

  • Prior experience with enterprise monitoring tools beyond those listed.
  • Financial services industry experience, ideally within a regulated banking environment.

Education

  • Bachelor's degree in Computer Science, Computer Engineering, Electronics Engineering, or equivalent professional experience.
  • Dice Id: 91126849
  • Position Id: 8943210