Kafka/DevOps Engineer

Overview

Hybrid
$50 - $60
Contract - W2
Contract - Independent
Contract - 12 Month(s)
25% Travel

Skills

Configuration Management
Apache Kafka
Cloud Computing
Collaboration
Computer Networking
Agile
Amazon EC2
Continuous Integration
Continuous Integration and Development
Amazon RDS
Amazon S3
Disaster Recovery
Documentation
Amazon Web Services
Ansible
Grafana
IT Service Management
IaaS
Continuous Delivery
Problem Management
ROOT
Remote Desktop Services
Requirements Analysis
Kubernetes
Linux
Management
Messaging
Network
Python
Data Engineering
DevOps
FOCUS
Firewall
Git
Java
Kibana
PaaS
Requirements Elicitation
SaaS
Scripting
System Administration
Systems Engineering
Shell Scripting
Software Development
Software Engineering
Streaming
Terraform
Testing
Virtual Private Cloud

Job Details

Looking for a Senior Kafka/DevOps Engineer.

Must be able to attend an onsite, face-to-face (F2F) interview in Owings Mills, Maryland.

Job Description

Initial Assignment Duration: 6 months
Work Location: Hybrid; onsite required Monday and Tuesday in Owings Mills, MD
Interview Process: 30-minute preliminary screening, followed by an in-person second round
Core Skills: Kafka platform and AWS Cloud; Git; Agile project management and process documentation; Python for scripting and Ansible for automation; Kafka data streaming

Kafka Engineering Job Description

Role Description

The successful candidate will be responsible for developing and managing infrastructure as code (IaC), software development, continuous integration, system administration, and Linux.
The candidate will work with Confluent Kafka, Confluent Cloud, Schema Registry, KStreams, and technologies such as Terraform and Kubernetes to develop and manage infrastructure code on the AWS platform.

Responsibilities

Support systems engineering lifecycle activities for the Kafka platform, including requirements gathering, design, testing, implementation, operations, and documentation.
Automate platform management processes using Ansible, Python, or other scripting tools and languages.
Troubleshoot incidents impacting the Kafka platform.
Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs.
Develop documentation materials.
Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems.
Monitor, troubleshoot, and optimize the performance and reliability of Kafka in AWS environments.

Experience

Ability to troubleshoot and diagnose complex issues (e.g., internal and external SaaS/PaaS, network flows).
Demonstrated experience supporting technical users and conducting requirements analysis.
Able to work independently with minimal guidance and oversight.
Experience with IT Service Management and familiarity with incident and problem management.
Highly skilled in identifying performance bottlenecks, spotting anomalous system behavior, and resolving the root cause of service issues.
Demonstrated ability to work effectively across teams and functions to influence the design, operations, and deployment of highly available software.
Knowledge of standard methodologies for security, performance, and disaster recovery.
Advanced understanding of agile practices such as CI/CD, application resiliency, and security.

Required Technical Expertise

Develop and maintain a deep understanding of Kafka and its components.
Strong knowledge of Kafka Connect, KSQL, and KStreams.
Implementation experience designing and building secure Kafka streaming/messaging platforms at enterprise scale, including integration with other data systems in hybrid, multi-cloud environments.
Experience working with Confluent Kafka, Confluent Cloud, Schema Registry, and KStreams, and with infrastructure as code (IaC) using tools such as Terraform.
Strong operational background running Kafka clusters at scale.
Knowledge of both physical/on-premises systems and public cloud infrastructure.
Strong understanding of Kafka broker, Connect, and topic tuning and architectures.
Strong understanding of Linux fundamentals as they relate to Kafka performance.
Background in both systems and software engineering.
Strong working knowledge of containers and Kubernetes clusters.
Proven experience as a DevOps Engineer with a focus on AWS.
Strong proficiency in AWS services such as EC2, IAM, S3, RDS, Lambda, EKS, and VPC, with working knowledge of networking: VPCs, Transit Gateways, firewalls, load balancers, etc.
Experience with monitoring and visualization tools such as Prometheus, Grafana, and Kibana.
Competent developing new solutions in one or more high-level languages such as Java or Python.
Competent with configuration management in code/IaC, including Ansible and Terraform.
Hands-on experience delivering complex software in an enterprise environment.
3+ years of Python and shell scripting experience.
3+ years of AWS DevOps experience.
Proficiency in distributed Linux environments.

Preferred Technical Experience

Certification in Confluent Kafka and/or Kubernetes is a plus.

About eGrove Systems Corporation