Overview
Remote
$60 - $65 per hour
Contract - W2
Contract - 12 Months
Skills
Kafka
Confluent Cloud
AWS
Terraform
Job Details
Role: Kafka Platform Engineer
Mode: Contract
Location: Remote
The Kafka Platform Engineer designs, implements, and supports scalable, secure Kafka-based messaging pipelines that power real-time communication between critical systems such as credit, loan-application, and fraud services. This role focuses on improving the resiliency, reliability, and operations of our Kafka platform in a highly regulated financial environment. The Kafka Platform Engineer partners closely with engineering and platform teams to support the migration from on-prem to AWS and ensure seamless integration across systems.
Top 3 Must-Haves (Hard and/or Soft Skills):
- Kafka & Confluent Cloud Expertise
  - Deep understanding of Kafka architecture and Confluent Cloud services.
  - Experience with Kafka Connect, Schema Registry, and stream processing (see the producer sketch after this list).
- AWS Infrastructure & Database Management
  - Hands-on experience with AWS services such as RDS, Aurora, EC2, IAM, and networking.
  - Ability to integrate Kafka with AWS-hosted databases and troubleshoot cloud-native issues.
- Terraform & Infrastructure Automation
  - Proficiency in Terraform for provisioning Kafka clusters and AWS resources and for managing infrastructure as code.
  - Familiarity with GitOps workflows and CI/CD pipelines.
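To give a flavor of the first must-have, here is a minimal sketch of a producer targeting Confluent Cloud with the confluent-kafka Python client. The bootstrap server, API key/secret, and topic name are placeholders for illustration, not details from this posting.

```python
# Minimal Confluent Cloud producer sketch (confluent-kafka Python client).
# Bootstrap server, credentials, and topic below are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",   # Confluent Cloud requires TLS + SASL
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",      # placeholder credentials
    "sasl.password": "<API_SECRET>",
    "acks": "all",                     # wait for full in-sync-replica ack for durability
})

def on_delivery(err, msg):
    # Invoked from poll()/flush() once per message; err is set on failure.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

producer.produce("loan-applications",  # placeholder topic
                 key="app-123",
                 value=b'{"status":"received"}',
                 on_delivery=on_delivery)
producer.flush(10)  # block up to 10s for outstanding deliveries
```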
Essential Job Functions
- Regularly check cloud services for performance issues and out-of-date components, and optimize as needed. Configure and manage user permissions and roles to ensure secure access to cloud resources. Develop and maintain backup strategies to ensure data integrity and availability. Maintain detailed records of system configurations and changes for compliance and troubleshooting. (25%)
- Write and maintain scripts for automated deployment processes. Ensure automated tests are part of the CI/CD pipeline to catch issues early. Track deployment progress and resolve any issues that arise during the process. Work closely with developers to ensure smooth integration of new code into production. Continuously improve deployment processes to reduce downtime and increase efficiency. (25%)
- Set up and configure tools to monitor cloud infrastructure and applications. Develop dashboards for real-time monitoring and set up alerts for critical issues (a lag-monitoring sketch follows this list). Regularly review monitoring data to identify trends and potential issues. Provide regular reports on system performance and health to stakeholders. Continuously improve monitoring solutions to cover new services and technologies. (20%)
- Organize meetings to gather requirements from various teams for cloud projects. Ensure alignment between development, network, and security teams on cloud initiatives. Mediate and resolve any conflicts or discrepancies in requirements or priorities. Keep detailed records of discussions and decisions made during meetings. Ensure that all agreed-upon actions are completed in a timely manner. (15%)
- Regularly review resource usage to identify areas for optimization. Predict future resource requirements based on current trends and business growth. Create plans for scaling resources up or down based on demand. Ensure that resources are allocated efficiently to avoid waste and reduce costs. Continuously review and adjust capacity plans to reflect changes in business needs or technology. (15%)
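As referenced in the monitoring function above, one way such a check might look: a sketch that computes per-partition consumer lag with the confluent-kafka client and publishes it as a custom CloudWatch metric via boto3. The broker address, group id, topic, and metric namespace are illustrative assumptions.

```python
# Sketch: compute consumer lag per partition and publish it to CloudWatch.
# Broker, group id, topic, and namespace are illustrative placeholders.
import boto3
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "<BROKER>",   # placeholder
    "group.id": "fraud-service",       # placeholder group
    "enable.auto.commit": False,
})
cloudwatch = boto3.client("cloudwatch")

topic = "loan-applications"            # placeholder topic
metadata = consumer.list_topics(topic, timeout=10)
partitions = [TopicPartition(topic, p) for p in metadata.topics[topic].partitions]

for tp, committed in zip(partitions, consumer.committed(partitions, timeout=10)):
    _, high = consumer.get_watermark_offsets(tp, timeout=10)  # returns (low, high)
    # If the group has no committed offset yet, treat the full log as lag.
    lag = high - committed.offset if committed.offset >= 0 else high
    cloudwatch.put_metric_data(
        Namespace="KafkaPlatform",     # custom namespace, illustrative
        MetricData=[{
            "MetricName": "ConsumerLag",
            "Dimensions": [{"Name": "Topic", "Value": tp.topic},
                           {"Name": "Partition", "Value": str(tp.partition)}],
            "Value": float(lag),
            "Unit": "Count",
        }],
    )
consumer.close()
```

A CloudWatch alarm on this custom metric could then drive the alerting described above.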
Minimum Qualifications
- Bachelor's degree in Information Technology, Computer Science, Engineering, or a related field, or equivalent relevant work experience.
- At least one platform-specific certification (AWS, Azure, Google Cloud Platform, DevSecOps, or Apache Kafka).
- 2+ years of relevant experience across platform engineering.
- 2+ years of experience with cloud services and an understanding of infrastructure-as-code (IaC) tools such as Terraform or AWS CloudFormation.
Preferred Qualifications
- 5+ years of cloud engineering experience, particularly in designing and implementing cloud platform solutions.
- 3+ years of experience with Apache Kafka in highly regulated, mission-critical environments (preferably finance or banking).
- Strong understanding of Kafka internals and distributed systems.
- Proficiency in Java, Scala, or Python for building Kafka producers, consumers, and stream processors.
- Experience with Kafka Connect, Schema Registry (Avro), and Kafka Streams.
- Hands-on experience with containerization (Docker, Kubernetes) and CI/CD pipelines.
- Familiarity with securing Kafka using Kerberos, SSL, and ACLs, and with integrating IAM systems (a client security configuration sketch follows this list).
- Solid understanding of financial transaction systems, messaging standards, and data privacy regulations (e.g., SOX, PCI-DSS, GDPR).
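To illustrate the security qualification above, a sketch of a consumer configured for a Kerberos- and TLS-secured cluster. The broker address, principal, keytab path, CA bundle path, and topic are placeholder assumptions; topic-level access would still be governed by broker-side ACLs granted to this principal.

```python
# Sketch: consumer for an SSL/SASL(GSSAPI)-secured Kafka cluster.
# All paths, hostnames, and the principal below are placeholders.
from confluent_kafka import Consumer

secure_consumer = Consumer({
    "bootstrap.servers": "broker.internal.example:9093",   # placeholder
    "group.id": "credit-decisioning",                      # placeholder group
    "security.protocol": "SASL_SSL",                       # TLS transport + SASL auth
    "sasl.mechanisms": "GSSAPI",                           # Kerberos
    "sasl.kerberos.service.name": "kafka",
    "sasl.kerberos.principal": "svc-kafka@EXAMPLE.COM",    # placeholder principal
    "sasl.kerberos.keytab": "/etc/security/keytabs/svc-kafka.keytab",
    "ssl.ca.location": "/etc/pki/tls/certs/ca-bundle.crt", # CA bundle for broker certs
})
secure_consumer.subscribe(["credit-events"])               # placeholder topic
```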
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.