Overview
On Site
Contract - W2
Contract - Long Term
Skills
Kafka Architecture
RHEL/Linux
Job Details
TECHNOGEN, Inc. has been a proven leader in providing full IT services, software development, and solutions for 15 years.
TECHNOGEN is a Small & Woman-Owned Minority Business with GSA Advantage Certification. We have offices in VA and MD and offshore development centers in India. We have successfully executed 100+ projects for clients ranging from small businesses and non-profits to Fortune 50 companies and federal, state, and local agencies.
Sr. Kafka Admin with Ansible
Onsite in Woodlawn, MD
Key Required Skills
Kafka Architecture, Ansible Automation, RHEL/Linux Administration, Scripting (Bash, Shell, Python), Availability Monitoring / Triage (Splunk, Dynatrace, Prometheus).
Position Description
Architect, design, develop, and implement a next-generation data streaming and event-based architecture and platform using software engineering best practices and the latest technologies:
Data Streaming, Event Driven Architecture, Event Processing Frameworks
DevOps (Jenkins, Red Hat OpenShift, Docker, SonarQube)
Infrastructure-as-Code and Configuration-as-Code (Ansible, Terraform / CloudFormation, Scripting)
Administer Kafka on Linux, including automating, installing, migrating, upgrading, deploying, troubleshooting, and configuring the platform.
Provide expertise in one or more of these areas: Kafka administration, event-driven architecture, automation, application integration, monitoring and alerting, security, business process management/business rules processing, CI/CD pipeline and containerization, or data ingestion/data modeling.
Investigate, repair, and actively ensure business continuity regardless of the impacted component: Kafka Platform, business logic, middleware, networking, CI/CD pipeline, or database (PL/SQL and Data Modeling); see the illustrative health-check sketch after this list.
Brief management, customers, teams, or vendors, using written or oral communication skills at the appropriate technical level for the audience
All other duties as assigned or directed
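For illustration only, the sketch below shows the kind of day-to-day triage automation this role involves: a short Python script (using the confluent-kafka client) that reports the brokers in cluster metadata and flags under-replicated partitions. The bootstrap address, timeout, and output format are assumptions, not project specifics.

# Minimal triage sketch, assuming a broker reachable at localhost:9092 and the
# confluent-kafka Python client; hostnames and thresholds are illustrative only.
from confluent_kafka.admin import AdminClient

def report_cluster_health(bootstrap_servers: str = "localhost:9092") -> None:
    """Print brokers known to the cluster and any under-replicated partitions."""
    admin = AdminClient({"bootstrap.servers": bootstrap_servers})
    metadata = admin.list_topics(timeout=10)  # raises if no broker responds in time

    print(f"Brokers in cluster metadata: {len(metadata.brokers)}")
    for broker in metadata.brokers.values():
        print(f"  id={broker.id} {broker.host}:{broker.port}")

    # A partition is under-replicated when its in-sync replica set is smaller
    # than its full replica set, a common first signal during an incident.
    for topic in metadata.topics.values():
        for partition in topic.partitions.values():
            if len(partition.isrs) < len(partition.replicas):
                print(f"UNDER-REPLICATED {topic.topic}[{partition.id}] "
                      f"isr={partition.isrs} replicas={partition.replicas}")

if __name__ == "__main__":
    report_cluster_health()

Running a check like this from a scheduler or monitoring hook is one simple way to surface replication problems before they become outages.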
Detailed Skills Requirements
FOUNDATION FOR SUCCESS (Basic Qualifications)
Bachelor's Degree in Computer Science, Mathematics, Engineering or a related field.
A Master's or Doctorate degree may substitute for required experience
8+ years of combined experience with Site Reliability Engineering, providing DevOps support, and/or RHEL administration for mission-critical platforms, ideally Kafka.
4+ years of combined experience with Kafka (Confluent Kafka, Apache Kafka, Amazon MSK)
4+ years of experience with Ansible automation.
FACTORS TO HELP YOU SHINE (Required Skills)
These skills will help you succeed in this position:
Strong experience with Ansible Automation and authoring playbooks and roles for installing, maintaining, or upgrading platforms
Solid experience using version control software such as Git/Bitbucket including peer reviewing Ansible playbooks
Hands-on experience administering the Kafka platform (Confluent Kafka, Apache Kafka, Amazon MSK) via Ansible playbooks or other automation (see the topic-provisioning sketch after this list).
Understanding of Kafka architecture, including partition strategy, replication, transactions, tiered storage, and disaster recovery strategies.
Strong experience in automating tasks with scripting languages like Bash, Shell, or Python
Solid foundation in Red Hat Enterprise Linux (RHEL) administration
Basic networking skills
Solid experience triaging and monitoring complex issues, outages, and incidents
Experience with integrating/maintaining various 3rd party tools like ZooKeeper, Flink, Pinot, Prometheus, and Grafana.
Experience with Platform-as-a-Service (PaaS) using Red Hat OpenShift/Kubernetes and Docker containers
Experience working on Agile projects and understanding Agile terminology.
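As a hedged example of the "other automation" mentioned above, the following Python sketch provisions Kafka topics idempotently with the confluent-kafka AdminClient; the topic names, partition counts, and replication factors are placeholders, not project values.

# Idempotent topic-provisioning sketch, assuming the confluent-kafka Python
# client; the desired-topic list below is an example only.
from confluent_kafka.admin import AdminClient, NewTopic

DESIRED_TOPICS = [
    NewTopic("orders", num_partitions=6, replication_factor=3),
    NewTopic("payments", num_partitions=3, replication_factor=3),
]

def ensure_topics(bootstrap_servers: str = "localhost:9092") -> None:
    """Create any missing topics; leave existing ones untouched."""
    admin = AdminClient({"bootstrap.servers": bootstrap_servers})
    existing = set(admin.list_topics(timeout=10).topics)

    missing = [t for t in DESIRED_TOPICS if t.topic not in existing]
    if not missing:
        print("All topics already present; nothing to do.")
        return

    # create_topics() returns one future per topic; result() raises on failure.
    for topic, future in admin.create_topics(missing).items():
        future.result()
        print(f"Created topic {topic}")

if __name__ == "__main__":
    ensure_topics()

In practice the same pattern is often wrapped in an Ansible playbook or role so that topic definitions live in version control and are peer reviewed before rollout.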
HOW TO STAND OUT FROM THE CROWD (Desired Skills)
Showcase your knowledge of modern development through the following experience or skills:
Confluent Certified Administrator for Apache Kafka (CCAAK) or Confluent Certified Developer for Apache Kafka (CCDAK) certification preferred
Practical experience with event-driven applications and at least one event processing framework, such as Kafka Streams, Apache Flink, or ksqlDB.
Understanding of Domain Driven Design (DDD) and experience applying DDD patterns in software development.
Experience working with Kafka connectors and/or supporting operation of the Kafka Connect API
Experience with Avro/JSON data serialization and schema governance with Confluent Schema Registry (see the serialization sketch after this list).
Experience with AWS cloud technologies or other cloud providers preferred; AWS cloud certifications are a plus.
Experience with Infrastructure-as-Code (CloudFormation / Terraform, Scripting)
Solid knowledge of relational databases (PostgreSQL, DB2, or Oracle), NoSQL databases (MongoDB, Cassandra, DynamoDB), SQL, and/or ORM technologies (JPA2, Hibernate, or Spring JPA)
Knowledge of the Social Security Administration (SSA)
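As a hedged illustration of Avro serialization with Confluent Schema Registry, the sketch below serializes one record with the confluent-kafka AvroSerializer (which registers the schema under the topic's value subject by default) and produces it; the registry URL, schema, topic, and record values are examples only.

# Avro + Schema Registry sketch, assuming the confluent-kafka Python client,
# a broker at localhost:9092, and a Schema Registry at localhost:8081.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# Example schema; real schemas would be governed through the registry.
ORDER_SCHEMA = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

def produce_order(topic: str = "orders") -> None:
    """Serialize one record with a registered Avro schema and produce it."""
    registry = SchemaRegistryClient({"url": "http://localhost:8081"})
    serializer = AvroSerializer(registry, ORDER_SCHEMA)
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    record = {"order_id": "A-1001", "amount": 42.50}
    payload = serializer(record, SerializationContext(topic, MessageField.VALUE))
    producer.produce(topic, value=payload, key="A-1001")
    producer.flush()

if __name__ == "__main__":
    produce_order()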
Education
Bachelor's Degree with 7+ years of experience
Best Regards,
Govinda rajulu M. | Sr. Talent Acquisition Specialist