Sr. DevOps Data Platform Engineer - Kafka (Hybrid)

Overview

Hybrid
Depends on Experience
Full Time
No Travel Required

Skills

DevOps
Data pipelines
automation
Kafka
Hive
Hadoop
Spark
Storm
Zookeeper
Confluent Platform
Confluent Cloud
Solr
Elasticsearch
Lucene API
Java
Python
XML
XML Schema
XSD
XSLT/XPath
JSON
Jira
Jenkins
Git

Job Details

Great opportunity to join a growing, AWARD-WINNING organization named to Forbes' America's Best Small Companies and recognized on Deloitte's "The Exceptional 100" list of top-performing U.S. companies. The organization continues to experience dynamic worldwide growth with a state-of-the-art, industry-leading business model and platform, plus a slate of benefits to ensure your physical and financial health!

This Dallas-based technology company is looking for a talented Sr. Data Platform Engineer to join its DevOps team supporting its data, search, and messaging platforms, and to partner with the product team to ensure data integrity across the enterprise.

RESPONSIBILITIES:

  • Build, deploy, manage, and monitor scalable, fault-tolerant, secure, and high-performance data platforms and data movement solutions.
  • Partner with agile data development teams, using DevOps methodologies to create efficient, automated CI/CD processes to promote, test, and deploy code.
  • Partner with the InfoSec team to ensure data is secure at rest and in flight.
  • Develop, test, deploy, and maintain efficient, reusable patterns for streaming and batch data ingestion pipelines.
  • Maintain key architecture and coding standards for all platforms.
  • Ensure data platforms operate efficiently and with maximum uptime.
  • Take advantage of the latest features and support for data pipelines.

BACKGROUND:

  • Sr. level experience with deploying, configuring, scaling, and troubleshooting data and search infrastructure.
  • Sr. level experience with Kafka and related big data technologies (Hive, Hadoop, Spark, Storm, Zookeeper)
  • Strong experience managing Kafka with Confluent Platform and Confluent Cloud
  • Expert level experience with alerting, monitoring, and auto-remediation in a large-scale distributed environment
  • Strong experience with search technologies (Solr, Elasticsearch, Lucene API, etc.)
  • Strong experience supporting stream processing solutions in big data environments using Kafka and Kinesis
  • Strong knowledge of messaging/event architecture concepts and the pub/sub pattern
  • Solid knowledge of Java, Python, XML, XML Schema, XSD, XSLT/XPath and JSON technologies
  • Experience building and deploying microservices on container platforms such as Pivotal Cloud Foundry, Docker, or Kubernetes
  • Solid experience designing and supporting Enterprise Redis Platforms
  • Experience with source control, bug tracking, and automated build tools: Jira, Jenkins, and Git
  • Experience with HTTP web servers and load balancers
  • Experience with Reporting and ETL platforms

RedRiver offers benefits including Major Medical, Dental, Vision, LTD, and 401k. RedRiver Systems is an Equal Opportunity Employer.