DevOps Engineer / Data Infrastructure Engineer

  • San Jose, CA

Overview

Hybrid
Depends on Experience
Contract - W2

Skills

Kubernetes
Helm
Spark services
Hadoop frameworks
Spark
Sqoop
AWS ECS
EKS
DevOps Engineer
Data Infrastructure Engineer

Job Details

Title: DevOps Engineer / Data Infrastructure Engineer
Location: Raleigh, NC and San Jose, CA
Hybrid (50% onsite / 50% remote)
Duration: 6 months initially, with possible extension to 12 months or beyond

Job Requirements

  • You will design and lead the team in important architectural decisions, provide technical leadership for the team(s) you are associated with, and participate in key technical decisions.
  • You will engage with customers on escalations and ensure continuous improvement in all areas.
  • You participate in technical discussions within the team and with other groups in the Business Units associated with the specified projects.
  • You design, develop, and maintain our real-time data processing and Data Lakehouse infrastructure.
  • You have experience writing data pipelines and data-processing layers in Python.
  • You develop and maintain Ansible playbooks for infrastructure configuration and management.
  • You develop and maintain Kubernetes manifests, Helm charts, and other deployment artifacts.
  • You have hands-on experience with Docker and containerization, including managing and pruning images in private registries.
  • You have hands-on experience with access control in Kubernetes clusters.
  • You have hands-on experience with Spark and maintaining Spark clusters.
  • You monitor and troubleshoot issues related to Kubernetes clusters and containerized applications.
  • You drive initiatives to containerize standalone apps and run them on Kubernetes.
  • You develop and maintain infrastructure as code (IaC) and collaborate with other teams to ensure consistent infrastructure management across the organization.
  • You use observability tools to do capacity management of our services and infrastructure resources.
  • You are responsible for guiding the development and testing activities of other engineers where the work involves several inter-dependencies.
  • Experience with AWS ECS and EKS is an added advantage.
  • Experience with Dremio is an added advantage.
  • Experience with Dynatrace or any tracing, infrastructure, or real-time monitoring tool is an added advantage.
  • Infrastructure knowledge of Kubernetes and Hadoop clusters, including capacity planning, scheduling, and their configurations.
  • Understanding and implementation of disaster recovery for distributed clusters including Hadoop, Dremio, and S3.
  • Experience with Kubernetes, Helm, and Spark services, including their migration, upgrade processes, and troubleshooting.
  • Understanding of Hadoop-ecosystem frameworks such as Spark and Sqoop.
  • Sync up with the NB team on a daily basis to coordinate and share the handover.
  • 8 to 10 years of relevant experience.

About Akshaya Inc