Senior DevOps Engineer - 3 Days Onsite Project

Infodyne Solutions
Job Details
Skills
- Senior DevOps Engineer
- AWS
- EC2
- ECS
- EKS
- Lambda
- S3
- RDS
- Route 53
- Azure
- GCP
- Jenkins
- GitLab
- Terraform
- CloudFormation
- Ansible
- Docker
- Kubernetes
- Helm
- Datadog
- Python
- HashiCorp Vault
- PostgreSQL
- MySQL
- DynamoDB
- CI/CD pipelines
Summary
Senior DevOps Engineer - Onsite project, hybrid (2-3 days onsite)
Location: Nashville, TN
Duration: 12 Months+
Rate: $50-60/Hr
We are seeking a talented and motivated DevOps Engineer to join our Technology & Digital team in Nashville, TN. In this role, you will design, implement, and maintain the CI/CD pipelines, cloud infrastructure, and automation frameworks that power the client's digital platforms, streaming integrations, and internal tooling. You will work closely with software engineers, data teams, and security to build resilient, scalable, and secure systems that support the global music ecosystem.
| Category | Tools & Technologies |
| --- | --- |
| Cloud | AWS (EC2, ECS, EKS, Lambda, S3, RDS, Route 53), Azure, Google Cloud Platform |
| CI/CD | GitHub Actions, Jenkins, GitLab CI, CircleCI |
| IaC | Terraform, CloudFormation, Ansible |
| Containers | Docker, Kubernetes, Helm |
| Observability | Datadog, Grafana, Prometheus, PagerDuty, ELK |
| Languages | Python, Bash, Go, YAML |
| Databases | PostgreSQL, MySQL, DynamoDB, Redis |
| Security | Vault (HashiCorp), IAM, SAST/DAST tooling |
Key Responsibilities
- Design, build, and maintain CI/CD pipelines using tools such as Jenkins, GitHub Actions, or GitLab CI to accelerate software delivery across the client's digital platforms.
- Manage and optimize cloud infrastructure on AWS and/or Azure, including compute, networking, storage, and serverless components.
- Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi to ensure repeatable and auditable environments.
- Monitor system health, performance, and availability using observability tools (Datadog, Grafana, PagerDuty, ELK stack); proactively respond to incidents.
- Collaborate with software engineering teams to containerize applications using Docker and orchestrate workloads with Kubernetes (EKS, AKS, or GKE).
- Partner with the security team to embed security practices across the SDLC, including secrets management, vulnerability scanning, and compliance automation.
- Maintain internal developer platforms, self-service portals, and shared tooling to improve engineering productivity across Nashville and global teams.
- Support database infrastructure (PostgreSQL, MySQL, DynamoDB, Redis) including backups, performance tuning, and failover strategies.
- Lead incident response, post-mortem processes, and continuous improvement initiatives to improve mean time to recovery (MTTR).
- Document architecture decisions, runbooks, and operational procedures to ensure knowledge sharing across the team.
Required Qualifications
- 7+ years of hands-on DevOps, Site Reliability Engineering, or Platform Engineering experience in a production environment.
- Proficiency with at least one major cloud provider (AWS preferred; Azure or Google Cloud Platform considered).
- Strong experience with CI/CD tools (GitHub Actions, Jenkins, CircleCI, or equivalent).
- Solid understanding of containerization and orchestration: Docker and Kubernetes.
- Experience writing IaC with Terraform or equivalent tooling.
- Scripting skills in Python, Bash, or Go for automation and tooling.
- Familiarity with monitoring and observability platforms (Datadog, Prometheus, Grafana, Splunk, or ELK).
- Understanding of networking fundamentals: DNS, TLS/SSL, load balancing, firewalls, and VPNs.
- Strong communication skills with the ability to collaborate across technical and non-technical stakeholders.
Preferred Qualifications
- Experience in the media, entertainment, or music industry, including streaming platform integrations (Spotify, Apple Music, Amazon Music, TIDAL).
- Familiarity with digital rights management (DRM) infrastructure or content delivery pipelines.
- AWS Certified DevOps Engineer, Solutions Architect, or equivalent certification.
- Experience with GitOps workflows using ArgoCD or Flux.
- Familiarity with data pipeline tooling: Apache Kafka, Airflow, or AWS Glue.
- Prior work in a multi-region, globally distributed infrastructure environment.
- Dice Id: 91092600
- Position Id: DVPE2
- Posted 23 hours ago
Company Info
About Infodyne Solutions
InfoDyne has ventured into niche, highly competitive market segments and proven its capability with consistency. We offer high-quality IT services to our clients - true value for money - and our solutions enable us to work with our clients efficiently.
Infodyne Solutions delivers top-level talent with the industry acumen and certifications to produce immediate and measurable results. Our analysts and engineers have deep, real-world experience across disciplines, making them adept at winning trust and fostering rapport with your internal staff. As a result, our Business Analytics Solutions bridge gaps between IT professionals and business stakeholders.
We help you to identify the most appropriate technologies for your needs. We have extensive hands-on experience with Cognos, Tableau, Spotfire and other Business Analytics tools.