Overview
This is a remote role that may only be hired in the following locations: NC, AZ, TX.
We are seeking an experienced DevOps Engineer to design, build, and maintain CI/CD pipelines, infrastructure automation, and deployment workflows supporting our data engineering platform. This role focuses on infrastructure as code, configuration management, cloud operations, and enabling data engineers to deploy reliably and rapidly across AWS and Azure environments.
Responsibilities
CI/CD Pipeline & Deployment Automation
- Design and implement robust CI/CD pipelines using Azure DevOps or GitLab; automate build, test, and deployment processes for data applications, dbt Cloud jobs, and infrastructure changes (see the trigger sketch after this list).
- Build deployment orchestration for multi-environment (dev, qa, uat, production) workflows with approval gates, rollback mechanisms, and artifact management.
- Implement GitOps practices for infrastructure and application deployments; maintain version control and audit trails for all changes.
- Optimize pipeline performance, reduce deployment times, and enable fast feedback loops for rapid iteration.
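For illustration, a minimal sketch of the dbt Cloud trigger referenced in the first bullet: a Python step that kicks off a job through dbt Cloud's v2 REST API and fails the pipeline if the run fails (the environment-variable names and 30-second polling interval are assumptions, not a prescribed setup):

```python
import os
import time

import requests

# Assumed environment variables, injected by the CI system in practice.
TOKEN = os.environ["DBT_CLOUD_TOKEN"]
ACCOUNT_ID = os.environ["DBT_CLOUD_ACCOUNT_ID"]
JOB_ID = os.environ["DBT_CLOUD_JOB_ID"]

BASE = f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}"
HEADERS = {"Authorization": f"Token {TOKEN}"}


def trigger_and_wait(cause: str = "CI deploy") -> None:
    # Kick off the dbt Cloud job run.
    resp = requests.post(f"{BASE}/jobs/{JOB_ID}/run/", headers=HEADERS,
                         json={"cause": cause})
    resp.raise_for_status()
    run_id = resp.json()["data"]["id"]

    # Poll until the run reaches a terminal state; fail the CI step on error.
    while True:
        run = requests.get(f"{BASE}/runs/{run_id}/", headers=HEADERS)
        run.raise_for_status()
        data = run.json()["data"]
        if data["is_complete"]:
            if not data["is_success"]:
                raise SystemExit(f"dbt Cloud run {run_id} failed")
            print(f"dbt Cloud run {run_id} succeeded")
            return
        time.sleep(30)


if __name__ == "__main__":
    trigger_and_wait()
```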
Infrastructure as Code (IaC) & Cloud Operations
- Design and manage Snowflake, AWS, and Azure infrastructure using Terraform; ensure modularity, reusability, and consistency across environments.
- Provision and manage cloud resources (compute, storage, networking, and managed services) across AWS and Azure.
- Implement tagging strategies and resource governance (see the audit sketch after this list); maintain Terraform state management and implement remote state backends.
- Support multi-cloud architecture patterns and ensure portability between AWS and Azure where applicable.
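As a concrete sketch of the tag-governance audit mentioned above, a Python script using boto3 to flag EC2 instances missing mandatory tags (the required-tag set and region are assumptions; a real policy would come from the governance standard):

```python
import boto3

REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # assumed tag policy


def untagged_instances(region: str = "us-east-1"):
    """Yield (instance_id, missing_tags) for instances violating the policy."""
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    yield instance["InstanceId"], sorted(missing)


if __name__ == "__main__":
    for instance_id, missing in untagged_instances():
        print(f"{instance_id} missing tags: {', '.join(missing)}")
```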
Configuration Management & Infrastructure Automation
- Deploy and manage Ansible playbooks for configuration management, patching, and infrastructure orchestration across cloud environments (a wrapper sketch follows this list).
- Utilize Puppet for infrastructure configuration, state management, and compliance enforcement; maintain Puppet modules and manifests for reproducible environments.
- Automate VM provisioning, OS hardening, and application stack deployment; reduce manual configuration and ensure environment consistency.
- Build automation for scaling, failover, and disaster recovery procedures.
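A minimal sketch of the Ansible automation above: a Python wrapper that applies a hardening playbook to each environment's inventory and stops on failure (the playbook path and inventory layout are hypothetical):

```python
import subprocess
import sys


def run_playbook(environment: str) -> None:
    """Apply the OS-hardening playbook against one environment's inventory."""
    cmd = [
        "ansible-playbook",
        "playbooks/os_hardening.yml",              # hypothetical playbook
        "-i", f"inventories/{environment}/hosts",  # hypothetical inventory layout
        "--diff",                                  # show what each task changed
    ]
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"Ansible failed for {environment} (exit {result.returncode})")


if __name__ == "__main__":
    for env in ("dev", "qa", "uat", "production"):
        run_playbook(env)
```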
Snowflake Cloud Operations & Integration
- Automate Snowflake provisioning, warehouse sizing, and cluster management via Terraform; integrate Snowflake with CI/CD pipelines.
- Implement Infrastructure as Code patterns for Snowflake roles, permissions, databases, and schema management.
- Build automated deployment workflows for dbt Cloud jobs and Snowflake objects; integrate version control with Snowflake changes.
- Monitor Snowflake resource utilization, costs, and performance; implement auto-suspend/auto-resume policies and scaling strategies.
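To make the auto-suspend bullet concrete, a sketch using the snowflake-connector-python package to enforce a suspend/resume policy across all visible warehouses (credentials via environment variables and the 60-second threshold are assumptions):

```python
import os

import snowflake.connector


def enforce_auto_suspend(threshold_seconds: int = 60) -> None:
    """Set AUTO_SUSPEND/AUTO_RESUME on every warehouse the role can see."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
    )
    try:
        cur = conn.cursor()
        cur.execute("SHOW WAREHOUSES")
        names = [row[0] for row in cur.fetchall()]  # first column is the name
        for name in names:
            cur.execute(
                f'ALTER WAREHOUSE "{name}" '
                f"SET AUTO_SUSPEND = {threshold_seconds} AUTO_RESUME = TRUE"
            )
            print(f"enforced auto-suspend on {name}")
    finally:
        conn.close()


if __name__ == "__main__":
    enforce_auto_suspend()
```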
Python Development & Tooling
- Develop Python scripts and tools for infrastructure automation, cloud operations, and deployment workflows.
- Build custom integrations between CI/CD systems, cloud platforms, and Snowflake; create monitoring and alerting automation.
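As one small example of the alerting automation this bullet describes, a sketch that posts a structured deployment-failure message to a chat webhook (the webhook variable and message fields are hypothetical):

```python
import os

import requests

WEBHOOK_URL = os.environ["ALERT_WEBHOOK_URL"]  # hypothetical incoming webhook


def alert_deployment_failure(pipeline: str, environment: str, detail: str) -> None:
    """Post a deployment-failure alert so on-call engineers see it immediately."""
    payload = {
        "text": (
            "Deployment failed\n"
            f"pipeline: {pipeline}\n"
            f"environment: {environment}\n"
            f"detail: {detail}"
        )
    }
    resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()


if __name__ == "__main__":
    alert_deployment_failure("data-platform-deploy", "uat", "terraform apply exited 1")
```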
Monitoring, Logging & Observability
- Integrate monitoring and logging solutions (Splunk, Dynatrace, CloudWatch, Azure Monitor) into CI/CD and infrastructure stacks.
- Build automated alerting for infrastructure health, deployment failures, and performance degradation.
- Implement centralized logging for applications, infrastructure, and cloud audit trails; maintain log retention and compliance requirements.
- Create dashboards and metrics for infrastructure utilization, deployment frequency, and change failure rates.
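A sketch of how the deployment-frequency and change-failure-rate metrics above might be fed: emitting one CloudWatch data point per deployment so dashboards can aggregate them (the namespace, metric names, and region are assumptions):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region


def record_deployment(environment: str, failed: bool) -> None:
    """Emit one data point per deployment; dashboards derive frequency
    (sum of DeploymentCount) and change failure rate (failed / total)."""
    dimensions = [{"Name": "Environment", "Value": environment}]
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Deployments",  # hypothetical namespace
        MetricData=[
            {"MetricName": "DeploymentCount", "Dimensions": dimensions,
             "Value": 1, "Unit": "Count"},
            {"MetricName": "FailedDeploymentCount", "Dimensions": dimensions,
             "Value": 1 if failed else 0, "Unit": "Count"},
        ],
    )


if __name__ == "__main__":
    record_deployment("production", failed=False)
```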
Data Pipeline & Application Deployment
- Support deployment of data processing jobs, Airflow DAGs, and dbt Cloud transformations through automated pipelines.
- Implement blue-green or canary deployment patterns for zero-downtime updates to data applications (see the traffic-shift sketch after this list).
- Build artifact management workflows (Docker images, Python packages, dbt artifacts); integrate with Artifactory or cloud registries.
- Collaborate with data engineers on deployment best practices and production readiness reviews.
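A sketch of the canary pattern referenced above: shifting listener traffic between blue and green target groups on an AWS Application Load Balancer with boto3 (the ARNs, region, and weight schedule are hypothetical, and a real rollout would check health metrics between steps):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # assumed region


def set_canary_weights(listener_arn: str, blue_tg: str, green_tg: str,
                       green_weight: int) -> None:
    """Route green_weight percent of traffic to the green target group."""
    elbv2.modify_listener(
        ListenerArn=listener_arn,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": blue_tg, "Weight": 100 - green_weight},
                    {"TargetGroupArn": green_tg, "Weight": green_weight},
                ],
            },
        }],
    )


if __name__ == "__main__":
    # Hypothetical ARNs, truncated for readability.
    LISTENER = "arn:aws:elasticloadbalancing:...:listener/..."
    BLUE = "arn:aws:elasticloadbalancing:...:targetgroup/blue/..."
    GREEN = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."
    for weight in (10, 50, 100):
        set_canary_weights(LISTENER, BLUE, GREEN, weight)
```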
Disaster Recovery & High Availability
- Design backup and disaster recovery strategies for data infrastructure; automate backup provisioning and testing (a snapshot sketch follows this list).
- Implement infrastructure redundancy and failover automation using AWS/Azure native services.
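As a sketch of the backup automation described above, a Python script that takes a timestamped RDS snapshot and waits for it to become available so a restore test can follow (the instance identifier and region are hypothetical):

```python
import datetime

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region


def snapshot_instance(db_instance_id: str) -> str:
    """Create a timestamped manual snapshot and return its identifier."""
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{db_instance_id}-dr-{stamp}"
    rds.create_db_snapshot(
        DBInstanceIdentifier=db_instance_id,
        DBSnapshotIdentifier=snapshot_id,
    )
    # Block until the snapshot is usable, so a restore test can run next.
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
    return snapshot_id


if __name__ == "__main__":
    print(snapshot_instance("data-platform-prod"))  # hypothetical instance name
```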
Documentation & Knowledge Sharing
- Maintain comprehensive documentation for infrastructure architecture, CI/CD workflows, and operational procedures.
- Create runbooks and troubleshooting guides for common issues; document infrastructure changes and design decisions.
- Establish DevOps best practices and standards; share knowledge through documentation, lunch-and-learns, and mentoring.
Qualifications
Bachelor's Degree and 4 years of experience in data engineering, big data technologies, or cloud platforms; OR High School Diploma or GED and 8 years of experience in data engineering, big data technologies, or cloud platforms.
Preferred:
- CI/CD tools: Azure DevOps Pipelines or GitLab CI/CD (hands-on pipeline development)
- Infrastructure as Code: Terraform (AWS and Azure providers) - production-grade experience
- Configuration Management: Ansible and/or Puppet - ability to write playbooks/manifests and manage infrastructure state
- Cloud platforms: AWS (EC2, S3, RDS, VPC, IAM, Lambda, Glue, Lake Formation) and Azure (VMs, App Services, Blob Storage, Cosmos DB, networking)
- Python programming: scripting, automation, API integration, and tooling development
- Snowflake: operational knowledge of warehouse management, cost optimization, and cloud integration
- Git/GitLab/GitHub: version control, branching strategies, and repository management
- Linux/Unix system administration and command-line proficiency
- Networking fundamentals: VPCs, subnets, security groups, DNS, load balancing
- Scripting languages: Bash, Python, or similar for automation
- 5+ years in DevOps, Platform Engineering, or Infrastructure Engineering
- 3+ years hands-on with Terraform and Infrastructure as Code
- 3+ years with CI/CD tools (Jenkins, GitLab CI, Azure DevOps, or similar)
- 2+ years with configuration management tools (Ansible, Puppet, or similar)
- 2+ years supporting cloud platforms (AWS and/or Azure in production)
- 1+ years with Python automation and scripting
- Experience supporting or integrating with Snowflake or modern data warehouses
Core Competencies:
- Strong automation mindset: identify and eliminate manual toil
- Systems thinking: understand full deployment pipelines and infrastructure dependencies
- Problem-solving and troubleshooting skills
- Clear communication with both technical and non-technical stakeholders
- Detail-oriented with focus on reliability and repeatability
- Comfortable with continuous learning of new tools and cloud services
- Collaborative approach to working with data engineering teams
- Ability to balance speed of delivery with stability and safety
Benefits are an integral part of total rewards, and First Citizens Bank is committed to providing a competitive, thoughtfully designed, and quality benefits program to meet the needs of our associates.