Platform Engineer / Airflow Migration Engineer (Azure Kubernetes)

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 9 Month(s)
No Travel Required
Able to Provide Sponsorship

Skills

Apache Airflow
Backup
Business Operations
Collaboration
Computer Networking
Continuous Delivery
Continuous Integration
Data Engineering
Data Migration
DevOps
Docker
Failover
Grafana
High Availability
Identity Management
Infrastructure Architecture
Kubernetes
Management
Microsoft Azure
Microsoft SQL Server
Migration
Optimization
Orchestration
Perl
Python
Recovery
Regulatory Compliance
Reverse Engineering
SQL
SaaS
Scheduling
Storage
Terraform
Workflow

Job Details

### **Job Title:** Airflow Migration & Platform Engineer (Azure Kubernetes)

### **Duration:** 6-12 months

---

### **Overview:**

We're seeking a hands-on engineer to lead the migration of a high-volume, homegrown Perl/SQL Server scheduling system (~30,000 jobs/day) to a robust, scalable Apache Airflow platform. The ideal candidate will architect and deploy Airflow on Azure Kubernetes Service (AKS), ensuring high availability, observability, and operational integrity without reliance on SaaS-based infrastructure.

---

### **Key Responsibilities:**

- **Platform Buildout:**
Design and deploy a production-grade Airflow environment on AKS, including DAG orchestration, logging, monitoring, and autoscaling.

- **Migration Strategy:**
Analyze existing Perl/SQL Server job logic and translate scheduling workflows into Airflow DAGs with modular, maintainable Python code.

- **Infrastructure Engineering:**
Build secure, scalable Kubernetes clusters on Azure, integrating with existing enterprise tooling (e.g., secrets management, CI/CD pipelines, logging frameworks).

- **Operational Enablement:**
Establish backup, recovery, and failover strategies for Airflow and supporting services. Ensure observability and alerting are in place for job health and platform uptime.

- **Stakeholder Collaboration:**
Work closely with DevOps, data engineering, and business operations teams to validate job logic, scheduling dependencies, and performance benchmarks.

---

### **Required Skills & Experience:**

- Proven experience deploying and scaling **Apache Airflow** in **Kubernetes** environments (preferably AKS).
- Strong proficiency in **Python**, **SQL**, and **Perl** (for legacy job analysis).
- Deep understanding of **Azure infrastructure**, including networking, storage, and identity management.
- Experience with **CI/CD pipelines**, **containerization (Docker)**, and **infrastructure-as-code (Terraform or Helm)**.
- Familiarity with **monitoring tools** (e.g., Prometheus, Grafana) and **log aggregation** (e.g., ELK, Fluentd).
- Ability to reverse-engineer legacy scheduling logic and translate it into modern orchestration frameworks.
- Strong preference for candidates who have built **non-SaaS**, self-hosted infrastructure for critical workloads.

---

### **Nice to Have:**

- Experience with **SQL Server** optimization and data migration.
- Familiarity with **enterprise compliance** and **governance frameworks**.
- Prior work in **high-throughput job orchestration** environments.
