Job Title: Senior Data Ops Engineer - AI Automation & Operational Data
Location: Atlanta, GA (local candidates only)
Duration: Long-Term Contract
Role Overview
We are seeking a Senior Data Engineer with strong hands-on coding expertise and critical thinking skills, focused on building automation pipelines and AI-driven workflows using semi-structured and unstructured data.
Important: This is NOT a traditional analytics-focused Data Engineering role. The ideal candidate must have deep experience in data discovery, operational data systems, automation, and governance, not just experience with clean, structured, analytics-ready datasets.
Key Responsibilities
- Design and develop automation pipelines and workflows using semi-structured and unstructured data
- Integrate and manage operational data from HR, Finance, Operations, and other enterprise systems
- Implement and maintain data governance, lineage tracking, and data cataloging
- Build and support production-grade ETL/ELT pipelines
- Ensure data quality, observability, instrumentation, and monitoring
- Collaborate with business stakeholders to convert ambiguous requirements into scalable data solutions
- Support enterprise-grade Power BI implementations, including governance and performance optimization
- Maintain and enhance Operational Data Stores (ODS) and enterprise data platforms
Required Skills & Experience
1. Data Engineering Fundamentals (Critical)
- Experience implementing and managing data catalogs
- Strong understanding of data lineage, governance, and metadata management
- Experience creating and maintaining data dictionaries
- Expertise in dimensional modeling
- Strong experience building ETL/ELT pipelines
- Knowledge of data quality frameworks and monitoring
- Experience designing and maintaining Operational Data Stores (ODS)
2. Hands-On, Code-First Engineering (Must Have)
- Strong hands-on programming experience in Python and SQL
- Ability to write production-quality, scalable, and maintainable code
- Strong knowledge of software engineering best practices and design patterns
- Ability to review, assess, and improve existing codebases
- Must be a true engineer who can develop core functionality independently, without depending on AI-generated code
3. Data Observability & Instrumentation (High Priority)
- Experience implementing data observability tools
- Strong knowledge of pipeline instrumentation
- Experience with monitoring, alerting, and incident response
- Experience continuously monitoring and improving data quality and reliability
4. Cloud & Platform Expertise (Azure Primary)
- Hands-on experience with the Azure Data Platform, including:
  - Azure SQL
  - Azure Synapse
  - Azure Data Factory
- Experience with Azure Document Intelligence (OCR / document processing)
- Experience integrating Azure AI Services
- Knowledge of Microsoft 365 administration and governance
- Experience managing Azure Active Directory (AAD)
5. Power BI Enterprise Implementation
- Experience implementing enterprise-grade Power BI solutions
- Strong understanding of data modeling, including dimensional modeling
- Performance tuning and optimization of Power BI datasets
- Experience implementing security, governance, and access control
- Ability to train and enable business users
Technical Stack
| Category | Tools / Technologies |
| --- | --- |
| Cloud Platform | Azure |
| Data Engineering | Databricks, Azure Data Factory, Azure Synapse |
| BI / Visualization | Power BI |
| AI / Automation | Azure AI Services, Azure Document Intelligence |
| Governance | Data Catalog, Lineage, Metadata Management Tools |
| Programming | Python, SQL |
| Collaboration | Microsoft 365, Azure Active Directory |
Ideal Candidate Profile
- Strong engineering mindset with hands-on coding expertise
- Experience working with operational, semi-structured, and unstructured data
- Deep understanding of data governance and enterprise data systems
- Ability to build production-grade, scalable data pipelines
- Experience working in AI-enabled and automation-focused environments
- Strong communication skills and ability to work with cross-functional teams