Overview
Full Time
Part Time
Accepts corp to corp applications
Contract - W2
Contract - Independent
Skills
Supervision
Database Administration
Decision-making
Management
Collaboration
Data Masking
Auditing
Metadata Management
Dimensional Modeling
Visualization
Reporting
Business Intelligence
Dashboard
Workflow
Performance Tuning
Computer Science
Microsoft SQL Server
Data Modeling
Data Governance
Data Security
Extract, Transform, Load (ETL)
ELT
Data Quality
Microsoft Power BI
Semantic Modeling
Unity Catalog
Real-time Data Processing
Streaming
Regulatory Compliance
DevOps
Terraform
Continuous Integration
Continuous Delivery
Soft Skills
Analytical Skill
Problem Solving
Conflict Resolution
Communication
SANS
Data Engineering
Analytics
Optimization
Microsoft Azure
Databricks
Apache Spark
Microsoft
Data Flow
SQL
Python
Pandas
PySpark
Technical Direction
Job Details
Role: Analytics Data Engineer - Azure/ETL (Must be local to Atlanta, GA)
Location: Atlanta, GA (Hybrid)
Short Description
Under general supervision, combines strong technical skills with knowledge of database administration. Works on one or more projects of high complexity.
Complete Description
The Analytics Engineer will contribute to our modern data estate strategy by developing scalable data solutions using Microsoft Fabric and Azure Databricks. This role will be instrumental in building resilient data pipelines, transforming raw data into curated datasets, and delivering analytics-ready models that support enterprise-level reporting and decision-making.
Work Location & Attendance Requirements:
Must be physically located in Georgia
On-site: Tuesday to Thursday, per manager's discretion
Mandatory in-person meetings:
o All Hands
o Enterprise Applications
o On-site meetings
o DECAL All Staff
Work arrangements subject to management's decision
Key Responsibilities:
Data Engineering & Pipeline Development
Build and maintain ETL/ELT pipelines using Azure Databricks and Microsoft Fabric.
Implement medallion architecture (Bronze, Silver, Gold layers) to support data lifecycle and quality.
Develop real-time and batch ingestion processes from IES Gateway and other source systems.
Ensure data quality, validation, and transformation logic is consistently applied.
Use Python, Spark, and SQL in Databricks and Fabric notebooks for data transformation.
Implement Delta Lake for data versioning, ACID transactions, and schema enforcement (see the sketch after this list).
Integrate Databricks with other Azure services such as OneLake, Azure Data Lake Storage Gen2 (ADLS Gen2), and Microsoft Fabric.
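To make the medallion and Delta Lake items above concrete, here is a minimal PySpark sketch of a Bronze-to-Silver hop with a basic validation rule and a Delta merge. The schema, table, and path names (bronze.ies_events, silver.ies_events, /landing/ies_gateway/) are hypothetical, and a notebook-style Spark session is assumed; this illustrates the pattern, not DECAL's actual pipelines.

```python
# Minimal Bronze -> Silver sketch with PySpark and Delta Lake.
# Schema, table, and path names (bronze/silver, ies_events) are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks/Fabric notebooks

# Bronze: land raw ingested records as-is (append-only).
raw = spark.read.json("/landing/ies_gateway/")  # hypothetical landing path
raw.write.format("delta").mode("append").saveAsTable("bronze.ies_events")

# Silver: validate, conform types, and deduplicate before merging.
silver_updates = (
    spark.table("bronze.ies_events")
    .where(F.col("event_id").isNotNull())                # basic data-quality rule
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # conform types
    .dropDuplicates(["event_id"])
)

if not spark.catalog.tableExists("silver.ies_events"):
    silver_updates.write.format("delta").saveAsTable("silver.ies_events")
else:
    # A Delta MERGE gives ACID upserts and keeps Silver idempotent across re-runs.
    (
        DeltaTable.forName(spark, "silver.ies_events").alias("t")
        .merge(silver_updates.alias("s"), "t.event_id = s.event_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
```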
Data Modeling & Curation
Collaborate with Domain Owners to design dimensional and real-time data models (a minimal example follows this list).
Create analytics-ready datasets for Power BI and other reporting tools.
Standardize field naming conventions and schema definitions across datasets.
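As a minimal illustration of an analytics-ready dimensional model built from the Silver layer, the sketch below derives one dimension and one fact table in PySpark. The source table and every column name (silver.ies_events, provider_id, event_type, amount, and so on) are hypothetical.

```python
# Minimal star-schema (Gold layer) sketch: one dimension and one fact table.
# The source table and every column name here are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
events = spark.table("silver.ies_events")

# Dimension: one row per provider, with a surrogate key for joins from the fact table.
dim_provider = (
    events.select("provider_id", "provider_name", "county_code")
          .dropDuplicates(["provider_id"])
          .withColumn("provider_key", F.xxhash64("provider_id"))
)
dim_provider.write.format("delta").mode("overwrite").saveAsTable("gold.dim_provider")

# Fact: one row per event, keyed to the dimension and to a date grain for reporting.
fact_events = (
    events.join(dim_provider.select("provider_id", "provider_key"), "provider_id")
          .withColumn("event_date_key", F.date_format("event_ts", "yyyyMMdd").cast("int"))
          .select("event_id", "provider_key", "event_date_key", "event_type", "amount")
)
fact_events.write.format("delta").mode("overwrite").saveAsTable("gold.fact_events")
```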
Data Governance & Security
Apply data classification and tagging based on DECAL's data governance framework.
Implement row-level security, data masking, and audit logging per compliance requirements (illustrated in the sketch after this list).
Support integration with Microsoft Purview for lineage and metadata management.
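The sketch below shows one illustrative way masking and row-level restriction could be applied before a curated dataset is published. The column names (ssn, county_code), the mask rule, and the allowed-county list are hypothetical; in practice these controls may instead be enforced through SQL Server, Purview, or catalog-level policies.

```python
# Illustrative masking and row-level restriction before publishing a curated dataset.
# Column names (ssn, county_code), the mask rule, and the county list are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
silver = spark.table("silver.ies_events")

masked = (
    silver
    # Keep only the last four characters of a sensitive identifier.
    .withColumn("ssn_masked", F.concat(F.lit("***-**-"), F.substring("ssn", -4, 4)))
    .drop("ssn")
    # Tag rows with a classification label that downstream security rules can use.
    .withColumn("classification", F.lit("confidential"))
)

# Row-level restriction: publish only the counties a given audience may see.
allowed_counties = ["057", "089"]  # hypothetical
(
    masked.where(F.col("county_code").isin(allowed_counties))
          .write.format("delta").mode("overwrite")
          .saveAsTable("gold.ies_events_restricted")
)
```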
Data Modeling:
Dimensional modeling
Real-time data modeling patterns
Reporting & Visualization Support
Partner with BI developers to ensure data models are optimized for Power BI.
Provide curated datasets that align with reporting requirements and business logic.
Create BI dashboards and train users.
DevOps & Automation
Support CI/CD pipelines for data workflows using Azure DevOps (an example quality gate follows this list).
Assist in monitoring, logging, and performance tuning of data jobs and clusters.
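As an example of the kind of gate a CI/CD run for a data workflow might execute, the sketch below is a small Python check that fails the run if a curated table violates basic expectations. The table name, key column, and rules are hypothetical; an Azure DevOps pipeline would simply invoke it as a script step.

```python
# Illustrative data-quality gate that a CI/CD stage (e.g., an Azure DevOps script step)
# could run after deploying a data workflow. Table name and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def check_silver_events(table: str = "silver.ies_events") -> None:
    df = spark.table(table)
    row_count = df.count()
    null_keys = df.where(F.col("event_id").isNull()).count()
    duplicates = row_count - df.dropDuplicates(["event_id"]).count()

    # Raising here makes the job exit non-zero, which fails the pipeline run.
    assert row_count > 0, f"{table} is empty"
    assert null_keys == 0, f"{table} has {null_keys} rows with a null event_id"
    assert duplicates == 0, f"{table} has {duplicates} duplicate event_id rows"

if __name__ == "__main__":
    check_silver_events()
```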
Required Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
3+ years of experience in data engineering or analytics engineering roles.
Advanced SQL: Proficiency in advanced SQL techniques for data transformation, querying, and optimization (a short example follows this list).
Hands-on experience with:
o Azure Databricks (Spark, Delta Lake)
o Microsoft Fabric (Dataflows, Pipelines, OneLake)
o SQL and Python (Pandas, PySpark)
o SQL Server 2019+
Familiarity with data modeling, data governance, and data security best practices.
Strong understanding of ETL/ELT processes, data quality, and schema design.
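For the advanced-SQL requirement, a short illustration: a window-function query run from a notebook that keeps only the latest record per key. The table and column names are hypothetical.

```python
# Illustrative "advanced SQL" run from a notebook: keep only the latest record per key
# using a window function. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

latest = spark.sql("""
    SELECT *
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY event_ts DESC) AS rn
        FROM bronze.ies_events
    ) AS ranked
    WHERE rn = 1
""")
latest.createOrReplaceTempView("latest_events")  # downstream queries can build on this view
```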
Preferred Skills:
Experience with Power BI datasets and semantic modeling.
Knowledge of Microsoft Purview, Unity Catalog, or similar governance tools.
Exposure to real-time data processing and streaming architectures.
Knowledge of federal/state compliance requirements for data handling.
Familiarity with Azure DevOps, Terraform, or CI/CD for data pipelines.
Certifications (preferred):
Microsoft Fabric Analytics Engineer.
Soft Skills:
Strong analytical and problem-solving abilities.
Excellent communication skills for technical and non-technical audiences.
Experience working with government stakeholders.
Required/Desired Skills
Skill | Required/Desired | Amount of Experience
---|---|---
3+ years of experience in data engineering or analytics engineering roles | Required | 3 years
Advanced SQL: proficiency in advanced SQL techniques for data transformation, querying, and optimization | Required | 3 years
Azure Databricks (Spark, Delta Lake) | Required | 3 years
Microsoft Fabric (Dataflows, Pipelines, OneLake) | Required | 3 years
SQL and Python (Pandas, PySpark) | Required | 3 years