Data Automation Engineer

Overview

Remote
Depends on Experience
Contract - W2

Skills

Azure

Job Details

Data Automation Engineer
Department – Information Technology
Location – Remote w/ quarterly travel to DC
6-month contract, with possible extension
***Public Trust Required***

About Our Client
Our client is a long-standing organization that supports large-scale technology, analytics, and operational initiatives across enterprise environments. They provide advanced data, cloud, and automation solutions to customers in highly regulated industries. With teams distributed across multiple U.S. locations, they specialize in delivering secure, scalable, and high-performance data platforms. Their mission centers around innovation, customer experience, and empowering organizations through technology-driven insights and automation.

Job Description
We are seeking a customer-experience-focused Data Automation Engineer to work with a team of subject matter experts and developers to design and implement innovative data automation solutions for Azure cloud-based data lake, SQL, and NoSQL platforms. As a Data Automation Engineer, you will translate business requirements into data engineering and AI-based solutions that support an enterprise-scale, Microsoft Azure-based data analytics and reporting platform. Our ideal candidate is mission-focused and delivery-oriented and applies critical thinking to create innovative functionality and solve technical issues.
In this role, you will work with a highly collaborative team that includes implementation specialists, engineering teams, and customer stakeholders. You will design, build, and optimize data pipelines that support enterprise-scale analytics and reporting workloads. You will ensure data quality, reliability, and performance while contributing to continuous improvement initiatives and Agile processes. Candidates who enjoy solving real operational challenges and innovating with new technologies will excel in this position.

Duties and Responsibilities
• Utilize Microsoft Azure services, including Azure Data Factory, Synapse Pipelines, Apache Spark notebooks, Python, SQL, and stored procedures, to develop high-performing data pipelines.
• Continuously improve and optimize the automation toolset for reliability, scalability, and adaptability.
• Research and implement cutting-edge AI/ML and GenAI tools to rapidly develop intelligent applications, scripts, and ETL pipelines that automate data processes, and proactively eliminate workflow bottlenecks.
• Work closely with implementation specialists, engineering teams, and customers to understand data-driven needs and build solutions that address real operational challenges.
• Work closely with client personnel and team members to understand data requirements and develop appropriate data solutions.
• Identify, create, and prepare the data required for advanced analytics, visualization, reporting, and AI/ML.
• Implement data migration, data integrity, data quality, metadata management, and data security functions to optimize data pipelines.
• Monitor and troubleshoot data related issues to maintain high availability and performance.
• Actively support Agile DevOps process, including Program Increment planning.
• Actively engage in continuous learning to increase relevant skills.
• Maintain strict versioning and configuration control to ensure integrity of data.


Required Experience/Skills
• BS degree in Computer Science or related field and 2+ years of experience in relevant field
• 2+ years of experience with more than one of the following scripting languages: SQL, T-SQL, MDX/DAX, Python, and PySpark.
• Experience designing, building, scheduling, and monitoring ETL/data engineering solutions using cloud services such as Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, and Integration Runtime.
• Experience working with Microsoft database and business intelligence tools, including SQL Server (stored procedures), SSIS, SSRS, SSAS (cubes), and Power BI.
• Experience with data automation using Azure/AWS CLI tools with Bash or PowerShell scripting.
• Familiarity with Azure DevOps Repos or GitHub and pipeline versioning/release management.
• Demonstrated experience in supporting production, testing, integration, and development environments.
• Open mindset and the ability to quickly adopt new technologies to solve customer problems.
• Experience in Agile projects, working with a multi-functional team.
• Must be detail-oriented and able to support multiple projects and tasks.
• Demonstrated commitment to continuous learning to increase relevant skills.
• U.S. citizenship and the ability to successfully obtain a government-issued Public Trust clearance.


Nice-to-Haves
• Experience and/or certifications in Generative AI development, Generative AI for Data Analytics, and solution delivery.
• Microsoft certifications such as Azure Fundamentals, Azure Data Engineer, Power BI, or Azure AI, or AWS Certified Data Engineer.
• Integration knowledge using enterprise/open-source ETL toolsets, REST APIs, and Docker.
• Experience with or exposure to performance tuning (indexing, execution plans, views), data profiling, and query analytics.
• Knowledge of security compliance standards, including data encryption, cloud virtual networks, routing, firewalls, log analytics, and monitoring.
• Basic knowledge of ARM or Bicep templates for automation and familiarity with role-based access control (RBAC).
• Data lineage and impact analysis using tools like Purview, Synapse pipeline tracing, etc.


Education
BS degree in Computer Science or related field

Pay & Benefits Summary
Up to $47/hr W2

Data Automation | Azure Data Factory | Synapse | Python | ETL | Data Engineer | Power BI | GenAI

About Catapult Solutions Group