JOB TITLE: DATA AUTOMATION ENGINEER
JOB LOCATION: WASHINGTON, DC (REMOTE)
WAGE RANGE*: $45.00-48.00 PER HOUR
JOB NUMBER: SAIJP00038724
REQUIRED EXPERIENCE:
BS degree in Computer Science or a related field and 2+ years of experience in a relevant field
2+ years of experience with more than one of the following scripting languages: SQL, T-SQL, MDX/DAX, Python, and
PySpark.
Experience designing, building, scheduling, and monitoring ETL/data engineering solutions using various cloud
services such as Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, and Integration Runtime.
Experience working with Microsoft database and business intelligence tools, including SQL Server (stored
procedures), SSIS, SSRS, SSAS (cubes), and Power BI.
Experience with data automation using Azure/AWS CLI tools with Bash or PowerShell scripting.
Familiarity with Azure DevOps Repos or GitHub and pipeline versioning/release management.
Demonstrated experience in supporting production, testing, integration, and development environments.
Open mindset and the ability to quickly adopt new technologies to solve customer problems.
Experience on Agile projects, working with a multi-functional team.
Must be detail-oriented and able to support multiple projects and tasks.
Demonstrate continuous learning to increase relevant skills.
U.S. citizenship and the ability to successfully obtain a government-issued Public Trust clearance.
Not required, but additional education, certifications, and/or experience are a plus:
Experience and/or certifications in Generative AI development, Generative AI for Data Analytics, and solution
delivery.
JOB DESCRIPTION
We are seeking a Data Automation Engineer to work with a team of subject matter experts and developers to design
and implement innovative data automation solutions for an Azure cloud-based data lake, SQL, and NoSQL platform. As a
Data Automation Engineer, you will translate business requirements into data engineering and AI-based solutions to
support an enterprise-scale Microsoft Azure data analytics and reporting platform. Our ideal candidate is
mission-focused and delivery-oriented and applies critical thinking to create innovative functions and solve technical
issues.
Utilize Microsoft Azure services, including Azure Data Factory, Synapse Pipelines, Apache Spark Notebooks,
Python, SQL, and stored procedures, to develop high-performing data pipelines.
Continuously improve and optimize the automation toolset for reliability, scalability, and adaptability.
Research and implement cutting-edge AI/ML and GenAI tools to rapidly develop intelligent applications, scripts,
and ETL pipelines that automate data processes and proactively eliminate workflow bottlenecks.
Work closely with implementation specialists, engineering teams, and customers to understand data-driven
needs and build solutions that address real operational challenges.
Work closely with client personnel and team members to understand data requirements and develop
appropriate data solutions.
Identify, create, and prepare data required for advanced analytics, visualization, reporting, and AI/ML.
Implement data migration, data integrity, data quality, metadata management, and data security functions to
optimize data pipelines.
Monitor and troubleshoot data-related issues to maintain high availability and performance.
Assist in ETL performance testing for data pipelines by executing test runs and validating data load performance
against expected benchmarks.
Support baseline performance analysis by comparing new pipeline performance with legacy system metrics
(e.g., load time, throughput, latency).
Help monitor pipeline execution metrics such as run time, data volume processed, and resource utilization to
identify bottlenecks.
Participate in performance test execution for Azure-based pipelines (ADF, Synapse, Databricks) in non-
production environments.
Assist in identifying performance issues and contribute to tuning efforts (e.g., query optimization, partitioning,
indexing basics).
Validate data consistency and completeness after performance test runs to ensure no data loss during high-
volume processing.
Collaborate with DevOps and infrastructure teams to understand how compute, memory, and scaling impact
pipeline performance.
Follow defined performance testing processes, checklists, and guidelines established by senior team members.
Document test results, issues, and observations clearly for team review and tracking.
Actively support Agile DevOps process, including Program Increment planning.
Actively engage in continuous learning to increase relevant skills.
Maintain strict versioning and configuration control to ensure integrity of data.
Equal opportunity employer as to all protected groups, including protected veterans and individuals with disabilities.
* While an hourly range is posted for this position, an eventual hourly rate is determined by a comprehensive salary analysis which considers multiple factors including but not limited to: job-related knowledge, skills and qualifications, education and experience as compared to others in the organization doing substantially similar work, if applicable, and market and business considerations. Benefits offered include medical, dental and vision benefits; dependent care flexible spending account; 401(k) plan; voluntary life/short term disability/whole life/term life/accident and critical illness coverage; employee assistance program; sick leave in accordance with regulation. Benefits may be subject to generally applicable eligibility, waiting period, contribution, and other requirements and conditions. Benefits offered are in accordance with applicable federal, state, and local laws and subject to change at TCM's discretion.
- Dice Id: cxtcml
- Position Id: 26-00661