Data Extract, Transform, Load (ETL) Systems Engineer

  • CHANTILLY, VA

Overview

On Site
Full Time

Skills

API
Data Security
Flat File
Data Cleansing
ROOT
Documentation
Collaboration
Data Quality
Security Clearance
Science
Mathematics
Extraction
Application Development
Interfaces
As-is Process
Tableau
Python
Java
Scala
Data Architecture
Scripting
Extract
Transform
Load
Data Integration
Apache Kafka
Apache Flume
Google Cloud
Google Cloud Platform
Data Flow
RESTful
Web Services
Relational Databases
SQL
MySQL
PostgreSQL
Oracle
NoSQL
Database
MongoDB
Apache Cassandra
Amazon DynamoDB
Data Warehouse
Amazon Redshift
Snowflake Schema
Agile Management
Software Configuration
Amazon Web Services
Data Analysis
Cloudera
Microsoft Azure
Data Modeling
Data Governance
Artificial Intelligence
Machine Learning (ML)
Information Technology
Systems Engineering
FOCUS

Job Details

Job ID: 2506041

Location: CHANTILLY, VA, US

Date Posted: 2025-05-23

Category: Engineering and Sciences

Subcategory: Systems Engineer

Schedule: Full-time

Shift: Day Job

Travel: Yes, 10% of the Time

Minimum Clearance Required: TS/SCI with Poly

Clearance Level Must Be Able to Obtain: None

Potential for Remote Work: No

Description

JOB DESCRIPTION:

SAIC is seeking a Data Extract, Transform, Load (ETL) Systems Engineer in Chantilly, VA to focus on Application Programming Interface (API) design, implementation, and maintenance to extract data from various sources, transform it, and visualize it for analysis and use by senior decision makers. This role also involves advising on data infrastructure, data quality, and data security while employing a variety of technical tools and solutions.

The successful applicant will exhibit flexibility in task execution and provide technical expertise. As a SETA advisor, the candidate will be required to demonstrate value-added judgment in advising the government on program activities. The successful candidate will produce recommendations and deliverables in a thorough, practicable, and consistent manner congruent with the organization's objectives. US citizenship is required. An active Top Secret/Sensitive Compartmented Information (TS/SCI) clearance with polygraph is required to start and must be maintained.

Responsibilities:

Design, develop, and implement ETL processes to extract, transform, and load data from various sources (e.g., databases, flat files, APIs).

Build, maintain, and optimize data pipelines to ensure efficient and reliable data flow.

Ensure data quality and integrity throughout the ETL process, addressing data cleansing, validation, and security concerns.

Design and maintain data warehouse schema and data models, ensuring data consistency and accuracy.

Provide technical expertise and guidance on data infrastructure, data modeling, and data governance practices.

Monitor and optimize ETL pipeline performance, addressing bottlenecks and improving execution times.

Troubleshoot ETL issues, identify root causes, and implement solutions.

Create and maintain documentation for ETL processes, data mappings, and data models.

Collaborate with cross-functional teams (e.g., data analysts, business users) to understand data requirements and ensure data quality.
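As context for the ETL responsibilities listed above, the following is a minimal, illustrative Python sketch of an extract-transform-load pipeline: it reads a flat file, applies basic cleansing and validation (a simple data-quality gate), and loads the result into a relational target. File names, table names, and column names are hypothetical, and SQLite stands in for the relational databases and warehouses named in this posting.

```python
"""Minimal ETL sketch: flat-file extract -> cleanse/validate -> relational load."""
import csv
import sqlite3
from pathlib import Path


def extract(csv_path: Path) -> list[dict]:
    # Extract: read raw rows from a flat-file source.
    with csv_path.open(newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[tuple]:
    # Transform: cleanse whitespace, validate the numeric field,
    # and drop rows that fail validation.
    clean = []
    for row in rows:
        name = (row.get("name") or "").strip()
        amount_raw = (row.get("amount") or "").strip()
        try:
            amount = float(amount_raw)
        except ValueError:
            continue  # reject rows with non-numeric amounts
        if name:
            clean.append((name, amount))
    return clean


def load(records: list[tuple], db_path: str = "warehouse.db") -> None:
    # Load: insert validated records into a relational target
    # (SQLite stands in for PostgreSQL/Redshift in this sketch).
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS facts (name TEXT, amount REAL)")
        conn.executemany("INSERT INTO facts VALUES (?, ?)", records)


if __name__ == "__main__":
    load(transform(extract(Path("source.csv"))))
```

In practice, the extract step would also pull from APIs and databases, and the load step would target the warehouses and integration platforms named in the qualifications below; this sketch only illustrates the shape of the pipeline.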

Qualifications

Required Skills and Qualifications:
  • Active TS/SCI clearance with polygraph.
  • Bachelor's degree in a science, technology, engineering, or mathematics (STEM) field and three (3) or more years of relevant experience; or a master's degree in a STEM field and one (1) or more years of experience.
  • Demonstrated proficiency in designing and implementing data pipelines that automate the Extraction, Transformation, and Loading (ETL) process.
  • Experience with Application Programming Interfaces (APIs), including implementing existing APIs as-is or reworking them.
  • Experience with Tableau extract, transform, and load functions.
  • A strong background in Python, Java, Scala, and/or SQL.
  • Significant AWS service knowledge.
  • Comprehensive understanding of data architecture and best practices.
  • Demonstrated ability to write scripts for data ETL.
  • Experience with data integration tools and platforms, such as Apache Kafka, Apache Flume, AWS Glue, Azure Data Factory, or Google Cloud Dataflow.
  • Proficiency in working with RESTful APIs and web services.
  • Proficiency with relational databases (e.g., MS SQL, MySQL, PostgreSQL, Oracle).
  • Experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
  • Knowledge of data warehousing concepts and platforms (e.g., Amazon Redshift, Google BigQuery, Snowflake).
  • Experience documenting project requirements and schedules using agile project management techniques.
  • Experience with software configuration management techniques.
  • Experience with Retrieval Augmented Generation (RAG).
  • AWS Solutions Architect, Data Analytics, or Developer Associate certification preferred.
  • Cloudera Data Platform, Azure Data Engineer, or Google Data Engineer certification(s) preferred.
  • Familiarity with data modeling, data cataloging, and/or data governance is desired.
  • Familiarity with the application and use of Artificial Intelligence (AI) and Machine Learning (ML) services is desired.




About SAIC