Overview
On Site
USD 70.00 - 80.00 per hour
Contract - W2
Skills
Generative Artificial Intelligence (AI)
Extract, Transform, Load (ETL)
ELT
Management
Data Quality
Data Marts
Reporting
Analytics
Scalability
Collaboration
Functional Requirements
Apache Airflow
Documentation
Onboarding
Data Engineering
Agile
Scrum
Migration
Legacy Systems
Amazon S3
MySQL
PostgreSQL
Electronic Health Record (EHR)
Data Processing
Apache Spark
SQL
Java
Python
PySpark
Streaming
Apache Kafka
Amazon Kinesis
Apache Flink
Orchestration
Step Functions
Data Warehouse
Amazon Redshift
Snowflake
DevOps
Terraform
Docker
Kubernetes
Encryption
Splunk
Cloud Computing
Grafana
Inspection
Business Intelligence
Visualization
Microsoft Power BI
Amazon Web Services
Data Analysis
Machine Learning (ML)
Linux
Operating Systems
Production Support
Soft Skills
Conflict Resolution
Problem Solving
Debugging
Mentorship
Coaching
MEAN Stack
Customer Service
Training And Development
SAP BASIS
Job Details
Software Guidance & Assistance, Inc. (SGA), is searching for a Senior Data Engineer for a Contract assignment with one of our premier Regulatory clients in Rockville, MD.
This position is hybrid (2 days/week onsite)
Role Objective
Design, build, and maintain scalable, cloud-native data infrastructure on AWS to support analytics. The ideal candidate will have strong experience with AWS data services, pipeline development, and modern data engineering best practices. Experience with Generative AI is a plus and highly valued.
The Senior Data Engineer must be able to:
Design, implement, and maintain scalable ETL/ELT pipelines using AWS-native tools (Glue, Lambda, Step Functions).
Build and manage data lakes and data warehouses (S3, Redshift, Athena) to support structured and semi-structured data.
Develop and optimize batch and streaming data pipelines using tools like Apache Spark, Kafka, Kinesis, or Flink.
Implement data cataloging, lineage, and governance using Glue Catalog and Lake Formation.
Ensure data quality, integrity, and reliability through validation checks, alerts, and monitoring (CloudWatch, SNS).
Create and maintain curated data models and data marts to support reporting and machine learning.
Enable self-service analytics by integrating with BI tools (e.g., QuickSight).
Optimize cost, performance, and scalability of all data pipelines and infrastructure components.
Collaborate with cross-functional stakeholders, including ML engineers, analysts, and product teams, to deliver data solutions.
Non-Functional Requirements
Ensure solutions are secure, compliant, and follow least-privilege IAM practices.
Build reusable components to enable modular and maintainable pipeline development.
Use infrastructure-as-code tools (e.g., Terraform or CloudFormation) for repeatable deployments.
Apply working knowledge of orchestration frameworks such as Apache Airflow.
Deliver systems with 99.9% uptime and automated monitoring for failures and performance degradation.
Maintain documentation and onboarding guides for future engineers.
Experience Requirements
7+ years of experience in data engineering or related fields.
3+ years building data platforms on AWS (S3, Glue, Redshift, EMR, Lambda, etc.).
Demonstrated ability to lead technical projects and mentor junior engineers.
Experience working in Agile/Scrum development environments.
Prior involvement in migrating legacy systems to AWS is a plus.
Technical Skill Requirements (80%)
Cloud Platform: AWS (S3, Glue, Aurora (MySQL/PostgreSQL), Lambda, EMR, Athena, IAM, CloudWatch)
Data Processing: Apache Spark, AWS Glue, SQL, Java/Python (PySpark)
Streaming Data: Kafka, AWS Kinesis, Flink (Preferred)
Orchestration: Airflow, Step Functions, Lambda Triggers
Data Warehousing: Redshift, Snowflake (optional), BigQuery (optional)
DevOps/Automation: Terraform/CloudFormation, Docker, Kubernetes
Security & Governance: Lake Formation, KMS, Encryption, Fine-grained IAM
Logging, Tracing & Debugging: Splunk, CloudWatch, Grafana, code inspection, prompting knowledge with Amazon Q
BI/Visualization: QuickSight and/or Power BI
Preferred Qualifications
AWS Certified Data Analytics - Specialty (optional)
Experience with dbt for transformation
Exposure to data mesh or lakehouse architectures
Experience with ML model data pipelines (feature stores, model inputs)
Experience with Linux operating systems.
Experience with production support.
Soft Skills
Strong problem-solving and debugging skills
Effective communicator and collaborator across teams
Ownership mindset with ability to lead cross-team efforts
Mentoring and coaching experience
SGA is a technology and resource solutions provider driven to stand out. We are a women-owned business. Our mission: to solve big IT problems with a more personal, boutique approach. Each year, we match consultants like you to more than 1,000 engagements. When we say let's work better together, we mean it. You'll join a diverse team built on these core values: customer service, employee development, and quality and integrity in everything we do. Be yourself, love what you do and find your passion at work. Please find us at .
SGA is an Equal Opportunity Employer and does not discriminate on the basis of Race, Color, Sex, Sexual Orientation, Gender Identity, Religion, National Origin, Disability, Veteran Status, Age, Marital Status, Pregnancy, Genetic Information, or Other Legally Protected Status. We are committed to providing access, equal opportunity, and reasonable accommodation for individuals with disabilities in employment, and our services, programs, and activities. Please visit our company to request an accommodation or assistance regarding our policy.