Data Engineer

Overview

Hybrid
Depends on Experience
Contract - W2
Contract - 6 Month(s)
No Travel Required

Skills

Amazon Redshift
Amazon S3
Amazon Web Services
Business Intelligence
Data Lake
Continuous Integration
PySpark
Python
Scripting
Extract, Transform, Load (ETL)
Analytics
JavaScript
JIRA
TypeScript
Data Quality
Glue
IaC (Infrastructure as Code)

Job Details

Job Title: Sr. Data Engineer
Duration: 6 Months
Location: Bellevue, WA (Hybrid, 3 days a week)

Job Description

Day to Day:
This person would be supporting the Operations and Analytics team within the overarching Production Analytics organization. Production Analytics covers all manufacturing outside of pre- and post-launch. Operations and Analytics focuses specifically on gathering insights from the team's in-house infrastructure for all workstreams (product, transport, launch team, HR, etc.). The team is made up of Data Engineers, Business Intelligence Engineers, and a few Software Developers who support code migrations.

What does this person's day look like?

  • This person should understand how different data systems work: how to take data from the different workstreams, and how to create and execute ETL pipelines that deploy the data through CDK automation.
  • The systems architecture has already been created; this person would need to be able to read it, understand what needs to be done, deep dive, and deliver results.


Must Have:

  • The first three competencies (Core AWS Skills, Infrastructure as Code, and Programming) are critical.
  • Core AWS Skills: Advanced Redshift cluster management, query optimization, and security. AWS Glue ETL development with PySpark/Python transformations. Lambda serverless processing and S3 data lake architecture with lifecycle policies.
  • Infrastructure as Code: Expert AWS CDK with TypeScript (a typed superset of JavaScript) for multi-stage deployments (alpha, beta, gamma, prod); see the deployment sketch after this list. Complex stack orchestration, custom resources, and environment-specific configurations.
  • Programming: Python for Glue scripts and Lambda functions, advanced SQL for Redshift optimization, TypeScript for infrastructure code. Data quality frameworks and incremental processing patterns.
  • Security & Governance: Lake Formation permissions, IAM roles, KMS encryption, and VPC networking. Tag-based access control and audit logging for government compliance.
  • Operations: EventBridge scheduling for 100+ daily data pipelines (see the scheduling sketch after this list), CloudWatch monitoring, CI/CD automation, and disaster recovery planning. Performance tuning for large-scale data processing.
  • Domain Knowledge: Manufacturing execution system (MES) integration, third-party APIs (Smartsheet, JIRA), and financial reporting. Cross-region replication and scalability planning.
  • Soft Skills: Technical documentation, stakeholder communication, and Agile methodologies for production-grade satellite manufacturing data operations.
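
As a rough illustration of the multi-stage deployment pattern called out under Infrastructure as Code, the TypeScript sketch below instantiates one CDK stack per stage with environment-specific settings. The stack name, config fields, and node counts are hypothetical placeholders, not details from the actual project.

    import { App, Stack, StackProps } from 'aws-cdk-lib';
    import { Construct } from 'constructs';

    // Hypothetical per-stage settings; field names are illustrative only.
    interface StageConfig {
      stageName: string;
      redshiftNodeCount: number;
    }

    const stageConfigs: StageConfig[] = [
      { stageName: 'alpha', redshiftNodeCount: 1 },
      { stageName: 'beta', redshiftNodeCount: 2 },
      { stageName: 'gamma', redshiftNodeCount: 2 },
      { stageName: 'prod', redshiftNodeCount: 4 },
    ];

    class AnalyticsPipelineStack extends Stack {
      constructor(scope: Construct, id: string, config: StageConfig, props?: StackProps) {
        super(scope, id, props);
        // Stage-specific resources (Glue jobs, Redshift, S3 buckets, IAM roles)
        // would be declared here, parameterized by `config`.
      }
    }

    // One stack per stage, so alpha/beta/gamma/prod can be deployed independently.
    const app = new App();
    for (const config of stageConfigs) {
      new AnalyticsPipelineStack(app, `AnalyticsPipeline-${config.stageName}`, config);
    }
    app.synth();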

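Similarly, a minimal sketch of the EventBridge scheduling pattern mentioned under Operations, assuming a placeholder Lambda trigger rather than the team's real pipeline jobs:

    import { Duration, Stack, StackProps } from 'aws-cdk-lib';
    import { Construct } from 'constructs';
    import * as events from 'aws-cdk-lib/aws-events';
    import * as targets from 'aws-cdk-lib/aws-events-targets';
    import * as lambda from 'aws-cdk-lib/aws-lambda';

    export class DailyPipelineStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        // Placeholder worker; in practice this might start a Glue job or Step Function.
        const trigger = new lambda.Function(this, 'PipelineTrigger', {
          runtime: lambda.Runtime.PYTHON_3_11,
          handler: 'index.handler',
          timeout: Duration.minutes(1),
          code: lambda.Code.fromInline(
            'def handler(event, context):\n    print("starting pipeline", event)\n'
          ),
        });

        // One scheduled rule per pipeline; a real deployment would repeat this over
        // a catalog of pipeline definitions to cover the 100+ daily jobs.
        new events.Rule(this, 'DailyPipelineSchedule', {
          schedule: events.Schedule.cron({ minute: '0', hour: '6' }),
          targets: [new targets.LambdaFunction(trigger)],
        });
      }
    }
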

How many years of experience:

  • 5+ years of experience plus a degree


What environment/technical ecosystem does your team work in/ is this project in?

  • A senior data engineer supporting Leo OAT needs expertise across AWS data services and infrastructure automation.


Is this work related to a new application build, migration or maintenance?

  • Maintenance, CI/CD


Pluses:

  • Debugging and coding
  • Cause Analysis
  • Production database/production environment experience at enterprise scale