Overview
On Site
Contract - W2
Skills
Finance
Lean Methodology
Testing
Documentation
Accountability
Design Review
Continuous Improvement
Cloud Computing
Extract
Transform
Load
ELT
Scripting
Data Architecture
Semantics
Data Quality
Regulatory Compliance
Sprint
Streaming
Use Cases
Data Governance
Collaboration
Mentorship
Soft Skills
Agile
SQL
Python
Amazon Web Services
Amazon Kinesis
Amazon S3
Orchestration
Step Functions
Continuous Integration
Continuous Delivery
Workflow
Snowflake Schema
Databricks
Amazon Redshift
Informatica
Conflict Resolution
Problem Solving
Debugging
DICE
Job Details
What We Do/Project
As part of our transformation, we are evolving how finance, business, and technology collaborate, shifting to lean-agile, user-centric, product-oriented delivery teams (pods) that bring together engineers, product owners, designers, data architects, and domain experts to deliver integrated, intelligent, scalable solutions.
Each pod is empowered to own outcomes end to end: refining requirements, building solutions, testing, and delivering in iterative increments. We emphasize collaboration over handoffs, working software over documentation alone, and shared accountability for delivery. Engineers contribute not only code but also participate in design reviews, backlog refinement, and retrospectives, ensuring decisions are transparent and scale across pods. We prioritize reusability, automation, and continuous improvement, balancing rapid delivery with long-term maintainability.
The Data Engineer is an integral member of the Platform Pod, focused on building, maintaining, and optimizing data pipelines that deliver trusted data to product pods, analysts, and data scientists. This role works closely with the Senior Data Engineer, Data Architect, and Cloud Architect to implement pipelines aligned with enterprise standards and program goals.
Job Responsibilities / Typical Day in the Role
Build & Maintain Pipelines
Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions); a minimal sketch follows this list.
Write efficient SQL and Python scripts for ingestion, transformation, and enrichment.
Monitor pipeline health, troubleshoot issues, and ensure SLAs for data freshness.
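As a hedged illustration of the streaming work described above, here is a minimal AWS Lambda handler that decodes Kinesis records and lands them in S3 for downstream transformation. The bucket name, key prefix, and record shape are assumptions made for the sketch, not details from this posting.

```python
# Minimal sketch: Lambda consumes a Kinesis batch and writes raw JSON to S3.
import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "example-raw-zone"  # placeholder bucket, not from the posting


def handler(event, context):
    """Decode incoming Kinesis records and land them in S3 for downstream ELT."""
    rows = []
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded inside the event.
        payload = base64.b64decode(record["kinesis"]["data"])
        rows.append(json.loads(payload))
    if rows:
        key = f"ingest/{uuid.uuid4()}.json"  # illustrative key scheme
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(rows).encode("utf-8"))
    return {"written": len(rows)}
```

In practice the landing layout, partitioning, and error handling would follow the platform's standards rather than this sketch.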
Support Data Architecture & Models
Implement data models defined by architects as physical schemas (a small sketch follows this list).
Contribute to pipeline designs that align with canonical and semantic standards.
Collaborate with application pods to deliver pipelines tailored to product features.
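To make "physical schemas" concrete, here is a hypothetical dimension/fact pair. The DDL is executed against SQLite only so the example is self-contained; the posting's actual targets would be warehouses such as Redshift or Snowflake, and all table and column names here are invented.

```python
# Sketch: turning a logical model into physical tables (invented names).
import sqlite3

ddl = """
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_id  TEXT NOT NULL,
    segment      TEXT
);
CREATE TABLE fact_payment (
    payment_key  INTEGER PRIMARY KEY,
    customer_key INTEGER NOT NULL REFERENCES dim_customer (customer_key),
    amount_usd   REAL NOT NULL,
    paid_at      TEXT NOT NULL
);
"""

with sqlite3.connect(":memory:") as conn:
    conn.executescript(ddl)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    print(tables)  # ['dim_customer', 'fact_payment']
```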
Ensure Data Quality & Governance
Apply validation rules and monitoring to detect and surface data quality issues (see the sketch after this list).
Tag, document, and register new datasets in the enterprise data catalog.
Follow platform security and compliance practices (e.g., Lake Formation, IAM).
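A minimal sketch of the kind of row-level validation rule mentioned above. The field names ("id", "amount") and the rules themselves are invented for illustration; real rules would come from the platform's data quality standards.

```python
# Hypothetical validation pass: split rows into good and bad, tagging
# failing rows with the reasons so issues can be surfaced downstream.
from typing import Iterable, List, Tuple


def validate(rows: Iterable[dict]) -> Tuple[List[dict], List[dict]]:
    """Return (good, bad); bad rows carry an _errors list."""
    good: List[dict] = []
    bad: List[dict] = []
    for row in rows:
        errors = []
        if not row.get("id"):
            errors.append("missing id")
        amount = row.get("amount")
        if amount is not None and amount < 0:
            errors.append("negative amount")
        if errors:
            bad.append({**row, "_errors": errors})
        else:
            good.append(row)
    return good, bad


if __name__ == "__main__":
    good, bad = validate([{"id": "a1", "amount": 10.0},
                          {"id": None, "amount": -5.0}])
    assert len(good) == 1 and len(bad) == 1
```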
Collaborate in Agile Pods
Actively participate in sprint ceremonies and backlog refinement.
Work closely with application developers, analysts, and data scientists to clarify requirements and unblock dependencies.
Promote reuse of pipelines and shared services across pods.
Must Have Skills / Requirements
1) Experience as a Data Engineer or in a related role.
a. 3-5 years of experience
2) Hands-on experience with SQL, Python, AWS data services (Glue, Lambda, Kinesis, S3).
a. 3-5 years of experience
3) Familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows; a brief Airflow sketch follows this list.
a. 3-5 years of experience
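As a hedged sketch of the Airflow side of requirement 3, here is a tiny two-task DAG. The DAG id, schedule, and task bodies are placeholders rather than anything from this role, and the imports assume Airflow 2.x.

```python
# Minimal Airflow 2.x DAG: an extract task chained into a load task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder for pulling data from a source system.
    print("extract")


def load():
    # Placeholder for loading transformed data into the warehouse.
    print("load")


with DAG(
    dag_id="example_daily_pipeline",  # invented id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```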
Nice to Have Skills / Preferred Requirements
1) Proven ability to optimize pipelines for both batch and streaming use cases.
2) Knowledge of data governance practices, including lineage, validation, and cataloging.
3) Strong collaboration and mentoring skills; ability to influence across pods and domains.
Soft Skills:
1) Collaborative mindset: Willingness to work in agile pods and contribute to cross-functional outcomes.
Technology Requirements:
1) Hands-on experience with SQL, Python, AWS data services (Glue, Lambda, Kinesis, S3).
2) Familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows.
3) Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
4) Strong problem-solving and debugging skills for pipeline operations.
Additional Notes
Hybrid: 3 days on-site in Burbank, CA
#LI-NN2
#LI-hybrid
#DICE
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.