Overview
On Site
120k - 135k
Full Time
Skills
Real-time
Streaming
Use Cases
Data Validation
Snowflake Schema
Databricks
Sarbanes-Oxley
PCI DSS
Analytics
Reporting
Data Engineering
SQL
Writing
Python
Workflow
Data Warehouse Architecture
Dimensional Modeling
Slowly Changing Dimensions
Talend
Orchestration
Cloud Computing
Amazon Web Services
Amazon S3
Amazon Redshift
Data Governance
Regulatory Compliance
Communication
Value Engineering
Data Lake
Apache HTTP Server
Job Details
We're looking for a Senior Data Engineer who knows their way around complex data systems and enjoys building at scale. This role is for someone who's not just technical, but also curious, collaborative, and motivated to help shape a modern data platform that supports both real-time operations and big-picture analytics.
You'll be hands-on with everything from architecture to implementation, and you'll have a real voice in how we evolve our data stack. This is a high-impact role that sits at the center of engineering, analytics, governance, and business ops.
What You'll Do
- Build and maintain scalable data pipelines - batch and streaming - across a variety of use cases.
- Set up solid frameworks for data validation, quality checks, and system reconciliation.
- Work with modern cloud tools like AWS, Snowflake, Databricks, etc., to build reliable, secure data infrastructure.
- Help keep us in check on the compliance side - GDPR, SOX, PCI-DSS, and similar frameworks.
- Prototype new ideas and test out improvements - this team is always iterating.
- Work cross-functionally with business teams, auditors, analysts, and other engineers to align systems with real-world needs.
- Support analytics and reporting teams by building clean, trusted models.
- Look for new ways to add value, share what you know, and help grow the team.
What You'll Bring
- 10+ years working in data engineering or similar roles.
- Strong command of SQL - complex joins, CTEs, subqueries, window functions - all that good stuff.
- Confident writing and maintaining Python code to support data pipelines, workflows, and automation.
- Solid experience with data warehouse architecture, dimensional modeling, SCDs, and aggregation strategies.
- Comfortable working with both structured and semi-structured data.
- Experience using tools like Airflow, Prefect, dbt, Talend, or Fivetran for orchestration and transformation.
- Familiarity with cloud services, especially AWS - things like S3, Redshift, Glue, Lambda, and Secrets Manager.
- A good understanding of data governance, especially how it applies to compliance-heavy environments.
- Strong communication skills - able to work with both technical and business teams without missing a beat.
- Bonus if you've worked with Data Lake / Lakehouse tech like Delta Lake, Apache Iceberg, or Hudi.
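The dimensional-modeling bullet above mentions slowly changing dimensions. As a rough illustration of the kind of logic involved, here's a minimal Python sketch of a Type 2 SCD update (expire the old version of a changed row, append the new version). The table shape, the `customer_id`/`city` columns, and the single-attribute change check are illustrative assumptions only, not this team's actual schema or tooling - in practice this would typically live in SQL, dbt snapshots, or a warehouse MERGE.

```python
from datetime import date

def apply_scd2(current_rows, incoming, today):
    """Type 2 SCD update on a list of dimension rows.

    Rows whose tracked attribute ('city') changed are closed out
    (is_current=False, end_date set) and a fresh current version is
    appended; brand-new keys are inserted as current rows.
    """
    out = []
    incoming_by_key = {r["customer_id"]: r for r in incoming}
    seen = set()
    for row in current_rows:
        key = row["customer_id"]
        new = incoming_by_key.get(key)
        if row["is_current"] and new is not None and new["city"] != row["city"]:
            # Expire the old version and emit the new current version.
            out.append({**row, "is_current": False, "end_date": today})
            out.append({"customer_id": key, "city": new["city"],
                        "start_date": today, "end_date": None,
                        "is_current": True})
        else:
            out.append(row)
        seen.add(key)
    # Keys never seen before get inserted as new current rows.
    for key, new in incoming_by_key.items():
        if key not in seen:
            out.append({"customer_id": key, "city": new["city"],
                        "start_date": today, "end_date": None,
                        "is_current": True})
    return out
```

The same pattern scales up to warehouse MERGE statements keyed on a surrogate key plus effective-date columns; the sketch just makes the expire-and-append mechanics concrete.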
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.