Senior Data Engineer
Employment Type: Full-Time
Start Date: ASAP
Compensation: $150,000–$180,000 base salary (commensurate with experience)
Location: Remote (U.S.-based)
Interview Process:
Overview
Our client, a fast-growing logistics organization, is continuing to invest in a modern data platform to support analytics, reporting, and operational decision-making across the business. This role sits within a centralized Data Products / Data Services team and plays a key role in building reliable, scalable, and governed data foundations used by BI, analytics, and downstream integrations.
This is a high-impact opportunity for a senior-level data engineer who enjoys owning data pipelines end to end, partnering closely with analytics and business teams, and helping shape data platform standards in a growing organization.
Position Summary
The Senior Data Engineer will design, build, and support scalable data pipelines and analytics-ready data products using modern cloud-native tooling. This role serves as a bridge between raw source data and downstream BI, analytics, and operational use cases, with a strong focus on data quality, reliability, governance, and performance.
In addition to hands-on development, this role contributes to platform standards, design reviews, and cross-team collaboration to ensure data solutions align with business needs and service-level expectations.
Key Responsibilities
Data Pipeline & Platform Development
- Build, maintain, and optimize data pipelines using dbt, Prefect, and Terraform
- Develop and manage connectors across sources and targets, including Kafka, relational databases, Snowflake, and Materialize
- Implement schema evolution strategies, validation rules, and automated testing
- Support high-availability and disaster recovery design for Snowflake and Materialize environments
Data Product Engineering
- Author and review schemas and data contracts to ensure consistency, quality, and governance
- Develop and optimize dbt models for Snowflake and Materialize analytics layers
- Configure clusters, shared environments, and role-based access controls
- Document datasets and models to ensure discoverability and correct usage across teams
Stakeholder Collaboration
- Partner with BI developers, analysts, and business teams to deliver datasets that support reporting, dashboards, and integrations
- Investigate and resolve data quality and pipeline issues, implementing durable fixes
- Participate in design and architecture reviews to align technical solutions with business requirements
Collaboration & Standards
- Contribute to pull request and design reviews for data pipelines and models
- Support platform governance, observability, and best practices for data quality and reliability
- Collaborate with adjacent teams (Operations & Reliability, Analytics, Product) to align on SLAs and shared data definitions
- Perform other duties as assigned
Required Skills & Experience
- Strong proficiency in Python and SQL for building and optimizing data pipelines
- Hands-on experience with dbt for modeling and testing, and Terraform for infrastructure-as-code
- Experience with modern data platforms and tools such as Snowflake, Materialize, Kafka, HVR, Fivetran, or Stitch
- Understanding of data contracts, observability, and governance practices
- Experience working with CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
- Ability to translate business requirements into scalable, maintainable technical solutions
- Familiarity with compliance frameworks (e.g., GDPR, CCPA, SOC 2) is a plus