Sr. Data Engineer DBT / Snowflake / Python / Airflow - Multiple Locations

• Posted 5 hours ago • Updated 1 minute ago
Contract W2

Job Details

Skills

  • Python
  • Snowflake
  • Airflow
  • DBT

Summary

Sr. Data Engineer DBT / Snowflake / Python / Airflow

Number of positions: Multiple

Location: Multiple Locations Across the USA

Interview: Video

Work Type: Hybrid

ABOUT THE ROLE

We are seeking an experienced Senior Data Engineer with strong hands-on expertise in DBT and Snowflake to join a high-impact data engineering team. The ideal candidate will have a proven track record of building and managing scalable data pipelines, transforming large datasets, and enabling reliable data delivery across the enterprise. This role sits at the intersection of data infrastructure, analytics engineering, and pipeline automation.

KEY RESPONSIBILITIES

  • Design, develop, and maintain scalable data transformation pipelines using DBT (hands-on, production-level experience required)
  • Build and optimize data models, marts, and transformation logic within Snowflake
  • Develop and manage end-to-end data pipelines using Apache Airflow for orchestration and scheduling
  • Write clean, efficient, and reusable Python scripts for data ingestion, transformation, and automation tasks
  • Implement DBT best practices including modular model design, testing frameworks, documentation, and version control
  • Define and enforce data quality checks, testing, and monitoring within DBT pipelines
  • Optimize Snowflake performance through query tuning, clustering, partitioning, and warehouse sizing strategies
  • Collaborate with data analysts, data architects, and business stakeholders to understand data requirements and translate them into scalable engineering solutions
  • Manage and maintain DAGs in Airflow for pipeline scheduling, dependency management, and failure alerting
  • Support migration or integration efforts involving legacy systems such as Oracle databases and IBM DataStage pipelines
  • Participate in data architecture discussions and contribute to the evolution of the enterprise data platform
  • Document pipelines, data flows, transformation logic, and operational runbooks
  • Perform root cause analysis on data quality issues and pipeline failures and drive timely resolution
  • Contribute to CI/CD practices for data pipeline deployment using Git-based version control workflows
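The data-quality checks described above correspond to dbt's built-in generic tests such as `not_null` and `unique`. As an illustrative sketch only (dbt actually compiles these tests to SQL and runs them in the warehouse; the function and column names below are hypothetical), the logic they enforce looks like this:

```python
# Plain-Python sketch of the checks behind dbt's `not_null` and `unique`
# generic tests. NOT dbt's implementation -- dbt compiles tests to SQL.

def not_null_failures(rows, column):
    """Return rows where `column` is NULL; a passing test returns no rows."""
    return [r for r in rows if r.get(column) is None]

def unique_failures(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

# Hypothetical sample data, not from the posting.
orders = [
    {"order_id": 1, "customer_id": 10},
    {"order_id": 2, "customer_id": None},
    {"order_id": 2, "customer_id": 11},
]

null_failures = not_null_failures(orders, "customer_id")
dupe_failures = unique_failures(orders, "order_id")
```

In dbt, the same checks are declared in a model's YAML schema file rather than written by hand, which is what makes them easy to enforce uniformly across every pipeline.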

REQUIRED QUALIFICATIONS (Primary Skills)

  • 5+ years of overall experience in Data Engineering or related field
  • Hands-on production experience with DBT (dbt Core or dbt Cloud); this is a strict requirement, and theoretical or tutorial-level exposure will not be considered
  • Proficiency in building and managing DBT models, tests, macros, seeds, snapshots, and documentation
  • Strong hands-on experience with Snowflake including data modeling, query optimization, role-based access control, and virtual warehouse management
  • Solid Python programming skills for data pipeline development, scripting, and automation
  • Hands-on experience building and managing workflows in Apache Airflow including DAG development, task dependencies, and error handling
  • Strong understanding of dimensional modeling, data vault, or similar data warehousing methodologies
  • Experience with Git and CI/CD practices for pipeline version control and deployment
  • Strong analytical and problem-solving skills with the ability to work across complex data environments
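The Airflow requirement above centers on DAG development and task dependencies. Airflow itself is a third-party package, but the underlying idea (tasks executed in dependency order) can be sketched with nothing but the standard library; the task names below are hypothetical, and Airflow's `>>` operator expresses the same edges declaratively:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: each task maps to the set of upstream tasks it
# depends on, mirroring how Airflow resolves a DAG's execution order.
dag = {
    "extract": set(),
    "load_raw": {"extract"},
    "dbt_run": {"load_raw"},
    "dbt_test": {"dbt_run"},
    "publish": {"dbt_test"},
}

# For a linear chain like this, there is exactly one valid execution order.
order = list(TopologicalSorter(dag).static_order())
```

In a real Airflow DAG each key would be an operator (e.g. a `PythonOperator` or a task running `dbt run`), and failure alerting and retries would be configured per task.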

PREFERRED QUALIFICATIONS (Secondary Skills)

  • Experience working with Oracle databases including SQL querying, schema understanding, and data extraction
  • Hands-on or working knowledge of IBM DataStage for ETL pipeline development and migration
  • Familiarity with cloud platforms: AWS, Azure, or Google Cloud Platform
  • Experience with data lakehouse architectures or modern data stack tooling
  • Knowledge of data governance, lineage tracking, and metadata management
  • Exposure to tools such as Great Expectations, Monte Carlo, or similar data observability platforms
  • Experience in financial services, healthcare, or other regulated industry data environments

GOOD TO HAVE

  • Snowflake SnowPro certification
  • DBT certification (Analytics Engineering)
  • Familiarity with Spark, Kafka, or streaming data pipelines
  • Experience with Looker, Tableau, or Power BI for downstream analytics support

WORK ARRANGEMENT

This position is Hybrid, with on-site presence expected on a regular basis. A fully remote arrangement will be considered for candidates with exceptional experience and a demonstrated track record of delivering results in distributed team environments. All interviews will be conducted via video.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 91140876
  • Position Id: 2026-220

Company Info

About Cliff Services Inc

Cliff Services Inc. is an IT services and consulting company specializing in planning and implementing cutting-edge IT business solutions for clients in retail, healthcare, finance, education, food, and other industries. With our deep technology and industry expertise, we provide scalable business solutions and help our clients achieve their business objectives through technology.
