Data Engineer **Hybrid in Cupertino**

  • Cupertino, CA
  • Posted 2 days ago | Updated 2 days ago

Overview

Hybrid
Up to $60/hr
Contract - W2
Contract - 12 Month(s)
No Travel Required

Skills

Python
SQL
dbt
Postgres
Snowflake
ETL
Data pipelines
GitHub

Job Details

Do something big and innovative! Stretch your creative muscles and work on big issues. Since 1989, we have developed technology environments, applications, and tools by providing experienced teams to implement, enhance, and maintain our clients' essential systems and applications. Come join the Scalence team!
Title: Data Engineer
Start Date: ASAP
Duration: 12+ months
Location: Cupertino (Hybrid, 3 days onsite)
Work Hours: 8 AM to 5 PM PST
Pay Rate: $60/hr (W2)

Job Summary
We are seeking a highly skilled Data Engineer with strong expertise in Python, SQL, and data pipeline development. This role will focus on creating, optimizing, and maintaining scalable data solutions, using modern tools like dbt (Data Build Tool), Postgres, Snowflake, and GitHub to support our analytics and business intelligence needs.
You will collaborate closely with analysts, data scientists, and business stakeholders to ensure that our data is clean, reliable, and accessible.
Key Responsibilities

  • Design, build, and maintain efficient data pipelines and ETL/ELT processes.
  • Develop robust data models and transformations using dbt.
  • Manage and optimize Postgres and Snowflake databases for performance and scalability.
  • Write complex and efficient SQL queries for data exploration, transformation, and reporting.
  • Implement version control best practices using GitHub for collaborative development.
  • Monitor, debug, and troubleshoot data workflows to ensure high-quality and accurate data.
  • Collaborate with data analysts, engineers, and business teams to deliver actionable insights.
  • Ensure compliance with data governance, security, and privacy requirements.

Required Qualifications

  • Strong proficiency in Python for data-related tasks.
  • Advanced knowledge of SQL for data wrangling and optimization.
  • Proven experience with dbt for data transformation.
  • Hands-on experience with Postgres and Snowflake.
  • Proficiency in GitHub for version control and collaborative development.
  • Demonstrated ability to design and implement scalable data pipelines.
Preferred

  • Familiarity with cloud data platforms (AWS, Google Cloud Platform, Azure) and modern data stack tools.
  • Experience with workflow orchestration tools like Airflow or Prefect.
  • Understanding of CI/CD for data infrastructure.
  • Strong problem-solving skills and attention to detail.