AWS Data Engineer with Databricks and Spark (PP number must be shared)

  • Mountain View, CA
  • Posted 14 hours ago | Updated 14 hours ago

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)

Skills

AWS
Big data
Databricks
Delta Lake
Spark
SQL

Job Details

Must have:

Strong Cloud & Big Data Expertise:

  • Proficiency with the AWS data ecosystem (S3, Glue, Lambda, Redshift, etc.).
  • Hands-on experience with Databricks, including Delta Lake and Spark (PySpark or Scala).

What You'll Do
  • Design & Build Data Pipelines: Architect, build, and maintain efficient and reliable data pipelines to ingest, process, and transform data within our AWS-based data lake and Databricks platform.
  • Develop Data Marts & a Metrics Store: Create and own curated, analysis-ready data marts that support critical product initiatives, including multi-product customer onboarding funnels and a centralized metrics store.
  • Drive Data Integration: Lead complex data integration projects, unifying data from across the Intuit ecosystem, including sources like TurboTax, Credit Karma, and QuickBooks, to create a holistic view of our customers.
  • Empower Data-Driven Decisions: Collaborate closely with the data analytics team to understand their needs, providing them with high-quality, trusted data sets that enable them to deliver key insights and reporting.
  • Champion Data Quality & Governance: Implement best practices for data modeling, data quality, and data governance to ensure the consistency and reliability of our data assets.
  • Thrive in Ambiguity: Work in a fast-paced, agile environment, demonstrating a strong ability to navigate unclear requirements and proactively drive projects to completion.
What We're Looking For
  • Proven Data Engineering Experience: 5+ years of hands-on experience in a data engineering or analytics engineering role, with a track record of building and managing complex data pipelines.
  • Strong Cloud & Big Data Expertise:
      • Proficiency with the AWS data ecosystem (S3, Glue, Lambda, Redshift, etc.).
      • Hands-on experience with Databricks, including Delta Lake and Spark (PySpark or Scala).
  • Expert-Level SQL & Data Modeling: Exceptional SQL skills and a deep understanding of data modeling concepts (e.g., dimensional modeling, star schemas) and data warehousing principles.
  • Exceptional Communicator: The ability to clearly and effectively communicate technical concepts to both technical and non-technical audiences. You are comfortable leading discussions and building consensus.
  • Collaborative & Product-Focused: A team player who is passionate about understanding the "why" behind the data and is dedicated to building solutions that drive business and product success.
  • Problem-Solving Mindset: You are intellectually curious, enjoy tackling complex challenges, and are comfortable with ambiguity.
  • Fintech Experience (Bonus): Previous experience working in the fintech or financial services industry is a strong plus.