Overview
On Site
$Market
Full Time
Contract - W2
Contract - 6+ month(s)
Skills
Spark
Python
Cloud
ETL
Azure
Data Engineer
Databricks
Data Lake
Job Details
Job Opening: Senior Data Engineer/Lead
Location: Denver, CO (Hybrid)
Duration: 6+ months | Type: W2 only
Core Technologies: Databricks, Spark, Python, ETL
Role Overview
Join a high-impact engineering team as a Senior Data Engineer, driving performance and scalability across modern data platforms. You'll work hands-on with Spark and Databricks to optimize complex Python-based notebooks, build robust ETL pipelines, and architect data flows into a Lakehouse environment.
What You'll Do
- Tune and maintain Python notebooks for advanced calculation logic in Spark
- Build and enhance ETL workflows using Databricks and Azure Data Factory
- Design scalable data pipelines integrating diverse systems into a Lakehouse
- Collaborate with analytics teams to align data processes with business goals
- Lead improvements in data quality and monitoring systems
What You Bring
- Deep experience in Python, Spark, and Databricks
- Proven track record in building and optimizing large-scale data pipelines
- Strong grasp of Azure cloud infrastructure and data lake architecture
- Ability to modernize legacy systems into efficient Spark-based solutions
- Excellent troubleshooting and remote collaboration skills
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.