Data Engineer

  • Minneapolis, MN
  • Posted 3 hours ago | Updated 3 hours ago

Overview

  • On Site / Hybrid
  • Compensation: Depends on Experience
  • Contract - Independent / Contract - W2
  • Contract - 12 Month(s)

Skills

Python
PySpark
Git
MLflow
Azure
Data Build Tool
DevOps

Job Details

We are seeking to fill this role with an individual with 5-7 years of experience in data engineering and a strong background in Databricks. This role is ideal for someone passionate about building scalable data solutions, enabling analytics, and driving innovation on a modern data platform.

Requirements

  • 5-7 years of experience as a Data Engineer, with at least 3 years working in Databricks
  • Strong proficiency in Python, PySpark, and Spark SQL
  • Hands-on experience with DBT (Data Build Tool) in a cloud data platform
  • Experience with DABS for workflow packaging and deployment
  • Proven expertise in Azure DevOps, Git, and CI/CD pipeline development
  • Solid understanding of data modeling, ETL/ELT, and performance optimization
  • Experience implementing monitoring and observability for data pipelines
  • Excellent communication and collaboration skills
The work environment is generally favorable: lighting and temperature are adequate, and there are no hazardous or unpleasant conditions caused by noise, dust, etc. Specific vision abilities required by this job include close vision, distance vision, color vision, peripheral vision, depth perception, and the ability to adjust focus. The employee is frequently required to sit and/or stand.

Responsibilities

  • Design and develop scalable data pipelines using PySpark, Spark SQL, and Python within the Databricks environment
  • Build modular, version-controlled transformation workflows using DBT (Data Build Tool)
  • Package and deploy Databricks workflows and notebooks using Databricks Asset Bundles (DABS) for CI/CD and environment management
  • Integrate Databricks workflows with Azure DevOps for automated testing, deployment, and version control
  • Develop and maintain robust CI/CD pipelines for data engineering workflows using Git, Azure DevOps, and DABS
  • Implement and optimize dimensional data models and ELT/ETL processes, including performance tuning, in cloud data platforms
  • Collaborate with data scientists, analysts, and business stakeholders to deliver high-impact data solutions
  • Implement logging, alerting, and monitoring for data pipelines using tools such as Databricks Jobs, MLflow, or Azure Monitor
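As a rough illustration of the dimensional-modeling and ELT work described above, the sketch below shows a Type 1 (overwrite) dimension upsert in plain Python. All names here (`upsert_dimension`, `customer_dim`, the sample rows) are illustrative and not part of this posting; in the actual role this logic would typically be expressed as a Spark/Delta merge or a DBT model rather than hand-written Python.

```python
# Minimal sketch of a Type 1 (overwrite) dimension upsert, the kind of
# ELT merge step a Databricks or DBT pipeline performs. Pure-Python
# stand-in for illustration only; all names are hypothetical.

def upsert_dimension(dim_rows, staged_rows, key="customer_id"):
    """Merge staged source rows into a dimension: update matches, insert new."""
    merged = {row[key]: row for row in dim_rows}  # current dimension by key
    for row in staged_rows:
        merged[row[key]] = row                    # Type 1: overwrite in place
    return sorted(merged.values(), key=lambda r: r[key])

# Current dimension table and a batch of staged source rows.
customer_dim = [
    {"customer_id": 1, "name": "Acme", "city": "Minneapolis"},
    {"customer_id": 2, "name": "Globex", "city": "St. Paul"},
]
staged = [
    {"customer_id": 2, "name": "Globex", "city": "Duluth"},      # changed city
    {"customer_id": 3, "name": "Initech", "city": "Rochester"},  # new customer
]

result = upsert_dimension(customer_dim, staged)
```

A production version of this step would also capture run metadata (row counts, merge timestamps) to feed the logging and monitoring duties listed above.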
Reasonable accommodations will be evaluated and may be implemented to enable individuals with disabilities to perform the essential functions of this position. This job operates in a professional office environment and routinely uses standard office equipment such as computers, phones, photocopiers, and filing cabinets.


About MASH Pro Tech