Azure Data Engineer

Overview

Remote
Up to $140,000
Full Time

Skills

Azure
Databricks
Azure Data Factory (ADF)
Big Data
Spark
Python
SQL
ETL
DevOps
Pipelines

Job Details

Senior Data Engineer

Location: San Diego, CA (preferred) or Remote, USA
Full-Time | Direct Hire with Client


About the Role

We are seeking a Senior Data Engineer to design, build, and optimize data pipelines that power analytics and business intelligence initiatives across the organization. This role requires deep expertise in Azure Databricks and Azure Data Factory, with proven experience in developing scalable, reliable big data solutions in cloud environments. You will be a key contributor in transforming raw data into actionable insights, enabling data-driven decision making across business units.

This position is remote-friendly; however, candidates located in San Diego, CA are preferred.


Responsibilities

  • Design, develop, and maintain scalable data pipelines using Azure Databricks and Azure Data Factory.

  • Build and manage ETL workflows integrating multiple structured and unstructured data sources.

  • Partner with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.

  • Optimize big data workflows for performance and cost efficiency.

  • Monitor, troubleshoot, and resolve issues with data pipelines to ensure reliability and quality.

  • Apply best practices for data engineering, including version control, testing, and CI/CD deployment.

  • Enforce data governance, compliance, and security standards.

  • Mentor junior engineers and support knowledge sharing across the data team.

  • Perform other related duties as assigned.


Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or a related field.

  • 7+ years of data engineering experience with a strong focus on Azure Databricks and Azure Data Factory.

  • Proven background in building and deploying cloud-based big data architectures.

  • Strong programming skills in Python, SQL, and Spark.

  • Experience with data modeling, data warehousing, and ETL design.

  • Familiarity with data governance, compliance, and security frameworks.

  • Strong analytical and problem-solving skills with ability to debug complex workflows.

  • Excellent communication and collaboration skills.

  • Must be authorized to work in the U.S.; this position is not eligible for visa sponsorship.


Preferred Qualifications

  • Experience with DevOps practices and CI/CD tools for data pipeline deployment.


Work Environment & Physical Demands

  • Hybrid work balance: ~30% meetings/collaboration, ~70% individual analytical work.

  • Minimal travel required (up to 10%) for team meetings or project needs.

  • Occasional light lifting of files/materials; long periods of computer-based work.

  • Flexible hours may be required to meet deadlines, maintain system availability, and support business continuity.


Collaboration

  • Internal: Data scientists, business analysts, IT teams.

  • External: Vendors, contractors, and consultants.


If you are passionate about building scalable data solutions and enabling organizations to harness the power of data, we encourage you to apply!
