Senior Azure Data Integrations Engineer
Location: Remote (MST hours required)
Employment Type: Full-time (Contract)
Overview
We are seeking a Senior Azure Data Integrations Engineer to play a critical role in building and operating a modern, enterprise-grade data platform that supports analytics and data-driven decision making.
This role is hands-on and execution-focused, while carrying senior-level ownership and accountability. You will be responsible for designing, implementing, and operating scalable data ingestion frameworks, enabling analytics teams with reliable and well-governed datasets, and ensuring strong environment discipline, security, and operational stability across the platform.
You will collaborate closely with data science leadership, BI developers, platform stakeholders, and external vendors to translate complex enterprise data flows into trusted, production-ready datasets within Azure and Databricks.
Responsibilities
• Design, build, and operate data ingestion pipelines from diverse enterprise systems into Azure Data Lake and Databricks.
• Implement batch and near-real-time ingestion patterns with support for incremental processing, schema evolution, and replayability.
• Apply Medallion architecture principles (Bronze / Silver / Gold) to ensure raw, refined, and curated data layers are clearly defined, governed, and analytics-ready.
• Own the curated data layer that serves analytics and reporting platforms, ensuring datasets are performant, well-modeled, and consistently defined.
• Partner closely with BI developers and analytics users to translate reporting and dashboard requirements into backend data models and datasets optimized for consumption.
• Support analytics tools (e.g., Sigma, Power BI, or similar platforms) by ensuring reliable connectivity, appropriate data access patterns, and production-ready data structures.
• Establish and maintain Dev / Test / Prod environment segregation across data pipelines, storage, and Databricks assets.
• Implement and manage CI/CD pipelines for data assets, including ADF pipelines, Databricks notebooks/jobs, and configuration promotion.
• Enforce deployment discipline, including approvals, validation, rollback strategies, and environment-specific parameterization.
• Implement secure access patterns using managed identities, service principals, Key Vault, and least-privilege principles.
• Ensure sensitive data is handled appropriately through access controls, segmentation, and governance standards.
• Define and maintain operational practices, including monitoring, alerting, error handling, and runbooks for production support.
• Proactively identify and address data quality, performance, reliability, and cost-optimization concerns.
• Set technical standards and best practices for data integration and pipeline development across the platform.
• Review designs and implementations to ensure scalability, maintainability, and alignment with platform architecture.
• Collaborate with architects, data science leadership, and external partners to align execution with broader platform strategy.
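To make the environment-segregation and deployment-parameterization duties above concrete, here is a minimal Python sketch of config-driven promotion across Dev / Test / Prod. All names (environment keys, storage accounts, container, table) are hypothetical placeholders, not this platform's actual configuration.

```python
# Minimal sketch: resolve environment-specific sink paths and deployment
# gating from a single config, instead of hard-coding per-environment values.
# Storage accounts, container, and environment names are hypothetical.

ENV_CONFIG = {
    "dev":  {"storage_account": "adlsdev",  "container": "bronze", "require_approval": False},
    "test": {"storage_account": "adlstest", "container": "bronze", "require_approval": True},
    "prod": {"storage_account": "adlsprod", "container": "bronze", "require_approval": True},
}

def resolve_sink_path(env: str, table: str) -> str:
    """Build the ADLS Gen2 (abfss) sink path for a table in one environment."""
    if env not in ENV_CONFIG:
        raise ValueError(f"Unknown environment: {env!r}")
    cfg = ENV_CONFIG[env]
    return (
        f"abfss://{cfg['container']}@{cfg['storage_account']}"
        f".dfs.core.windows.net/{table}"
    )

def promotion_requires_approval(env: str) -> bool:
    """Deployments to gated environments must pass an approval step first."""
    return ENV_CONFIG[env]["require_approval"]
```

Keeping every environment-specific value in one config map is what lets the same pipeline definition promote unchanged from Dev through Prod, with approvals enforced only where the config demands them.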
Required Qualifications
• 8+ years of experience in data engineering, cloud data integration, or analytics platform development.
• Strong hands-on experience with:
  • Azure Data Factory (ADF) – dynamic pipelines, triggers, integration runtimes
  • Databricks – Spark / PySpark development, Delta Lake, job orchestration
  • Azure Data Lake Storage (ADLS Gen2)
• Proven experience implementing Medallion architecture in production environments.
• Strong SQL skills and working knowledge of Python for transformations and automation.
• Experience enabling analytics and BI platforms through Databricks or similar data backends.
• Solid understanding of CI/CD practices applied to data platforms and multi-environment deployments.
• Strong grasp of security, identity, and access management in cloud data environments.
• Ability to operate effectively in complex, enterprise, multi-stakeholder settings.
• Clear, concise communication skills and the ability to translate technical concepts for non-technical partners.
• Availability during Mountain Standard Time (MST) working hours.
Preferred Qualifications
• Experience building metadata-driven ingestion frameworks.
• Familiarity with API-based ingestion and secure credential management.
• Experience working with external vendors or implementation partners.
• Background in highly governed or regulated environments.
• Azure certifications (e.g., DP-203) are a plus.
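For candidates unfamiliar with the term, the metadata-driven ingestion frameworks mentioned above can be sketched roughly as follows: each source table is described by a metadata record, and the framework derives its incremental-load logic from that record rather than from a hand-built per-table pipeline. This is an illustrative Python sketch only; the table names and watermark columns are hypothetical, and a real framework would store the metadata in a control table and hand the generated work to ADF or Databricks jobs.

```python
# Minimal sketch of metadata-driven incremental ingestion: one generic loop
# over table specs instead of N hand-built pipelines. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class TableSpec:
    source: str            # source-system table
    target: str            # lake target path
    watermark_column: str  # column used for incremental (delta) loads

def build_incremental_query(spec: TableSpec, last_watermark: str) -> str:
    """Generate the extraction query for one metadata entry."""
    return (
        f"SELECT * FROM {spec.source} "
        f"WHERE {spec.watermark_column} > '{last_watermark}'"
    )

specs = [
    TableSpec("sales.orders", "bronze/orders", "modified_at"),
    TableSpec("sales.customers", "bronze/customers", "updated_at"),
]

# One loop handles every registered table; onboarding a new source becomes
# a metadata change, not new pipeline code.
queries = [build_incremental_query(s, "2024-01-01") for s in specs]
```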