Azure Data Engineer - P&SC Data and Intelligence

• Posted 4 hours ago • Updated 1 hour ago
Full Time
Part Time

Job Details

Skills

  • Network
  • Incident Management
  • Batch File
  • API
  • Collaboration
  • Business Rules
  • Product Requirements
  • Design Review
  • Microsoft SQL Server
  • SAP
  • Documentation
  • Testing
  • Mentorship
  • Data Engineering
  • Unity Catalog
  • Microsoft
  • Analytics
  • Fabric IQ
  • Orchestration
  • Databricks
  • Workflow
  • Apache Airflow
  • SQL
  • Data Validation
  • Systems Analysis
  • WMS
  • TMS
  • Data Quality
  • SLA
  • DevOps
  • Version Control
  • Git
  • Continuous Integration
  • Continuous Delivery
  • Automated Testing
  • Technical Writing
  • Real-time
  • Apache Kafka
  • Microsoft Azure
  • Apache Spark
  • Streaming
  • Oracle ERP
  • Enterprise Resource Planning
  • Supply Chain Management
  • Logistics
  • Reverse Logistics
  • Procurement
  • Migration
  • Python
  • PySpark
  • Scripting

Summary

Job Description:

Mandatory Skills:

Microsoft Fabric, Databricks, data pipelines, data products, Kafka, Azure Event Hub

Preferred Skills:

Apache Airflow, Azure Data Factory, medallion architecture, CI/CD, DevOps, TMS



The Data Engineer (Level IV) designs, builds, and operates the data pipelines and lakehouse data products that power P&SC intelligence across device supply chain, reverse logistics, procurement, network supply chain, and real-time control tower capabilities. Operating within a squad focused on a specific supply chain domain or a cross-cutting platform function, the Data Engineer is a hands-on technical contributor who builds to production quality, owns pipeline reliability and KTLO (Keep the Lights On), and actively raises the engineering standard of the team.



CORE RESPONSIBILITIES:

Design and build production-grade data pipelines spanning source ingestion, bronze landing, silver transformation, and gold-layer data product delivery within the squad's domain scope.

Own pipeline KTLO (Keep the Lights On) for assigned data products, including monitoring, alerting, incident response, and ongoing reliability improvements.

Implement data ingestion patterns for assigned source systems including batch file ingestion, API-based ingestion, and event-driven streaming (Kafka, Azure Event Hub) depending on squad scope.

Apply medallion architecture (bronze, silver, gold) and Fabric IQ certification standards consistently across all data product builds.

Collaborate with System Analysts to implement field-level transformations, business rule logic, and data quality checks as specified in product requirements documentation.

Participate in and contribute to pipeline design reviews, ensuring solutions align with the organization's Databricks and Fabric engineering standards.

Support the migration and deprecation of legacy platforms, including SCOpsBI SQL Server and SAP boundary systems, following the organization's extract, validate, rebuild, cutover, and decommission pattern.

Write and maintain comprehensive pipeline documentation, including data lineage, transformation logic, SLA definitions, and dependency maps.

Contribute to the organization's DevOps and engineering reliability practices, including CI/CD pipeline setup, testing frameworks, and incident runbooks.

Mentor and technically guide junior and mid-level data engineers within the squad.
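The medallion responsibilities above (bronze landing, silver transformation, gold-layer delivery) can be illustrated with a minimal, framework-agnostic sketch. In a real pipeline these steps would be PySpark jobs writing Delta tables; the record fields here ("order_id", "qty") and the source tag are hypothetical examples, not details from this posting:

```python
# Minimal medallion-layer sketch using plain Python dicts.
# Bronze lands data as-is, silver cleanses and applies rules,
# gold aggregates into a consumable data product.

def bronze_land(raw_rows):
    """Bronze: land source records unchanged, tagging provenance."""
    return [{**row, "_source": "erp_batch"} for row in raw_rows]

def silver_transform(bronze_rows):
    """Silver: field-level cleansing and basic business rules."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None:   # simple data quality gate
            continue
        silver.append({
            "order_id": str(row["order_id"]).strip(),
            "qty": int(row.get("qty", 0)),
            "_source": row["_source"],
        })
    return silver

def gold_product(silver_rows):
    """Gold: aggregate cleansed rows into a data product."""
    total = sum(r["qty"] for r in silver_rows)
    return {"order_count": len(silver_rows), "total_qty": total}

raw = [{"order_id": " A1 ", "qty": "3"}, {"order_id": None, "qty": "9"}]
product = gold_product(silver_transform(bronze_land(raw)))
# the row with a missing order_id is dropped at the silver layer
```

The same layering applies regardless of engine; Databricks and Fabric simply provide the distributed compute and Delta/OneLake storage underneath.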



REQUIRED QUALIFICATIONS:

7-10 years of data engineering experience with a strong track record delivering production data pipelines in large enterprise environments.

Expert proficiency in PySpark or Spark SQL for large-scale data transformation on distributed compute platforms.

Hands-on experience with Databricks including Delta Lake, Unity Catalog, and workflow orchestration.

Experience with Microsoft Fabric or Azure Synapse Analytics; familiarity with Fabric IQ and OneLake is a plus.

Proficiency with pipeline orchestration tools such as Azure Data Factory, Databricks Workflows, or Apache Airflow.

Solid SQL skills for data validation, transformation logic, and ad-hoc source system analysis.

Experience building and maintaining ingestion pipelines from enterprise operational systems such as ERP, WMS, TMS, or comparable platforms.

Strong understanding of data quality frameworks, including implementing checks, alerting on anomalies, and maintaining SLA-compliant pipeline health.

Experience with DevOps practices for data pipelines including version control (Git), CI/CD, and automated testing.

Ability to operate independently on complex technical problems with minimal oversight while maintaining clear technical documentation.
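The data quality expectation above (implementing checks and alerting on anomalies against SLA thresholds) might be sketched as follows; the column name "shipment_id" and the 5% threshold are illustrative assumptions, not requirements from the posting:

```python
# Illustrative batch data quality check: fail a batch whose
# null rate on a key column exceeds an SLA threshold.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_batch(rows, column="shipment_id", max_null_rate=0.05):
    """Return (passed, rate); a failing batch would trigger an alert."""
    rate = null_rate(rows, column)
    return rate <= max_null_rate, rate

batch = [{"shipment_id": i} for i in range(19)] + [{"shipment_id": None}]
passed, rate = check_batch(batch)   # 1 null in 20 rows
```

In production such checks typically run as a post-write validation step, with failures routed to the incident-management and alerting tooling the responsibilities section describes.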



PREFERRED QUALIFICATIONS:

Experience with real-time streaming technologies including Kafka, Azure Event Hub, Delta Live Tables, or Spark Structured Streaming.

Familiarity with Oracle ERP ingestion patterns or large-scale ERP migration programs.

Background in supply chain, logistics, reverse logistics, or procurement data domains.

Experience with legacy platform migration and decommission programs.

Python development skills beyond PySpark, including utility scripting and framework contributions.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 91166511
  • Position Id: INFT 2749-1807-1778080928
  • Posted 4 hours ago

Company Info

About INFT Solutions Inc

At INFT Solutions, we understand that technology is the backbone of modern businesses. Our goal is to empower companies with innovative, scalable, and future-ready solutions that drive efficiency and growth.

With a deep commitment to excellence, we provide end-to-end IT services, ensuring seamless integration of technology into your business operations. Whether you need application development, IT staffing, or cutting-edge digital transformation solutions, our expertise guarantees measurable success.

Partner with us to leverage industry-leading technologies and a customer-centric approach that delivers real results. Our team stays ahead of the curve, constantly evolving to meet the dynamic needs of businesses across diverse sectors. Let INFT Solutions be your trusted technology partner in achieving sustainable success.
