Software Engineering

Bellevue, WA, US • Posted 1 day ago • Updated 1 day ago
Contract Independent
Travel Required
On-site
$50 - $52/hr

Job Details

Skills

  • ADF
  • Active Directory
  • Big Data
  • Change Data Capture
  • Continuous Delivery
  • Continuous Integration
  • Data Architecture
  • Data Lake
  • Data Flow
  • Data Warehouse
  • Data Quality
  • DevOps
  • Databricks
  • Dimensional Modeling
  • GitHub
  • Microsoft Azure
  • Microsoft Power BI
  • PySpark
  • Git
  • RBAC
  • Python
  • Query Optimization
  • SCD
  • SQL
  • SLA
  • SQL Azure
  • Semantics
  • Root Cause Analysis
  • Real-time
  • Software Engineering

Summary

Position: Lead I - Software Engineering 
Location: Bellevue, WA   
Duration: 6 Months 
Job Type: Temporary Assignment   
Work Type: Hybrid
 
Job Description  
  • This role builds and maintains scalable data pipelines and lakehouse infrastructure on Microsoft Azure to support efficient extraction, transformation, and loading of data across batch and real-time workloads. It involves implementing and managing the Medallion Architecture (Bronze → Silver → Gold) using Azure Data Factory, Databricks (PySpark), Azure SQL Database, and Databricks Unity Catalog. 
  • The role requires ensuring SLA-adherent data quality standards. Success is measured by pipeline reliability, data freshness SLA compliance, and the quality of Gold-layer datasets powering Power BI executive dashboards. 
  • The work supports organizational decision-making by delivering trusted, well-governed data to business executives and analytics consumers. 
Required Skills: 
  • Experience building and optimizing big data pipelines using Azure Data Factory, PySpark, and SQL across structured and semi-structured data sets 
  • Hands-on experience implementing Medallion Architecture (Bronze/Silver/Gold) 
  • Experience with Delta Lake — ACID transactions, incremental loading, schema evolution, partitioning strategies 
  • Experience performing root cause analysis on pipeline failures and data quality issues to resolve SLA breaches and identify platform improvement opportunities 
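The incremental-loading pattern named above (Delta Lake MERGE semantics promoting Bronze records into Silver) can be sketched in plain Python; this is an illustration of the upsert logic only, not Databricks code, and all table and field names here are invented:

```python
# Minimal sketch of Medallion-style incremental promotion (Bronze -> Silver),
# using plain Python dicts to stand in for Delta tables. In practice this
# would be a Delta Lake `MERGE INTO` on Databricks; names are hypothetical.

def merge_upsert(silver, bronze_batch, key="id"):
    """Upsert a batch of Bronze records into the Silver table by key,
    mimicking `MERGE INTO silver USING bronze ON silver.id = bronze.id`."""
    by_key = {row[key]: row for row in silver}
    for row in bronze_batch:
        by_key[row[key]] = row  # matched -> update, not matched -> insert
    return list(by_key.values())

silver = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
batch = [{"id": 2, "amount": 25}, {"id": 3, "amount": 30}]
silver = merge_upsert(silver, batch)
# id 2 is updated in place, id 3 is inserted, id 1 is untouched
```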
Azure Foundational Services : 
  • Working knowledge of: Azure Data Factory (ADF), ADLS Gen2, Azure SQL Database, Azure Blob Storage, Azure Key Vault, Azure Monitor / Log Analytics, Azure Event Hubs, Microsoft Fabric Lakehouse, Azure Active Directory / Entra ID (RBAC, Service Principals)
Programming Languages: 
  • Proficiency in Python and PySpark for data transformation, pipeline automation, and large-scale distributed processing; strong SQL skills including window functions, CTEs, and query optimization across relational and lakehouse engines 
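The SQL skills called out here (CTEs and window functions) are of the kind sketched below, shown against SQLite's in-memory engine purely for illustration; the table and column names are invented:

```python
# Hedged illustration of a CTE plus a window function: "latest order per
# customer" via ROW_NUMBER(). Runs on SQLite's in-memory engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('a', '2024-01-01', 100), ('a', '2024-01-05', 50),
        ('b', '2024-01-02', 75);
""")
rows = conn.execute("""
    WITH ranked AS (                        -- CTE
        SELECT customer, order_date, amount,
               ROW_NUMBER() OVER (          -- window function
                   PARTITION BY customer ORDER BY order_date DESC
               ) AS rn
        FROM orders
    )
    SELECT customer, amount FROM ranked WHERE rn = 1 ORDER BY customer
""").fetchall()
# rows -> [('a', 50.0), ('b', 75.0)]: the most recent order per customer
```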
Data Architecture: 
  • Solid understanding of Medallion Architecture, dimensional modeling (Star Schema, SCD Types 1/2/3), and the trade-offs between lakehouse, data warehouse, and data lake patterns 
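The SCD Type 2 pattern mentioned above (expire the current dimension row on change, append a new current version) can be sketched as follows; the field names are illustrative, not from the job description:

```python
# Compact sketch of an SCD Type 2 update in plain Python: on an attribute
# change, the current row is closed out and a new current row is appended.
from datetime import date

def scd2_apply(dim_rows, incoming, key, load_date):
    """Expire the current row for `key` if attributes changed, then append
    the new version with an open-ended validity window."""
    current = next((r for r in dim_rows
                    if r["key"] == key and r["is_current"]), None)
    if current and current["attrs"] == incoming:
        return dim_rows                   # no change -> no new version
    if current:
        current["is_current"] = False     # close out the old version
        current["end_date"] = load_date
    dim_rows.append({"key": key, "attrs": incoming,
                     "start_date": load_date, "end_date": None,
                     "is_current": True})
    return dim_rows

dim = []
dim = scd2_apply(dim, {"city": "Seattle"}, key=1, load_date=date(2024, 1, 1))
dim = scd2_apply(dim, {"city": "Bellevue"}, key=1, load_date=date(2024, 6, 1))
# dim now holds two versions for key 1; only the Bellevue row is current
```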
Pipeline Engineering: 
  • Ability to build robust ADF pipelines with ForEach, Lookup, Copy Activity, and Data Flows; incremental loading via watermark or CDC; error handling, retry logic, and dead-letter patterns 
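The watermark-based incremental load named above (which ADF typically realizes with a Lookup on a watermark table feeding a filtered Copy Activity) reduces to this pattern; the function and field names are hypothetical stand-ins:

```python
# Sketch of watermark-based incremental extraction: pull only rows modified
# after the stored watermark, then advance the watermark for the next run.

def incremental_extract(source_rows, last_watermark):
    """Return the delta since `last_watermark` and the new watermark."""
    new_rows = [r for r in source_rows if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in new_rows),
                        default=last_watermark)
    return new_rows, new_watermark

source = [{"id": 1, "modified": 10}, {"id": 2, "modified": 20},
          {"id": 3, "modified": 30}]
batch, wm = incremental_extract(source, last_watermark=15)
# batch contains ids 2 and 3; the watermark advances from 15 to 30
```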
Data Quality Experience: 
  • Experience implementing SLA-based data quality checks (freshness, completeness, row count), monitoring via Azure Monitor and ADF diagnostic logs, and defining data quality agreements with business stakeholders. 
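The three SLA checks listed here (freshness, completeness, row count) amount to predicates like the following; the thresholds and field names are assumptions, and in Azure the results would typically be emitted to Azure Monitor rather than returned in-process:

```python
# Illustrative SLA-style data quality checks as plain predicates.
from datetime import datetime, timedelta

def run_dq_checks(rows, loaded_at, *, now,
                  freshness_sla=timedelta(hours=4),
                  min_rows=1, required_fields=("id",)):
    """Evaluate freshness, row-count, and completeness checks for a load."""
    results = {
        "freshness": now - loaded_at <= freshness_sla,
        "row_count": len(rows) >= min_rows,
        "completeness": all(r.get(f) is not None
                            for r in rows for f in required_fields),
    }
    results["sla_met"] = all(results.values())
    return results

now = datetime(2024, 1, 1, 12, 0)
checks = run_dq_checks([{"id": 1}, {"id": 2}],
                       loaded_at=datetime(2024, 1, 1, 10, 0), now=now)
# a 2-hour-old load of 2 complete rows passes all three checks
```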
DevOps for Data: 
  • Experience with Git-based workflows, ADF Git integration, CI/CD pipeline promotion across Dev/Test/Prod using Azure DevOps or GitHub Actions 
Reporting Layer Awareness: 
  • Understanding of how Gold-layer data feeds Power BI — DirectQuery vs. Import mode trade-offs, dataset refresh patterns, and semantic model collaboration with BI teams 
  • Ability to manage work across multiple concurrent pipeline projects, prioritize by business impact, and communicate status clearly to technical and non-technical stakeholders 
Good to have skills: 
  • Experience with Microsoft Fabric (Lakehouse, Notebooks, OneLake, Fabric Pipelines) — active migration or greenfield project 
  • Experience with real-time / streaming workloads using Azure Event Hubs or Structured Streaming in PySpark 
  • Experience delivering data platforms for executive-level reporting via Power BI semantic models 
  • Dice Id: 91159673
  • Position Id: 8960013

Company Info

About S3 Staffing USA

S3staffingusa is an industry-leading consulting firm specialising in IT staffing and software development solutions. We act as an effective teammate for our international partners, helping them meet the business objectives set by their end customers. Over the past few years we have maintained a steadily growing presence in the industry, and we currently field a 200+ strong resource organisation supervising our customers' critical operations and intricate business applications.

Our firm is built on the philosophy of cultivating long-term relationships with clients, achieved by mastering cutting-edge technologies and incorporating proven, process-driven methodologies into our workflow. These competitive advantages let us deliver effective, timely technology solutions while adhering strictly to budget limits and working collaboratively with our clients. If you need technology-based solutions for a dynamic and demanding business environment, look no further than S3staffingusa.
