Job Details
Bellevue, WA
Job Description:
>> Coordinate with the customer and work on MS Fabric, Synapse, ADF, SQL, ETL, and PySpark
>> Work across Fabric workloads: Data Factory (ETL), Synapse Data Engineering (Spark), Synapse Data Warehousing (SQL), and OneLake for storage
>> Design data loading patterns, lakehouse architectures, and orchestration processes for enterprise-scale analytics
>> Prepare and enrich data for analysis, applying data quality and consistency checks
>> Monitor and optimize Fabric pipelines and workloads for cost efficiency and high availability
>> Build and manage data pipelines using Azure Data Factory (ADF) and Synapse pipelines for batch and incremental ingestion (see the incremental load sketch after this list)
>> Develop PySpark notebooks and SQL scripts to clean, transform, and convert raw files into optimized formats (Parquet/Delta), as in the notebook sketch after this list
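
For illustration, a minimal sketch of the kind of PySpark notebook work the last item describes: reading raw CSV files, applying basic cleaning, and writing the result as a Delta table. The paths, table name, and column names (Files/raw/orders/, order_id, order_date, amount, orders_clean) are hypothetical, and the sketch assumes a Spark session with Delta support, as in a Fabric or Synapse notebook.

```python
# Minimal sketch: clean raw CSV files and persist them as a Delta table.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw files from the lakehouse Files area (path is an assumption)
raw_df = spark.read.option("header", "true").csv("Files/raw/orders/")

# Basic cleaning: trim identifiers, cast types, drop exact duplicates
clean_df = (
    raw_df
    .withColumn("order_id", F.trim(F.col("order_id")))
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
)

# Write the cleaned data as a Delta table in the lakehouse Tables area
clean_df.write.format("delta").mode("overwrite").saveAsTable("orders_clean")
```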
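
And a minimal sketch of a watermark-based incremental load of the kind the pipeline item refers to. Table names, the staging path, and the modified_at column are assumptions; in practice the watermark would typically be passed in as an ADF or Synapse pipeline parameter rather than derived inside the notebook.

```python
# Minimal sketch: append only new or changed rows based on a high-water mark.
# Table/column names and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

target_table = "orders_clean"

# Determine the high-water mark from the target table; fall back to a
# full load when the table is empty or does not exist yet.
try:
    last_loaded = spark.table(target_table).agg(F.max("modified_at")).first()[0]
except Exception:
    last_loaded = None

# Read the staged Delta files and keep only rows newer than the watermark
source_df = spark.read.format("delta").load("Files/staging/orders/")
if last_loaded is not None:
    source_df = source_df.filter(F.col("modified_at") > F.lit(last_loaded))

# Append the incremental slice to the target table
source_df.write.format("delta").mode("append").saveAsTable(target_table)
```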