Fabric Data Engineer (Remote)
We are seeking a skilled Data Engineer with strong Python/PySpark expertise to build and maintain scalable cloud-based pipelines within Azure environments such as Microsoft Fabric or Synapse.
As part of our process after applying, you may receive an invitation from our AI Recruiter, Avery, for a short conversation that lets you share more about your background beyond your resume. For questions, contact .
- Job Type: Long-term contract
- Compensation: expected $60 - $72 per hour, plus benefits
- Location: Remote
- No C2C; no visa sponsorship is available for this role
What You'll Do:
- Design, develop, test, and maintain PySpark notebooks to ingest data from SharePoint (REST/Graph API), Outlook (Graph/REST API), and third-party REST APIs
- Bronze layer (raw): persist ingestion in Delta Lake/Parquet with minimal transformations, preserving schema drift and auditing metadata
- Silver layer (curated): apply data quality checks, deduplication, normalization, standardization, and business-key alignment
- Implement and maintain a star-schema data model (facts and dimensions) for analytics
- Build and maintain CI/CD for data pipelines (Git, unit tests, deployment to Databricks/Azure Synapse, etc.)
- Implement monitoring, alerting, retries, and idempotent ingest strategies
- Data governance: lineage, masking of PII/PII-sensitive fields, role-based access
- Documentation: pipeline design docs, data dictionaries, and runbooks
- Collaborate with Data Analysts, Data Scientists, and Business Stakeholders to translate requirements into scalable pipelines
- Optimize performance (partitioning, caching, Delta tables, spark configurations) and control costs
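The bronze-layer persistence, silver-layer curation, and idempotent-ingest responsibilities above can be sketched as follows. This is a simplified, hypothetical plain-Python stand-in for the PySpark/Delta Lake implementation (the function names and the in-memory `bronze_store` are illustrative only):

```python
import hashlib
import json

def ingest_to_bronze(records, bronze_store, run_id):
    """Idempotent bronze-layer ingest: re-running the same run_id is a no-op.
    Records are persisted as-is (raw), with audit metadata attached."""
    if run_id in bronze_store:  # idempotency guard: this run was already ingested
        return 0
    bronze_store[run_id] = [
        {**r,
         "_run_id": run_id,  # audit metadata preserved alongside raw fields
         "_raw_hash": hashlib.sha256(
             json.dumps(r, sort_keys=True).encode()).hexdigest()}
        for r in records
    ]
    return len(bronze_store[run_id])

def curate_to_silver(bronze_store, business_key):
    """Silver-layer pass: standardize the business key, then deduplicate on it.
    Audit fields (prefixed with "_") are stripped from the curated output."""
    seen, silver = set(), []
    for run in bronze_store.values():
        for r in run:
            key = str(r.get(business_key, "")).strip().lower()  # normalization
            if not key or key in seen:  # drop empty keys and duplicates
                continue
            seen.add(key)
            silver.append({business_key: key,
                           **{k: v for k, v in r.items()
                              if not k.startswith("_") and k != business_key}})
    return silver
```

In a real pipeline these steps would write Delta tables in ADLS Gen2 rather than dictionaries, but the shape is the same: the idempotency guard makes retries safe, and normalization happens in the silver pass so the raw bronze data stays replayable.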
What Gets You the Job:
- Bachelor's or Master's in Computer Science, Data Engineering, or a related field
- 4+ years of experience as a Data Engineer
- Microsoft Fabric experience is highly preferred; experience with Synapse, Power BI, or data analytics is also acceptable
- Must be proficient in Python/PySpark notebooks
- Strong data ingestion skills are required
- Must have strong SQL skills
- Experience with Alteryx / MapLogic
- Experience ingesting data from REST APIs (SharePoint/Graph API, Outlook/Exchange, 3rd party APIs)
- Demonstrated experience with Bronze/Silver/Gold data modeling and star schema design
- Experience with Delta Lake / data lake architectures
- Familiarity with data quality frameworks and profiling
- Experience with cloud data platforms (Azure preferred: Databricks, ADLS Gen2, Synapse)
- Strong problem-solving, collaboration, and communication skills
Nice-to-Have:
- Databricks certification or Azure Data Engineer Associate
- Experience with Graph API authentication (OAuth2), service principals
- Data governance tools and data masking concepts
- Experience with streaming/batch hybrid ETL patterns
- Knowledge of healthcare/finance data domain or other regulated data
Irvine Technology Corporation (ITC) connects top talent with exceptional opportunities in IT, Security, Engineering, and Design. From startups to Fortune 500s, we partner with leading companies nationwide. Our AI recruiter, Avery, helps streamline the first step of your journey so we can focus on what matters most: helping you grow. Join us. Let us ELEVATE your career!
Irvine Technology Corporation provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Irvine Technology Corporation complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.