Overview
On Site
$45 - $60
Contract - W2
Skills
Azure Data Fabric
legacy systems
COBOL
Oracle
Generative AI
Azure Data Lake
Data Factory
Power BI
Data Modeling
workflows
data governance
data mapping
ELT/ETL pipelines
PySpark
storage
OneLake
AI/ML
CI/CD
Databricks
ADF
Azure data certifications
Job Details
Senior Data Solutions Architect / Lead Engineer: Case Management Platform Modernization (AI-Ready Focus)
We are seeking a highly skilled and experienced Senior Data Solutions Architect / Lead Engineer to spearhead the modernization of our enterprise case management data platform. This pivotal role requires a hybrid professional capable of defining the strategic architecture while simultaneously leading the hands-on engineering effort to migrate legacy data systems to a cutting-edge, AI-ready Azure Data Fabric environment.
The ideal candidate will bridge the gap between enterprise data strategy and tactical implementation, ensuring data is modeled correctly, migrated efficiently from legacy sources (including COBOL and Oracle), and made available for high-impact analytical insights, operational reporting, and future Artificial Intelligence/Machine Learning (AI/ML) initiatives.
Key Responsibilities
Strategic Architecture Definition: Design the comprehensive, end-to-end data architecture for the new Azure Data Fabric platform (including Data Lakehouse, Data Factory, Synapse, and Power BI integration), ensuring alignment with business objectives for case management operations and future AI needs.
Advanced Data Modeling & Governance: Develop robust conceptual, logical, and physical data models for complex case management workflows. Establish and enforce comprehensive data governance, quality standards, and security protocols in alignment with regulatory requirements (e.g., GDPR, PII handling).
Legacy Migration Leadership & Execution: Own the strategy and execution of all data migration activities from existing legacy systems, including data extraction from active COBOL systems and Oracle databases. Define data mapping specifications, validate data integrity, and manage migration risks.
Hands-On Pipeline Engineering: Design, build, and optimize scalable ELT/ETL data pipelines using Azure Data Factory and PySpark in Databricks/Fabric notebooks to ingest, process, and structure raw case management data into clean, curated data assets (Medallion Architecture).
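To illustrate the Medallion (bronze/silver/gold) layering this role owns, here is a minimal sketch in plain Python. In the actual platform this work would be done with PySpark DataFrames in Databricks/Fabric notebooks against real case data; the field names and records below are hypothetical.

```python
# Illustrative Medallion-style layering on plain Python records.
# Bronze = raw as-ingested data, Silver = cleaned/typed, Gold = curated
# aggregates ready for reporting and AI/ML consumption.

from collections import Counter
from datetime import date

# Bronze: raw case records as landed, including a malformed row.
bronze = [
    {"case_id": "C-001", "status": " open ", "opened": "2024-01-15"},
    {"case_id": "C-002", "status": "CLOSED", "opened": "2024-02-01"},
    {"case_id": None,    "status": "open",   "opened": "2024-02-03"},  # fails QC
]

def to_silver(rows):
    """Silver layer: drop invalid rows, normalize types and casing."""
    out = []
    for r in rows:
        if not r["case_id"]:
            continue  # quarantine rows failing basic data-quality checks
        out.append({
            "case_id": r["case_id"],
            "status": r["status"].strip().lower(),
            "opened": date.fromisoformat(r["opened"]),
        })
    return out

def to_gold(rows):
    """Gold layer: a curated aggregate, e.g. open/closed case counts."""
    return dict(Counter(r["status"] for r in rows))

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'open': 1, 'closed': 1}
```

The same bronze → silver → gold progression applies regardless of scale; PySpark simply distributes the cleansing and aggregation steps across the cluster.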
AI Readiness & Platform Optimization: Implement data storage solutions within Azure OneLake, optimize data structures for performance and cost, and ensure data accessibility and structure support efficient consumption by data science and AI/ML workloads. Automate operational processes for monitoring, alerting, and CI/CD pipelines.
Technical Leadership & Collaboration: Act as the primary technical liaison among business stakeholders, infrastructure teams, and data consumers. Provide mentorship and establish best practices for data engineering teams.
Collaboration on Experimental Generative AI Initiatives: Partner with cross-functional teams to contribute technical expertise to experimental large language model projects, ensuring data readiness, scalable architecture, and alignment with modern AI engineering practices.
Required Skills & Qualifications
Experience: 8+ years of progressive experience in data architecture, data engineering, or a related discipline, with a demonstrated record of leading large-scale data modernization projects.
Legacy Systems Integration: Direct, hands-on experience extracting and interpreting data from legacy environments, specifically working with active COBOL systems and enterprise Oracle databases.
Azure Fabric Expertise: Deep, hands-on expertise with the full Microsoft Azure data stack, specifically Microsoft Fabric, Azure Data Lake Storage Gen2 (OneLake), Azure Data Factory (ADF), Azure Synapse Analytics, and Azure Databricks.
Technical Proficiency: Expert-level proficiency in SQL and advanced programming skills in Python or PySpark. Experience with data modeling tools and methodologies is essential.
AI/ML Familiarity: Understanding of data preparation requirements for analytical and machine learning use cases and experience enabling data science teams.
Generative AI Familiarity: Strong foundational understanding of generative AI concepts and modern AI engineering patterns, such as grounding, retrieval-augmented generation (RAG), vector indexing, prompt engineering, and dependency-aware copilot design. Experience building or supporting AI copilots or related intelligent automation systems is a plus.
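For context on the retrieval step behind RAG, here is a toy sketch using a bag-of-words "vector index" with cosine similarity. Production systems would use embedding models and a managed vector store; the documents, function names, and query below are all hypothetical.

```python
# Minimal sketch of retrieval for RAG: vectorize documents, index them,
# and return the closest match to a query, which would then ground an
# LLM prompt (the "G" in RAG).

import math
from collections import Counter

docs = [
    "case records are stored in OneLake as curated delta tables",
    "legacy COBOL extracts are landed in the bronze layer",
    "Power BI reports read from the gold layer",
]

def vectorize(text):
    """Toy term-frequency vector; real systems use embedding models."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector index": each document stored alongside its vector.
index = [(d, vectorize(d)) for d in docs]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(index, key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("where are COBOL extracts stored?")
print(context[0])  # legacy COBOL extracts are landed in the bronze layer
```

Grounding an LLM then amounts to prepending the retrieved context to the prompt so the model answers from curated platform data rather than its training set.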
Certifications (Preferred): Relevant Microsoft Azure Data certifications, such as the Microsoft Certified: Fabric Analytics Engineer Associate (DP-600) or legacy Azure Data Engineer/Architect certifications.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.