Senior Data Modeler

  • Broken Arrow, OK
  • Posted 60+ days ago | Updated 5 hours ago

Overview

On Site
Contract - W2
Contract - Independent

Skills

Analytics
Decision-making
Use Cases
Stakeholder Engagement
Optimization
Workflow
Scalability
Data Processing
Data Flow
Collaboration
Data Architecture
Management
Data Integration
Cloud Computing
Microsoft Azure
Amazon Web Services
Google Cloud Platform
Business Acumen
SQL
Python
PySpark
Data Manipulation
Modeling
Extract, Transform, Load (ETL)
ELT
Data Warehouse
Data Modeling
ERwin
Soft Skills
Communication
Documentation
Computer Science
Information Systems
Databricks
Data Governance
Data Quality
Agile
DevOps
Continuous Integration
Continuous Delivery
Data Engineering
Business Intelligence
Data Visualization
Microsoft Power BI
Tableau
Reporting

Job Details

System One has an exciting Senior Data Modeler opportunity with a partner in the Columbus, OH area. The role involves working with complex data sets across an organization spanning multiple states.

Successful candidates must be able to provide proof of ability to work in the U.S. without sponsorship. This position is not open to corp-to-corp, subcontractor or independent consulting arrangements.

About the Role
We are seeking an experienced, highly skilled Senior Data Modeler to join a data engineering team. This role is responsible for designing and implementing the data models that drive analytics, reporting, and data-driven decision-making. The ideal candidate will have hands-on experience with industry-standard data modeling methodologies (Kimball, Inmon, Data Vault 2.0) and will have applied them in modern data architectures, including the Medallion Architecture on Databricks with Delta Live Tables (DLT). You will work closely with business stakeholders to gather requirements and design data flows that support enterprise-level data solutions.
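
To give candidates a sense of the modeling work involved, here is a minimal Data Vault 2.0 sketch in PySpark. It is purely illustrative and not drawn from the client's environment: the source table (raw_customers), the target schema (dv), and all column names are assumptions.

  # Illustrative Data Vault 2.0 pattern: load a hub and a satellite from a
  # raw source table. Assumes a running SparkSession in a Delta-capable
  # environment; every table and column name here is hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.getOrCreate()
  raw = spark.read.table("raw_customers")  # hypothetical source table

  # Hub: one row per business key, identified by a deterministic hash key.
  hub_customer = (
      raw.select("customer_id")
         .dropDuplicates(["customer_id"])
         .withColumn("hub_customer_hk",
                     F.sha2(F.col("customer_id").cast("string"), 256))
         .withColumn("load_dts", F.current_timestamp())
         .withColumn("record_source", F.lit("raw_customers"))
  )

  # Satellite: descriptive attributes keyed by the hub hash key, with a
  # hash diff so attribute changes can be detected between loads.
  sat_customer = (
      raw.withColumn("hub_customer_hk",
                     F.sha2(F.col("customer_id").cast("string"), 256))
         .withColumn("hash_diff",
                     F.sha2(F.concat_ws("||", "name", "email", "segment"), 256))
         .withColumn("load_dts", F.current_timestamp())
         .select("hub_customer_hk", "name", "email", "segment",
                 "hash_diff", "load_dts")
  )

  hub_customer.write.format("delta").mode("append").saveAsTable("dv.hub_customer")
  sat_customer.write.format("delta").mode("append").saveAsTable("dv.sat_customer")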

Key Responsibilities
  • Data Modeling:
    • Develop, implement, and maintain high-quality data models using methodologies such as Kimball (dimensional), Inmon (3NF), and Data Vault 2.0, tailoring each approach to fit specific business use cases.
  • Medallion Architecture:
    • Design and optimize data models following the Medallion Architecture (Bronze, Silver, Gold layers) on Databricks, particularly with Delta Live Tables (DLT), to support data ingestion, transformation, and consumption (a brief illustrative sketch follows this list).
  • Data Flow:
    • Develop a thorough understanding of existing data flows and pipelines.
  • Stakeholder Engagement:
    • Partner with business stakeholders to understand their data needs and translate business requirements into robust data models and workflows.
  • Optimization and Performance:
    • Continuously optimize models and workflows for performance, scalability, and reliability. Troubleshoot and resolve data and model-related issues to ensure efficient data processing.
  • Documentation and Best Practices:
    • Develop and maintain detailed documentation for data models, data flows, and data governance processes, promoting best practices across the data modeling function.
  • Collaboration:
    • Collaborate with data engineers, data analysts, and data scientists to integrate models seamlessly into the broader data platform, ensuring alignment with organizational data standards.
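
As a purely illustrative companion to the Medallion Architecture responsibility above, the following sketch shows the Bronze/Silver/Gold pattern as a Databricks Delta Live Tables pipeline in Python. It runs only inside a DLT pipeline (where the dlt module and spark session are provided), and the landing path, table names, and columns are assumptions, not client specifics.

  # Illustrative Bronze/Silver/Gold layout as a Delta Live Tables pipeline.
  # Valid only inside a Databricks DLT pipeline; names and paths are
  # hypothetical.
  import dlt
  from pyspark.sql import functions as F

  @dlt.table(comment="Bronze: raw orders ingested as-is from the landing zone")
  def orders_bronze():
      return spark.read.format("json").load("/mnt/landing/orders/")  # hypothetical path

  @dlt.table(comment="Silver: cleaned, typed, and deduplicated orders")
  def orders_silver():
      return (
          dlt.read("orders_bronze")
             .where(F.col("order_id").isNotNull())
             .dropDuplicates(["order_id"])
             .withColumn("order_ts", F.to_timestamp("order_ts"))
      )

  @dlt.table(comment="Gold: daily revenue aggregate for downstream reporting")
  def daily_revenue_gold():
      return (
          dlt.read("orders_silver")
             .groupBy(F.to_date("order_ts").alias("order_date"))
             .agg(F.sum("amount").alias("total_revenue"))
      )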

Required Qualifications
  • Experience:
    • 10+ years in data modeling or data architecture roles, with a strong portfolio of designing and implementing data models across methodologies (Kimball, Inmon, and Data Vault 2.0).
    • 2+ years of Data Vault 2.0 experience is a must-have.
  • Technical Expertise:
    • Proven experience with Databricks, particularly in building and managing Delta Live Tables (DLT).
    • Proficient in the Medallion Architecture (Bronze, Silver, Gold layers) and its application to scalable, modern data warehousing solutions.
    • Hands-on experience with data integration, data quality, and data transformation on cloud platforms (e.g., Azure, AWS, Google Cloud Platform).
  • Business Acumen:
    • Demonstrated ability to work with business stakeholders to gather requirements, translate them into data models, and design data solutions that meet their needs.
  • Tools & Technologies:
    • Proficient with SQL, Python, and/or PySpark for data manipulation and modeling (see the sketch following this section).
    • Familiarity with ETL/ELT tools, data warehousing platforms, and data modeling tools (e.g., ERwin, Lucidchart).
  • Soft Skills:
    • Excellent communication, documentation, and interpersonal skills. Ability to work collaboratively in cross-functional teams.
  • Education:
    • Bachelor's degree in Computer Science, Information Systems, or a related field. Advanced degrees or relevant certifications (e.g., CDMP, Databricks certification) are a plus.
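
As a concrete (and again purely illustrative) example of the SQL/Python/PySpark proficiency listed above, here is a Kimball-style star schema load sketched in PySpark; the staging table, warehouse schema, and column names are assumptions.

  # Illustrative Kimball-style star schema load: derive a product dimension
  # with a surrogate key and a sales fact keyed to it. Assumes a running
  # SparkSession; all names are hypothetical.
  from pyspark.sql import SparkSession, functions as F
  from pyspark.sql.window import Window

  spark = SparkSession.builder.getOrCreate()
  sales = spark.read.table("staging.sales")  # hypothetical staging table

  # Dimension: one row per product, with a sequential surrogate key.
  dim_product = (
      sales.select("product_code", "product_name", "category")
           .dropDuplicates(["product_code"])
           .withColumn("product_sk",
                       F.row_number().over(Window.orderBy("product_code")))
  )

  # Fact: measures at order-line grain, carrying the dimension surrogate key.
  fact_sales = (
      sales.join(dim_product.select("product_code", "product_sk"), "product_code")
           .select("product_sk",
                   F.to_date("sold_at").alias("date_key"),
                   "quantity",
                   "amount")
  )

  dim_product.write.format("delta").mode("overwrite").saveAsTable("dw.dim_product")
  fact_sales.write.format("delta").mode("overwrite").saveAsTable("dw.fact_sales")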

Preferred Qualifications
  • Experience with data governance and data quality frameworks.
  • Familiarity with Agile methodologies and DevOps practices, including CI/CD pipelines for data engineering.
  • Knowledge of BI and data visualization tools (e.g., Power BI, Tableau) for downstream reporting.


Ref: #208-Eng Tulsa