Senior Data Architect

Overview

Hybrid
$80 - $100
Contract - W2
Contract - 12 Month(s)

Skills

Data Architect
AWS
Snowflake
Databricks

Job Details

Senior Data Architect

12-Month Contract

Hybrid in Burbank, CA (3 days onsite per week)

Summary

The Senior Data Architect plays a critical role in designing the platform's data foundation - ensuring that data models, pipelines, and integration frameworks are scalable, performant, and aligned with enterprise data governance and platform goals.

Embedded within the Platform Pod, the Senior Data Architect partners with the Platform Owner, Cloud Architect, data engineering teams, and product-aligned pods to ensure the architecture supports both immediate product needs and long-term platform evolution. This role directly enables the delivery of reliable, real-time, and reusable data products across multiple Studio Economics workstreams.

Responsibilities

  • Design scalable and consistent data architecture:
      ◦ Define and maintain canonical data models, entity relationships, and semantic layer specifications that ensure consistent use of data across products and domains.
      ◦ Develop and evolve logical and physical data models that support real-time analytics, forecasting, and scenario planning.
      ◦ Collaborate with product-aligned pods to design domain-aligned data products that are modular, governed, and discoverable.
  • Build reusable, performant data pipelines:
      ◦ Architect data pipelines that support both batch and near real-time processing using AWS-native services (e.g., Glue, Kinesis, Lambda, Step Functions).
      ◦ Guide ingestion, transformation, and enrichment strategies that optimize for resilience, scalability, and lineage traceability.
      ◦ Work closely with the Cloud Architect to ensure that infrastructure and orchestration layers meet pipeline and data SLAs.
  • Embed governance and stewardship by design:
      ◦ Partner with enterprise data governance teams to implement standardized metadata, lineage, and access controls using tools such as Lake Formation, Informatica, or Snowflake.
      ◦ Define data quality rules, validation checkpoints, and anomaly detection processes to support trusted analytics and ML pipelines.
      ◦ Contribute to the enterprise data catalog and enable self-service access through secure, well-documented APIs and schemas.
  • Collaborate across platform and product pods:
      ◦ Work with the Platform Owner to define and deliver shared data services and reusable semantic models that support multi-pod alignment.
      ◦ Support data scientists and analysts by enabling ML/AI-ready data pipelines and structuring data to accelerate model development and deployment.
      ◦ Participate in cross-pod architecture planning to coordinate integration strategies, resolve semantic conflicts, and align on domain boundaries.

Qualifications

  • 7+ years' experience in data architecture and engineering, with a focus on cloud-native data platforms and modern analytics workflows: designing and delivering data architecture for cloud-based platforms, with strong knowledge of AWS (e.g., Glue, Lambda, Step Functions, Lake Formation) and modern tooling (e.g., Snowflake, Databricks, Informatica).
  • 7+ years' hands-on pipeline design and orchestration: architecting and optimizing complex batch and streaming pipelines that are performant, resilient, and traceable - using orchestration frameworks that support real-time and ML/AI-ready processing.
  • 7+ years' expertise in canonical modeling and semantic design: designing scalable, reusable canonical and semantic data models and translating them into physical implementations that align with business domains and analytic use cases.

Functional Knowledge / Skills

  • Think in domains and products: understand that good data architecture starts with clear business semantics, and design models that reflect the real-world entities behind Studio workflows.
  • Bridge the gap between models and platforms: work fluidly across logical design, physical deployment, and infrastructure orchestration - partnering with Cloud Architects and Engineers to bring models to life.
  • Govern through enablement: ensure compliance, lineage, and quality by embedding governance directly into design - making the right path the easy path for product and engineering teams.
  • Build for reuse and interoperability: optimize for consistency and extensibility - designing assets and APIs that can be adopted across products, use cases, and future data science workflows.
  • Promote transparency and stewardship: advocate for shared ownership of data definitions and practices, and help business and technical stakeholders understand the value of consistency and quality.
  • Cross-functional collaboration - strong ability to work across disciplines - including engineering, analytics, product, and compliance - and communicate design decisions in a way that drives alignment and adoption.
  • Bias for structure and clarity - drive resolution of semantic conflicts, minimize redundancy, and create architectural clarity that simplifies downstream implementation.
  • Excellent communication skills, with the ability to facilitate cross-functional alignment and translate architectural decisions for both technical and business audiences.

Technology Requirements

  • Deep data architecture experience: designing and delivering data architecture for cloud-based platforms, with strong knowledge of AWS (e.g., Glue, Lambda, Step Functions, Lake Formation) and modern tooling (e.g., Snowflake, Databricks, Informatica).
  • Expertise in canonical modeling and semantic design: proven ability to design scalable, reusable canonical and semantic data models and translate them into physical implementations that align with business domains and analytic use cases.
  • Hands-on pipeline design and orchestration: experience architecting and optimizing complex batch and streaming pipelines that are performant, resilient, and traceable - using orchestration frameworks that support real-time and ML/AI-ready processing.
  • Governance and metadata awareness: familiarity with data governance practices across enterprise environments, including stewardship, data quality rules, lineage tracking, access controls, and enterprise metadata management and cataloging.

Other Qualifications

  • Ability to partner with platform and cloud engineering teams to ensure infrastructure and orchestration layers support data reliability, scalability, and SLAs.
  • Strong understanding of data product design principles and ability to develop modular, reusable data services that support multiple products and delivery pods.
  • Experience contributing to or maintaining an enterprise data catalog and enabling self-service access through secure, well-documented APIs.

Education

N/A

The estimated pay range for this position is USD $95.00/hr - USD $101.50/hr. Exact compensation and offers of employment are dependent on job-related knowledge, skills, experience, licenses or certifications, and location. We also offer comprehensive benefits. The Talent Acquisition Partner can share more details about compensation or benefits for the role during the interview process.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

About Milestone Technologies, Inc.