Overview
On Site
Contract - W2
Skills
Financial Forecast
Economics
Sales
Embedded Systems
Data Engineering
Forecasting
Amazon Kinesis
FOCUS
Data Science
Workflow
Analytics
Regulatory Compliance
Clarity
Collaboration
Communication
Data Architecture
Amazon Web Services
Step Functions
Snowflake Schema
Databricks
Informatica
Modeling
Semantics
Data Structure
Analytical Skill
Use Cases
Streaming
Real-time
Machine Learning (ML)
Artificial Intelligence
Data Governance
Data Quality
Access Control
Metadata Management
Cloud Computing
Orchestration
Scalability
Product Design
DICE
Job Details
What We Do/Project
As part of the transformation, we are establishing a modern, governed, and reusable data foundation to power financial forecasting, title economics, sales planning, and AI-driven insights.
The Senior Data Architect plays a critical role in designing this foundation, ensuring that data models, pipelines, and integration frameworks are scalable, performant, and aligned with enterprise data governance and platform goals.
Embedded within the Platform Pod, the Senior Data Architect partners with the Platform Owner, Cloud Architect, data engineering teams, and product-aligned pods to ensure the architecture supports both immediate product needs and long-term platform evolution. This role directly enables the delivery of reliable, real-time, and reusable data products across multiple workstreams.
Job Responsibilities / Typical Day in the Role
Design Scalable and Consistent Data Architecture
Define and maintain canonical data models, entity relationships, and semantic layer specifications that ensure consistent use of data across products and domains.
Develop and evolve logical and physical data models that support real-time analytics, forecasting, and scenario planning.
Collaborate with product-aligned pods to design domain-aligned data products that are modular, governed, and discoverable.
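As a rough illustration of what a canonical model with a thin semantic layer might look like, here is a minimal Python sketch. All entity and field names (`Title`, `ForecastFact`, `revenue_usd`, etc.) are hypothetical examples, not taken from the actual Studio domain:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical canonical entities; the real domain models would differ.
@dataclass(frozen=True)
class Title:
    """Canonical representation of a title, shared across product pods."""
    title_id: str            # stable surrogate key owned by the platform
    name: str
    release_date: Optional[date] = None

@dataclass(frozen=True)
class ForecastFact:
    """A single forecast observation referencing the canonical dimension."""
    title_id: str            # foreign key into the Title dimension
    scenario: str            # e.g. "base", "upside"
    period: str              # e.g. "2025-Q3"
    revenue_usd: float

def semantic_view(titles, facts):
    """Join facts to the canonical dimension so every consumer sees the
    same entity names and keys, regardless of which pod queries it."""
    by_id = {t.title_id: t for t in titles}
    return [
        {"title": by_id[f.title_id].name,
         "scenario": f.scenario,
         "period": f.period,
         "revenue_usd": f.revenue_usd}
        for f in facts
        if f.title_id in by_id
    ]
```

The point of the sketch is the separation of concerns: the canonical entities define keys and names once, and the semantic view is the only place consumers join against them.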
Build Reusable, Performant Data Pipelines
Architect data pipelines that support both batch and near real-time processing using AWS-native services (e.g., Glue, Kinesis, Lambda, Step Functions).
Guide ingestion, transformation, and enrichment strategies that optimize for resilience, scalability, and lineage traceability.
Work closely with the Cloud Architect to ensure that infrastructure and orchestration layers meet pipeline and data SLAs.
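As a hedged sketch of the pipeline pattern the role describes: in production each stage would map to an AWS service (ingestion via Kinesis, transforms in Glue or Lambda, sequencing via Step Functions), but the composable-stage idea can be shown in plain Python. Stage and field names here are illustrative assumptions:

```python
from typing import Callable, Iterable, Iterator

Record = dict
Stage = Callable[[Iterable[Record]], Iterator[Record]]

def validate(records: Iterable[Record]) -> Iterator[Record]:
    """Drop records missing required keys; a real pipeline would also
    emit lineage metadata and anomaly metrics at this checkpoint."""
    for r in records:
        if "id" in r and "amount_cents" in r:
            yield r

def enrich(records: Iterable[Record]) -> Iterator[Record]:
    """Attach a derived field; real enrichment might join reference data."""
    for r in records:
        yield {**r, "amount_usd": round(r["amount_cents"] / 100, 2)}

def run_pipeline(source: Iterable[Record], stages: list) -> Iterator[Record]:
    """Compose stages lazily, so the same stage code serves both batch
    sources (lists) and near real-time sources (generators/streams)."""
    out: Iterator[Record] = iter(source)
    for stage in stages:
        out = stage(out)
    return out
```

Because the stages consume and yield iterators, the same functions handle a bounded batch file and an unbounded stream, which is the resilience/reuse property the bullet points above are asking for.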
Embed Governance and Stewardship by Design
Partner with enterprise data governance teams to implement standardized metadata, lineage, and access controls using tools such as Lake Formation, Informatica, or Snowflake.
Define data quality rules, validation checkpoints, and anomaly detection processes to support trusted analytics and ML pipelines.
Contribute to the enterprise data catalog and enable self-service access through secure, well-documented APIs and schemas.
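A minimal sketch of the kind of data quality checkpoint the role would define. In practice such rules might live in Informatica, Glue Data Quality, or Snowflake with results logged to the enterprise catalog; the rule names and row shape below are hypothetical:

```python
# Each rule is a predicate over a row; the checkpoint partitions rows
# into trusted output and failures annotated with the violated rules.
def make_rules():
    return {
        "non_null_id": lambda row: row.get("id") is not None,
        "positive_revenue": lambda row: row.get("revenue_usd", 0) >= 0,
    }

def checkpoint(rows, rules):
    """Return (passed_rows, failures), where each failure records which
    rules the row violated, supporting anomaly reporting and lineage."""
    passed, failures = [], []
    for row in rows:
        violated = [name for name, rule in rules.items() if not rule(row)]
        if violated:
            failures.append((row, violated))
        else:
            passed.append(row)
    return passed, failures
```

Keeping the failures (rather than silently dropping bad rows) is what makes downstream analytics and ML pipelines trustable: consumers can see exactly what was excluded and why.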
Collaborate Across Platform and Product Pods
Work with the Platform Owner to define and deliver shared data services and reusable semantic models that support multi-pod alignment.
Support data scientists and analysts by enabling ML/AI-ready data pipelines and structuring data to accelerate model development and deployment.
Participate in cross-pod architecture planning to coordinate integration strategies, resolve semantic conflicts, and align on domain boundaries.
Must Have Skills / Requirements
1) Experience in data architecture and engineering, with a focus on cloud-native data platforms and modern analytics workflows.
a. 7+ years of experience designing and delivering data architecture for cloud-based platforms, with strong knowledge of AWS services (e.g., Glue, Lambda, Step Functions, Lake Formation) and modern tooling (e.g., Snowflake, Databricks, Informatica).
2) Hands-On Pipeline Design and Orchestration
a. 7+ years of hands-on experience architecting and optimizing complex batch and streaming data pipelines that are performant, resilient, and traceable, using orchestration frameworks that support real-time and ML/AI-ready processing.
3) Expertise in Canonical Modeling and Semantic Design
a. 7+ years of experience designing canonical and semantic data models, with a proven ability to translate them into scalable, reusable physical implementations aligned with business domains and analytic use cases.
Nice to Have Skills / Preferred Requirements
1) None
Functional Knowledge / Skills in the following areas:
1) You'll thrive in this role if you naturally:
a. Think in Domains and Products: You understand that good data architecture starts with clear business semantics, and you design models that reflect the real-world entities behind Studio workflows.
b. Bridge the Gap Between Models and Platforms: You work fluidly across logical design, physical deployment, and infrastructure orchestration, partnering with Cloud Architects and Engineers to bring your models to life.
c. Govern Through Enablement: You ensure compliance, lineage, and quality by embedding governance directly into design, making the right path the easy path for product and engineering teams.
d. Build for Reuse and Interoperability: You optimize for consistency and extensibility, designing assets and APIs that can be adopted across products, use cases, and future data science workflows.
e. Promote Transparency and Stewardship: You advocate for shared ownership of data definitions and practices, and you help business and technical stakeholders understand the value of consistency and quality.
2) You're likely a fit for this role if you bring:
a. Cross-Functional Collaboration: Strong ability to work across disciplines, including engineering, analytics, product, and compliance, and to communicate design decisions in a way that drives alignment and adoption.
b. Bias for Structure and Clarity: You drive resolution of semantic conflicts, minimize redundancy, and create architectural clarity that simplifies downstream implementation.
c. Excellent collaboration and communication skills, with the ability to facilitate cross-functional alignment and translate architectural decisions across technical and business audiences.
Technology Requirements:
1) You're likely a fit for this role if you bring:
a. Deep Data Architecture Experience: Experience designing and delivering data architecture for cloud-based platforms, with strong knowledge of AWS services (e.g., Glue, Lambda, Step Functions, Lake Formation) and modern tooling (e.g., Snowflake, Databricks, Informatica).
b. Expertise in Canonical Modeling and Semantic Design: Deep proficiency in designing canonical and semantic data models, with a proven ability to translate them into scalable, reusable physical implementations aligned with business domains and analytic use cases.
c. Hands-On Pipeline Design and Orchestration: Hands-on experience architecting and optimizing complex batch and streaming pipelines that are performant, resilient, and traceable, using orchestration frameworks that support real-time and ML/AI-ready processing.
d. Governance and Metadata Awareness: Familiarity with data governance frameworks and practices, including stewardship, data quality rules, lineage tracking, access controls, cataloging, and enterprise metadata management.
2) Other Qualifications:
a. Ability to partner with platform and cloud engineering teams to ensure infrastructure and orchestration layers support data reliability, scalability, and SLAs.
b. Strong understanding of data product design principles and ability to develop modular, reusable data services that support multiple products and delivery pods.
c. Experience contributing to or maintaining an enterprise data catalog and enabling self-service access through secure, well-documented APIs.
Additional Notes
Hybrid schedule (Tues-Thurs) required (Burbank, CA)
#LI-NN2
#LI-hybrid
#DICE
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.