Senior Data Architect - Only LOCAL CANDIDATES (WA)

Overview

On Site
Depends on Experience
Contract - Independent
Contract - W2
Contract - 12 Month(s)

Skills

API
Amazon Kinesis
Amazon Redshift
Amazon Web Services
Apache Airflow
Apache Avro
Apache Hadoop
Apache Hive
Apache Kafka
Apache Parquet
Apache Spark
Big Data
Cloud Computing
Data Architecture
Data Governance
Data Integration
Data Modeling
Data Quality
Data Warehouse
Databricks
Extract, Transform, Load (ETL)
ELT
MySQL
Python
Relational Databases
Scala
Snowflake Schema
Star Schema
Microsoft Azure
Java
PostgreSQL
Analytics
Unstructured Data

Job Details

Role: Senior Data Architect
Location: Snoqualmie, WA - 5 Days On-Site - Only LOCAL CANDIDATES

Our client is looking for a Senior Data Architect with 7+ years of proven experience designing, implementing, and optimizing data architectures for complex, large-scale systems. The ideal candidate will own the end-to-end architecture across structured, semi-structured, and unstructured data, and will work closely with engineering, analytics, and business teams to enable scalable data solutions.
This role combines strategic leadership, hands-on technical design, and cross-functional collaboration to deliver modern, secure, and performant data platforms.
Key Responsibilities


Define and implement enterprise-level data architecture strategy
Lead the design and implementation of data pipelines, data lakes, and data warehouses
Build architectural patterns for data ingestion, transformation, governance, and cataloging
Design schemas and optimize performance for analytical and operational workloads
Define data quality, lineage, and metadata management frameworks
Ensure compliance with privacy and security regulations (GDPR, CCPA, etc.)
Collaborate with business stakeholders to gather data needs and translate them into scalable solutions
Provide technical leadership to data engineers and guide best practices across systems
Required Skills & Experience


7+ years in Data Architecture, including hands-on work with modern data stacks
Deep experience with:
- Relational databases (PostgreSQL, SQL Server, MySQL, etc.)
- Big Data platforms (Spark, Hadoop, Hive)
- Cloud-native data tools (Azure Synapse, Databricks, Snowflake, Amazon Redshift)
- Data lakehouse designs and lake formats (Delta Lake, Parquet, Avro)
Strong background in ETL/ELT tools (e.g., Apache Airflow, Azure Data Factory, dbt)
Solid understanding of API-based data integration, real-time streaming (Kafka, Kinesis), and batch pipelines
Experience with data modeling techniques (3NF, star schema, data vault, etc.)
Familiar with data governance tools like Collibra, Alation, or Microsoft Purview
Proficient in at least one programming language (Python, Scala, Java)

Must Have Skills

Skill 1: Data Architecture: 7+ years
Skill 2: Data Pipelines: 7+ years
Skill 3: Data Lakes: 7+ years
Skill 4: Data Ingestion: 7+ years
Skill 5: Relational Databases: 7+ years
Skill 6: ETL/ELT Tools: 5+ years


About Chabez Tech LLC