Senior Databricks Architect, Satsyil Corp
Location: Tysons Corner, VA (Hybrid)
Employment: Full-Time, W2
Clearance: U.S. Citizenship required; ability to obtain Public Trust (minimum)
Position Overview
The Senior Databricks Architect will design, implement, and optimize enterprise data architectures using
Databricks as the primary data lakehouse platform. This role requires deep expertise in
Databricks platform capabilities, medallion architecture patterns, and modern data engineering
practices to deliver scalable, governed, and performant data solutions that support analytics,
AI/ML, and business intelligence initiatives.
Core Responsibilities
- Design and implement end-to-end data lakehouse architectures using Databricks on AWS/Azure, incorporating medallion architecture patterns (bronze, silver, gold layers) for progressive data refinement
- Architect Delta Lake storage solutions with appropriate partitioning strategies, Z-ordering, and optimization techniques to support high-performance query workloads
- Design Unity Catalog implementations for centralized data governance, including metastore configuration, catalog/schema hierarchies, cross-workspace data sharing, and fine-grained access controls (table-level, column-level, and row-level security)
- Design parameterized, idempotent, and auditable data pipelines using Databricks Workflows, Delta Live Tables, Apache Spark, Structured Streaming, and Auto Loader
- Implement data quality frameworks with automated validation checks, anomaly detection, data profiling, and data lineage tracking integrated into pipeline workflows
- Establish encryption strategies, audit logging frameworks, and compliance controls aligned with FISMA, FedRAMP, HIPAA, or other regulatory requirements
- Optimize Databricks cluster configurations and query performance, and implement cost management strategies using Photon acceleration, serverless compute, and resource tagging
- Architect feature stores, MLOps pipelines with MLflow, and analytical data models optimized for BI tools and self-service analytics
- Design infrastructure-as-code implementations using Terraform and CI/CD pipelines integrating Databricks components with version control
- Create comprehensive architecture documentation, develop platform standards and best practices, and provide technical guidance to data engineering teams
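To make the "parameterized, idempotent, and auditable" pipeline expectation concrete: idempotency means replaying the same batch must not duplicate rows. The sketch below is a plain-Python analogue of a Delta Lake MERGE upsert, with an illustrative `upsert` helper and sample data, not actual Databricks or Spark API code:

```python
# Plain-Python analogue of a keyed MERGE upsert. In Databricks this would be
# a Delta Lake MERGE INTO; the helper and data here are illustrative only.

def upsert(table: dict, batch: list[dict], key: str = "id") -> dict:
    """Merge `batch` into `table` keyed on `key`.

    Matched keys are overwritten, unmatched keys are inserted, so replaying
    the same batch leaves `table` unchanged (idempotent).
    """
    for row in batch:
        table[row[key]] = row  # matched -> update; not matched -> insert
    return table

silver = {}
batch = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
upsert(silver, batch)
upsert(silver, batch)  # replaying the batch does not duplicate rows
```

The same property is what makes a Workflows retry or a backfill safe: appending blindly would double-count on rerun, while a keyed merge converges to the same state.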
Requirements
- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field (Master's preferred)
- 8+ years of data architecture experience, with at least 3 years of hands-on Databricks platform implementation
- Expert-level proficiency with Databricks platform components: Delta Lake, Unity Catalog, Databricks SQL, Delta Live Tables, Workflows, Auto Loader, Structured Streaming, MLflow, and Feature Store
- Deep understanding of Apache Spark architecture, execution plans, performance tuning, and optimization techniques
- Strong SQL skills and proficiency in Python and/or Scala for data engineering and pipeline development
- Hands-on experience with AWS (S3, IAM, VPC, Glue) or Azure (ADLS, Azure AD, ADF) and Infrastructure-as-Code tools (Terraform, CloudFormation)
- Proven experience designing enterprise-scale data architectures serving 100+ data pipelines and multiple consumer applications
- Experience implementing data governance frameworks aligned with regulatory requirements (FISMA, FedRAMP, HIPAA, GDPR)
- Strong data modeling skills, including dimensional modeling, data vault, and normalized/denormalized design patterns
- Excellent communication skills to translate complex technical concepts for non-technical stakeholders
- Databricks certification (Data Engineer Professional or Solutions Architect) preferred