Sr. Databricks Architect

Remote • Posted 60+ days ago • Updated 1 day ago
Full Time • Depends on Experience


Job Details

Skills

  • PySpark
  • SQL
  • Data Engineering
  • Databricks

Summary

Satsyil Corp is seeking an experienced Databricks Architect to lead the design and development of enterprise-grade data solutions. The ideal candidate will have extensive expertise in Databricks, Apache Spark, and cloud-native data platforms, with the ability to architect, optimize, and scale data pipelines for structured and unstructured data.
Key Responsibilities
  • Architect and implement Databricks solutions supporting enterprise data engineering and analytics initiatives.
  • Design and develop Spark applications in Databricks using Python (PySpark), SQL, and Scala.
  • Build, optimize, and manage ETL/ELT pipelines leveraging Databricks Delta Lake.
  • Define best practices for cluster management, job orchestration, and cost optimization in Databricks.
  • Collaborate with business and technical teams to ensure data governance, security, and compliance within Databricks environments.
  • Conduct performance tuning, query optimization, and troubleshooting for Databricks and Spark workloads.
  • Integrate Databricks with cloud-native services (AWS, Azure, or Google Cloud Platform), APIs, and external data sources.
  • Provide technical leadership and mentor engineers on Databricks architecture, coding standards, and deployment practices.
Required Qualifications
  • 10+ years of hands-on experience in big data engineering with Apache Spark and at least one programming language (Python, Scala).
  • Strong expertise in the Databricks platform (Delta Lake, SQL Analytics, MLflow, job orchestration).
  • Proven experience building scalable ETL pipelines and implementing data lakehouse architectures.
  • Proficiency in PySpark for data ingestion, transformation, and merging large datasets.
  • Strong knowledge of SQL, database concepts, and data modeling.
  • Experience with CI/CD pipelines, version control (Git), and data application development lifecycle.
  • Deep understanding of query tuning, performance optimization, and distributed system operations in Databricks.
  • Demonstrated ability to deliver and operate large-scale, highly available data platforms.
Preferred Skills
  • Experience with cloud-native data services (AWS Glue, S3, Redshift; Azure Data Factory, Synapse; or Google Cloud Platform equivalents).
  • Knowledge of Unity Catalog, Delta Sharing, and data governance frameworks in Databricks.
  • Familiarity with real-time streaming (Kafka, Event Hubs) and API-driven data ingestion.
  • Databricks certifications (Architect, Data Engineer Professional).
Join our dynamic team of data experts and contribute your skills to shaping the future of our Enterprise Data Services project.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 10491343
  • Position Id: 8778922
