Job Details
Job Title: Data Architect with Databricks
Location: Columbus, OH - Onsite
Job Type: Contract W2 / 1099
Job Summary:
We are seeking a highly skilled Data Architect with expertise in Databricks to lead the design, development, and implementation of scalable data architectures. The ideal candidate will have deep experience in modern data platforms, especially cloud-based environments (Azure, AWS, or Google Cloud Platform), and be proficient in designing enterprise-grade data pipelines using Databricks, Delta Lake, and Apache Spark.
Key Responsibilities:
Design and implement scalable and secure data architectures using Databricks on cloud platforms (Azure, AWS, or Google Cloud Platform).
Define and manage data models, schemas, and metadata for structured and unstructured data.
Develop end-to-end ETL/ELT data pipelines using Databricks, Spark, and Delta Lake.
Collaborate with data engineers, analysts, and business stakeholders to translate business requirements into scalable data solutions.
Ensure data governance, quality, security, and compliance across all data systems.
Optimize data workflows for performance, cost-efficiency, and scalability.
Provide architectural guidance for data lakehouse and data mesh implementations.
Integrate Databricks with other tools and services (e.g., Power BI, Tableau, Snowflake, Kafka, ADLS).
Maintain and improve CI/CD pipelines and DevOps practices for data platforms.
Stay current with the latest advancements in data engineering, Databricks features, and industry best practices.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
7+ years of experience in data architecture or data engineering roles.
3+ years of hands-on experience with Databricks and Apache Spark.
Experience with Delta Lake, Lakehouse architecture, and cloud data platforms (Azure Data Lake, AWS S3, Google Cloud Storage).
Strong proficiency in Python, SQL, and Spark SQL.
Experience with data modeling (dimensional and normalized) and designing data warehouses/lakes.
Familiarity with MLflow, Unity Catalog, DBFS, and Databricks Notebooks.
Knowledge of data governance frameworks and tools (e.g., Collibra, Purview, Alation).
Familiarity with modern orchestration tools (e.g., Airflow, Azure Data Factory, dbt).
Excellent communication and leadership skills.
Preferred Qualifications:
Databricks Certification (e.g., Databricks Certified Data Engineer Associate or Professional).
Experience with streaming data architectures using Kafka and Spark Structured Streaming.
Experience integrating Databricks with BI tools (Power BI, Tableau).
Exposure to DevOps and CI/CD tools (Azure DevOps, GitHub Actions, Jenkins).
Regards,
Radiantze Inc.