Overview
On Site
Full Time
Skills
Cloud Storage
Database
Collaboration
Workflow
Knowledge Sharing
Management
Data Quality
Mentorship
Computer Science
Information Systems
Data Engineering
Programming Languages
Scala
Big Data
Apache Hadoop
Apache Kafka
Data Warehouse
Regulatory Compliance
RBAC
Auditing
Conflict Resolution
Problem Solving
Analytical Skill
Communication
Streaming
Real-time
Analytics
Agile
Scrum
Databricks
Apache Spark
Python
SQL
Data Modeling
Warehouse
Extract
Transform
Load
ELT
Process Modeling
Cloud Computing
Microsoft Azure
Amazon Web Services
Google Cloud Platform
Google Cloud
Data Governance
Performance Tuning
Technical Writing
Job Details
We are seeking an experienced Senior Databricks Consultant to design, build, and maintain scalable Data Lakehouse solutions. The ideal candidate will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to deliver high-quality, secure, and efficient data solutions that support our business objectives.
Key Responsibilities:
- Design, develop, and maintain robust data pipelines, data products, and ETL/ELT processes using Databricks and Apache Spark.
- Integrate Databricks with various data sources (cloud storage, databases, APIs) and platforms.
- Optimize data pipelines for performance, reliability, and cost-efficiency.
- Implement and monitor data quality checks, validation, and data governance best practices.
- Collaborate with data scientists and analysts to understand requirements and deliver actionable data solutions.
- Document technical designs, workflows, and data models to ensure knowledge sharing and maintainability.
- Configure and manage Databricks environments, including clusters, notebooks, and security settings.
- Troubleshoot and resolve issues related to Databricks performance, data quality, and infrastructure.
- Stay current with the latest Databricks features, big data technologies, and best practices.
- Provide mentorship and technical guidance to junior team members as needed.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
- 3+ years of experience in data engineering, with hands-on experience in Databricks and Apache Spark.
- Strong proficiency in programming languages such as Python and SQL; Scala is a plus.
- Experience with big data technologies (e.g., Hadoop, Kafka).
- Solid understanding of data warehousing concepts, data modeling, and ETL/ELT processes.
- Familiarity with cloud platforms (Azure, AWS, or Google Cloud Platform) and integrating Databricks with cloud services.
- Experience with data governance, security, and compliance, including RBAC and audit logging.
- Strong problem-solving, analytical, and communication skills.
Preferred Qualifications:
- Databricks certification (e.g., Databricks Certified Data Engineer Associate).
- Knowledge of Delta Lake, MLflow, or similar Databricks ecosystem tools.
- Experience with data streaming and real-time analytics.
- Familiarity with Agile/Scrum methodologies.
Key Skills:
- Databricks
- Apache Spark
- Python & SQL
- Data Modeling & Warehousing
- ETL/ELT Process Design
- Cloud Platforms (Azure, AWS, Google Cloud Platform)
- Data Governance & Security
- Performance Optimization
- Technical Documentation
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.