Overview
On Site
$30 - $40
Contract - W2
Contract - Independent
Contract - 8 Month(s)
No Travel Required
Skills
Databricks
Developer
Azure
PySpark
ETL
Job Details
Hello,
Please find the JD below and let me know if you have a suitable match.
Job Title: Databricks Engineer / Developer
Location: Louisville, KY (local candidates only). A Kentucky driver's license or proof of residency is required, and it must have been issued more than 5 years ago.
Role Description:
We are seeking a skilled Databricks Engineer to design, develop, and optimize scalable data solutions using Databricks, Apache Spark, and cloud-based data platforms. In this role, you will collaborate with data scientists, analysts, and business stakeholders to build robust pipelines, enable advanced analytics, and support data-driven decision-making across the organization.
Key Responsibilities
- Design, build, and maintain scalable and efficient data pipelines on the Databricks platform (using Apache Spark and related technologies).
- Develop ETL/ELT workflows to ingest, transform, and load data from diverse structured and unstructured data sources.
- Collaborate with data architects and business teams to understand data requirements and deliver high-quality solutions.
- Optimize Spark jobs for performance, scalability, and cost-efficiency in cloud environments (Azure Databricks / AWS / Google Cloud Platform).
- Implement data quality checks, validation frameworks, and monitoring mechanisms to ensure reliable data delivery.
- Manage Databricks workspaces, notebooks, clusters, and jobs, including automation and CI/CD integration.
- Integrate Databricks solutions with cloud data services (e.g., Azure Data Lake, AWS S3, Delta Lake, SQL Databases, etc.).
- Ensure data security, governance, and compliance standards are applied within all workflows and storage solutions.
- Collaborate with data scientists and ML engineers to enable advanced analytics and machine learning workloads.
- Document technical solutions, architecture diagrams, and best practices for team knowledge sharing.
Required Skills & Qualifications
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field (or equivalent experience).
- 3+ years of hands-on experience developing solutions in Databricks and Apache Spark.
- Strong proficiency in Python, PySpark, and SQL (experience with Scala or Java is a plus).
- Experience working with Delta Lake, Lakehouse architectures, and cloud storage solutions (Azure, AWS, or Google Cloud Platform).
- Familiarity with Databricks SQL, Notebooks, and cluster management.
- Experience in building and optimizing large-scale ETL/ELT pipelines and distributed data processing workflows.
- Strong understanding of data modeling, data warehousing concepts, and performance tuning.
- Experience with version control (Git), CI/CD pipelines, and DevOps practices for Databricks deployments.
- Excellent problem-solving, communication, and collaboration skills.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.