Databricks Engineer: Onsite Louisville, KY (Local candidates only)
We are looking to hire a candidate with the skill set and experience described below for one of our clients.
Job Summary
We are seeking a Databricks Engineer to join our team in Louisville, KY. In this role, you will design, develop, and optimize scalable data solutions using Databricks, Apache Spark, and cloud-based platforms (Azure, AWS, or Google Cloud Platform). You will work closely with data scientists, analysts, and business stakeholders to build robust data pipelines, enable advanced analytics, and support data-driven decisions across the organization.
*This is an onsite position; local candidates only.
Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines on the Databricks platform (using Apache Spark and related technologies).
- Develop ETL/ELT workflows to ingest, transform, and load data from diverse structured and unstructured sources.
- Collaborate with data architects, data scientists, and business teams to understand data needs and deliver high-quality solutions.
- Optimize Spark jobs for performance, scalability, and cost-efficiency in cloud environments (Azure Databricks, AWS, or Google Cloud Platform).
- Implement robust data quality checks, validation frameworks, and monitoring to ensure reliable data delivery.
- Manage Databricks Workspaces, Notebooks, Clusters, and Jobs, including automation and CI/CD integration.
- Integrate Databricks solutions with cloud data services such as Azure Data Lake, AWS S3, Delta Lake, and SQL databases.
- Apply data security, governance, and compliance standards to all workflows and storage solutions.
- Support advanced analytics and machine learning workloads in collaboration with data science and ML teams.
- Document technical solutions, architecture, and best practices for team knowledge sharing.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field (or equivalent experience).
- 3+ years of hands-on experience developing solutions using Databricks and Apache Spark.
- Strong proficiency in Python, PySpark, and SQL (experience with Scala or Java is a plus).
- Experience working with Delta Lake, Lakehouse architectures, and cloud storage solutions (Azure, AWS, or Google Cloud Platform).
- Familiarity with Databricks SQL, Notebooks, and cluster management.
- Experience building and optimizing large-scale ETL/ELT pipelines and distributed data processing workflows.
- Strong understanding of data modeling, data warehousing concepts, and performance tuning.
- Experience with version control (Git), CI/CD pipelines, and DevOps practices for Databricks deployments.
- Excellent problem-solving, communication, and collaboration skills.
Other Job Details:
- Location: Louisville, KY (Onsite; local candidates only)
- Pay Rate: $50/hr (C2C basis)
- Visa Restrictions: No CPT or OPT candidates
- Local Proof Required: Valid Kentucky driver's license
- Interview Process: Initial phone screen followed by video interview