Databricks Developer

Overview

On Site
$40 - $50
Contract - W2
Contract - 12 Month(s)
100% Travel
Able to Provide Sponsorship

Skills

Databricks & AWS Big Data Architecture Certification would be a plus.
Solid experience with SQL-based databases is required.
Delta Lake
dbConnect
db API 2.0
Databricks workflows orchestration

Job Details

Title: Databricks Developer

Location: DMV Area

Rate: $50 on W2

In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems while meeting both functional and non-functional requirements.

Responsibilities

  1. Work closely with the Architect and Lead to design solutions that meet functional and non-functional requirements.
  2. Participate in reviews to understand architecture and solution design artifacts.
  3. Proactively implement engineering methodologies, standards, and leading practices.
  4. Identify, communicate, and mitigate Risks, Assumptions, Issues, and Decisions throughout the full lifecycle.

Minimum qualifications

  1. Databricks & AWS Big Data Architecture Certification would be a plus.
  2. Solid experience with SQL-based databases is required.
  3. Two years of experience with Hadoop/Spark in an administrative, development, or support role is required.
  4. Experience with engineering practices such as development, code refactoring, leveraging design patterns, CI/CD, and building highly scalable data applications and processes.
  5. Knowledge of advanced data engineering concepts such as dimensional modeling, ETL, data governance, and data warehousing involving structured and unstructured data.
  6. Must have excellent coding skills in either Python or Scala.
  7. Must have experience in the data engineering domain.
  8. Must have implemented at least two projects end-to-end in Databricks.
  9. Must have experience with Databricks, including the following components:
    1. Delta Lake
    2. dbConnect
    3. db API 2.0
    4. Databricks workflows orchestration
  10. Must be well versed in the Databricks Lakehouse concept and Medallion architecture, and their implementation in enterprise environments.
  11. Must have a good understanding of how to build complex data pipelines.
  12. Must have extensive knowledge of the Spark and Hive data processing frameworks.

Preferred Qualifications/Skills

  1. Good to have Unity Catalog and basic governance knowledge.
  2. Good to have MicroStrategy or Tableau reporting experience.
  3. Good to have knowledge of Docker and Kubernetes.
  4. Good to have Linux/Unix administration or development skills.