ETL Developer (W2 Only)

  • Ellicott City, MD
  • Posted 12 hours ago | Updated 12 hours ago

Overview

Hybrid
$100,000 - $120,000
Full Time
25% Travel

Skills

Data Pipelining
Databricks

Job Details

Job Summary:
We are seeking an experienced ETL Developer with expertise in Databricks to join our data engineering team. In this role, you will be responsible for designing, developing, and maintaining robust ETL pipelines that support our data lake and data warehouse solutions. You will work closely with data scientists, analysts, and other engineers to ensure high data quality, scalability, and performance across platforms.
Key Responsibilities:
  • Design, build, and optimize scalable ETL pipelines using Apache Spark on Databricks.
  • Integrate data from various sources including structured, semi-structured, and unstructured formats (e.g., APIs, SQL/NoSQL databases, flat files, cloud storage).
  • Implement data ingestion, cleansing, transformation, and enrichment processes in Databricks.
  • Collaborate with cross-functional teams to define data requirements and deliver data solutions that support analytics, reporting, and machine learning.
  • Monitor ETL workflows, troubleshoot issues, and ensure data reliability and accuracy.
  • Develop and maintain documentation for ETL processes, data flows, and data models.
  • Apply best practices in data governance, security, and performance optimization.
  • Participate in code reviews, testing, and continuous improvement initiatives.
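To make the pipeline work above concrete, here is a minimal, framework-free sketch of the ingest → cleanse → transform → load pattern. All names (`run_etl`, `cleanse`, `enrich`) are illustrative, not part of the role's actual codebase; on Databricks the same steps would typically be expressed as PySpark DataFrame transformations writing to a Delta table.

```python
def cleanse(records):
    """Drop rows missing required fields and normalize values."""
    cleaned = []
    for row in records:
        if row.get("id") is None or row.get("amount") is None:
            continue  # reject incomplete rows during cleansing
        cleaned.append({
            "id": row["id"],
            "amount": float(row["amount"]),
            "region": str(row.get("region", "unknown")).strip().lower(),
        })
    return cleaned

def enrich(records):
    """Add a derived field (a simple amount band) to each row."""
    for row in records:
        row["band"] = "high" if row["amount"] >= 1000 else "low"
    return records

def run_etl(raw, target):
    """Extract -> cleanse -> enrich -> load into the target 'table'."""
    target.extend(enrich(cleanse(raw)))
    return target

raw = [
    {"id": 1, "amount": "1500.00", "region": " East "},
    {"id": 2, "amount": None},  # dropped by cleansing
    {"id": 3, "amount": "250.5", "region": "West"},
]
table = run_etl(raw, [])
```

In a production Databricks pipeline, the plain lists here would be Spark DataFrames, the `target` would be a Delta Lake table, and rejected rows would be routed to a quarantine table rather than silently dropped.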

Required Qualifications:
  • Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
  • 7+ years of experience in ETL development and data engineering.
  • 5+ years of hands-on experience with Databricks and Apache Spark (preferably in a cloud environment such as Azure, AWS, or Google Cloud Platform).
  • Proficiency in PySpark, SQL, and scripting languages like Python.
  • Experience with Delta Lake, data lakes, and data warehousing concepts.
  • Strong understanding of data modeling, data architecture, and performance tuning.
  • Familiarity with CI/CD practices, version control (e.g., Git), and Agile methodologies.

Preferred Qualifications:
  • Experience with Azure Data Factory, AWS Glue, or similar orchestration tools.
  • Knowledge of MLflow, Spark SQL, and streaming data pipelines.
  • Understanding of data security, compliance, and governance frameworks.
  • Certification in Databricks or relevant cloud platforms (e.g., Azure Data Engineer Associate).

About 4A-Consulting