Databricks Data Engineer - REMOTE (Quarterly travel to Gaithersburg, MD)

Overview

Remote
Contract - Independent
Contract - W2
Contract - 6 Month(s)

Skills

Databricks
Python
Spark
R
.NET
Azure
AWS

Job Details

Databricks Data Engineer
REMOTE (Quarterly travel to Gaithersburg, MD)
Public Trust

In this role, you will:
- Design, build, and optimize scalable data solutions using Databricks and Medallion Architecture.
- Manage ingestion routines that efficiently process multi-terabyte datasets across multiple concurrent projects, each of which may have multiple Databricks workspaces.
- Integrate data from various structured and unstructured sources to enable high-quality business insights, applying data analysis techniques to derive insights from large datasets.
- Implement effective data management strategies to ensure data integrity, availability, and accessibility. Identify opportunities for cost optimization in data storage, processing, and analytics operations.
- Monitor and support user requests, addressing platform or performance issues, cluster stability, Spark optimization, and configuration management.
- Collaborate with the team to enable advanced AI-driven analytics and data science workflows.
- Integrate with various Azure services including Azure Functions, Storage Services, Data Factory, Log Analytics, and User Management for seamless data workflows.
- Experience with the above Azure services is a plus.
- Provision and manage infrastructure using Infrastructure-as-Code (IaC).
- Apply best practices for data security, data governance, and compliance, ensuring support for federal regulations and public trust standards.
- Proactively collaborate with technical and non-technical teams to gather requirements and translate business needs into data solutions.
For this position, you must possess:
- BS degree in Computer Science or a related field with 3+ years of experience, or a Master's degree with 2+ years of experience
- 3+ years of experience designing and developing ingestion flows (structured, streaming, and unstructured data) using cloud platform services, with built-in data quality
- Databricks Data Engineer certification and 2+ years of experience maintaining Databricks platform and development in Spark
- Ability to work directly with clients and act as front-line support for incoming client requests; clearly document and express solutions in the form of architecture and interface diagrams.
- Proficiency in Python, Spark, and R is essential; .NET-based development is a plus.
- Knowledge and experience with data governance, including metadata management, enterprise data catalog, design standards, data quality governance, and data security.
- Experience with Agile process methodology, CI/CD automation, and cloud-based developments (Azure, AWS).
- Not required, but additional education, certifications, and/or experience are a plus: certifications in Azure cloud, knowledge of FinOps principles and cost management
