Lead Data Engineer

Overview

Work Arrangement: Hybrid
Compensation: Depends on Experience
Employment Type: Contract - W2
Duration: 12 Month(s)

Skills

AWS
Databricks
Python
PySpark
SQL
RDBMS
Oracle
SQL Server
MongoDB
Data Pipelines
Data Modeling
Data Lake
Data Warehouse
ETL
Data Integration
Streaming Data
Batch Processing
Data Quality
Data Governance
GitHub
JIRA
Confluence
Jenkins
Terraform
Splunk
Dynatrace
Informatica PowerCenter
Informatica Data Quality
Informatica Data Catalog
Contact Center Data
Salesforce
ServiceNow
Genesys Cloud
Genesys InfoMart
Calabrio
Nuance
IBM Chatbot

Job Details

Title: Lead Data Engineer
Location: Roseland, NJ (Hybrid, 3 days onsite)
Duration: 12-Month Contract
Employment Type: W2
Relocation: Accepted

Job Description

We are seeking an experienced Lead Data Engineer specializing in AWS and Databricks to drive the design, development, and delivery of a scalable data hub/marketplace supporting internal analytics, data science, and downstream applications. This role focuses on building robust data integration workflows, enterprise data models, and curated datasets to support complex business needs.

You will collaborate across engineering, analytics, and business teams to define integration rules, data acquisition strategies, data quality standards, and metadata best practices. This position requires strong leadership, hands-on technical depth, and the ability to communicate effectively with technical and non-technical stakeholders.

Must-Have Skills

  • AWS
  • Databricks
  • Python
  • PySpark
  • Lead/Staff-level engineering experience

Key Responsibilities

  • Lead the architecture, design, and delivery of an enterprise data hub/marketplace.
  • Build and optimize data integration workflows, ingestion pipelines, and subscription-based services.
  • Develop and maintain enterprise data models for data lakes, warehouses, and analytics environments.
  • Define integration rules and data acquisition methods (batch, streaming, replication).
  • Conduct detailed data analysis to validate source systems and support use-case requirements.
  • Establish data quality standards, monitoring practices, and governance alignment.
  • Maintain enterprise data taxonomy, lineage, and catalog metadata.
  • Mentor junior developers and collaborate closely with architects and peer engineering teams.
  • Communicate clearly with business and technical stakeholders.

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • 8+ years of experience integrating and transforming data into standardized, consumption-ready datasets.
  • Strong expertise with AWS, Databricks, Python, PySpark, and SQL.
  • Advanced knowledge of cloud-based data platforms and warehouse technologies.
  • Strong experience with RDBMS (Oracle, SQL Server).
  • Familiarity with NoSQL (MongoDB).
  • Experience designing scalable data pipelines for structured and unstructured data.
  • Strong understanding of data quality, governance, compliance, and security.
  • Experience building ingestion pipelines and lakehouse-style architectures.
  • Ability to define and design complex data engineering solutions with minimal guidance.
  • Excellent communication, analytical, and problem-solving skills.

Nice-to-Have Skills

  • Knowledge of contact center technologies: Salesforce, ServiceNow, Oracle CRM, Genesys Cloud/InfoMart, Calabrio, Nuance, IBM Chatbot, etc.
  • Experience with GitHub, JIRA, Confluence.
  • Experience with CI/CD and infrastructure-as-code tooling (Jenkins, Terraform) and monitoring platforms (Splunk, Dynatrace).
  • Knowledge of Informatica PowerCenter, Informatica Data Quality, and Informatica Data Catalog.
  • Experience with Agile methodology.
  • Databricks Data Engineer Associate certification.
