Job Details
Title: Lead Data Engineer
Location: Roseland, NJ (Hybrid 3 days onsite)
Duration: 12-Month Contract
Employment Type: W2
Relocation: Accepted
Job Description
We are seeking an experienced Lead Data Engineer specializing in AWS and Databricks to drive the design, development, and delivery of a scalable data hub/marketplace supporting internal analytics, data science, and downstream applications. This role focuses on building robust data integration workflows, enterprise data models, and curated datasets to support complex business needs.
You will collaborate across engineering, analytics, and business teams to define integration rules, data acquisition strategies, data quality standards, and metadata best practices. This position requires strong leadership, hands-on technical depth, and the ability to communicate effectively with technical and non-technical stakeholders.
Must-Have Skills
- AWS
- Databricks
- Python
- PySpark
- Lead/Staff-level engineering experience
- Contact Center experience (nice to have)
Key Responsibilities
- Lead the architecture, design, and delivery of an enterprise data hub/marketplace.
- Build and optimize data integration workflows, ingestion pipelines, and subscription-based services.
- Develop and maintain enterprise data models for data lakes, warehouses, and analytics environments.
- Define integration rules and data acquisition methods (batch, streaming, replication).
- Conduct detailed data analysis to validate source systems and support use-case requirements.
- Establish data quality standards, monitoring practices, and governance alignment.
- Maintain enterprise data taxonomy, lineage, and catalog metadata.
- Mentor junior developers and collaborate closely with architects and peer engineering teams.
- Communicate clearly with business and technical stakeholders.
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 8+ years of experience integrating and transforming data into standardized, consumption-ready datasets.
- Strong expertise with AWS, Databricks, Python, PySpark, and SQL.
- Advanced knowledge of cloud-based data platforms and warehouse technologies.
- Strong experience with RDBMS (Oracle, SQL Server).
- Familiarity with NoSQL (MongoDB).
- Experience designing scalable data pipelines for structured and unstructured data.
- Strong understanding of data quality, governance, compliance, and security.
- Experience building ingestion pipelines and lakehouse-style architectures.
- Ability to define and design complex data engineering solutions with minimal guidance.
- Excellent communication, analytical, and problem-solving skills.
Nice-to-Have Skills
- Knowledge of contact center technologies: Salesforce, ServiceNow, Oracle CRM, Genesys Cloud/InfoMart, Calabrio, Nuance, IBM Chatbot, etc.
- Experience with GitHub, JIRA, Confluence.
- CI/CD and observability experience with Jenkins, Terraform, Splunk, and Dynatrace.
- Knowledge of Informatica PowerCenter, Informatica Data Quality, and Informatica Data Catalog.
- Experience working in an Agile methodology.
- Databricks Data Engineer Associate certification.