Senior Data Engineer

  • Bridgewater, NJ
  • Posted 1 day ago | Updated 23 hours ago

Overview

On Site
Depends on Experience
Full Time

Skills

Analytical Skill
Apache Spark
Continuous Delivery
Continuous Integration
Data Engineering
Data Extraction
Data Governance
Data Integration
Data Lake
Data Manipulation
Data Modeling
Data Quality
Data Security
Data Warehouse
Databricks
Design Review
Extract, Transform, Load (ETL)
Orchestration
Microsoft Azure
Problem Solving
SQL
Rust
Python

Job Details

Job Title: Senior Data Engineer

Location: Bridgewater, New Jersey

Employment Type: Full-Time


Job Summary:

We are seeking a highly skilled Senior Data Engineer to design, develop, and maintain enterprise-grade data pipelines and ETL solutions. The ideal candidate will have deep expertise in Databricks, Python, Oracle PL/SQL, and Azure, with strong experience in building scalable and efficient data engineering solutions. This role involves a balance of 50% architecture and 50% hands-on development, focusing on data integration, transformation, and performance optimization in an Azure-based environment.


Key Responsibilities:

  • Design, architect, and implement scalable ETL and data pipeline solutions using Apache Spark, Databricks, and Airflow (or similar tools such as dbt).
  • Collaborate with business stakeholders, data analysts, and software engineers to define data requirements and deliver high-quality data solutions.
  • Develop robust, reusable, and efficient data workflows for ingestion, transformation, and storage of structured and unstructured data.
  • Implement and optimize complex SQL and PL/SQL queries within Oracle databases for data extraction and transformation.
  • Automate and orchestrate data workflows using PowerShell, Airflow, or similar scripting/orchestration tools.
  • Ensure data quality, integrity, and consistency across multiple systems and data sources.
  • Work within Azure Data Services (Data Factory, Data Lake, Synapse, etc.) for deployment, monitoring, and scaling of pipelines.
  • Leverage Python (NumPy, Pandas) for data manipulation, cleansing, and automation tasks.
  • Implement best practices for version control, CI/CD pipelines, testing, and documentation of ETL processes.
  • Provide technical guidance and mentorship to junior engineers and participate in code/design reviews.

Required Skills and Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
  • 5+ years of professional experience in Data Engineering or related roles.
  • Strong programming skills in Python (including NumPy and Pandas).
  • Proficient in Oracle PL/SQL for building and optimizing complex queries and stored procedures.
  • Hands-on experience with Databricks, Apache Spark, and ETL pipeline development.
  • Proficient with Azure data ecosystem (Data Factory, Synapse, Data Lake).
  • Familiarity with Airflow, DBT, or similar orchestration and transformation tools.
  • Experience with PowerShell for automation and system scripting.
  • Solid understanding of data warehousing concepts, data modeling, and performance tuning.
  • Strong problem-solving, analytical, and debugging skills.
  • Excellent communication and collaboration abilities in cross-functional teams.

Preferred Skills:

  • Exposure to additional programming languages such as Scala, Java, or Rust.
  • Experience implementing CI/CD for data pipelines.
  • Knowledge of data governance, metadata management, and data security practices.
  • Familiarity with Agile/Scrum methodologies.