Sr Data Engineer (Databricks, PySpark & AWS/Azure) || Princeton, NJ (Hybrid) || Only Local and s

Overview

Hybrid
Depends on Experience
Contract - W2

Skills

Amazon Web Services
Databricks
PySpark
Python

Job Details

Hi,
I am Suresh Durgam from iPivot. Please find the job description below for your reference. If interested, reply with an updated resume.
Job Title: Sr Data Engineer (Databricks, PySpark & AWS/Azure)
Location: Princeton, NJ (Hybrid)
Duration: W2 Contract
Required Skills and Qualifications
Bachelor's degree in Computer Science or a related field, with 5+ years of data engineering experience, including strong Databricks experience.
Proficiency in PySpark, Python, SQL, Azure Data Factory, Kafka for streaming, and data modeling (e.g., medallion architecture).
Hands-on experience with cloud platforms (AWS/Azure), ETL/ELT, data lakes/warehouses, and performance optimization.
Key Responsibilities
Design, develop, and optimize scalable data pipelines using Databricks, PySpark, and Delta Lake for batch and real-time processing.
Implement ELT processes, data quality checks, monitoring, and governance using tools like Unity Catalog, ensuring compliance and performance.
Collaborate with data scientists, analysts, and stakeholders to integrate data from diverse sources and support analytics/ML workflows.
Mentor junior engineers, lead cloud migrations, and manage CI/CD pipelines with IaC tools like Terraform.

About Cloud Bridge Solutions