AWS Data Engineer with Databricks and DBT : Lebanon, NJ : Long-Term Contract

Jersey City, NJ, US • Posted 19 hours ago • Updated 19 hours ago
Contract W2
Contract Corp To Corp
Contract Independent
No Travel Required
On-site
Depends on Experience

Job Details

Skills

  • AWS
  • DBT

Summary

Hi,
 
I'd like you to take a look at a great new position we now have available! I'd value your opinion on this opening. Please send an updated resume along with the best number and time to reach you.
 
Role: AWS Data Engineer with Databricks and DBT
Location: Lebanon, NJ
Duration: Long-Term Contract


Responsibilities
Design and Development of Data Pipelines:
  • Design, build, and optimize robust ETL/ELT pipelines using AWS services (S3, Glue, Lambda) and the Databricks platform (Spark, Delta Lake, DLT).
  • Ingest and process large volumes of structured and semi-structured data from various sources (APIs, databases, streaming platforms like Kafka or Kinesis) into a centralized data lake or lakehouse.
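
As a sketch of the ingestion work described above, a minimal Databricks SQL example (the schema, table, and bucket names here are hypothetical, not from the posting) might incrementally load semi-structured JSON from S3 into a Delta table:

```sql
-- Hypothetical example: incrementally load JSON files from S3 into a
-- bronze-layer Delta table. COPY INTO is idempotent, so files that have
-- already been loaded are skipped on re-runs.
CREATE TABLE IF NOT EXISTS bronze.raw_events;  -- schema is inferred on first load

COPY INTO bronze.raw_events
FROM 's3://example-data-lake/events/'
FILEFORMAT = JSON
FORMAT_OPTIONS ('inferSchema' = 'true')
COPY_OPTIONS ('mergeSchema' = 'true');
```

The same pattern extends to CSV or Parquet sources by changing `FILEFORMAT`; for continuous streaming ingestion, Databricks Auto Loader or DLT would typically replace `COPY INTO`.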
Data Transformation and Modeling:
  • Develop and maintain data models (e.g., star/snowflake schemas, medallion architecture) optimized for analytics and BI tools using dbt (Data Build Tool).
  • Write complex and efficient SQL queries and Python/PySpark code for data manipulation, transformation, and validation within the Databricks environment.
  • Implement data quality checks, tests, and documentation as part of the dbt workflow, enforcing data governance and security standards.
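
To illustrate the dbt side of this workflow, a minimal staging model sketch (the model, source, and column names are hypothetical) could look like:

```sql
-- models/staging/stg_orders.sql
-- Hypothetical dbt model: standardize column types and values
-- from a raw source table into a clean staging layer.
select
    order_id,
    customer_id,
    cast(ordered_at as timestamp) as ordered_at,
    lower(status) as status
from {{ source('raw', 'orders') }}
```

In a typical project, the data quality checks mentioned above would be declared alongside this model in a `schema.yml`, e.g. `unique` and `not_null` tests on `order_id`, and run with `dbt test`.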
Orchestration and Automation:
  • Orchestrate and monitor data workflows using Databricks Jobs or external tools like AWS MWAA (Managed Workflows for Apache Airflow).
  • Implement CI/CD pipelines and version control (Git) for all data engineering artifacts (code, configurations, dbt models) to ensure reliable and consistent deployments.
Performance Optimization and Operations:
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments to ensure high performance, reliability, and cost-efficiency.
  • Tune Spark jobs and optimize Delta Lake features (Z-Order, partitioning) to handle growing data volumes and complexity.
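
The Delta Lake tuning mentioned above can be sketched in Databricks SQL (table and column names are hypothetical):

```sql
-- Hypothetical example: compact small files and co-locate rows on a
-- frequently filtered column to speed up selective reads.
OPTIMIZE silver.orders
ZORDER BY (customer_id);

-- Clean up data files no longer referenced by the table
-- (subject to the default 7-day retention window).
VACUUM silver.orders;
```

Partitioning, by contrast, is chosen at table creation (e.g. `PARTITIONED BY (order_date)`) and suits low-cardinality columns, while Z-Ordering handles high-cardinality filter columns.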
Collaboration and Support:
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
  • Provide expertise and guidance on data best practices, promoting a culture of data quality and governance.
Must-Have Skills
  • SQL; dbt Core and dbt Cloud; AWS (Redshift)
  • Databricks on AWS; SQL Server databases
  • Stonebranch scheduling tool
  • Understanding of CI/CD and Git
  • Experience working in an Agile environment with JIRA
Other Skills Required / Good to Have:
  • Tableau experience
  • Harness DevOps
  • Proficiency in Linux/Unix environments
 
 

--

Thanks & Regards
HAN IT STAFFING

Chigiri Rohith Reddy

Sr. Technical Recruiter

100 Wood Ave S, Suite 102, Iselin, NJ 08830
 
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 90838445
  • Position Id: 8948177