Data Engineer

Overview

Hybrid
Depends on Experience
Full Time

Skills

Data Warehousing
Snowflake
AWS
DBT
Informatica
Python
SSIS
ETL
Data Lake
Azure Data Factory
AWS Glue
Scala
Git
CI/CD

Job Details

Job Title: Data Engineer

Location: Princeton, NJ (Hybrid/On-site)

Type of Employment: Full-time


About the Role:

We are seeking a highly skilled Data Engineer with 5–7 years of experience to design, develop, and optimize data pipelines, ETL processes, and cloud-based data platforms. The ideal candidate has strong problem-solving abilities, hands-on experience with modern data engineering tools, and a solid understanding of data architecture and analytics workflows.


Key Responsibilities:

  • Design, build, and maintain scalable ETL pipelines, data ingestion processes, and data transformation workflows.
  • Develop and manage Data Lakes, Data Warehouses, and cloud-based data platforms.
  • Work with platforms such as Azure Data Factory, AWS Glue, or similar orchestration tools to automate data workflows.
  • Build, optimize, and maintain SQL queries, stored procedures, and database structures.
  • Collaborate with data scientists, analysts, and business stakeholders to deliver reliable, high-quality datasets.
  • Ensure data quality, integrity, and security across multiple systems and environments.
  • Implement best practices for data governance, metadata management, and data cataloging.
  • Monitor and troubleshoot data pipelines to ensure high availability and performance.
  • Participate in Agile ceremonies, code reviews, and continuous improvement initiatives.


Required Qualifications:

  • 5–7 years of hands-on experience as a Data Engineer or similar role.
  • Strong proficiency in ETL development, data integration, and pipeline orchestration.
  • Experience with Data Lakes, Data Warehouses, and distributed data processing.
  • Hands-on experience with Azure Data Factory, AWS Glue, or equivalent tools.
  • Strong SQL skills with the ability to write complex queries and optimize performance.
  • Experience with Python or Scala for data processing tasks.
  • Familiarity with cloud platforms (Azure, AWS, or Google Cloud Platform) and data storage solutions.
  • Experience with version control tools (Git) and CI/CD pipelines.
  • Understanding of data modeling (dimensional, relational) and BI concepts.


Preferred Skills:

  • Experience with Big Data technologies (Spark, Databricks, Hadoop).
  • Knowledge of Azure Synapse, Snowflake, Redshift, or BigQuery.
  • Background in API integrations, streaming technologies (Kafka, EventHub), or real-time data pipelines.
  • Experience with data quality frameworks and monitoring tools.
  • Understanding of security and compliance standards (PII, HIPAA, GDPR).


Education:

Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field (or equivalent experience).


About Brilliant Infotech Inc.