Data Engineer
Department: Information Technology
Location: Newark, NJ (Hybrid; candidate must be local to the area and able to go into the office periodically)
Role Type: 6-Month Contract (with possibility of extension)
About Our Client
Our client is a highly respected organization within the utilities and technology industry, supporting large-scale enterprise information technology projects across the northeast. They are dedicated to driving innovation, operational efficiency, and sustainable infrastructure through advanced data management and analytics solutions. This role offers the opportunity to work with cutting-edge technologies in a collaborative, mission-driven environment focused on service reliability and modernization.
Job Description
We’re seeking a Data Engineer who is motivated by variety, seeks a challenging environment for professional growth, and is interested in supporting large information technology projects for one of our clients in the northeast.
The Data Engineer will build and maintain the infrastructure for the organization's data by designing, constructing, and optimizing data pipelines and architecture.
Duties and Responsibilities
• Ingest data from SAP, Salesforce, Google Analytics, OKTA, and other sources into AWS S3 Raw layer
• Curate and transform data into standardized datasets in the Curated layer
• Define and implement data mapping and transformation logic/code and load data into Redshift tables and views for optimal performance
• Deploy and promote data pipeline code from lower environments (Dev/Test) to Production following governance and change control processes
• Develop and maintain ELT/ETL pipelines using AWS Glue, Step Functions, Lambda, DMS, and AppFlow
• Automate transformations and model refresh using Python, PySpark, and SQL
• Implement end-to-end source-to-target data integration, mapping, and transformation across data source → Raw → Curated → Redshift
• Write performant Redshift SQL for transformations, optimization, and model refresh
• Integrate with on-prem and SaaS data sources, such as SAP (via Simplement), Salesforce, OKTA, MuleSoft and JAMS
• Implement CI/CD deployment using GitHub
• Use CloudWatch, CloudTrail, and Secrets Manager for monitoring and security
• Manage metadata and lineage via AWS Glue Data Catalog
Required Experience/Skills
• Bachelor’s degree in a related field and 8+ years of experience as a Data Engineer (additional years of experience may be considered in lieu of a degree)
• Proficiency in programming languages like Python or Java, strong SQL skills, and knowledge of big data tools like Apache Hadoop, Spark, or Kafka
• Experience with cloud platforms (AWS, Azure, GCP) and data warehousing solutions (Snowflake, Redshift, BigQuery)
• Self-driven, with a demonstrated ability to work independently with minimal guidance
• Demonstrated multitasking ability, problem-solving skills, and a consistent record of on-time delivery and customer service
• Excellent organizational and communication skills
Nice-to-Haves
• Utilities experience
• AWS Certified Data Engineer – Associate
• AWS Certified Developer – Associate
Education
Bachelor’s degree in a related field, or equivalent experience
Pay & Benefits Summary
• Pay rate: up to $62/hour (W2)