Overview
Remote
Contract - Independent
Contract - W2
Contract - 6 Month(s)
Skills
SAP
Salesforce
Google Analytics
OKTA
ELT/ETL
AWS Glue
Step Functions
Lambda
DMS
AppFlow
Python
PySpark
SQL
Job Details
Data Engineer
We're seeking a Data Engineer who is motivated by variety, seeks a challenging environment for professional growth, and is interested in supporting large information technology projects for one of our clients in the northeast.
The Data Engineer will build and maintain the infrastructure for the organization's data by designing, constructing, and optimizing data pipelines and architecture.
Location: This position is located in Newark, NJ. The ideal candidate will be local to the area and able to go into the office periodically when needed. Candidates who are not local must be willing to travel as needed.
Responsibilities:
- Ingest data from SAP, Salesforce, Google Analytics, OKTA, and other sources into AWS S3 Raw layer
- Curate and transform data into standardized datasets in the Curated layer
- Define and implement data mapping and transformation logic/code and load data into Redshift tables and views for optimal performance
- Deploy and promote data pipeline code from lower environments (Dev/Test) to Production following PSEG governance and change control processes
- Develop and maintain ELT/ETL pipelines using AWS Glue, Step Functions, Lambda, DMS, and AppFlow
- Automate transformations and model refreshes using Python, PySpark, and SQL (a minimal example follows this list)
- Implement end-to-end source-to-target data integration, mapping, and transformation across the Raw, Curated, and Redshift layers
- Use Redshift SQL for transformations, optimization, and model refresh
- Integrate with on-prem and SaaS data sources, such as SAP (via Simplement), Salesforce, OKTA, MuleSoft and JAMS
- Implement CI/CD deployments using GitHub
- Use CloudWatch, CloudTrail, and Secrets Manager for monitoring and security
- Manage metadata and lineage via AWS Glue Data Catalog
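As a rough illustration of the Raw-to-Curated work described above, the sketch below uses PySpark to read an assumed Salesforce extract from an S3 Raw path, standardize a few fields, and write partitioned Parquet to a Curated path. Bucket names, paths, and column names (order_id, created_at) are illustrative assumptions, not details of the actual environment; in practice this logic would run inside an AWS Glue job orchestrated by Step Functions.

```python
# Minimal PySpark sketch: promote one dataset from the Raw layer to Curated.
# Buckets, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw_to_curated_orders").getOrCreate()

# Read a raw Salesforce extract landed by AppFlow (assumed JSON Lines format).
raw_df = spark.read.json("s3://example-raw-bucket/salesforce/orders/")

# Standardize: trim keys, normalize timestamps, derive a load date, drop duplicates.
curated_df = (
    raw_df
    .withColumn("order_id", F.trim(F.col("order_id")))
    .withColumn("created_at", F.to_timestamp("created_at"))
    .withColumn("load_date", F.to_date("created_at"))
    .dropDuplicates(["order_id"])
)

# Write partitioned Parquet to the Curated layer for downstream Redshift loads.
(
    curated_df
    .write
    .mode("overwrite")
    .partitionBy("load_date")  # illustrative partition key
    .parquet("s3://example-curated-bucket/salesforce/orders/")
)
```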
Required Education & Experience:
- Bachelor's degree in a related field with 8+ years of experience as a data engineer. Additional years of experience may be considered in lieu of a degree.
- Proficiency in programming languages like Python or Java, strong SQL skills, and knowledge of big data tools like Apache Hadoop, Spark, or Kafka.
- Experience with cloud platforms (AWS, Azure, Google Cloud Platform) and data warehousing solutions (Snowflake, Redshift, BigQuery)
- Self-driven, with a demonstrated ability to work independently with minimal guidance
- Demonstrated multitasking ability, problem-solving skills, and a consistent record of on-time delivery and customer service
- Excellent organizational and communication skills
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.