Data Engineer

Overview

Hybrid
$60 - $70
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)

Skills

Amazon Redshift
Amazon S3
Amazon Web Services
Analytics
Apache Airflow
Big Data
Collaboration
Continuous Delivery
Continuous Integration
Data Engineering
Data Processing
Data Quality
Database
Database Administration
Electronic Health Record (EHR)
Extract, Transform, Load (ETL)
Management
NoSQL
PySpark
Python
Reporting
SQL
Scripting
Testing
Unstructured Data
Workflow
ELT

Job Details

Data Engineer - Irvine, CA or Los Angeles, CA (Hybrid)

Job Summary:

We are seeking a highly skilled and motivated Data Engineer with hands-on experience in Python, PySpark, Apache Airflow, AWS services, and database management. The ideal candidate will design, develop, and maintain scalable data pipelines and infrastructure for large-scale data processing and analytics.

Key Responsibilities:

Design and implement scalable ETL/ELT pipelines using PySpark and Airflow

Develop and maintain robust Python scripts and automation tools

Work with structured and unstructured data across AWS services such as S3, Glue, Lambda, EMR, and Redshift

Design efficient data models and manage databases (SQL and NoSQL) for analytics and reporting

Optimize the performance of big data processing workflows and ensure data quality and reliability

Collaborate with data scientists, analysts, and other engineering teams to support data needs

Monitor and troubleshoot data pipelines, ensuring smooth operations and minimal downtime

Implement best practices for code versioning, testing, and CI/CD across data engineering workflows
