Data Engineer

Overview

On Site
Depends on Experience
Full Time

Skills

Advanced Analytics
Amazon Web Services
Analytics
Big Data
Communication
Conflict Resolution
Continuous Delivery
Continuous Integration
Data Processing
Data Quality
Data Warehouse
Electronic Health Record (EHR)
Extract, Transform, Load (ETL)
Finance
Management
Problem Solving
PySpark
Python
Snowflake Schema
Testing
Workflow

Job Details

Job Description:
Our client is seeking highly skilled, motivated Data Engineers with strong experience in Python, PySpark, Snowflake, Amazon EMR, and Amazon EKS. The ideal candidate will play a critical role in building and maintaining scalable data pipelines, enabling advanced analytics, and supporting enterprise-level data initiatives.

Responsibilities:

  • Design, build, and optimize scalable data pipelines and ETL processes using Python and PySpark.

  • Develop and manage data workflows on AWS services including EMR and EKS.

  • Implement, maintain, and optimize solutions in Snowflake for data warehousing and analytics.

  • Work closely with cross-functional teams including data architects, analysts, and business stakeholders.

  • Ensure data quality, consistency, and security across multiple systems.

  • Troubleshoot and optimize performance for large-scale data processing applications.

  • Contribute to best practices in coding, testing, and deployment for enterprise data solutions.

Preferred Qualifications:

  • Prior experience with Freddie Mac or similar financial-services clients is highly preferred.

  • Strong understanding of data warehousing concepts and big data ecosystems.

  • Hands-on experience with CI/CD pipelines and containerized deployments.

  • Excellent problem-solving and communication skills.
