Job Description:
Our client is seeking highly skilled and motivated Data Engineers with strong experience in Python, PySpark, Snowflake, EMR, and EKS. Successful candidates will play a critical role in building and maintaining scalable data pipelines, enabling advanced analytics, and supporting enterprise-level data initiatives.
Responsibilities:
Design, build, and optimize scalable data pipelines and ETL processes using Python and PySpark.
Develop and manage data workflows on AWS services including EMR and EKS.
Implement, maintain, and optimize solutions in Snowflake for data warehousing and analytics.
Work closely with cross-functional teams including data architects, analysts, and business stakeholders.
Ensure data quality, consistency, and security across multiple systems.
Troubleshoot and optimize performance for large-scale data processing applications.
Contribute to best practices in coding, testing, and deployment for enterprise data solutions.
Preferred Qualifications:
Prior experience working with Freddie Mac or similar financial-services clients.
Strong understanding of data warehousing concepts and big data ecosystems.
Hands-on experience with CI/CD pipelines and containerized deployments.
Excellent problem-solving and communication skills.