Overview
On Site | Full Time | Part Time | Contract - Independent | Contract - W2 | Accepts corp-to-corp applications
Skills
Employment Authorization, Extract/Transform/Load (ETL), Data Quality, Systems Design, Big Data, Version Control, Git, Continuous Integration, Continuous Delivery, Jenkins, Microsoft Azure, DevOps, Python, PySpark, Apache Spark, Apache Hadoop, Apache Hive, HDFS, SQL, NoSQL, Database, Debugging, Performance Tuning, Problem Solving, Conflict Resolution
Job Details
Hiring: W2 Candidates Only
Visa: Open to any visa type with valid work authorization in the USA
Responsibilities:
- Design, develop, and maintain scalable ETL pipelines using PySpark and Python to process large-scale datasets across distributed environments.
- Implement complex data transformation logic and optimize Spark jobs for performance and cost-efficiency.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver robust solutions.
- Integrate data from multiple sources (structured, semi-structured, unstructured) into unified data models.
- Ensure data quality, consistency, and governance through validation, monitoring, and error handling mechanisms.
- Participate in system design discussions and contribute to architectural decisions for big data platforms.
- Document technical solutions and maintain version control using Git and CI/CD tools like Jenkins or Azure DevOps.
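For candidates unfamiliar with the pattern, the core of the responsibilities above is the extract/transform/load cycle with a data-quality gate. The following is a minimal, pure-Python sketch for illustration only; the field names, sample data, and validation rule are hypothetical, and a production pipeline in this role would use PySpark DataFrames running on a distributed cluster rather than in-memory lists.

```python
# Illustrative extract -> transform -> load sketch with a data-quality gate.
# All names and data here are hypothetical examples, not the employer's stack.
import csv
import io


def extract(source: str) -> list[dict]:
    """Read raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(source)))


def transform(rows: list[dict]) -> list[dict]:
    """Cast amounts to float; drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality gate: reject malformed records
        clean.append({"id": row["id"], "amount": amount})
    return clean


def load(rows: list[dict], sink: list) -> None:
    """Append validated rows to the target store (a list stands in here)."""
    sink.extend(rows)


raw = "id,amount\n1,10.5\n2,not-a-number\n3,7.0\n"
warehouse: list = []
load(transform(extract(raw)), warehouse)
print(warehouse)  # rows 1 and 3 survive; row 2 is rejected by validation
```

In PySpark the same shape appears as `spark.read` (extract), DataFrame transformations with filters (transform and validate), and `df.write` (load).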
Required Skills:
- Strong programming skills in Python and PySpark.
- Hands-on experience with Apache Spark, Hadoop, Hive, and HDFS.
- Proficiency in SQL and working with both relational and NoSQL databases.
- Strong debugging, performance tuning, and problem-solving skills.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.