Overview
On Site
Depends on Experience
Contract - W2
Contract - Independent
Skills
Amazon Web Services
Apache Spark
Big Data
Data Engineering
Data Warehouse
Data Lake
Data Security
Data Quality
PySpark
Python
SQL
Microsoft Azure
Data Processing
Job Details
Role: Data Engineer (Python + PySpark)
Onsite Role in Charlotte, NC
W2 Contract position
Job Description:
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Spark/PySpark.
- Automate workflows and job scheduling using Autosys.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements.
- Optimize data processing for performance and scalability.
- Ensure data quality, integrity, and governance across all pipelines.
- Monitor and troubleshoot production data pipelines and workflows.
- Participate in code reviews and contribute to best practices in data engineering.
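To make the pipeline responsibilities above concrete, here is a minimal, dependency-free Python sketch of the kind of transform-and-validate step such a pipeline performs; in the actual role this logic would run at scale in PySpark. The record fields ("id", "amount") and quality rules are hypothetical examples, not taken from this posting.

```python
def clean_records(records):
    """Data-quality gate: drop rows with missing keys or invalid amounts.

    Illustrative stand-in for a PySpark filter/transform stage; fields
    and rules here are hypothetical.
    """
    cleaned = []
    for row in records:
        if row.get("id") is None:           # integrity check: key must exist
            continue
        amount = row.get("amount")
        if amount is None or amount < 0:    # quality rule: non-negative amounts
            continue
        cleaned.append({"id": row["id"], "amount": float(amount)})
    return cleaned

raw = [
    {"id": 1, "amount": 10.5},
    {"id": None, "amount": 5.0},   # rejected: missing key
    {"id": 2, "amount": -3.0},     # rejected: negative amount
]
print(clean_records(raw))  # → [{'id': 1, 'amount': 10.5}]
```

In PySpark the same gate would typically be expressed with DataFrame filters rather than a Python loop, so Spark can distribute the work across the cluster.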
Required Skills & Qualifications:
- 8-12 years of experience in data engineering or related roles.
- Strong hands-on experience with Apache Spark and PySpark.
- Proficiency in Autosys for job scheduling and workflow automation.
- Solid programming skills in Python and SQL.
- Experience with distributed data processing and big data technologies.
- Familiarity with cloud platforms (AWS, Azure, or Google Cloud Platform) is a plus.
- Strong problem-solving and communication skills.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Preferred Qualifications:
- Experience with data lake architectures and data warehousing.
- Exposure to CI/CD pipelines and DevOps practices.
- Knowledge of data security and compliance standards.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.