Overview
On Site
USD 76.73 per hour
Contract - W2
Skills
Advanced Analytics
Management
Workflow
Mentorship
Team Building
Documentation
Computer Science
Data Engineering
Performance Tuning
Identity Management
SQL
Data Architecture
Cloud Computing
Amazon Web Services
Microsoft Azure
Google Cloud Platform
Amazon S3
Amazon Redshift
Orchestration
RDBMS
Scala
Optimization
PySpark
Data Processing
Business Rules
Data Governance
Regulatory Compliance
DevOps
Terraform
Git
Continuous Integration
Continuous Delivery
Streaming
Apache Kafka
Real-time
Metadata Management
Access Control
RBAC
Data Masking
Machine Learning (ML)
Open Source
Apache Spark
Databricks
Life Insurance
Screening
Writing
Career Counseling
Recruiting
Law
Testing
Job Details
Senior Spark Data Engineer - Contract or CTP - Chicago, IL - $76.73/hr.
The final salary or hourly wage, as applicable, paid to each candidate/applicant for this position is ultimately dependent on a variety of factors, including, but not limited to, the candidate's/applicant's qualifications, skills, and level of experience as well as the geographical location of the position.
Applicants must be legally authorized to work in the United States. Sponsorship not available.
Our client is seeking a Senior Spark Data Engineer in Chicago, IL.
Role Description
We are seeking an experienced Senior Data Engineer with strong expertise in Apache Spark to help build and manage scalable, secure, and efficient data platforms. This role will be instrumental in designing data architectures and pipelines that support both advanced analytics and governed data access across the organization. You will work with cross-functional teams to enable data discovery, lineage, and compliance while delivering high-performance data processing systems.
________________________________________
Key Responsibilities:
Design, build, and optimize scalable data pipelines using Apache Spark.
Manage and govern data access and metadata using Amazon DataZone.
Implement and enforce data access controls, lineage tracking, and data classification.
Integrate data across cloud platforms and on-prem systems into unified data lakes and warehouses.
Partner with data analysts, scientists, and product teams to deliver clean, reliable, and well-governed datasets.
Develop and automate ingestion, transformation, and quality validation workflows.
Ensure data compliance and security policies are implemented consistently across the platform.
Contribute to architecture and governance strategy for enterprise-scale data platforms.
Support performance tuning, troubleshooting, and monitoring of Spark jobs and data pipelines.
Mentor junior engineers and support team development through code reviews and documentation.
Skills & Requirements
Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with production-level data systems.
Expert in Apache Spark (PySpark or Scala), including performance tuning and optimization.
Strong experience with Amazon DataZone for data governance and access management.
Proficient in SQL and modern data architecture concepts (e.g., lakehouse, Delta Lake).
Hands-on experience with cloud platforms (AWS, Azure, or Google Cloud Platform), especially in data services (e.g., S3, ADLS, Redshift, Synapse).
Experience with orchestration tools such as Airflow, and query services such as Athena.
Building frameworks to load and ingest data from source files and RDBMS.
Designing and developing data layers via framework configuration using ABCR metadata.
Using PySpark/Scala to load data, create schemas, process data, and publish to Kafka.
Optimizing Spark jobs using PySpark.
Performing data processing such as aggregations, joins, and filters according to business rules.
Strong knowledge of data governance, lineage, access control, and compliance frameworks.
Familiarity with DevOps and infrastructure-as-code tools (e.g., Terraform, Git, CI/CD pipelines).
________________________________________
Preferred Qualifications:
Experience with Spark Structured Streaming, Kafka, or similar real-time systems.
Working knowledge of enterprise metadata management and data catalog tools.
Prior experience implementing role-based access control (RBAC), row-level security, and data masking.
Exposure to MLflow, Feature Store, and integration of data pipelines with ML models.
Contribution to open-source or community initiatives around Spark or Databricks.
Benefits/Other Compensation
This position is a contract/temporary role where Hays offers you the opportunity to enroll in full medical benefits, dental benefits, vision benefits, a 401(k), and life insurance ($20,000 benefit).
Why Hays?
You will be working with a professional recruiter who has intimate knowledge of the industry and market trends. Your Hays recruiter will lead you through a thorough screening process in order to understand your skills, experience, needs, and drivers. You will also get support on resume writing, interview tips, and career planning, so when there's a position you really want, you're fully prepared to get it.
Nervous about an upcoming interview? Unsure how to write a new resume?
Visit the Hays Career Advice section to learn top tips to help you stand out from the crowd when job hunting.
Hays is committed to building a thriving culture of diversity that embraces people with different backgrounds, perspectives, and experiences. We believe that the more inclusive we are, the better we serve our candidates, clients, and employees. We are an equal employment opportunity employer, and we comply with all applicable laws prohibiting discrimination based on race, color, creed, sex (including pregnancy, sexual orientation, or gender identity), age, national origin or ancestry, physical or mental disability, veteran status, marital status, genetic information, HIV-positive status, as well as any other characteristic protected by federal, state, or local law. One of Hays' guiding principles is 'do the right thing'.
We also believe that actions speak louder than words.
In that regard, we train our staff on ensuring inclusivity throughout the entire recruitment process and counsel our clients on these principles. If you have any questions about Hays or any of our processes, please contact us.
In accordance with applicable federal, state, and local law protecting qualified individuals with known disabilities, Hays will attempt to reasonably accommodate those individuals unless doing so would create an undue hardship on the company. Any qualified applicant or consultant with a disability who requires an accommodation in order to perform the essential functions of the job should call or text .
Drug testing may be required; please contact a recruiter for more information.
#LI-DNI
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.