STRATEGIC STAFFING SOLUTIONS (S3) HAS AN OPENING!
Strategic Staffing Solutions is currently looking for a Big Data Engineer w/ Hadoop and ETL for a W2 contract opportunity with one of our largest clients!
Candidates should be willing to work on our W2 ONLY, NO C2C
Job Title: Big Data Engineer w/ Hadoop and ETL
Role Type: W2 only
Duration: 12 months
Location: Charlotte, NC
Schedule: Hybrid
W2 hourly rate: $65-70
TOP SKILLS: The ideal candidate will have strong Hadoop skills and experience creating and modeling ETL pipelines.
- Hadoop (ability to operate in a Hive environment with Spark and an understanding of Spark internals)
- Experience moving data from the MS SQL Server platform to the Hadoop/Hive platform
- SQL Server
- SQL/ETL data modeling experience is essential
- Big Data
- Confluence
- Jenkins
- GitHub
PROJECT DETAILS
This contractor will join the Operational Risk Data Integration and Delivery teams, which support the front-line and second-line Analytics and Reporting functions. The team receives requests from Product Owners to update data sets in CARAT and RDS, which allows the control team to conduct deeper analytics and reporting.
DUTIES
This resource will support and monitor the daily jobs that populate these data sets, and will be tasked with enhancing current data sets and integrating new ones.
Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines for batch processing using Big Data technologies.
- Work with large datasets in Hadoop environments, ensuring performance, reliability, and scalability.
- Build and maintain Spark/PySpark data transformation jobs and standalone transformation processes.
- Support and optimize existing data models and pipelines in the Snapshot portal.
- Ensure data quality and pipeline reliability for ongoing CARAT application migration.
- Perform data modeling and understand schema designs for Big Data.
- Collaborate with the hiring manager, technical leads, and cross-functional teams to design and implement data engineering solutions.
- Assist in maintaining and occasionally enhancing legacy SQL Server elements as needed.
- Participate in whiteboarding sessions and problem-solving discussions as part of the interview and team collaboration process.
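To illustrate the shape of the batch ETL work described in the responsibilities above (extract from a relational source, transform, load to a target store), here is a minimal sketch. This is illustrative only: it uses Python's built-in sqlite3 as a stand-in for SQL Server, and a second table as a stand-in for a Hive target; the actual pipelines on this project would be built with Spark/PySpark on Hadoop, and all table and column names here are hypothetical.

```python
# Illustrative sketch only: sqlite3 stands in for SQL Server, and a second
# table stands in for the Hive target. Real pipelines would use Spark/PySpark.
import sqlite3

def run_batch_etl(conn):
    """Extract rows from a source table, transform them, load into a target."""
    # Extract: read the source batch (stand-in for an MS SQL Server query).
    rows = conn.execute("SELECT id, amount FROM source_tx").fetchall()
    # Transform: drop invalid records and normalize amounts to integer cents.
    cleaned = [(i, int(round(a * 100)))
               for i, a in rows
               if a is not None and a >= 0]
    # Load: write the transformed batch to the target (stand-in for Hive).
    conn.executemany(
        "INSERT INTO target_tx (id, amount_cents) VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_tx (id INTEGER, amount REAL)")
conn.execute("CREATE TABLE target_tx (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO source_tx VALUES (?, ?)",
                 [(1, 10.50), (2, -3.00), (3, 7.25), (4, None)])
loaded = run_batch_etl(conn)
print(loaded)  # 2 valid rows loaded
```

The same extract/transform/load structure carries over to a Spark job, where the extract would be a JDBC read from SQL Server and the load a write to a Hive table.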
Required Qualifications:
- 5+ years of experience in ETL development and data pipeline construction.
- 3+ years of hands-on experience in Big Data technologies, including Hadoop, Spark, and PySpark.
- Proven experience building and optimizing ETL workflows in batch processing environments.
- Strong understanding of data transformation and performance optimization in distributed systems.
- Working knowledge of SQL Server and ability to support legacy systems.
- Experience with snapshot-based data modeling approaches.
- Strong problem-solving and communication skills; able to clearly explain solutions and technical decisions.
- Must be available for in-person interview, including resume walkthrough, problem-solving, and whiteboarding session.
Preferred Qualifications:
- Experience with DCP (Data Control Platform).
- Prior experience with banking or financial services systems or projects is a strong plus.
SOFT SKILLS
- LEADERSHIP SKILLS - Able to act as a lead (no direct reports); this person will be expected to analyze various features, perform code reviews, support group sessions, and lead on insights.
- Strong communication skills - written and verbal
Beware of scams. S3 never asks for money during its onboarding process.