Location: Charlotte, NC
Salary: $53.00 - $57.00 USD Hourly
Description: Senior Software Engineer, SparkFlow Framework (Contract)
Type: Contingent / Contract
About the Role
We are looking for a Senior Software Engineer to help advance
SparkFlow, our enterprise-scale data processing framework built on Apache Spark. In this role, you will design and implement new framework capabilities, enhance developer experience, and contribute to AI-assisted tooling that streamlines development and operations. You will also play a key role in integrating SparkFlow into the Unity control plane, ensuring consistent orchestration and operational reliability.
This position involves consulting on and contributing to moderately complex software engineering initiatives, participating in large-scale planning, and collaborating closely with internal engineering partners.
Responsibilities
- Design, build, and enhance SparkFlow framework features, including:
- Data sources/targets
- Transformations
- Governance, controls, and reliability mechanisms
- Develop and extend framework APIs, configuration models, and libraries to improve composability and reusability.
- Improve developer ergonomics by:
- Simplifying configuration patterns (e.g., JSON-based pipeline configs)
- Reducing onboarding friction
- Enhancing diagnostics and observability
- Build AI-enabled solutions that support developers (e.g., guided config generation, validation, troubleshooting tools).
- Implement components required for Unity control plane integration, such as adapters, operators, automation workflows, and integration testing.
- Participate in design reviews, code reviews, and operational/on-call support rotations as needed.
- Collaborate with engineering partners to analyze and resolve moderately complex technical issues.
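To illustrate the config-driven pipeline patterns referenced above, a JSON pipeline config for a framework like SparkFlow might look something like the sketch below. All field names and values here are hypothetical; SparkFlow's actual configuration schema is internal and not described in this posting:

```json
{
  "pipeline": "orders_daily",
  "source": { "type": "kafka", "topic": "orders", "format": "avro" },
  "transformations": [
    { "type": "sql", "query": "SELECT order_id, amount FROM input WHERE amount > 0" }
  ],
  "target": { "type": "iceberg", "table": "analytics.orders_clean" },
  "governance": { "lineage": true, "auditTrail": true }
}
```

A declarative config like this lets the framework validate pipelines before launch, generate lineage metadata automatically, and keep onboarding friction low, since developers describe sources, transformations, and targets rather than writing orchestration code.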
Minimum Qualifications
- 4+ years of software engineering experience or equivalent (training, military, education, or consulting).
- Hands-on experience with Apache Spark using Java or Scala (Python is a plus).
- Experience designing and building frameworks/libraries, including abstraction and API design.
- Proficiency with Spark SQL and distributed data processing patterns.
- Strong experience with version control and build/CI tooling (Git, Maven, Gradle, CI/CD pipelines).
- Experience with enterprise-scale data ecosystem components such as:
- Hadoop/Hive
- Kafka
- Cloud storage/warehouse technologies
- Solid understanding of engineering fundamentals including unit and integration testing.
- Knowledge of database fundamentals and familiarity with UNIX shell scripting.
Preferred Qualifications
- Experience improving data pipeline onboarding and deployment workflows (config-driven patterns, scheduler integration, launcher scripts).
- Familiarity with data governance features such as audit trails, metadata/lineage, and data-in-motion controls.
- Hands-on experience with cloud or hybrid Spark environments (e.g., Google Cloud Platform Dataproc, AWS EMR).
- Knowledge of data modeling, normalization, and Spark performance tuning.
- Experience with S3, Iceberg, and other modern data lake technologies.
- Background developing cloud-native applications and deploying to AWS or Google Cloud Platform.
Key Skills
- Java or Scala
- Apache Spark
- SQL
- Shell scripting
Contact: This job and many more are available through The Judge Group. Please apply with us today!