Software Engineer

Overview

On Site
USD 69.00 - 74.00 per hour
Contract - Independent

Skills

Operational Efficiency
Optimization
Amazon S3
Unstructured Data
Advanced Analytics
Data Quality
Workflow
Testing
Data Flow
Software Engineering
FOCUS
Data Engineering
Analytics
PySpark
Extract, Transform, Load (ETL)
Data Warehouse
Finance
Slowly Changing Dimensions (SCD)
Smart Order Routing
Onboarding
Communication
Collaboration
Cloud Computing
Amazon Web Services
Data Lake
Business Intelligence
Financial Services
Privacy
Marketing

Job Details

Description:
Job Title: Software Engineer - Data Engineering (Contingent Resource)

Location: Charlotte, NC

About the Role:

We are seeking a highly skilled Software Engineer with deep expertise in PySpark, ETL development, and Data Warehousing/Business Intelligence (DW/BI) to support complex, large-scale data initiatives. This contingent role involves strategic collaboration with client teams to design and implement robust data solutions that drive financial insights and operational efficiency.

Responsibilities:
  • Lead the design, development, and optimization of scalable ETL pipelines using PySpark, AWS S3, and Dremio.
  • Support modernization efforts for ProfitView and other financial attribution systems.
  • Engineer solutions for data ingestion, transformation, and loading into data lakes and data warehouses.
  • Work with structured and unstructured data from diverse sources to enable advanced analytics.
  • Collaborate cross-functionally with BI developers, data analysts, and business stakeholders to gather and translate data requirements.
  • Ensure high standards of data quality, integrity, and governance across all pipelines.
  • Monitor, troubleshoot, and resolve performance issues in data workflows.
  • Participate in code reviews, testing, and deployment processes.
  • Document technical designs, data flows, and architectural decisions.

Minimum Qualifications:
  • 5+ years of experience in Software Engineering, with a focus on data engineering and analytics platforms.
  • Proven experience in PySpark, ETL development, and DW/BI projects.
  • Strong understanding of financial attribution, slowly changing dimensions (SCD), booking/referring agreements, and source-of-record (SOR) onboarding.
  • Demonstrated ability to work on complex, multi-faceted projects with strategic impact.
  • Excellent communication and collaboration skills.

Preferred Qualifications:
  • Experience with cloud platforms (e.g., AWS), data lake architectures, and modern BI tools.
  • Familiarity with Dremio or similar query acceleration platforms.
  • Background in financial services or enterprise-scale data environments.

By providing your phone number, you consent to: (1) receive automated text messages and calls from the Judge Group, Inc. and its affiliates (collectively "Judge") to such phone number regarding job opportunities, your job application, and for other related purposes. Message & data rates apply and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing/promotional purposes. Reply STOP to opt out of receiving telephone calls and text messages from Judge and HELP for help.

Contact:

This job and many more are available through The Judge Group. Please apply with us today!

About Judge Group, Inc.