Location: Irving, TX
Salary: $53.00 - $57.00 USD per hour
Description: Software Engineer - Data Engineering (Contingent Workforce)
Engagement Type: Contingent Resource
About the Role
In this contingent role, you will contribute to data engineering initiatives that support large-scale, high-impact systems. You'll help design, build, and optimize data pipelines and platforms that power analytics, reporting, and downstream applications. You will collaborate with engineering partners and cross-functional teams to solve moderately complex technical challenges, ensuring solutions meet organizational standards for performance, scalability, compliance, and reliability.
Responsibilities
- Design and develop ETL/ELT workflows and scalable data pipelines for both batch and real-time processing.
- Build, maintain, and optimize data pipelines using open-source frameworks and cloud-native technologies.
- Implement analytical and operational data stores leveraging Delta Lake and modern data architecture patterns.
- Optimize data structures and query performance for large-scale datasets.
- Partner closely with architects and engineering teams to ensure alignment with target-state architecture and platform standards.
- Apply best practices in data governance, lineage tracking, and metadata management, including integration with Google Dataplex for centralized control and data quality enforcement.
- Develop, schedule, and orchestrate workflows using Apache Airflow, including authoring, maintaining, and optimizing complex DAGs.
- Troubleshoot pipeline failures, resolve data quality or processing issues, and ensure high availability and reliability of data systems.
Minimum Qualifications
- 4+ years of experience in Software Engineering, Data Engineering, or a related field, gained through professional work, consulting, training, military service, or education.
Required Technical Skills
- Data Expertise: Strong understanding of data structures, data modeling, and lifecycle management.
- ETL/ELT Engineering: Hands-on experience designing and managing large-scale data pipelines.
- PySpark: Advanced proficiency with distributed data processing and transformation.
- Lakehouse/Table Formats: Experience with Apache Iceberg or similar open table formats.
- Hadoop Ecosystem: Knowledge of HDFS, Hive, and related ecosystem tools.
- Cloud Platforms: Experience with Google Cloud technologies (BigQuery, Dataflow), Delta Lake, and Dataplex for governance and metadata management.
- Programming: Strong skills in Python, SQL, and Spark.
- Workflow Orchestration: Extensive experience using Apache Airflow for scheduling, DAG creation, and orchestration.
- Database & Reporting: Solid understanding of relational and distributed database concepts.
By providing your phone number, you consent to: (1) receive automated text messages and calls from the Judge Group, Inc. and its affiliates (collectively "Judge") to such phone number regarding job opportunities, your job application, and for other related purposes. Message & data rates apply and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing/promotional purposes. Reply STOP to opt out of receiving telephone calls and text messages from Judge and HELP for help.
Contact: This job and many more are available through The Judge Group. Please apply with us today!