Location: Irving, TX
Salary: $69.00 to $74.00 USD per hour
Description: Software Engineer, Data Engineering (Contingent Role)
W2 only - No Corp to Corp or 1099
Candidates must not require sponsorship
About the Role
In this contingent assignment, you will work on complex, large-scale data engineering initiatives that directly influence technical strategy and long-term architectural direction. You will evaluate multifaceted engineering challenges, provide expert consultation to cross-functional teams, and support the design and implementation of high-impact data solutions. This role requires strong technical depth, the ability to navigate ambiguity, and the capability to partner effectively with client stakeholders.
Responsibilities
- Design and develop scalable ETL/ELT workflows and data pipelines for batch and real-time data processing.
- Build reliable data pipelines that support reporting, analytics, and downstream applications using open-source frameworks and cloud-native technologies.
- Implement operational and analytical data stores leveraging Delta Lake and modern database architectures.
- Optimize data structures, storage formats, and distributed processing strategies to support performance and scalability across large datasets.
- Collaborate with architects and engineering teams to ensure alignment with the target-state data and platform architecture.
- Apply and enforce best practices for data governance, lineage, and metadata management; integrate solutions with Google Dataplex for centralized governance and quality standards.
- Develop, schedule, and operate multi-stage workflows using Apache Airflow, including authoring and maintaining production-ready DAGs.
- Troubleshoot and resolve production pipeline issues, ensuring high reliability, data quality, and system availability.
Required Technical Skills
- Data Expertise: Strong understanding of data structures, modeling techniques, data lifecycle management, and distributed data patterns.
- ETL/ELT Development: Hands-on experience designing, building, and maintaining data pipelines at scale.
- PySpark: Advanced proficiency in distributed data processing and large-scale transformations.
- Apache Iceberg: Experience implementing open table formats for analytical workloads.
- Hadoop Ecosystem: Working knowledge of HDFS, Hive, and related big-data components.
- Cloud Technologies: Experience with Google Cloud Platform services (BigQuery, Dataflow), Delta Lake, and Dataplex for governance and metadata management.
- Programming & Orchestration: Strong skills in Python, SQL, and Spark.
- Apache Airflow: Demonstrated experience building and maintaining complex Airflow DAGs for workflow orchestration.
- Database Concepts: Solid understanding of relational, distributed, and modern analytical database systems.
Qualifications
- 5+ years of software engineering or data engineering experience, demonstrated through industry roles, consulting engagements, training, military service, or formal education.
By providing your phone number, you consent to: (1) receive automated text messages and calls from the Judge Group, Inc. and its affiliates (collectively "Judge") to such phone number regarding job opportunities, your job application, and for other related purposes. Message & data rates apply and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing/promotional purposes. Reply STOP to opt out of receiving telephone calls and text messages from Judge and HELP for help.
Contact: This job and many more are available through The Judge Group. Please apply with us today!