Location: Irving, TX
Salary: $53.00 - $57.00 USD Hourly
Description: Software Engineer - Data Engineering (Contingent Resource)
About the Role
In this contingent assignment, you will contribute to the design, development, and optimization of data engineering solutions that support large-scale analytics and software engineering initiatives. You will work on moderately complex technical challenges, collaborate with cross-functional engineering teams, and help ensure that data systems meet reliability, scalability, and governance requirements.
You'll leverage your technical expertise to review, analyze, and resolve data engineering issues while following established engineering standards, policies, and compliance expectations.
Responsibilities
- Design, develop, and maintain ETL/ELT workflows and data pipelines for both batch and real-time data processing.
- Build scalable data solutions using open-source frameworks and cloud technologies to support reporting, analytics, and downstream applications.
- Implement analytical and operational data stores using Delta Lake and modern database architectures.
- Optimize data models, tables, and pipelines for high performance, reliability, and scalability across large datasets.
- Partner with architects and engineering teams to align implementations with target-state architecture.
- Apply best practices for data governance, lineage, and metadata management, including integrating with Google Dataplex for centralized governance and quality enforcement.
- Develop, schedule, and orchestrate workflows using Apache Airflow, including authoring and managing complex DAGs.
- Troubleshoot pipeline issues, ensure high availability, and maintain system reliability.
Required Technical Skills
- Data Foundations: Strong understanding of data structures, modeling, lifecycle management, and distributed data concepts.
- ETL/ELT Development: Hands-on experience designing, building, and maintaining data pipelines.
- PySpark: Advanced experience with distributed data processing and transformation at scale.
- Apache Iceberg: Experience implementing open table formats for analytical workloads.
- Hadoop Ecosystem: Familiarity with HDFS, Hive, and related components.
- Cloud Platforms: Experience with Google Cloud Platform services including BigQuery, Dataflow, and Dataplex, as well as Delta Lake.
- Programming: Proficiency in Python, SQL, and Spark.
- Workflow Orchestration: Strong experience with Apache Airflow, including writing, scheduling, and maintaining complex DAGs.
- Database Concepts: Solid understanding of relational, distributed, and modern data architecture principles.
Minimum Qualifications
- 4+ years of experience in software engineering, data engineering, or an equivalent combination of industry experience, consulting, training, military service, or education.
By providing your phone number, you consent to: (1) receive automated text messages and calls from the Judge Group, Inc. and its affiliates (collectively "Judge") to such phone number regarding job opportunities, your job application, and for other related purposes. Message & data rates apply and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing/promotional purposes. Reply STOP to opt out of receiving telephone calls and text messages from Judge and HELP for help.
Contact: This job and many more are available through The Judge Group. Please apply with us today!