Location: Iselin, NJ
Salary: $58.00 USD Hourly - $63.00 USD Hourly
Description: Database Engineer III - Data Pipelines
Location: Charlotte, NC (Preferred) or Iselin, NJ
Work Arrangement: Hybrid - 3 days in office required
Employment Type: Contract (18 months, with possible extension or conversion)
Work Schedule: Standard business hours; occasional after-hours support for ad hoc troubleshooting
Number of Openings: 1
Job Overview
We are seeking an experienced Database Engineer to design, build, and support large-scale data pipelines as part of a cloud data platform modernization initiative. This role supports fraud and claims analytics applications and plays a key role in migrating a large Teradata-based environment to Google Cloud Platform (GCP).
The ideal candidate has strong experience with SQL, ETL development, PySpark, and Hadoop-based ecosystems, and is comfortable working in hybrid on-prem and cloud environments.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines supporting fraud and claims analytics applications.
- Consult on and contribute to moderately complex database engineering initiatives and large-scale data platform planning efforts.
- Analyze and resolve data engineering challenges requiring evaluation of multiple variables and technologies.
- Support the migration of a large enterprise data platform from Teradata to Google Cloud Platform.
- Collaborate with stakeholders and engineering teams to ensure accurate, reliable, and performant data solutions.
- Contribute to issue resolution while adhering to established engineering standards, policies, and compliance requirements.
Technical Environment
- Data Volume: ~1,000 TB across 600-700 tables
- Primary Technologies:
- Google BigQuery
- PySpark
- Hadoop
- Ab Initio
- Dremio
- Teradata (legacy)
- Autosys
- SQL
- S3-compatible storage grid
Required Qualifications
- 4+ years of database engineering experience, or equivalent demonstrated through work, consulting, education, training, or military experience.
- Strong experience with:
- SQL
- ETL development
- PySpark
- Hadoop ecosystems
- Data pipeline design and implementation
- Experience operating in large, complex data environments.
- Ability to collaborate effectively with cross-functional engineering and analytics teams.
Preferred Qualifications
- 5+ years of engineering experience (senior-level equivalent).
- Hands-on experience with Google Cloud Platform, particularly BigQuery.
- Experience with Teradata migrations to cloud platforms.
- Familiarity with Autosys scheduling.
- Google Cloud certifications.
- Experience supporting analytics use cases such as fraud or claims analysis.
Additional Information
- This is a backfill position.
- The hybrid Return-to-Office (RTO) policy requires 3 days onsite per week.
By providing your phone number, you consent to: (1) receive automated text messages and calls from the Judge Group, Inc. and its affiliates (collectively "Judge") to such phone number regarding job opportunities, your job application, and for other related purposes. Message & data rates apply and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing/promotional purposes. Reply STOP to opt out of receiving telephone calls and text messages from Judge and HELP for help.
Contact: This job and many more are available through The Judge Group. Please apply with us today!