Data Engineer - Open on W2 only

Overview

Hybrid
$70 - $75 per hour
Contract - W2
Contract - 6 Month(s)
No Travel Required

Skills

Neo4j
Databricks
SQL
Python
ETL
Azure
AWS

Job Details

Overview:
We are seeking an experienced Data Engineering contractor to help design and scale our next-generation fraud data infrastructure. The ideal candidate should have deep technical expertise in building and optimizing large-scale data pipelines, both batch and real-time, along with experience with graph databases (Neo4j) and modern cloud data platforms.

Responsibilities:

  • Design, build, and maintain robust ETL/ELT pipelines (batch and streaming) for structured and unstructured data using SQL and Python/PySpark.
  • Collaborate with data scientists and business stakeholders to model, query, and visualize complex entity relationships using Neo4j.
  • Optimize Neo4j data models and Cypher queries for scalability and performance.
  • Build and manage large-scale ML feature stores and integrate them with AI and agentic workflows.
  • Develop and maintain integrations across AWS (S3), Azure (Blob Storage, VMs), and third-party threat intelligence APIs to enrich fraud detection and investigation workflows.
  • Automate workflows using Apache Airflow or equivalent orchestration tools.
  • Apply DataOps best practices, including version control (Git), CI/CD, and monitoring for reliability and maintainability.
  • Implement and enforce data quality, lineage, and governance standards across all data assets.

Requirements:

  • Master's degree in Statistics, Mathematics, Computer Science, or a related field (or bachelor's degree with equivalent experience).
  • 8+ years of experience in data engineering or a related field.
  • Proven success in ETL/ELT design and implementation, including batch and streaming pipelines.
  • Strong proficiency in SQL, Python, and PySpark.
  • Hands-on experience with Neo4j (data modeling, Cypher, query optimization).
  • Experience building ML feature stores and integrating with AI/ML pipelines.
  • Working knowledge of AWS and Azure data services.
  • Familiarity with Apache Airflow or similar orchestration tools.
  • Proficient in Git and CI/CD workflows.
  • Strong understanding of data quality, lineage, and governance practices.
  • Nice to Have: Experience with Databricks.