Forward Deployed Engineer

Remote • Posted 30+ days ago • Updated 12 days ago
Contract W2
Travel Required
Remote
$40 - $50/hr

Job Details

Skills

  • Forward Deployed Engineer (FDE)
  • Data Migration Engineer
  • Data Consolidation Engineer
  • Enterprise Data Engineer
  • Data Integration Architect
  • Cloud Data Architect
  • Data Modernization Architect
  • SAP
  • SAP ERP
  • Oracle
  • Oracle ERP
  • Epic ERP
  • data migration
  • ERP migration
  • system migration
  • cloud migration
  • legacy modernization
  • system consolidation
  • multi-system integration
  • Python
  • SQL
  • PySpark
  • Spark
  • TypeScript
  • JavaScript
  • ETL
  • ELT
  • Informatica
  • Talend
  • Matillion
  • Fivetran
  • AWS Glue
  • Azure Data Factory
  • Change Data Capture
  • CDC
  • incremental loading
  • AWS
  • Azure
  • GCP
  • Snowflake
  • Databricks
  • Redshift
  • BigQuery
  • Synapse Analytics
  • Delta Lake
  • Master Data Management
  • MDM
  • data harmonization
  • golden record
  • entity resolution
  • Kafka
  • Kinesis
  • Event Hubs
  • Pub/Sub
  • Neo4j
  • Stardog
  • RDF
  • OWL
  • knowledge graph
  • ontology
  • LLM
  • Large Language Model
  • OpenAI
  • Anthropic
  • AWS Bedrock
  • AI-driven transformation
  • Healthcare
  • Financial Services
  • Manufacturing
  • Retail
  • Energy & Utilities
  • Public Sector

Summary

Hi,

 
I am Tarun Chaudhary, a resourcing professional with Whiz Global, an IT staffing firm. I have the job opportunity below with one of my clients. Please advise whether you are currently available on the job market; if so, please review the description and send me your most recent resume.

Position: Forward Deployed Engineer (FDE) - Data Migration & Data Consolidation 

Location: Remote

Length: Contract-to-hire


Key Responsibilities
  • Migration Execution & Cloud Architecture: Lead end-to-end delivery of enterprise data migrations from corporate systems (SAP, Oracle, Epic ERP) to target cloud data platforms, including the design of cloud landing zones, data governance frameworks, and system rationalization strategies. Establish migration compliance controls, automated rollback procedures, and operational readiness gates while owning full technical accountability for 12–18+ month migration roadmaps.
  • Data Pipeline Engineering & Transformation: Build production-grade data connectors to SAP (RFC, IDoc, BAPI, OData), Oracle (AQ, GoldenGate, APIs), and SQL/non-relational sources. Develop ETL/ELT pipelines with LLM-enabled transformation logic, multi-layer validation and reconciliation frameworks, and optimized throughput for datasets scaling from tens of millions to billions of records with built-in CDC and incremental loading.
  • Ontology Layer Development & Schema Automation: Construct semantic ontology layers translating raw ERP structures into business-consumable objects (Customer, Order, Invoice, Product, Vendor, Asset). Deploy automated schema mapping agents for source-to-target analysis and transformation logic generation. Build unified master data models with row/column-level security, cross-system lineage tracking, and AI-ready semantic structures.
  • Application & Workflow Delivery: Build operational dashboards, migration control centers, and agent-driven workflows for automated validation, exception handling, and anomaly detection using low-code platform tools. Generate TypeScript/Python SDKs for custom integrations and deliver real-time monitoring and self-service interfaces for migration progress, data quality KPIs, and compliance tracking.
  • Multi-System Consolidation & Master Data Management: Lead consolidation of 5–15+ fragmented ERP instances into standardized master data models. Resolve complex entity resolution challenges including customer matching, product harmonization, and chart of accounts unification. Establish golden record frameworks, data quality scorecards, survivorship rules, and data stewardship workflows for post-migration governance.
  • Client Engagement, Discovery & Modernization Advisory: Serve as primary technical advisor to C-suite and enterprise architecture stakeholders across all engagement phases. Deploy discovery agents to analyze legacy data estates, conduct assessment workshops, facilitate solution design sessions, and deliver executive briefings, go/no-go readiness assessments, and prioritized modernization roadmaps.
  • Knowledge Transfer, Enablement & IP Development: Build reusable migration accelerators, playbooks, and reference architectures that scale across engagements. Lead knowledge transfer to upskill client teams for post-migration ownership and collaborate with internal product and sales engineering teams to feed field insights back into platform development and delivery methodology.
  • Leadership & Executive Engagement: Operate autonomously in ambiguous, high-stakes client environments, driving outcomes with minimal oversight; translate deeply technical concepts into clear, business-level narratives for C-suite audiences through executive briefings and stakeholder communications; navigate organizational complexity, competing stakeholder priorities, and enterprise change management dynamics to maintain momentum across multi-workstream engagements; mentor junior engineers, cultivate technical capability within delivery teams, and foster a culture of knowledge sharing and continuous improvement.
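To give candidates a flavor of the entity-resolution and golden-record work described in the consolidation responsibility above, here is a minimal Python sketch. The record fields, source-system names, and the most-recent-non-null survivorship rule are all hypothetical, chosen only to illustrate the pattern of matching fragmented ERP rows and merging them into a single master record:

```python
from datetime import date

# Hypothetical customer rows from two fragmented ERP instances.
# Field names (source, updated, email, phone) are illustrative only.
records = [
    {"source": "SAP",    "name": "ACME Corp.", "updated": date(2023, 5, 1),
     "email": "ap@acme.com", "phone": None},
    {"source": "Oracle", "name": "Acme Corp",  "updated": date(2024, 2, 9),
     "email": None,          "phone": "+1-555-0100"},
]

def match_key(rec):
    """Naive entity-resolution key: case-folded name, punctuation stripped."""
    return "".join(ch for ch in rec["name"].lower() if ch.isalnum())

def golden_record(matched):
    """Survivorship rule: for each field, keep the most recently
    updated non-null value across the matched source records."""
    ordered = sorted(matched, key=lambda r: r["updated"], reverse=True)
    return {
        field: next((r[field] for r in ordered if r[field] is not None), None)
        for field in ("name", "email", "phone")
    }

# Group source rows by match key, then collapse each group to a master.
groups = {}
for rec in records:
    groups.setdefault(match_key(rec), []).append(rec)

masters = [golden_record(group) for group in groups.values()]
```

In a real engagement the match key would come from a dedicated matching engine or MDM platform (several are named under Preferred Qualifications) rather than simple name normalization.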
Required Qualifications
  • 7-10+ years of progressive experience in enterprise data engineering, data migration, or large-scale system integration roles within complex, multi-platform environments
  • 3-5+ years directly leading end-to-end data migration or multi-system consolidation programs for global enterprises and industry leaders, with full ownership of technical delivery and client outcomes
  • Demonstrated client-facing experience serving as a trusted technical advisor to C-level executives, enterprise architecture teams, and cross-functional business stakeholders
  • Proven industry depth in at least two of the following verticals: Healthcare, Financial Services, Manufacturing, Retail, Energy & Utilities, or Public Sector
  • Hands-on migration complexity: successfully delivered programs involving at least 3+ heterogeneous source systems, 100M+ records, complex master data harmonization, and multi-phase cutover execution
  • Advanced proficiency in Python and SQL with working experience in PySpark and TypeScript/JavaScript
  • Hands-on expertise with modern ETL/ELT and data integration platforms (Informatica, Talend, Matillion, Fivetran, AWS Glue, Azure Data Factory)
  • Proven ability to build scalable, version-controlled data pipelines with error handling, incremental loading, and Change Data Capture (CDC)
  • Strong working knowledge of at least one major cloud provider (AWS, Azure, or Google Cloud Platform), including core infrastructure, managed data services, and security configurations
  • Experience with enterprise data warehouse and lakehouse platforms (Snowflake, Databricks, BigQuery, Redshift, Synapse Analytics, Delta Lake)
  • Familiarity with knowledge graph construction, semantic modeling, ontology frameworks (RDF, OWL), or platforms such as Neo4j, AI Foundry, or Stardog
  • Practical experience integrating LLMs or AI-driven tooling into data transformation, schema inference, or mapping workflows (OpenAI, Anthropic, AWS Bedrock)
  • Experience with low-code/no-code application platforms for rapid solution delivery (AI Foundry, Mendix, OutSystems, PowerApps)
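The incremental-loading and CDC requirement above can be sketched with a simple high-watermark pattern. The `orders` table, `updated_at` column, and in-memory SQLite backend are illustrative stand-ins, not tied to any platform named in this posting:

```python
import sqlite3

# Watermark-based incremental extract, assuming the source table
# exposes a monotonically increasing `updated_at` column.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, updated_at INTEGER)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 100), (2, 150), (3, 200)])

def incremental_extract(conn, watermark):
    """Pull only rows changed since the last successful load,
    returning the batch together with the advanced watermark."""
    rows = conn.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ? "
        "ORDER BY updated_at", (watermark,)).fetchall()
    new_watermark = rows[-1][1] if rows else watermark
    return rows, new_watermark

batch1, wm = incremental_extract(src, 0)   # initial load: all 3 rows
src.execute("INSERT INTO orders VALUES (4, 250)")
batch2, wm = incremental_extract(src, wm)  # delta load: only the new row
```

Production CDC would typically read a transaction log (e.g. via GoldenGate or a streaming platform) rather than polling a timestamp column, but the watermark bookkeeping is the same idea.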
Preferred Qualifications
  • Certifications: AI Foundry (Data Engineer, Ontologist, or Application Developer), SAP Certified Technology Associate/Professional, cloud architecture or data engineering credentials (AWS Solutions Architect, Azure Data Engineer, Google Cloud Platform Professional Data Engineer), or data governance/MDM certifications (CDMP, DAMA)
  • Advanced Technical Skills: Deep, production-level knowledge of real-time event streaming platforms (Kafka, Kinesis, Event Hubs, Pub/Sub); demonstrated expertise with enterprise MDM platforms (Informatica MDM, SAP MDG, Profisee, Reltio); hands-on proficiency in API development, microservices architecture, and service mesh patterns; strong command of CI/CD pipelines and infrastructure-as-code tooling (Jenkins, GitLab CI, Azure DevOps, Terraform, ArgoCD); comprehensive understanding of data security, privacy, and regulatory compliance frameworks (GDPR, HIPAA, SOC 2, CCPA, FedRAMP)
  • Domain Knowledge: Working understanding of financial close processes, supply chain operations, revenue cycle management, or procurement workflows; experience with industry-specific data standards (EDI, HL7, FHIR, SWIFT, XBRL); familiarity with process mining tools (Celonis, UiPath Process Mining, Signavio) and data observability, cataloging, and lineage platforms (Monte Carlo, Collibra, Alation, Apache Atlas)

 

--

Best Regards,

 

Tarun Chaudhary

IT Recruiter

Whiz Global LLC

 


Address: 11555 Medlock Bridge Road

Suite 100, Johns Creek, GA 30097    

  • Dice Id: 90967474
  • Position Id: 8854050