Job Description: Data Engineer – Enterprise AI & ERP Modernization
Location: San Jose, CA or New York City, NY (Hybrid, 3 days onsite)
Employment Type: Full-time
Travel: Required (Flexible location)
Why This Role Matters
This role is critical in enabling FDEs (Forward Deployment Engineers) to deliver AI-driven ERP modernization rapidly and safely. You will directly influence:
Migration acceleration
Operational continuity
Data-driven decision-making
Enterprise-scale AI enablement
You will help build the foundation that powers enterprise AI, multi-agent automation, and cross-system modernization across highly complex landscapes.
Role Summary
As a Data Engineer, you will partner closely with FDEs to drive ERP modernization and AI-powered transformation for clients. Your focus will be on data harmonization, cross-system integration, pipeline development, and ensuring that AI systems consume clean, consistent, and actionable data.
You will unify ERP, CRM, and financial systems, accelerate migration efforts, elevate data quality, and enable multi-agent AI workflows that automate enterprise operations and enhance decision-making.
This role demands a deeper understanding of ERP-centric data than a traditional ML data engineering role, while still requiring modern data engineering and automation skills.
Candidates strong in either SAP-specific data or modern data engineering, and willing to learn the other, are encouraged to apply.
Key Responsibilities
Data Engineering & Systems Integration
Data Harmonization: Reconcile, integrate, and standardize data across ERP, CRM, finance, and analytics systems.
Pipeline Architecture: Design and build ETL/ELT pipelines that unify enterprise systems for AI and analytics use cases.
Data Transformation & Validation: Implement logic, code, and workflows to cleanse, transform, validate, and prepare datasets.
Schema Interpretation: Analyze complex enterprise schemas and map relationships across multiple platforms.
Pipeline Reliability: Monitor, troubleshoot, and optimize pipelines for high-quality, consistent delivery.
AI & Enterprise Automation Enablement
Prepare structured data for multi-agent AI platforms, orchestration engines, and operational intelligence workflows.
Support FDEs and architects with data foundations needed for modernization and automation programs.
Collaboration & Execution
Work directly with FDEs, Solution Architects, and client teams to solve enterprise data and integration challenges.
Translate unclear or evolving requirements into clear, structured workflows and execution plans.
Operate effectively in ambiguous, fast-moving environments.
Required Skills & Experience
Strong SQL expertise — ability to write complex, multi-schema queries.
Python (or equivalent scripting) for data processing, transformations, and automation.
Data modeling fundamentals including normalized/denormalized structures, schema mapping, and relational modeling.
Enterprise system familiarity — exposure to ERP (SAP S/4HANA), CRM (Salesforce), finance systems, or cloud data warehouses.
ETL / Data Pipeline experience — building, maintaining, or optimizing workflows and data flows.
Adaptability — comfort working with evolving requirements, fragmented systems, and real-world enterprise data.
AI/Analytics exposure (preferred) — supporting ML pipelines or AI-enabled workflows.
Enterprise data complexity handling — navigating inconsistent schemas, duplication, legacy objects, and cross-system data issues.
Behavioral & Problem-Solving Expectations
Ability to work effectively in a startup environment — proactive, resourceful, and adaptable.
Balance engineering depth with practical execution and communication.
Communicate clearly and concisely, with the right level of technical depth for the audience.
Comfortable operating with incomplete requirements and evolving constraints.
Strong reasoning skills, especially under ambiguity.
Ability to use AI tools effectively to accelerate engineering and delivery.