Role: Gen AI Engineer - Forward Deployed
Location: New York, NY (Onsite)
Job Type: Full Time
About the Role
We’re seeking a highly skilled and motivated Forward Deployed Engineer (FDE) to work at the cutting edge of Generative AI deployments. In this role, you’ll partner directly with customers to design, build, and deploy intelligent applications using Python, LangChain/LangGraph, and large language models. You’ll bridge engineering excellence with customer empathy to solve high-impact, real-world problems.
This is a hands-on engineering role embedded within customer projects—ideal for engineers who enjoy ownership, love solving hard problems, and thrive in dynamic, technical environments.
Key Responsibilities
- Lead the end-to-end deployment of GenAI applications for customers—from discovery to delivery.
- Architect and implement robust, scalable solutions using Python, LangChain/LangGraph, and LLM frameworks.
- Act as a trusted technical advisor to customers, understanding their needs and crafting tailored AI solutions.
- Collaborate closely with product, ML, and engineering teams to influence roadmap and core platform capabilities.
- Write clean, maintainable code and build reusable modules to streamline future deployments.
- Operate across cloud platforms (AWS, Azure, Google Cloud Platform) to ensure secure, performant infrastructure.
- Continuously improve deployment tools, pipelines, and methodologies to reduce time-to-value.
Required Qualifications
- 5–8+ years of experience in software engineering or solutions engineering, ideally in a customer-facing capacity.
- Proven expertise in Python, LangChain, LangGraph, and SQL.
- Deep experience with engineering architecture, including APIs, microservices, and event-driven systems.
- Demonstrated success in designing and deploying GenAI applications into production environments.
- Strong proficiency with cloud services such as AWS, Google Cloud Platform, and/or Azure.
- Excellent communication skills, with the ability to translate technical complexity to customer-facing narratives.
- Comfortable working autonomously and managing multiple deployment tracks in parallel.
Preferred Qualifications
- Familiarity with CI/CD, infrastructure-as-code (Terraform, Pulumi), and container orchestration (Docker, Kubernetes).
- Background in LLM fine-tuning, retrieval-augmented generation (RAG), or AI/ML operations.
- Previous experience in a startup, consulting, or fast-paced customer-obsessed environment.
Education
- Bachelor's, Master's, or Ph.D. in Computer Science, Engineering, or a related technical field.
Values:
- We are client first: We put our clients at the center of everything we do, because their success is the ultimate measure of our value.
- We work at start-up speed: We move fast, stay agile, and favor action, because momentum is the foundation of perfection.
- We are AI forward: We help our clients build the future of AI and implement it in our own roles and workflows to amplify productivity.
About Tanisha Systems, Inc.
Tanisha Systems, founded in 2002 in Massachusetts, is a leading provider of Custom Application Development and end-to-end IT Services to clients globally. We use a client-centric engagement model that combines local on-site and off-site resources with the cost, global expertise, and quality advantages of off-shore operations. We deliver Custom Application Development, Application Modernization, Business Process Outsourcing, and Professional IT Services from office locations in * and *.
Tanisha Systems services clients in Government, Banking & Financial Markets, Insurance, Healthcare, Retail & Consumer Goods, Energy & Utilities, Life Sciences, Telecom, Manufacturing and Transportation Industries around the globe. Our engagement model provides a flexible operational environment that empowers our clients with the right levels of control.