Job Details
Title: GenAI Architect (Eval Framework)
Location: Fremont, CA, USA
Experience Requirements: 10+ years of experience in software engineering, machine learning, data science, or artificial intelligence.
Key Skills: LLMs (Large Language Models), including fine-tuning, LLMOps, function calling, and Retrieval-Augmented Generation (RAG); PyTorch, TensorFlow, Transformers/Hugging Face, and NumPy.
Skill Requirements:
Solid experience with Retrieval-Augmented Generation (RAG), fine-tuning, and multi-agent orchestration.
Experienced in developing GenAI applications leveraging multi-agent frameworks and/or graph-based GenAI approaches (e.g., GraphRAG).
Proficient in using common NLP and/or ML Python frameworks, such as PyTorch, TensorFlow, Transformers/Hugging Face, and NumPy.
LLM skills, including fine-tuning, LLMOps, function calling, and Retrieval-Augmented Generation (RAG).
Familiarity with data governance, AI ethics, and responsible AI practices.
Strong proficiency in Python.
Experience following software best practices in team settings, including version control (Git), CI/CD, documentation, and unit testing.
Exposure to Microsoft Azure or a similar cloud computing ecosystem.
Ability to design scalable solutions and optimize performance for business impact.
Strong problem-solving skills and the ability to work in a fast-paced, dynamic environment.
Familiarity with vector databases, RAG pipelines, and agentic frameworks (a brief retrieval sketch follows this list).
Excellent communication and documentation skills.
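
For illustration only, the sketch below shows the kind of retrieval step a RAG pipeline performs. It is a minimal Python example, not part of this posting's requirements: embed() is a hypothetical stand-in for a real embedding model (for example, a Hugging Face encoder), and the NumPy cosine-similarity ranking stands in for a vector-database lookup.

    import numpy as np

    def embed(texts):
        # Hypothetical embedding function: returns random placeholder vectors
        # in place of a real embedding-model call.
        rng = np.random.default_rng(0)
        return rng.normal(size=(len(texts), 384))

    def retrieve(query, documents, doc_vectors, top_k=3):
        # Rank documents by cosine similarity to the query vector,
        # mimicking a vector-database lookup in a RAG pipeline.
        q = embed([query])[0]
        q = q / np.linalg.norm(q)
        d = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
        scores = d @ q
        best = np.argsort(scores)[::-1][:top_k]
        return [(documents[i], float(scores[i])) for i in best]

    documents = ["Azure AI service limits", "Fine-tuning a chat model", "CI/CD for ML pipelines"]
    doc_vectors = embed(documents)
    top_passages = retrieve("How do I deploy a fine-tuned model?", documents, doc_vectors, top_k=2)
    # The retrieved passages would then be injected into the LLM prompt as context.
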
Preferred Qualifications:
Advanced GenAI Expertise: Experience developing applications using multi-agent frameworks and/or graph-based approaches such as GraphRAG and LangGraph.
Cloud & MLOps Proficiency: Hands-on experience with Azure AI services, containerization (Docker/Kubernetes), and ML pipelines.
Key Responsibilities:
Design and develop GenAI-based applications using advanced techniques such as Retrieval-Augmented Generation (RAG), text-to-SQL, function calling, and agentic architectures.
Implement multi-agent frameworks and explore graph-based GenAI approaches (e.g., GraphRAG) for complex problem-solving.
Define and enforce evaluation standards and best practices for GenAI agents, RAG pipelines, and multi-agent orchestration (a minimal evaluation sketch follows this list).
Conduct performance evaluations to optimize ML and GenAI models for accuracy, scalability, and business impact.
Engage with business stakeholders to understand requirements, gather feedback, and tailor solutions to meet strategic goals.
Translate business needs into technical specifications and actionable plans.
Ensure adherence to software engineering best practices, including version control (Git), CI/CD pipelines, documentation, and unit testing.
Stay current with emerging GenAI evaluation tools, frameworks, and methodologies.
Provide technical leadership and mentor team members on best practices and emerging GenAI technologies.
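
As a hedged illustration of the evaluation responsibilities above, the sketch below scores a RAG pipeline against a small labeled set. It is one possible shape for such a harness, not a prescribed framework: the pipeline callable, the EvalCase fields, and the two metrics (retrieval hit rate and keyword-based answer accuracy) are all assumptions made for this example.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class EvalCase:
        question: str
        expected_keywords: List[str]   # tokens the answer should contain
        relevant_doc_id: str           # document the retriever should surface

    def evaluate_rag(pipeline: Callable[[str], dict], cases: List[EvalCase]) -> dict:
        # `pipeline` is a hypothetical callable that returns
        # {"answer": str, "retrieved_ids": list[str]} for a question.
        # Reports retrieval hit rate and keyword-based answer accuracy.
        hits, correct = 0, 0
        for case in cases:
            result = pipeline(case.question)
            if case.relevant_doc_id in result["retrieved_ids"]:
                hits += 1
            answer = result["answer"].lower()
            if all(k.lower() in answer for k in case.expected_keywords):
                correct += 1
        n = len(cases)
        return {"retrieval_hit_rate": hits / n, "answer_accuracy": correct / n}

    # Example usage (with a hypothetical pipeline function):
    # metrics = evaluate_rag(my_rag_pipeline,
    #                        [EvalCase("Which regions host the service?",
    #                                  ["west", "europe"], "doc-regions")])

In practice, such a harness would run in CI/CD so that evaluation standards are enforced automatically as models, prompts, and retrieval configurations change.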