Interview Process
• Assessment Test (45 minutes)
• Technical discussion
• Face-to-face interview in Richardson, TX
Project Overview
As a Generative AI Engineer, you’ll be a core member of this pod, building and integrating agentic systems powered by cutting-edge LLM and GenAI technologies. You’ll work closely with Tech Leads and Full Stack Engineers to turn AI capabilities into production-ready enterprise solutions.
What Does a Typical Day Look Like?
• Design, develop, and deploy agentic AI systems leveraging LLMs and modern AI frameworks.
• Integrate GenAI models into full-stack applications and internal workflows.
• Collaborate on prompt engineering, model fine-tuning, and evaluation of generative outputs.
• Build reusable components and services for multi-agent orchestration and task automation.
• Optimize AI inference pipelines for scalability, latency, and cost efficiency.
• Participate in architectural discussions, contributing to the pod’s technical roadmap.
Required Skills
• 8+ years of software engineering experience, with at least 3 years building AI/ML or GenAI systems in production
• Hands-on experience with Python for AI/ML model integration
• Experience with LLM frameworks (LangChain and LlamaIndex are a must)
• Exposure to agentic frameworks (LangGraph and Google ADK are a must)
• Understanding of Git, CI/CD, DevOps, and production-grade GenAI deployment practices.
• Familiarity with Google Cloud Platform (GCP), e.g. Vertex AI, Cloud Run, and GKE