Position: Machine Learning Engineer
We are seeking a skilled and forward-looking Machine Learning Engineer with expertise in Large Language Models (LLMs), Generative AI, and Agentic Architectures to join our growing R&D and Applied AI team.
Education, Experience, and Skills Required
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field.
3+ years of experience building and deploying ML systems.
Strong programming skills in Python, with experience in PyTorch, TensorFlow, Scikit-learn, and Hugging Face Transformers.
Hands-on experience with LLMs/SLMs (fine-tuning, prompt design, inference optimization).
Demonstrated expertise in at least two of the following:
- OpenAI GPT (chat, assistants, fine-tuning)
- Anthropic Claude (safety-first reasoning, summarization)
- Google Gemini (multimodal reasoning, enterprise APIs)
- Meta LLaMA (open-weight models for custom fine-tuning)
Familiarity with vector databases, embeddings, and RAG pipelines.
Proficiency in handling structured and unstructured data at scale.
Working knowledge of SQL and distributed frameworks such as Spark or Ray.
Strong understanding of the ML lifecycle from data prep and training to deployment and monitoring.
Key Responsibilities
- Core ML/LLM Engineering
Design, train, fine-tune, and deploy ML/LLM models for production.
Implement Retrieval-Augmented Generation (RAG) pipelines using vector databases.
Prototype and optimize multi-agent workflows using LangChain, LangGraph, and MCP.
Develop prompt engineering, optimization, and safety techniques for agentic LLM interactions.
Integrate memory, evidence packs, and explainability modules into agentic pipelines.
Work with multiple LLM ecosystems, including:
- OpenAI GPT (GPT-4, GPT-4o, fine-tuned GPTs)
- Anthropic Claude (Claude 2/3 for reasoning and safety-aligned workflows)
- Google Gemini (multimodal reasoning, advanced RAG integration)
- Meta LLaMA (fine-tuned/custom models for domain-specific tasks)
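For candidates unfamiliar with the RAG responsibility above, a minimal sketch of the retrieve-then-augment pattern follows. The bag-of-words "embeddings", the sample documents, and the `retrieve`/`build_prompt` helpers are illustrative stand-ins for a real embedding model and vector database, not a production design:

```python
# Minimal RAG retrieval sketch: toy word-count vectors stand in for
# real embeddings; a Python list stands in for a vector database.
from collections import Counter
import math

def embed(text):
    # Toy embedding: lowercase word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank stored documents by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Augment the prompt with retrieved context before calling an LLM.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed nightly in the batch pipeline.",
    "Model drift is monitored with weekly evaluation jobs.",
    "Refund requests are routed to the resolution agent.",
]
print(build_prompt("How are refund requests handled?", docs))
```

In practice the same shape applies with a real embedding model, a vector store for approximate nearest-neighbor search, and an LLM call on the assembled prompt.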
- Data & Infrastructure
Collaborate with Data Engineering to build and maintain real-time and batch data pipelines supporting ML/LLM workloads.
Conduct feature engineering, preprocessing, and embedding generation for structured and unstructured data.
Implement model monitoring, drift detection, and retraining pipelines.
Utilize cloud ML platforms such as AWS SageMaker and Databricks ML for experimentation and scaling.
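As one illustration of the drift-detection responsibility above, the Population Stability Index (PSI) is a common way to compare a serving distribution against the training distribution. The thresholds and synthetic data below are illustrative assumptions, not part of this role's stack:

```python
# Toy drift-detection sketch using the Population Stability Index (PSI):
# bin both samples on the reference range and compare bin proportions.
import math
import random

def psi(expected, actual, bins=10):
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            # Clamp out-of-range values into the edge bins.
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Small epsilon avoids log(0) for empty bins.
        return [(c + 1e-6) / (len(values) + 1e-6 * bins) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train_scores = [random.gauss(0.0, 1.0) for _ in range(5000)]
same_dist = [random.gauss(0.0, 1.0) for _ in range(5000)]
shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]

print(f"PSI (no drift): {psi(train_scores, same_dist):.3f}")
print(f"PSI (shifted):  {psi(train_scores, shifted):.3f}")
```

A conventional rule of thumb treats PSI above roughly 0.25 as significant drift worth triggering investigation or retraining.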
- Research & Applied Innovation
Explore and evaluate emerging LLM/SLM architectures and agent orchestration patterns.
Experiment with generative AI and multimodal models (text, images, structured financial data).
Collaborate with R&D to prototype autonomous resolution agents, anomaly detection models, and reasoning engines.
Translate research prototypes into production-ready components.
- Collaboration & Delivery
Work cross-functionally with R&D, Data Science, Product, and Engineering teams to deliver AI-driven business features.
Participate in architecture discussions, design reviews, and model evaluations.
Document experiments, processes, and results for effective knowledge sharing.
Mentor junior engineers and contribute to best practices in ML engineering.
Preferred Qualifications
Experience with agentic frameworks such as LangChain, LangGraph, MCP, or AutoGen.
Knowledge of AI safety, guardrails, and explainability.
Hands-on experience deploying ML/LLM solutions in AWS, Google Cloud Platform, or Azure.
Experience with MLOps practices: CI/CD, monitoring, and observability.
Background in anomaly detection, fraud/risk modeling, or behavioral analytics.
Contributions to open-source AI/ML projects or applied research publications.