What We Do
You'll be responsible for designing, building, and deploying applied machine learning solutions, including deep learning transformer-based models for Natural Language Processing and Computer Vision, as well as traditional shallow-learning models. The role focuses on developing scalable ML systems that deliver measurable business outcomes and drive value across the organization.
WHAT YOU'LL DO:
Design, build, fine-tune, and deploy state-of-the-art machine learning and large language models at scale, supporting millions of daily predictions with a strong focus on accuracy, latency, compute efficiency, and cost optimization.
Develop end-to-end ML and LLM pipelines, covering data ingestion, scripting, automated workflows for OCR, model training, evaluation, and post-processing in production environments.
Build and operationalize LLM fine-tuning pipelines, applying a range of model adaptation techniques including full fine-tuning, LoRA (Low-Rank Adaptation), prompt-based methods, and Direct Preference Optimization (DPO).
Design and experiment with novel LLM architectures, balancing model size, computational efficiency, memory constraints, and deployment requirements.
Optimize LLMs for production deployment through model quantization, compression, and teacher-student (distillation) architectures, enabling efficient inference in resource-constrained environments.
Architect and deploy Retrieval-Augmented Generation (RAG) systems, leveraging vector databases, embedding services, semantic search, document chunking, indexing, and retrieval mechanisms using frameworks such as LangChain, LlamaIndex, and commercial RAG platforms within Google Cloud Platform and Databricks.
Innovate in ML operations and evaluation, including automated ground-truth generation, continuous post-evaluation pipelines, and iterative feedback loops to systematically improve model performance over time.
Design and implement CI/CD pipelines for machine learning systems, ensuring high availability, reliability, low latency, and rapid iteration from experimentation to production.
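As a concrete illustration of the LoRA fine-tuning work described above, here is a minimal from-scratch sketch of the core idea: a frozen weight matrix plus a trainable low-rank update. This is illustrative only; a production pipeline would apply adapters of this shape to transformer weight matrices via a library such as Hugging Face PEFT, and all names and dimensions below are assumptions, not part of this role's actual stack.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass with frozen weight W plus a low-rank update B @ A.

    x: (batch, d_in); W: (d_out, d_in), frozen;
    A: (r, d_in) and B: (d_out, r), trainable, scaled by alpha / r.
    """
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B, alpha, r)

# With B zero-initialized, the adapted model starts identical to the base model.
assert np.allclose(y, x @ W.T)

# Compute efficiency: trainable adapter parameters vs. full fine-tuning.
full_params = W.size
lora_params = A.size + B.size
print(f"trainable: {lora_params} vs full: {full_params}")
```

The zero initialization of `B` is the standard LoRA trick: training begins from the base model's behavior, and only the small `A`/`B` matrices (here well under 5% of the full weight) receive gradient updates.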
WHAT YOU'LL BRING:
5+ years of experience in machine learning engineering, with a proven track record of deploying and operating ML and NLP/LLM systems in production at scale.
Strong hands-on experience building full-stack ML systems, from data ingestion and automation to training, evaluation, deployment, and monitoring.
Deep expertise in LLM fine-tuning and adaptation techniques, including full fine-tuning, LoRA, prompt-based optimization, and preference-based methods such as DPO.
Practical experience designing and optimizing LLM architectures, with an emphasis on compute efficiency, memory usage, and real-world deployment constraints.
Demonstrated proficiency in model inference optimization, including quantization, compression, and distillation techniques for high-throughput, cost-efficient production systems.
Solid understanding and hands-on experience with RAG architectures, vector stores, embeddings, semantic search, chunking strategies, and retrieval workflows integrated with large language models.
Experience using modern LLM orchestration and RAG frameworks such as LangChain, LlamaIndex, and managed AI platforms within cloud ecosystems like Google Cloud Platform and Databricks.
Strong background in ML evaluation and MLOps, including automated evaluation pipelines, CI/CD for ML, and continuous improvement of deployed models.
Proficiency in Python and ML/AI development frameworks, with the ability to work in fast-paced, experimental environments and production systems simultaneously.
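To make the RAG workflow in the qualifications above concrete, here is a minimal sketch of the retrieval step: chunk documents, embed the chunks, and rank them by cosine similarity to a query. It uses a toy bag-of-words "embedding" so it runs standalone; a real system would use a learned embedding model and a vector database, and helpers like `embed` and `retrieve` here are illustrative names, not any specific framework's API.

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split text into overlapping word-window chunks (50% overlap)."""
    words = text.split()
    step = max(size // 2, 1)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def embed(text):
    """Toy bag-of-words 'embedding'; stands in for a real embedding service."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the top-k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = ("LoRA adapts large language models with low-rank updates. "
        "Quantization reduces model size for efficient inference. "
        "Vector databases store embeddings for semantic search.")
chunks = chunk(docs, size=8)
top = retrieve("how does semantic search use embeddings", chunks, k=1)
print(top[0])
```

The overlap in `chunk` is a common design choice so that a sentence split across a chunk boundary still appears whole in at least one chunk; in practice chunk size, overlap, and the embedding model are tuned per corpus.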
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
- Dice Id: cxbcsi
- Position Id: Job44387