MLOps/LLMOps Engineer (LLM, DevOps, Cloud SME)

Overview

Remote
Depends on Experience
Contract - Independent
Contract - 6 Month(s)

Skills

Cloud Computing
Enterprise Architecture
Docker
Machine Learning (ML)
Large Language Models (LLMs)
Generative Artificial Intelligence (AI)
GPU
Machine Learning Operations (ML Ops)
Continuous Integration
Google Cloud Platform
Amazon Web Services
Continuous Delivery
Microsoft Azure
Terraform
Python
Kubernetes
Workflow
DevOps

Job Details

NOTE 1: The client does not work with visa candidates at this time.

NOTE 2: Payment terms are NET35.

MLOps/LLMOps Engineer (LLM, DevOps, Cloud SME)

San Francisco Bay Area, CA

Duration: Six months, with possible extension to 12 months

Candidates must be located in the Greater Bay Area or elsewhere in California.

Operationalizing Large Language Models requires specialized expertise beyond traditional MLOps practices. LLMs present distinct operational challenges, including significantly larger computational requirements, complex data pipelines, specialized infrastructure needs, and demanding performance optimization. This role ensures GenAI solutions can scale effectively from proof of concept to enterprise-wide deployment in a utility environment.

  • Ensures GenAI solutions move successfully from prototype to production with proper operational support
  • Establishes specialized monitoring for model performance, inference latency, and data quality
  • Enables efficient scaling of LLM solutions across multiple business units
  • Creates high-performance deployment architectures that balance speed, cost, and reliability
  • Develops operational data pipelines to continuously improve model performance with new utility-specific data

Key Responsibilities:

  • Design and implement LLM-specific deployment architectures with Docker containers for both batch and real-time inference
  • Configure GPU infrastructure on-premises or in the cloud with appropriate CI/CD pipelines for model updates
  • Build comprehensive monitoring and observability systems with appropriate logging, metrics, and alerts
  • Implement load balancing and scaling solutions for LLM inference, including model sharding if necessary
  • Create automated workflows for model retraining, versioning, and deployment
  • Optimize infrastructure costs through intelligent resource allocation, spot instances, and efficient compute strategies
  • Collaborate with the client's Cyber team on implementing appropriate security controls for GenAI applications
  • Develop automated testing frameworks to ensure consistent output quality across model updates

Expected Skillset:

  • DevOps + ML: Expertise in Kubernetes, Docker, CI/CD tools, and MLflow or similar platforms
  • Cloud & Infrastructure: Understanding of GPU instance options, cloud services (AWS/Azure/Google Cloud Platform), and optimization techniques
  • Automation: Proficiency in Python, Bash, and infrastructure-as-code tools like Terraform or Ansible
  • LLM-Specific Frameworks: Experience with tools such as TensorBoard, MLflow, or equivalent for scaling LLMs
  • Performance Optimization: Knowledge of techniques to monitor and improve inference speed, throughput, and cost
  • Collaboration: Ability to work effectively across technical teams while adhering to enterprise architecture standards