Position: AI Optimization Engineer
Location: Jersey City, NJ (Onsite)
Employment type: Contract
We are seeking an experienced AI Optimization Engineer to support large-scale AI/ML and Generative AI workloads in an enterprise environment. This role focuses on optimizing, deploying, and managing machine learning models and large language models (LLMs) on GPU-accelerated HPC infrastructure. The ideal candidate will have strong experience with Python-based machine learning, deep learning frameworks, model optimization techniques, and scalable AI infrastructure.
The engineer will work closely with AI, infrastructure, and DevOps teams to design efficient model training and inference pipelines, implement SLURM-based workload orchestration, and deploy containerized ML solutions in production environments. Responsibilities include optimizing model performance through techniques such as pruning, quantization, and knowledge distillation; managing inference workflows with Triton Inference Server; and monitoring system performance with Prometheus and Grafana.
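For candidates unfamiliar with the optimization techniques named above, here is a minimal, illustrative sketch of one of them: post-training dynamic quantization in PyTorch. The model, layer sizes, and input shape are hypothetical stand-ins, not part of this role's actual codebase.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a real network.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# and activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement for CPU inference.
with torch.no_grad():
    out = quantized(torch.randn(1, 512))
    print(out.shape)  # torch.Size([1, 10])
```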
This role requires hands-on experience with HPC environments, GPU clusters, containerization technologies, and Linux system administration, along with strong knowledge of machine learning algorithms, deep learning architectures, and modern AI development tools. Experience with cloud platforms, vector embeddings, and enterprise-scale AI deployments is highly preferred.
Required Skills and Experience
- Strong experience in Python-based machine learning and deep learning, including NumPy, scikit-learn, TensorFlow, PyTorch, and Keras.
- Hands-on knowledge of supervised and unsupervised learning, neural networks, transformer-based models, NLP, CNNs, and Generative AI concepts.
- Expertise in AI infrastructure and optimization: HPC environments, GPU clusters, SLURM workload management, Triton Inference Server, and TensorRT-LLM (TRT-LLM).
- Model optimization techniques such as pruning, quantization, and knowledge distillation for scalable LLM deployment.
- DevOps and deployment tools: Docker, Kubernetes, MLflow, Terraform, Jenkins, GitHub, and Hugging Face.
- Performance monitoring with Prometheus and Grafana (see the metrics sketch below).
- Flask API development and Linux administration (RHEL/CentOS).
- Container runtimes and plugins such as Enroot, Pyxis, and Podman.
- Data analysis and visualization with Plotly, Seaborn, and Matplotlib.
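As an illustration of the monitoring stack above, the following is a minimal sketch of exporting custom metrics with Python's prometheus_client library for Prometheus to scrape; the metric names and values are hypothetical, and Grafana would chart them from Prometheus.

```python
import random
import time

from prometheus_client import Gauge, Histogram, start_http_server

# Hypothetical metric names for this sketch.
INFERENCE_LATENCY = Histogram(
    "inference_latency_seconds", "Latency of a single inference call"
)
GPU_UTIL = Gauge("gpu_utilization_percent", "Reported GPU utilization")

if __name__ == "__main__":
    # Expose a /metrics endpoint on port 9100 for Prometheus to scrape.
    start_http_server(9100)
    while True:
        with INFERENCE_LATENCY.time():
            time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
        GPU_UTIL.set(random.uniform(40, 95))        # stand-in for an NVML query
```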
Core Responsibilities
- Design and optimize AI/ML workloads on GPU-based HPC clusters.
- Deploy and manage large language models (LLMs) in scalable production environments.
- Implement model optimization techniques including pruning, quantization, and knowledge distillation.
- Develop and manage automated job scheduling using SLURM with REST and Flask APIs (a minimal Flask-to-SLURM sketch follows this list).
- Deploy ML models using containerized microservices architectures.
- Monitor system performance using Prometheus and Grafana.
- Optimize inference pipelines using Triton Inference Server and TensorRT-LLM (a minimal Triton client sketch also follows this list).
- Conduct exploratory data analysis and model performance evaluation.
- Collaborate with infrastructure and ML teams to improve scalability and efficiency.
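As referenced above, here is a minimal sketch of fronting SLURM job submission with a Flask REST API. The endpoint paths and the assumption that batch-script paths arrive in the request body are hypothetical; a production service would authenticate callers and validate scripts (SLURM's own slurmrestd is another common route).

```python
import subprocess

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/jobs", methods=["POST"])
def submit_job():
    # Hypothetical contract: the caller sends {"script": "/path/to/job.sh"}.
    # A real service would validate this against an allow-list first.
    script = request.get_json()["script"]
    result = subprocess.run(
        ["sbatch", script], capture_output=True, text=True, check=True
    )
    # sbatch prints e.g. "Submitted batch job 12345"; the job ID is the last token.
    job_id = result.stdout.strip().split()[-1]
    return jsonify({"job_id": job_id})

@app.route("/jobs/<job_id>", methods=["GET"])
def job_status(job_id):
    # squeue reports state for queued/running jobs; empty output means
    # the job has left the queue (completed, failed, or unknown).
    result = subprocess.run(
        ["squeue", "-j", job_id, "-h", "-o", "%T"], capture_output=True, text=True
    )
    return jsonify({"job_id": job_id, "state": result.stdout.strip() or "UNKNOWN"})

if __name__ == "__main__":
    app.run(port=8080)
```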
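And a minimal Triton Inference Server client sketch using the official tritonclient HTTP API. The server address, model name, tensor names, and shapes are hypothetical; in practice they must match the deployed model's config.pbtxt.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# "my_model", "INPUT0", and "OUTPUT0" are hypothetical; they must match
# the names and shapes declared in the model's config.pbtxt.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```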