Job Title: AI/ML Engineer
Location: Remote
Duration: 6 Months
Duties/Responsibilities:
Design and implement MCP (Model Context Protocol) servers that expose internal data/services to LLMs (see the illustrative sketch after this list)
Build secure, structured endpoints for model context access
Integrate MCP services with model inference APIs
Implement and operate a vector search engine
Deploy models into production (cloud, on-premises, or hybrid) and integrate with
upstream/downstream systems (EHR modules, APIs, microservices, dashboards)
Monitor model performance in live settings (accuracy, drift, bias, fairness, reproducibility),
and iterate on models to maintain or improve reliability and relevance
Build/maintain machine learning pipelines and work with the data platform team to
connect AI workloads to core datasets
Ensure security, permissions and monitoring of AI systems
Implement cost monitoring and usage tracking for AI workloads across internal teams
Partner with cross-functional stakeholders (data scientists, data engineers, SDEs) to
deploy these capabilities
Stay informed about emerging AI/ML techniques, tools and best practices (including AI
ethics, bias mitigation, interpretability), and proactively bring forward improvements or
innovation
Contribute to a culture of continuous improvement, knowledge-sharing and mentoring of
junior team members
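
As a rough illustration of the MCP-related responsibilities above, the sketch below shows a minimal server that exposes one internal lookup as a tool an LLM client can call. It assumes the MCP Python SDK's FastMCP interface; the server name, the fetch_record tool, and the in-memory record store are hypothetical placeholders for real internal data/services.

```python
# Minimal sketch of an MCP server exposing an internal lookup to an LLM client.
# Assumes the MCP Python SDK; all names below are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-data")

# Hypothetical stand-in for an internal datastore or service call.
_RECORDS = {"r-001": "Example internal record payload"}

@mcp.tool()
def fetch_record(record_id: str) -> str:
    """Return the contents of an internal record by ID."""
    return _RECORDS.get(record_id, "record not found")

if __name__ == "__main__":
    # Runs over stdio so an MCP-capable LLM client can discover and call fetch_record.
    mcp.run()
```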
Required Skills:
Proficiency in Python (or an analogous language) and strong familiarity with ML
frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn)
Experience building APIs, services or microservices
Knowledge of vector databases or search systems
Experience with LLM application patterns: RAG, embeddings, prompt orchestration, and
tool calling (see the retrieval sketch after this list).
Experience with basic MLOps practices: model deployment, monitoring, pipeline
automation, CI/CD
Demonstrated ability to deploy models into production or near-production environments
(cloud environments such as AWS, Azure, or Google Cloud Platform, or containerized/microservices
infrastructure). Google Cloud Platform experience is strongly preferred
A collaborative mindset, dependable execution, drive to reflect and improve, and humility
to ask questions and learn.
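
As a sketch of the RAG/embedding pattern referenced above, the example below ranks a small in-memory corpus against a query by cosine similarity. TF-IDF vectors from scikit-learn stand in for learned embeddings, and the Python list stands in for a vector database; the document texts and names are illustrative only.

```python
# Illustrative retrieval step for a RAG pipeline.
# TF-IDF stands in for learned embeddings; a production system would use an
# embedding model and a vector database instead of an in-memory matrix.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Intake workflow for new behavioral health patients.",
    "How to export EHR appointment data to the reporting dashboard.",
    "Model monitoring checklist: accuracy, drift, bias, reproducibility.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # "index" the corpus

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

# Retrieved text would then be inserted into the LLM prompt as grounding context.
print(retrieve("how do I monitor model drift?"))
```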
Education & Experience:
Bachelor's degree (or equivalent) in Computer Science, Data Science, Statistics,
Engineering or a related field
5+ years of platform/infrastructure engineering experience, with demonstrable recent
work on LLM-based systems
Preferred:
Experience in healthcare, behavioral health, EHR systems or regulated industries
Familiarity with MLOps practices: CI/CD for models, model monitoring, drift
detection, model governance.
Experience with NLP (clinical text) or computer vision (imaging) tasks
Familiarity with cloud-native services for ML (e.g., AWS SageMaker, Azure ML,
Google Cloud AI Platform) and related infrastructure (Docker, Kubernetes)
Awareness of AI ethics, bias/fairness issues, model interpretability techniques
Experience mentoring others or leading small technical initiatives

This job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.