MLOps Engineer (Automation, DataZone, SageMaker, Bedrock, Lex, Textract) (Senior)
| Project Name | Shared Data Platform (SDP) |
| Client | State of Maryland |
| Agency | Maryland Benefits |
| Location | 100% on-site Mon-Fri, Linthicum Heights, MD 21090 |
| Interview Type | In-Person |
| Contract Duration | 1 year with 9 one-year renewal options |
| Tentative Start Date | 10/01/2025 |
| Deadline | 09/18/2025 |
Project Overview:
Innosoft is the prime contractor for MD Benefits (formerly MD THINK), supporting the management, design, development, testing, and implementation of this strategic Information Technology (IT) program. Maryland Benefits is seeking an agile development resource team with the required skill sets to build and/or maintain the Maryland Benefits infrastructure/platform, develop applications, maintain data repositories, produce reports and dashboards, and support activities related to network services and system operations.
The Shared Data Platform (SDP) is designed as a cloud-based, data-centric infrastructure to support scalable, flexible, and integrated data operations. It empowers self-service and accelerates data-driven decision-making across the enterprise. Key strategic goals include establishing a mature data infrastructure that balances analytics and business intelligence, enabling iterative learning and actionable insights, and fostering a "Data Center of Excellence" (DCoE) to govern and enhance data-driven processes. The SDP also prioritizes the delivery of trusted information and the streamlined onboarding of State Agencies to the Maryland Benefits platform through standardized procedures, ultimately driving operational efficiency and measurable business value. The "Data Platform - Automation" team plays a critical role in achieving these goals by supporting the "Data Platform - Engineering" team.
Duties/Responsibilities:
- Develop, maintain, and optimize the software development environment; own the infrastructure, build, integration, and software deployment processes. Create email accounts and provision system access. Proficiency in scripting languages such as Ruby and Python is expected.
- Design and automate reproducible ML pipelines using AWS SageMaker Pipelines and Step Functions.
- Automate training, tuning, evaluation, and deployment of ML models to production endpoints.
- Manage model versioning, drift detection, and performance monitoring using SageMaker Model Registry and CloudWatch.
- Integrate and operationalize Generative AI models using AWS Bedrock for text, image, and multi-modal use cases.
- Build and deploy Conversational AI solutions using Amazon Lex and integrate with enterprise apps.
- Automate intelligent document processing using Textract, integrating OCR with downstream ML models.
- Use AWS DataZone to manage, catalog, and govern ML datasets and assets across the organization.
- Enforce secure and governed access to training data, features, and model outputs through tagging, access policies, and data domains.
- Implement CI/CD pipelines for ML using CodePipeline, CodeBuild, or GitHub Actions.
- Automate infrastructure provisioning for ML experiments using Terraform, CDK, or CloudFormation.
- Ensure reproducibility, scalability, and cost optimization of ML infrastructure (e.g., GPU instances, endpoints).
- Build monitoring dashboards and alerts for model performance, latency, cost, and failures.
- Integrate CloudWatch, SageMaker Clarify, and Model Monitor for bias, explainability, and drift detection.
- Collaborate with data scientists to productionize notebooks and turn prototypes into resilient services.
Requirements
Education:
This position requires a bachelor's degree in the area of specialty.
General Experience:
- At least five (5) years of relevant experience.
- 5+ years of experience in Data Engineering, DevOps, or Cloud Infrastructure roles.
- Hands-on experience with AWS Glue, Lambda, EMR, S3, and Lake Formation.
- Expertise in scripting and automation using Python, Bash, or Shell.
- Proficiency in IaC tools like Terraform, CloudFormation, or CDK.
- Experience with Git, CI/CD pipelines, and monitoring tools like CloudWatch and X-Ray.
Specialized Experience:
- The proposed candidate must have at least three (3) years of experience in the supervision of system engineers, and demonstrated use of interactive, interpretative systems with on-line, real-time acquisition capabilities.
- Key qualifications include strong project management skills, leadership abilities, effective communication, and the capacity to identify and resolve issues while adapting to changing circumstances. Significant experience collaborating with cross-functional teams, product owners, and stakeholders is essential.
- Preferred qualifications include a technical background and experience in data and AI/ML projects. Functional experience with local or government projects in Data Services (MDM/R360/EDMS), Child Support, Integrated Eligibility, Child Welfare, Adult Protective Services, Juvenile Justice, or Health and Human Services is also preferred.
- AWS Certifications (e.g., Big Data Specialty, DevOps Engineer, or Solutions Architect).
- Experience with Apache Spark on EMR and Glue.
- Familiarity with data lake architecture, data mesh, and data governance frameworks.
- Exposure to event-driven architecture, serverless, or streaming data processing (e.g., Kinesis, Kafka).