Job Title: AI/ML Engineer
Company: Arch Systems
Location: Remote
Job Type: Full-Time
Job Summary
We are seeking a highly skilled AI/ML Engineer to design, build, and deploy scalable machine learning solutions. The ideal candidate will have strong experience in developing ML models, building data pipelines, and deploying production-grade AI systems using modern cloud and containerized environments.
________________________________________
Key Responsibilities
Design, develop, and deploy machine learning models using frameworks such as TensorFlow, PyTorch, and scikit-learn
Build and maintain scalable data pipelines using Apache Kafka, REST APIs, and SFTP for real-time and batch data ingestion
Perform data preprocessing, profiling, and synthetic data generation using tools like SDV, Faker, and pandas-profiling
Implement robust data quality and validation frameworks using Great Expectations and Deequ
Develop and deploy RESTful APIs for ML model serving using FastAPI and Spring Boot
Containerize applications using Docker and orchestrate deployments with Kubernetes and Oracle Kubernetes Engine (OKE)
Manage cloud infrastructure and deployments on Oracle Cloud Infrastructure (OCI)
Build CI/CD pipelines using GitHub Actions and Azure DevOps for automated testing and deployment
Develop interactive dashboards and visualizations using Dash, Plotly, and React
Ensure compliance with security standards such as NIST SP 800-53, FIPS 140-2, DISA STIGs, and RMF
Collaborate with cross-functional teams including data engineers, software developers, and stakeholders
________________________________________
Required Skills & Qualifications
Strong programming experience in Python and/or Java
Hands-on experience with ML libraries: TensorFlow, PyTorch, scikit-learn, PyOD
Experience with model explainability tools such as SHAP
Solid understanding of data preprocessing, feature engineering, and model evaluation
Experience building and consuming REST APIs
Familiarity with containerization (Docker) and orchestration (Kubernetes)
Experience with streaming/data ingestion tools like Apache Kafka
Knowledge of data validation and quality frameworks (Great Expectations, Deequ)
Experience with cloud platforms, preferably OCI
Familiarity with CI/CD tools such as GitHub Actions or Azure DevOps
Understanding of JSON and YAML for data/config handling
________________________________________
Preferred Qualifications
Experience with Oracle Kubernetes Engine (OKE)
Exposure to synthetic data generation techniques
Experience working in regulated environments (federal, financial, healthcare)
Knowledge of security compliance frameworks (NIST, RMF, FIPS, DISA STIGs)
Experience building data visualization dashboards using Plotly, Dash, or React
________________________________________
Nice to Have
Experience with anomaly detection techniques
Knowledge of MLOps best practices and model lifecycle management
Familiarity with scalable distributed systems