Job Details
Job Title - Senior Development Operations Engineer (Data Science)
Job Location - Normal, Illinois, United States of America, 61761-8000
Job Description -
Job Title: Staff AI/ML Engineer & Data Scientist
Schedule: 9:00 AM to 6:00 PM Central Time, Monday through Friday (1-hour non-billable lunch)
This role will be mostly remote, but plan on approximately one trip (expensable) to Normal, IL each month for the first 3 months. Each trip will involve roughly 3 days working on-site in Normal.
Role Summary
We are seeking a Staff AI/ML Engineer & Data Scientist with deep expertise in traditional machine learning and deep learning, along with strong MLOps experience, to lead the design, deployment, and maintenance of production-grade ML systems. You will architect robust ML pipelines, apply advanced statistical techniques, and ensure models are accurate, explainable, and scalable. While the primary focus will be on traditional supervised, unsupervised, and time-series modeling, light experience with retrieval-augmented generation (RAG) is a plus. The individual also needs DevOps experience for setting up databases and CI/CD (end-to-end Databricks experience is a plus).
Most Important Skills/Responsibilities:
- Strong Databricks MLOps, Databricks AI/ML, and AWS MLOps and software engineering experience.
- Traditional ML Expertise: Apply algorithms such as regression, tree-based models, SVMs, clustering, and forecasting to solve high-impact problems, including feature engineering and hyperparameter tuning (anomaly prediction). Note that the vast majority of data generated today is unlabeled.
- End-to-End Model Development: Lead the full lifecycle from data preprocessing and feature engineering to training, validation, deployment, and monitoring.
- Statistical Analysis: Apply hypothesis testing, Bayesian methods, and model interpretability techniques to ensure reliable insights.
- DevOps Experience: Database setup, Databricks, AWS, CI/CD, DevOps/MLOps, vector databases (VectorDBs), graph databases (GraphDB).
- A master's degree or PhD is mandatory.
- This role requires intermittent visits to the Normal, IL site to build an initial understanding of the scope.
- This role requires analysis of manufacturing, sensor, and PLC data; any prior experience with such data is a plus.
Key Responsibilities
- ML Technical Leadership: Define ML architecture, best practices, and performance standards for enterprise-scale solutions.
- End-to-End Model Development: Lead the full lifecycle from data preprocessing and feature engineering to training, validation, deployment, and monitoring.
- Traditional ML Expertise: Apply algorithms such as regression, tree-based models, SVMs, clustering, and forecasting to solve high-impact problems, including feature engineering and hyperparameter tuning.
- Programming & Integration: Build scalable ML pipelines and APIs in Python (primary) and Golang (for backend services).
- MLOps Implementation: Design and manage CI/CD pipelines for ML, including automated retraining, model versioning, monitoring, and rollback strategies.
- Statistical Analysis: Apply hypothesis testing, Bayesian methods, and model interpretability techniques to ensure reliable insights.
- Cross-Functional Collaboration: Partner with engineering, analytics, and product teams to align technical solutions with business objectives.
- DevOps Experience: Database setup, Databricks, AWS, CI/CD, DevOps/MLOps, vector databases (VectorDBs), graph databases (GraphDB).
Qualifications
Must Have:
- 8+ years of experience in applied ML or data science, including 3+ years in a senior or staff-level role, plus DevOps experience.
- Expert proficiency in Python for ML development (Good to have: Golang for backend integration)
- Proven experience deploying traditional ML models to production with measurable business impact.
- Strong knowledge of ML frameworks (Scikit-learn, XGBoost, LightGBM) and data libraries (Pandas, NumPy, Statsmodels).
- Hands-on MLOps experience with tools like MLflow (preferred), Databricks (preferred), Kubeflow, Vertex AI Pipelines, or AWS SageMaker Pipelines.
- Experience with model monitoring, drift detection, and automated retraining strategies.
- Strong database skills (SQL and NoSQL).
- A master's degree or PhD is mandatory.
Preferred:
- Exposure to retrieval-augmented generation (RAG) pipelines and vector databases.
- Time-series analysis and anomaly detection experience.
- Cloud deployment expertise (AWS, Azure, Google Cloud Platform).
- Familiarity with distributed computing frameworks (Spark, Ray).
Soft Skills
- Strategic problem-solver with the ability to align AI solutions to business goals.
- Excellent communicator across technical and non-technical stakeholders.