Overview
Remote
On Site
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 30 days
Skills
Experienced with neural nets and the associated tooling (TensorFlow, Keras, PyTorch), with basic knowledge of MLOps principles. Alternatively, Data Scientists familiar with MIP and CP-SAT approaches to schedule optimization. Must be very experienced with idiomatic Python and with productionizing models for enterprise use.
Job Details
Job Title: Senior Data Scientist
Location: Texas (Remote)
Job Type: Contract
Travel: 10% to Dallas / Dallas Airport
Job Summary
We're seeking a seasoned Data Scientist to own end-to-end model development and deployment within an enterprise environment. You'll either focus on deep learning (TensorFlow/Keras/PyTorch) with basic MLOps principles, or on optimization solutions using MIP and CP-SAT for scheduling. Strong Python skills and experience productionizing models are essential.
Key Responsibilities
- Design, train, and deploy neural network models using TensorFlow, Keras, or PyTorch, applying best practices for versioning, testing, and performance tuning.
- Collaborate with ML engineering and SRE teams to productionize models: setting up inference pipelines, CI/CD workflows (e.g., MLflow, Kubeflow, Airflow), containerization (Docker/K8s), and monitoring using ML observability tools.
- For optimization-focused roles: formulate scheduling and resource allocation problems using Mixed Integer Programming (MIP) or CP-SAT and implement scalable solvers (e.g., OR-Tools).
- Write idiomatic Python 3.11+ code for data processing, feature engineering, and model pipelines, ensuring high code quality and maintainability.
- Present modeling outcomes, results, and performance insights to stakeholders, ensuring interpretability and actionability.
Required Qualifications
- 5+ years of hands-on experience in Python, especially Python 3.11+ idioms.
- Deep expertise in TensorFlow, Keras, and/or PyTorch, with a portfolio of production models in enterprise settings.
- Familiarity with MLOps workflows: CI/CD (Jenkins, GitHub Actions), container orchestration, and monitoring pipelines.
- Alternatively: proven experience with MIP or CP-SAT for scheduling or constraint optimization solutions.
- Experience deploying models as scalable APIs/microservices in production environments.
- Clear communication skills to collaborate with technical and business stakeholders.
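To make the API/microservice requirement concrete, here is a toy model-serving sketch using only the Python standard library. The "model" is a stand-in, and a real deployment would sit behind a production framework and server (e.g., FastAPI with a WSGI/ASGI server), but the request/response shape is representative:

```python
# Toy JSON-over-HTTP inference endpoint (illustrative only; the "model"
# is a placeholder that sums the input features).
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for a trained model.
    return {"score": sum(features)}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps(predict(json.loads(body)["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        # Silence per-request logging for this demo.
        pass

# Serve on an ephemeral port in a background thread, then call it once.
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"features": [1.0, 2.5]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
```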
Preferred Experience
- Working knowledge of MLflow, Kubeflow, and orchestration tools like Airflow or Argo.
- Familiarity with cloud platforms (AWS SageMaker, Azure ML, Google Cloud Platform Vertex AI).
- Experience with distributed training or TPU/GPU acceleration using Ray, Dask, MPI, or Horovod.
- Strong foundation in statistics, model interpretability, and ethical AI principles.