Overview
Remote
Depends on Experience
Contract - Independent
Contract - W2
Skills
Machine Learning
Data Engineering
ETL
TensorFlow
PyTorch
SageMaker
Lambda
ECS
EKS
Kafka
Job Details
- Design, build, and maintain data pipelines and ETL workflows for large-scale data processing.
- Develop, train, and fine-tune machine learning models using frameworks such as scikit-learn, TensorFlow, or PyTorch.
- Deploy ML models and APIs on AWS services (SageMaker, Lambda, ECS/EKS, etc.).
- Implement CI/CD pipelines and automated deployment using AWS DevOps tools (CodePipeline, CodeBuild) or equivalent (GitHub Actions, Jenkins).
- Monitor model performance and data drift; implement retraining pipelines where necessary.
- Collaborate with data scientists and backend engineers to integrate ML models into production applications.
- Ensure scalability, performance, and security across the ML and data stack.
Must Haves
- 5 years of professional experience in data engineering and machine learning.
- Experience with MLOps frameworks (MLflow, Kubeflow, SageMaker Pipelines).
- Exposure to streaming and real-time data processing (Kafka, Kinesis).
- Familiarity with data orchestration tools (Airflow, Step Functions).
- Knowledge of data visualization (QuickSight, Power BI, or equivalent).