Overview
On Site
USD 45.00 - 63.00 per hour
Full Time
Skills
Modeling
Sales
Analytics
Microsoft Excel
Software Engineering
Workflow
DevSecOps
System Integration
Communication
Data Flow
Technical Writing
Data Governance
Data Quality
Innovation
Professional Development
TensorFlow
PyTorch
scikit-learn
Machine Learning Operations (ML Ops)
Machine Learning (ML)
Python
Storage
Version Control
GitHub
Management
Collaboration
Data Engineering
Apache Spark
PySpark
Continuous Integration
Continuous Delivery
Terraform
Docker
Kubernetes
Problem Solving
Conflict Resolution
Analytical Skill
Cloud Computing
Amazon Web Services
Microsoft Azure
Google Cloud Platform
Atlassian
Project Management
JIRA
Confluence
Agile
Job Details
Description
We are seeking an experienced Machine Learning Engineer to design, implement, and maintain robust analytics pipeline solutions. These solutions will support the analysis, modeling, and prediction of upstream and downstream auction prices, directly benefiting the Business and Sales Planning Analytics (BSPA) Used Vehicle Analytics team and its customers. The ideal candidate will excel at developing ML and software engineering solutions, practicing DevSecOps, and collaborating with cross-functional teams (including ML Engineers, Data Scientists, and Data Engineers) to improve processes and drive business performance.
Responsibilities:
Develop, build, and maintain the infrastructure required for machine learning, including data pipelines, model deployment platforms, and model monitoring.
Develop and maintain tools and libraries that support the development and deployment of machine learning models.
Automate machine learning workflows using DevSecOps principles and practices.
Collaborate with development and operations teams to implement software solutions that improve system integration and automation of ML pipelines.
Design, develop, and manage data flows and APIs between upstream systems and applications.
Troubleshoot and resolve issues related to system communication, data flow, and data quality.
Collaborate with technical and non-technical teams to gather integration requirements and ensure successful deployment of data solutions.
Create and maintain comprehensive technical documentation of software components.
Work with IT to ensure systems meet evolving business needs and comply with data governance policies and security requirements.
Implement and enforce the highest standards of data quality and integrity across all data processes.
Manage deliverables through project management tools.
Skills Required:
3+ years of experience developing and deploying machine learning models in a production environment.
3+ years of experience programming in Python.
3+ years of hands-on experience with Google Cloud Platform (GCP) services, including BigQuery and Google Cloud Storage for managing and processing large datasets, as well as Cloud Composer and/or Cloud Run.
Experience with version control systems such as GitHub for managing code repositories and collaboration.
3+ years of experience with code quality and security scanning tools such as SonarQube, Cycode, and FOSSA.
3+ years of experience with data engineering tools and technologies such as Kubernetes, Container-as-a-Service (CaaS) platforms, OpenShift, Dataproc, Spark (with PySpark), or Airflow.
Experience with CI/CD practices and tools, including Tekton or Terraform, as well as containerization technologies like Docker or Kubernetes.
Excellent problem-solving and analytical skills, with a focus on data-driven solutions.
Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud Platform.
Familiarity with Atlassian project management tools (e.g., Jira, Confluence) and agile practices.
Skills Preferred:
Proven ability to thrive in dynamic environments, managing multiple priorities and delivering high-impact results even with limited information.
Exceptional problem-solving skills, a proactive and strategic mindset, and a passion for technical excellence and innovation in data engineering.
Demonstrated commitment to continuous learning and professional development.
Familiarity with machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
Experience with MLOps tools and platforms.
Skills
Python, Google Cloud Platform, Airflow, Spark, CI/CD, ML, Deployment
Top Skills Details
Python, Google Cloud Platform, Airflow, Spark, CI/CD
Additional Skills & Qualifications
Experience Required:
3+ years of experience developing and deploying machine learning models in a production environment.
3+ years of experience programming in Python.
3+ years of hands-on experience with Google Cloud Platform (GCP) services, including BigQuery and Google Cloud Storage for managing and processing large datasets, as well as Cloud Composer and/or Cloud Run.
Experience with version control systems such as GitHub for managing code repositories and collaboration.
3+ years of experience with code quality and security scanning tools such as SonarQube, Cycode, and FOSSA.
3+ years of experience with data engineering tools and technologies such as Kubernetes, Container-as-a-Service (CaaS) platforms, OpenShift, Dataproc, Spark (with PySpark), or Airflow.
Experience with CI/CD practices and tools, including Tekton or Terraform, as well as containerization technologies like Docker or Kubernetes.
Excellent problem-solving and analytical skills, with a focus on data-driven solutions.
Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud Platform.
Familiarity with Atlassian project management tools (e.g., Jira, Confluence) and agile practices.
Experience Level
Intermediate Level
Pay and Benefits
The pay range for this position is $45.00 - $63.00/hr.
Eligibility requirements apply to some benefits and may depend on your job classification and length of employment. Benefits are subject to change and may be subject to specific elections, plan, or program terms. If eligible, the benefits available for this temporary role may include the following:
Medical, dental & vision
Critical Illness, Accident, and Hospital
401(k) Retirement Plan - Pre-tax and Roth post-tax contributions available
Life Insurance (Voluntary Life & AD&D for the employee and dependents)
Short and long-term disability
Health Spending Account (HSA)
Transportation benefits
Employee Assistance Program
Time Off/Leave (PTO, Vacation or Sick Leave)
Workplace Type
This is a hybrid position in Dearborn, MI.
Application Deadline
This position is anticipated to close on May 14, 2025.
About TEKsystems and TEKsystems Global Services
We're a leading provider of business and technology services. We accelerate business transformation for our customers. Our expertise in strategy, design, execution and operations unlocks business value through a range of solutions. We're a team of 80,000 strong, working with over 6,000 customers, including 80% of the Fortune 500 across North America, Europe and Asia, who partner with us for our scale, full-stack capabilities and speed. We're strategic thinkers, hands-on collaborators, helping customers capitalize on change and master the momentum of technology. We're building tomorrow by delivering business outcomes and making positive impacts in our global communities. TEKsystems and TEKsystems Global Services are Allegis Group companies. Learn more at TEKsystems.com.
The company is an equal opportunity employer and will consider all applications without regard to race, sex, age, color, religion, national origin, veteran status, disability, sexual orientation, gender identity, genetic information or any characteristic protected by law.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.