Overview
Hybrid / Complete remote - may need to report onsite for joining formalities.
Depends on Experience
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Skills
PySpark
Python
SQL
ETL
Machine Learning
cloud platforms
Job Details
Who can apply? - Candidates located in Canada only. 100% remote opportunity.
Requirements:
- 5+ years of experience in a data engineering or ML engineering role
- Strong proficiency in Python, PySpark, and SQL
- Hands-on experience with ETL development and data pipeline orchestration
- Familiarity with basic machine learning concepts and model lifecycle
- Solid understanding of data warehousing and distributed systems
Nice to have:
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud Platform)
- Exposure to tools like Airflow, dbt, or MLflow
- A strong sense of data ownership and intellectual curiosity.
Key Responsibilities:
- Develop and maintain scalable data pipelines using Python, PySpark, and SQL
- Design, implement, and optimize ETL workflows to support analytics and ML models
- Collaborate with data scientists and analysts to ensure high-quality, accessible data
- Support the deployment of ML models into production environments
- Implement data quality checks, monitoring, and validation
- Stay curious: explore new datasets, uncover patterns, and contribute to data-driven innovation