Job Overview:
Seeking a skilled professional to support the development and deployment of machine learning-integrated software solutions in a production environment.
Key Responsibilities:
Build and maintain data pipelines using tools such as PySpark and AWS Glue ETL (see the illustrative sketch after this list).
Support end-to-end model pipeline integration in production systems.
Understand and follow the software development lifecycle (SDLC) for model deployment.
Analyze raw data sources to assess quality and relevance for modeling.
Apply business context to data to produce meaningful insights and improve model accuracy.
Collaborate with data science teams to review and improve pipeline and data model designs.
Automate data ingestion for scalable model development.
Monitor production models and handle alerts or pipeline issues.
Work with stakeholders to translate business needs into analytic solutions.
Participate in planning and prioritization of analytic tasks.
Contribute to special projects and additional tasks as needed.
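For context on the pipeline work described above, here is a minimal PySpark sketch of a typical ingestion and feature-preparation step. It is illustrative only; the S3 paths, column names, and aggregation are hypothetical and not taken from this posting.

# Illustrative only: a minimal PySpark ingestion/cleaning step of the kind this
# role involves. Paths and column names below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-ingest").getOrCreate()

# Read raw events (hypothetical S3 path), drop records with missing keys,
# and derive a simple feature for downstream modeling.
raw = spark.read.parquet("s3://example-bucket/raw/events/")
features = (
    raw.dropna(subset=["customer_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(F.count("*").alias("daily_event_count"))
)

# Write a partitioned feature table for model training jobs to consume.
features.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)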
Qualifications:
Bachelor's degree or equivalent experience.
Minimum 5 years of relevant work experience.
Proficiency in Python, PySpark, SQL, NoSQL databases, and AWS services (Glue, SageMaker, etc.).