Hi,
Our client is looking for a Data Scientist (AI/ML) for a long-term contract project in Jersey City, NJ / Atlanta, GA / New York City, NY (onsite). The detailed requirements are below.
Kindly share your updated resume so we can proceed.
Job Role: Data Scientist AI/ML
Location: Jersey City, NJ/Atlanta, GA/New York City, NY - Onsite
Mode of Hiring: Long Term Contract
Job Description:
We are seeking a highly skilled and experienced Data Scientist (AI/ML) to join our growing AI/ML Engineering team. This role is ideal for candidates who are passionate about scaling AI systems, deploying models into production, and applying cutting-edge machine learning techniques to real-world problems. You will be responsible for developing end-to-end ML pipelines, architecting distributed training solutions, and contributing to our internal AI/ML infrastructure and tooling.
Responsibilities
- Architect and implement distributed training strategies using frameworks such as Horovod and DeepSpeed.
- Deploy and manage production ML models using Docker, Kubernetes, and model serving frameworks like TensorFlow Serving, TorchServe, and Seldon Core.
- Develop and maintain CI/CD pipelines for ML workflows following MLOps best practices.
- Implement model monitoring and drift detection systems to ensure robustness and reliability.
- Profile, optimize, and benchmark ML models for low-latency inference in production environments.
- Design and manage feature stores using platforms like Feast or Tecton.
- Orchestrate data pipelines with tools such as Apache Airflow and Kubeflow Pipelines.
- Collaborate with data engineering teams to integrate with diverse data storage solutions including distributed file systems and vector databases.
- Contribute to the development of internal AI/ML tooling and infrastructure.
- Troubleshoot and debug complex issues across the machine learning lifecycle, especially in distributed systems.
Key Skills & Qualifications
- Bachelor's degree in Computer Science or a related field, with a minimum of 6-10 years of relevant experience.
- Strong knowledge of machine learning paradigms: supervised, unsupervised, deep learning, and reinforcement learning.
- Expert proficiency in Python and scientific computing libraries like NumPy, SciPy, and Pandas.
- Deep experience with deep learning frameworks such as TensorFlow, Keras, and PyTorch.
- Proficient in distributed data processing frameworks like Apache Spark, Ray, or Dask.
- Familiarity with feature engineering platforms like Feast or Tecton.
- Solid experience with containerization (Docker) and orchestration (Kubernetes).
- Hands-on experience with ML model serving frameworks: TensorFlow Serving, TorchServe, Seldon Core.
- Skilled in building and maintaining data pipelines using orchestration tools like Airflow or Kubeflow.
- Strong understanding of model monitoring, logging, and drift detection techniques.
- Knowledge of data serialization and storage formats: Parquet, Avro, Protocol Buffers.
- Excellent communication and teamwork skills, including the ability to work in a cross-functional team and explain technical concepts clearly to internal and external customers.
- Ability to apply strong industry knowledge to understand customer needs and resolve customer concerns.
- High level of focus and attention to detail.