Data Platform Engineer - AWS


$80 - $90
Contract - W2
Contract - 6 Month(s)


Gen AI
cloud deployment

Job Details

This role can be worked 100% remote (EST hours). No C2C candidates will be considered.

This role requires versatility and expertise across a wide range of skills. Someone with a diverse background and experience who is an engineer at heart will fit into this role seamlessly.
The Generative AI team is composed of multiple cross-functional groups that work in unison to ensure a sound move from our research activities to scalable solutions. You will collaborate closely with our cloud, security, infrastructure, enterprise architecture, and data science teams to design and deliver essential functionality.


Design and build fault-tolerant infrastructure to support the Generative AI reference architecture (RAG, summarization, agents, etc.).

Ensure code is delivered without vulnerabilities by enforcing sound engineering practices, code scanning, etc.

Build and maintain IaC (Terraform/CloudFormation) and CI/CD scripts (Jenkins, CodePipeline, uDeploy, and GitHub Actions).

Partner with our shared-service teams, such as Architecture, Cloud, and Security, to design and implement platform solutions.

Collaborate with the data science team to develop a self-service internal developer platform for Generative AI.

Design and build the data ingestion pipeline for fine-tuning LLM models.

Create templates (Architecture as Code) implementing the reference architecture's application topology.


Bachelor's degree in Computer Science, Computer Engineering, or a related technical field.

4+ years of experience with AWS cloud.

At least 8 years of experience designing and building data-intensive solutions using distributed computing.

8+ years building and shipping software and/or platform infrastructure solutions for enterprises.

Experience with CI/CD pipelines, Automated Testing, Automated Deployments, Agile methodologies, Unit Testing and Integration Testing tools.

Experience building scalable serverless applications (real-time/batch) on the AWS stack (Lambda + Step Functions).

Knowledge of distributed NoSQL database systems.

Experience with data engineering, ETL technology, and conversation UX is a plus.

Experience with HPCs, vector embedding, and Hybrid/Semantic search technologies.

Experience with AWS OpenSearch, Step Functions, Lambda, SageMaker, API Gateway, and ECS/Docker is a plus.

Proficiency in customization techniques across various stages of the RAG pipeline, including model fine-tuning, retrieval re-ranking, and hierarchical navigable small world (HNSW) graphs, is a plus.

Strong proficiency in embeddings, ANN/KNN search, vector stores, database optimization, and performance tuning.

Extensive programming experience with Python and Java.

Experience with LLM orchestration frameworks such as LangChain, LlamaIndex, etc.

Excellent problem-solving skills and the ability to work in a collaborative team environment.

Excellent communication skills.