Data Solutions Architect

Overview

On Site
Hybrid
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)
No Travel Required

Skills

Data Engineer
Data Solutions Architect
AI
ML
AWS
Kubernetes
data API
Docker
Elasticsearch
Redis
Vector stores
Python
SQL
Unix
Linux
Graph DB
Neo4j
Starburst
RESTful
PostgreSQL
UI
UX
Jenkins
Agile
User Experience
Web Development
Graph Databases
Machine Learning (ML)
Privacy
Collaboration
Data Analysis
Data Quality
Database
LinkedIn
API
Amazon Web Services
Artificial Intelligence
Business Operations
Cloud Computing
RTR
Scalability

Job Details

Job ID: VU-DSA

Hybrid/Local Data Engineer/Data Solutions Architect (12+ months) with AI/ML, AWS, Kubernetes, data API, Docker, Elasticsearch, Redis, vector stores, Python, SQL, Unix/Linux, Graph DB/Neo4j, Starburst, RESTful, PostgreSQL, UI/UX, Jenkins, and Agile experience

Location: Chicago, IL (Onsite once a week)
Duration: 12 months
Client: Virtusa (End client: Baxter)
Required: I-94 travel history and LinkedIn profile

Note: We are not looking for data architects who have implemented a lakehouse; we are looking for full-stack architects with Kubernetes-first experience who have built scalable data APIs and deployed them as Docker containers on Kubernetes. They should also have exposure to databases such as Elasticsearch and Redis.

Designing data solutions
Vector stores
Elasticsearch
Machine Learning
CUDA
GPU
Python Development
AI/ML

Responsibilities:
Design and implement the pipelines and endpoints that expose data to the enterprise.
Design and implement data pipelines using Python, SQL, and Unix to ensure data quality and integrity.
Work with Docker to containerize data pipelines and ensure scalability and portability.
Collaborate with cross-functional teams to gather requirements and integrate data analytics into business operations.
Ensure data privacy and security by implementing appropriate controls and protocols.
Stay up-to-date with emerging trends and technologies in data analytics and machine learning.

Skills:
Experience designing data solutions (data products/pipelines) with a full stack (Python, Docker, Kubernetes, APIs) that follows interoperability guidelines
Proficient in the use of architectural patterns (sidecar, singleton, pub/sub, etc.)
Expertise in Python web development and ML
Docker, Kubernetes
Unix (or Linux)
AWS or any cloud technologies
Graph DB (Neo4j) or vector store (Elasticsearch, Redis)
[Nice to have] Starburst, RESTful endpoints, API, SQL, PostgreSQL, UI/UX, Jenkins, Agile

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.