Big Data Architect

Overview

Remote
Contract - W2
Contract - 7 months

Skills

Python
PySpark
SQL
SQL Azure
Microsoft SQL Server
Virtual Machines
Data Modeling
Extract, Transform, Load (ETL)
ELT
Design Patterns
Docker
Kubernetes
Performance Tuning
Data Engineering
Health Care
Normalization
Data Integration
Storage
Collaboration
Data Quality
Scalability
Big Data
Databricks
Microsoft Azure

Job Details

Primary Skill:

  • Databricks - Advanced (6-9 years)

Required Skills:

  • Databricks (Python, SQL, PySpark)

  • Microsoft Azure SQL (Managed Instance, Azure SQL DB, SQL Server VMs) - Advanced (6-9 years)

  • Data Modeling - Advanced (6-9 years)

  • Data Engineering - Advanced (6-9 years)

  • ETL/ELT design patterns

  • Docker and Azure Kubernetes Service (AKS) for select automation tasks

  • Performance optimization and scalability

Experience:

  • 7+ years of enterprise data engineering experience

  • Healthcare data integration experience strongly preferred

  • Experience with large-scale data harmonization and normalization

Responsibilities:

  • Design and implement large-scale, scalable data architectures

  • Develop data integration, processing, and storage solutions using big data technologies

  • Collaborate with stakeholders to translate data requirements into technical solutions

  • Ensure adherence to data quality, security, and governance standards

  • Optimize data solutions for performance, scalability, and cost efficiency

This role focuses on building and supporting enterprise-scale data pipelines and big data solutions using Databricks and Azure technologies.

About Intellectt INC