Health Care Google Cloud Platform Data Engineer

Overview

Remote
$130,000 - $140,000
Full Time
No Travel Required
Unable to Provide Sponsorship

Skills

HIPAA
HL7
Data Quality
Data Stewardship
Data Structure
Data Governance
Data Lake
Medical Devices
Electronic Health Record (EHR)
Health Care
Google Cloud
Google Cloud Platform
GCS
clinical
EHR
EMR
LIS
RIS
PACS
FHIR
CCD
DICOM
SQL
Data Marts
Data Flow
Data Domain
DNS
Cloud Architecture
Cloud Computing
Capacity Management
Collaboration
Clustering
Data Modeling
Data Architecture
Encryption
Data Warehouse
DLP
GCP Data Engineer
Data Engineer

Job Details

Position: Google Cloud Platform Data Engineer (Health Care Background Required)

Location: Remote (any location across the USA)

Summary: This role requires strong experience architecting enterprise data platforms on Google Cloud Platform. The architect will work as a strategic technical partner to design and build a BigQuery-based data lake and data warehouse ecosystem on Google Cloud Platform.

The role requires deep hands-on expertise in data ingestion, transformation, modeling, enrichment, and governance, combined with a strong understanding of clinical healthcare data standards, interoperability, and cloud architecture best practices.

Key Responsibilities:

1. Data Lake & Data Platform Architecture (Google Cloud Platform)

• Architect and design an enterprise-grade Google Cloud Platform-based data lakehouse leveraging BigQuery, GCS, Dataproc, Dataflow, Pub/Sub, Cloud Composer, and BigQuery Omni.

• Define data ingestion, hydration, curation, processing and enrichment strategies for large-scale structured, semi-structured, and unstructured datasets.

• Create data domain models, canonical models, and consumption-ready datasets for analytics, AI/ML, and operational data products (an illustrative dataset-layout sketch follows this list).

• Design federated data layers and self-service data products for downstream consumers.
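
For illustration only, the sketch below shows one way a domain-oriented, consumption-ready dataset layout of the kind described above might be bootstrapped with the BigQuery Python client. The project ID, data domain names, and layer names are hypothetical placeholders, not the employer's actual environment.

```python
# Minimal sketch of a domain-oriented dataset layout for a BigQuery lakehouse.
# Project ID, domain names, and layer names are hypothetical placeholders.
from google.cloud import bigquery

PROJECT_ID = "my-project"
DATA_DOMAINS = ["clinical", "supply_chain", "finance"]
LAYERS = ["bronze", "silver", "gold"]  # raw -> curated -> consumption-ready


def create_domain_datasets(client: bigquery.Client) -> None:
    """Create one dataset per (domain, layer) pair, e.g. clinical_bronze."""
    for domain in DATA_DOMAINS:
        for layer in LAYERS:
            dataset = bigquery.Dataset(f"{PROJECT_ID}.{domain}_{layer}")
            dataset.location = "US"
            # exists_ok avoids failures when the bootstrap is re-run.
            client.create_dataset(dataset, exists_ok=True)


if __name__ == "__main__":
    create_domain_datasets(bigquery.Client())
```

Separating raw, curated, and consumption-ready layers per data domain is one common way to support the federated data layers and self-service data products mentioned above.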

2. Data Ingestion & Pipelines

• Architect batch, near-real-time, and streaming ingestion pipelines using Cloud Dataflow, Pub/Sub, and Dataproc on Google Cloud Platform (a minimal streaming sketch follows this list).

• Set up data ingestion for clinical (EHR/EMR, LIS, RIS/PACS) datasets including HL7, FHIR, CCD, DICOM formats.

• Build ingestion pipelines for non-clinical systems (ERP, HR, payroll, supply chain, finance).

• Architect ingestion from medical devices, IoT, remote patient monitoring, and wearables leveraging IoMT patterns.

• Manage on-prem → cloud migration pipelines, hybrid cloud data movement, VPN/Interconnect connectivity, and data transfer strategies.
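
For illustration only, here is a minimal sketch of a streaming ingestion pipeline of the kind described in the first bullet above, assuming a hypothetical Pub/Sub subscription carrying simplified FHIR Observation resources as JSON and a hypothetical BigQuery landing table. All names, fields, and the flattening logic are placeholders, not the employer's actual systems.

```python
# Minimal sketch: stream FHIR-style JSON messages from Pub/Sub into BigQuery.
# All resource names (project, subscription, table) are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def to_row(message: bytes) -> dict:
    """Flatten a simplified FHIR Observation JSON payload into a BigQuery row."""
    resource = json.loads(message.decode("utf-8"))
    return {
        "observation_id": resource.get("id"),
        "patient_ref": (resource.get("subject") or {}).get("reference"),
        "code": ((resource.get("code") or {}).get("coding") or [{}])[0].get("code"),
        "effective_ts": resource.get("effectiveDateTime"),
    }


def run():
    # Dataflow runner, project, and region options would be added here.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/fhir-observations")
            | "ParseAndFlatten" >> beam.Map(to_row)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:clinical_bronze.observations",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```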

3. Data Transformation, Hydration & Enrichment

• Build transformation frameworks using BigQuery SQL, Dataflow, Dataproc, or dbt (see the curation sketch after this list).

• Define curation patterns including bronze/silver/gold layers, canonical healthcare entities, and data marts.

• Implement data enrichment using external social determinants, device signals, clinical event logs, or operational datasets.

• Enable metadata-driven pipelines for scalable transformations.
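
As a rough, metadata-driven illustration of the bronze/silver curation pattern above, the sketch below drives a deduplicating BigQuery transformation from a small configuration list. The dataset, table, and column names are hypothetical placeholders.

```python
# Minimal sketch of a metadata-driven bronze -> silver curation step in BigQuery.
# Dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

# Metadata describing each curation step; in practice this would live in a
# config table, YAML file, or data catalog rather than in code.
CURATION_STEPS = [
    {
        "source": "clinical_bronze.observations",
        "target": "clinical_silver.observations",
        "dedupe_key": "observation_id",
        "order_by": "effective_ts",
    },
]

TEMPLATE = """
CREATE OR REPLACE TABLE `{target}` AS
SELECT * EXCEPT (row_num)
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY {dedupe_key} ORDER BY {order_by} DESC) AS row_num
  FROM `{source}`
)
WHERE row_num = 1
"""


def run_curation(client: bigquery.Client) -> None:
    for step in CURATION_STEPS:
        sql = TEMPLATE.format(**step)
        client.query(sql).result()  # blocks until the query job finishes


if __name__ == "__main__":
    run_curation(bigquery.Client())
```

The same pattern maps naturally onto dbt models, or onto Dataflow/Dataproc jobs for heavier transformations.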

4. Data Governance & Quality

• Establish and operationalize a data governance framework encompassing data stewardship, ownership, classification, and lifecycle policies.

• Implement data lineage, data cataloging, and metadata management using tools such as Dataplex, Data Catalog, Collibra, or Informatica.

• Set up data quality frameworks for validation, profiling, anomaly detection, and SLA monitoring (a minimal rule-based example follows this list).

• Ensure HIPAA compliance, PHI protection, IAM/RBAC, VPC SC, DLP, encryption, retention, and auditing.
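
For illustration only, a minimal sketch of rule-based data quality checks expressed as SQL assertions and run through the BigQuery client. The table, column, and rule names are hypothetical placeholders.

```python
# Minimal sketch of rule-based data quality checks run against BigQuery.
# Table, column, and rule names are hypothetical placeholders.
from google.cloud import bigquery

# Each rule is a query that should return zero rows when the data is healthy.
QUALITY_RULES = {
    "null_patient_ref": """
        SELECT observation_id
        FROM `clinical_silver.observations`
        WHERE patient_ref IS NULL
    """,
    "duplicate_observation_id": """
        SELECT observation_id
        FROM `clinical_silver.observations`
        GROUP BY observation_id
        HAVING COUNT(*) > 1
    """,
}


def run_checks(client: bigquery.Client) -> dict:
    """Return the number of violating rows per rule (0 means the rule passed)."""
    results = {}
    for name, sql in QUALITY_RULES.items():
        violations = list(client.query(sql).result())
        results[name] = len(violations)
    return results


if __name__ == "__main__":
    print(run_checks(bigquery.Client()))
```

A production framework would typically wire such rules into Dataplex data quality checks or dbt tests, with results feeding SLA monitoring and alerting.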

5. Cloud Infrastructure & Networking

• Work with cloud infrastructure teams to architect VPC networks, subnetting, ingress/egress, firewall policies, VPN/IPSec, Interconnect, and hybrid connectivity.

• Define storage layers, partitioning/clustering design, cost optimization, performance tuning, and capacity planning for BigQuery (an illustrative DDL sketch follows this list).

• Understand containerized processing (Cloud Run, GKE) for data services.
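
For illustration only, the DDL sketch below shows one partitioning/clustering design of the kind described above, executed through the BigQuery client. The project, dataset, table, columns, and retention value are hypothetical placeholders.

```python
# Minimal sketch of partitioning/clustering design for a BigQuery table.
# Project, dataset, table, column names, and retention value are hypothetical.
from google.cloud import bigquery

DDL = """
CREATE TABLE IF NOT EXISTS `my-project.clinical_silver.encounters`
(
  encounter_id STRING NOT NULL,
  patient_id   STRING NOT NULL,
  facility_id  STRING,
  admit_ts     TIMESTAMP NOT NULL,
  discharge_ts TIMESTAMP
)
PARTITION BY DATE(admit_ts)          -- prunes scans (and cost) by admission date
CLUSTER BY patient_id, facility_id   -- co-locates rows for common filter columns
OPTIONS (
  partition_expiration_days = 2555   -- example ~7-year retention; set per policy
)
"""

if __name__ == "__main__":
    bigquery.Client().query(DDL).result()
```

Partitioning on the event timestamp and clustering on the most common filter columns keeps scan costs predictable as tables grow; the expiration setting would follow the organization's actual retention policy.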

 

6. Stakeholder Collaboration

• Work closely with clinical, operational, research, and IT stakeholders to define data use cases, schema, and consumption models.

• Partner with enterprise architects, security teams, and platform engineering teams on cross-functional initiatives.

• Guide data engineers and provide architectural oversight on pipeline implementation.

7. Hands-on Leadership

• Be actively hands-on in building pipelines, writing transformations, building POCs, and validating architectural patterns.

• Mentor data engineers on best practices, coding standards, and cloud-native development.

Required Skills & Qualifications

Technical Skills (Must-Have)

• 10+ years in data architecture, engineering, or data platform roles.

• Strong expertise in Google Cloud Platform data stack (BigQuery, Dataflow, Composer, GCS, Pub/Sub, Dataproc, Dataplex).

• Hands-on experience with data ingestion, pipeline orchestration, and transformations.

• Deep understanding of clinical data standards:

• HL7 v2.x, FHIR, CCD/C-CDA

• DICOM (for scans and imaging)

• LIS/RIS/PACS data structures

• Experience with device and IoT data ingestion (wearables, remote patient monitoring, clinical devices).

• Experience with ERP datasets (Workday, Oracle, Lawson, PeopleSoft).

• Strong SQL and data modeling skills (3NF, star/snowflake, canonical and logical models).

• Experience with metadata management, lineage, and governance frameworks.

• Solid understanding of HIPAA, PHI/PII handling, DLP, IAM, VPC security.

Cloud & Infrastructure

• Solid understanding of cloud networking, hybrid connectivity, VPC design, firewalling, DNS, service accounts, IAM, and security models.

• Experience with cloud-native data movement services.

• Experience with on-prem to cloud migrations.
