Job Title: Software Engineer – Interoperability
Location: Remote
Duration: 6+ months, long-term contract with possible extensions
We are seeking a highly skilled Software Engineer – Interoperability to design, build, and support enterprise-scale healthcare interoperability and data integration solutions. This role supports CMS/ONC, BCBSA, and enterprise interoperability initiatives, with a strong focus on FHIR-based APIs, SmileCDR, and high-volume ETL/ELT pipelines built on modern DataWorks platforms.
The ideal candidate brings hands-on development experience across SmileCDR, HL7 FHIR, Informatica Big Data Management (BDM), Python/PySpark, JavaScript, REST APIs, and cloud-native data platforms.
________________________________________
Key Responsibilities
Interoperability & API Development
• Design, configure, and develop FHIR-based interoperability solutions using SmileCDR.
• Implement and support HL7 FHIR resources (US Core, Da Vinci, and CMS-mandated APIs).
• Develop and maintain RESTful APIs for Patient Access, Provider Directory, Prior Authorization, and Payer-to-Payer use cases.
• Build API integrations using JavaScript, REST, OAuth2, and JSON payloads.
• Integrate external interoperability platforms such as Redox and third-party healthcare APIs.
• Support API performance, monitoring, security, and compliance.
SmileCDR & FHIR Platform Responsibilities
• Configure and manage SmileCDR repositories, FHIR endpoints, and data ingestion pipelines.
• Develop FHIR mapping, transformation, and validation logic.
• Implement SmileCDR workflows, interceptors, subscriptions, and data persistence strategies.
• Support SmileCDR upgrades, patches, and production troubleshooting.
________________________________________
Data Engineering & ETL/ELT (High-Volume Data)
• Design and build large-scale, high-volume ETL/ELT pipelines supporting clinical, claims, and member datasets.
• Develop pipelines using Python, PySpark, and distributed processing frameworks.
• Perform source-to-target mapping, data transformations, and data quality validations.
• Support incremental, batch, and near-real-time ingestion patterns.
________________________________________
Informatica Big Data Management (BDM)
• Develop and maintain data pipelines using Informatica Big Data Management (BDM).
• Implement mappings, workflows, and transformations for Hadoop-based and cloud data platforms.
• Integrate Informatica BDM with Hadoop, Hive, Spark, and cloud storage.
• Optimize BDM jobs for performance, scalability, and reliability.
• Support metadata management, lineage, and operational monitoring.
________________________________________
DataWorks Platforms & Tools
• Work with modern data engineering tools and platforms, including:
o dbt – transformations, modeling, and analytics-ready datasets
o Starburst / Trino – federated query and analytics
o Apache Iceberg – large-scale table format and versioned datasets
o Google Cloud Platform / BigQuery – cloud native analytics and storage
• Use Smile ETL tools and enterprise ingestion frameworks.
• Support hybrid architectures spanning on-prem Hadoop and cloud platforms.
________________________________________
DevOps, CI/CD & Cloud Engineering
• Build and maintain CI/CD pipelines using GitLab CI/CD.
• Follow DevOps best practices for source control, automated testing, and deployments.
• Support cloud deployments on Google Cloud Platform.
• Ensure secure, repeatable, and compliant deployments across environments.
________________________________________
Collaboration & Delivery
• Partner with product owners, architects, vendors, and compliance teams.
• Support CMS/ONC regulatory timelines and audit readiness.
• Participate in code reviews, design reviews, and technical documentation.
• Provide production support and root cause analysis for critical data and API pipelines.
________________________________________
Required Skills & Experience
Interoperability & API Skills
• SmileCDR (FHIR repository, configuration, ingestion, and APIs)
• HL7 FHIR (US Core, Da Vinci implementation experience)
• REST APIs, OAuth2, JSON
• JavaScript for API and integration development
• Redox or equivalent healthcare integration platforms
Data Engineering & ETL
• ETL/ELT pipelines for large-scale, high-volume datasets
• Python and PySpark
• Source-to-target mapping
• Data Modeling
Informatica & Big Data
• Informatica Big Data Management (BDM)
• Informatica PowerCenter / IDMC (preferred)
• Hadoop, Hive, Spark
DataWorks & Analytics Platforms
• dbt
• Starburst
• Apache Iceberg
• Google Cloud Platform / BigQuery
DevOps & Cloud
• GitLab CI/CD Pipelines
• DevOps and release management practices
• Cloud platforms (Google Cloud Platform preferred)
________________________________________
Preferred Qualifications
• Healthcare payer or provider interoperability experience
• CMS/ONC and BCBSA mandate experience
• HL7 FHIR certification or equivalent hands-on implementation experience
• Experience supporting production systems and regulatory reporting
________________________________________
Education
• Bachelor’s or Master’s degree in Computer Science, Engineering, or equivalent experience.