Details
Location: Remote
Department: Data Delivery and Governance
Schedule: Full Time, Day
Salary: $118,821.00 - $159,821.00 per year
Benefits
- Comprehensive health coverage: medical, dental, vision, prescription coverage and HSA/FSA options
- Financial security & retirement: employer-matched 403(b), planning and hardship resources, disability and life insurance
- Time to recharge: pro-rated paid time off (PTO) and holidays
- Career growth: Ascension-paid tuition (Vocare), reimbursement, ongoing professional development and online learning
- Emotional well-being: Employee Assistance Program, counseling and peer support, spiritual care and stress management resources
- Family support: parental leave, adoption assistance and family benefits
- Other benefits: optional legal and pet insurance, transportation savings and more
Benefit options and eligibility vary by position, scheduled hours and location. Benefits are subject to change at any time. Your recruiter will provide the most up-to-date details during the hiring process.
Responsibilities
- Engineer and optimize large-scale distributed data architectures by overseeing the end-to-end SDLC of high-concurrency pipelines utilizing Spark, Kafka, and Dataform within Google Cloud Platform/multicloud environments to ensure sub-second latency and high availability.
- Drive technical delivery and roadmap execution across multiple workgroups, conducting rigorous code reviews and challenging architectural estimations to ensure alignment with enterprise-grade scalability, reliability, and security standards.
- Orchestrate complex data integration patterns involving SQL/NoSQL, Graph, and Vector databases, managing technical dependencies and data lineage across Lakehouse architectures to support advanced AI and consumer-facing activations.
- Establish automated CI/CD and DataOps frameworks by defining comprehensive testing strategies (including integration, UAT, and performance benchmarking) to ensure seamless, zero-downtime deployments and quality knowledge transfers.
- Lead technical problem-solving and bottleneck analysis at the pipeline and infrastructure layers, empowering self-optimizing workgroups to resolve architectural impediments and maintain 99.9% data integrity across the ecosystem.
Requirements
Education:
- High School diploma equivalency with 3 years of cumulative experience OR Associate's/Bachelor's degree with 2 years of cumulative experience OR 7 years of applicable, cumulative, job-specific experience required.
- 3 years of leadership or management experience preferred.
Additional Preferences
- Cloud Platform Mastery: Deep architectural expertise in Google Cloud Platform (GCP) and its native data stack (BigQuery, Dataflow, Composer), with secondary proficiency in Azure or AWS for hybrid-cloud deployments.
- Advanced Programming & Scripting: Expert-level fluency in Python, Scala, or Java, specifically applied to developing custom operators within Airflow and optimizing resource allocation in containerized environments.
- Modern Database Innovation: Hands-on experience implementing Graph and Vector databases for specialized search and retrieval-augmented generation (RAG) use cases within a Salesforce Cloud integrated environment.
Why Join Our Team
Ascension is a leading nonprofit Catholic health system with a culture and associate experience grounded in service, growth, care and connection. We empower our 99,000+ associates to bring their skills and expertise every day to reimagining healthcare, together. Recognized as one of the Best 150+ Places to Work in Healthcare and a Military-Friendly Gold Employer, Ascension offers an inclusive and supportive environment where your contributions truly matter.
Equal Employment Opportunity Employer
Ascension provides Equal Employment Opportunities (EEO) to all associates and applicants for employment without regard to race, color, religion, sex/gender, sexual orientation, gender identity or expression, pregnancy, childbirth, and related medical conditions, lactation, breastfeeding, national origin, citizenship, age, disability, genetic information, veteran status, marital status, all as defined by applicable law, and any other legally protected status or characteristic in accordance with applicable federal, state and local laws. For further information, view the EEO Know Your Rights (English) poster or EEO Know Your Rights (Spanish) poster.
Fraud prevention notice
Prospective applicants should be vigilant against fraudulent job offers and interview requests. Scammers may use sophisticated tactics to impersonate Ascension employees. To ensure your safety, please remember: Ascension will never ask for payment, or for banking or financial information, as part of the job application or hiring process. Our legitimate email communications will always come from an @ascension.org email address; do not trust other domains. An official offer will only be extended to candidates who have completed a job application through our authorized applicant tracking system.
E-Verify statement
Employer participates in the Electronic Employment Verification Program. Please click here for more information.