Job Details
Multi-cloud data exploration
Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Starburst, Privacera, and Databricks) and Intuit services (LDAP, data decryption)
Testing user flows for data analysis, processing, and visualization with Python Spark notebooks and SQL running on distributed compute to join data between AWS S3 and Google Cloud Platform BigQuery
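The cross-cloud join described above can be sketched roughly as follows. This is a minimal illustration, not Intuit's actual code: it assumes a cluster with the spark-bigquery connector installed, and all bucket, project, and column names are hypothetical.

```python
def s3_path(bucket: str, prefix: str) -> str:
    """Build an S3 URI for a Spark read."""
    return f"s3://{bucket}/{prefix}"


def bq_table(project: str, dataset: str, table: str) -> str:
    """Build a fully qualified BigQuery table identifier."""
    return f"{project}.{dataset}.{table}"


def join_s3_with_bigquery(spark, bucket, prefix, project, dataset, table):
    """Join Parquet data on S3 with a BigQuery table on a shared key.

    Assumes the spark-bigquery connector is on the cluster and that
    both sides share a 'user_id' column; names are illustrative only.
    """
    s3_df = spark.read.parquet(s3_path(bucket, prefix))
    bq_df = (
        spark.read.format("bigquery")
        .option("table", bq_table(project, dataset, table))
        .load()
    )
    return s3_df.join(bq_df, on="user_id", how="inner")
```

In a Databricks or similar notebook, `spark` would be the preconfigured SparkSession, so only the join function body would typically be written by hand.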
Developing data pipelines in Python Spark or SQL to push structured enterprise tool telemetry to our data lake
Fine-grained access control for data exploration
Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Databricks and Privacera)
Evaluating Databricks capabilities to sync Hive, Glue, and Unity Catalogs
Evaluating Privacera capabilities or building new capabilities (AWS Lambda with Python) to sync Intuit access policies with Unity Catalog
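A policy-sync Lambda of the kind described above might look like the sketch below. The policy shape and the grant mapping are entirely hypothetical; a real implementation would read Privacera/Intuit policy objects and apply grants through the Databricks APIs rather than emitting raw SQL strings.

```python
import json


def policy_to_grants(policy: dict) -> list:
    """Translate a simplified access policy into Unity Catalog GRANT statements.

    The policy document shape here is an assumption for illustration:
    {"resource": "catalog.schema.table", "groups": [...], "privileges": [...]}
    """
    resource = policy["resource"]
    grants = []
    for grantee in policy.get("groups", []):
        for priv in policy.get("privileges", []):
            grants.append(f"GRANT {priv} ON TABLE {resource} TO `{grantee}`")
    return grants


def lambda_handler(event, context):
    """AWS Lambda entry point: receives one policy document per invocation."""
    body = event.get("body")
    policy = json.loads(body) if isinstance(body, str) else event
    return {"statusCode": 200, "grants": policy_to_grants(policy)}
```

Keeping the translation logic in a pure function like `policy_to_grants` makes it unit-testable outside the Lambda runtime, which simplifies the evaluation work described above.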
Testing user flows for data analysis, processing, and visualization with Python Spark notebooks on distributed compute or Databricks serverless SQL runtime
Responsibilities
Develop and implement operational capabilities, tools, and processes that enable highly available, scalable, and reliable customer experiences
Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
Work cross-functionally with various Intuit teams, including product management, analysts, data scientists, and data infrastructure
Work with external enterprise support engineers from Databricks, Starburst, and Privacera to resolve integration questions and issues
Experience with Agile development, Scrum, or Extreme Programming methodologies