Overview
Remote
Contract - Independent
Contract - W2
Contract - 6 Month(s)
Skills
Application Programming Interfaces (APIs)
Artificial Intelligence
Python (Programming Language)
Cloud Computing
Continuous Integration
Git
SQL Databases
DevOps
Safety Principles
Security Management
Automation
Infrastructure Management
GitHub
Databricks
Information Technology
Scalability
Forecasting Skills
Reliability
Terraform
Computing Platforms
Data Lakes
Composite Actions
OpenID Connect (OIDC)
Paradigms
PySpark
Capacity Planning
Cost Optimization
Infrastructure as Code (IaC)
Triage
Job Details
Job Title: Databricks SRE and Support Engineer
Work Location: Hopkins, MN / Remote
Contract duration: 6 months
Request ID: 116475-1
Job Details:
- Must-have skills: Expert-level proficiency in Databricks, Python, and SQL
Detailed Job Description:
As a Databricks SRE and Support Engineer, you will support operations for AI Dojo, an AI/ML upskilling program developed by the client on Databricks. This individual contributor (IC) role requires experience working on large-scale AI/ML platforms and ensuring their stability, reliability, scalability, and performance. Proficiency with modern infrastructure and DevOps tools and paradigms, along with proven hands-on Databricks knowledge, is a must.
Primary Responsibilities:
- Continuous support: Provide continuous SRE support to thousands of geographically distributed users of the AI Dojo Databricks platform: respond to tickets, triage issues, and liaise with customers.
- Automation & DevOps: Improve existing Infrastructure as Code (IaC) according to best DevOps practices.
- Systems Monitoring: Develop and maintain monitoring frameworks to respond promptly to outages and other service interruptions.
- Security & Compliance: Collaborate with internal cybersecurity teams to ensure all systems and operations comply with industry standards and are secure against evolving threats.
- Capacity Planning & Cost Optimization: Forecast and manage capacity requirements for the AI/ML training environment, while identifying opportunities to reduce costs without compromising performance.
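The capacity-planning and cost-optimization duty above can be illustrated with a minimal sketch. The function name, the trailing-window size, and the headroom factor are all hypothetical illustrations, not part of the role's actual tooling:

```python
from statistics import mean

def forecast_capacity(daily_dbus, window=3, headroom=0.2):
    """Forecast next-period DBU (Databricks Unit) demand as a trailing
    moving average, padded with a safety headroom for capacity planning.

    Hypothetical example only: real capacity planning would draw on
    platform usage telemetry, not a hard-coded list.
    """
    if len(daily_dbus) < window:
        raise ValueError("need at least `window` observations")
    baseline = mean(daily_dbus[-window:])
    return baseline * (1 + headroom)

# Example: last five days of platform-wide DBU consumption.
usage = [120.0, 130.0, 125.0, 140.0, 135.0]
print(forecast_capacity(usage))  # trailing mean 133.33 * 1.2 = 160.0
```

Comparing the forecast against currently provisioned capacity highlights both under-provisioning risks and over-provisioned clusters that could be scaled down to reduce cost.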
Required Qualifications:
- Bachelor's degree in computer science, information technology, or a related field.
- 6+ years of infrastructure experience: Proven experience working on large-scale, cloud-based, enterprise-level software platforms and a deep understanding of the Databricks environment. In particular:
- Experience building GitHub Actions pipelines, including composite actions, OIDC federation for acquiring cloud provider identities, and the use of environments and deployment controls.
- Experience building Databricks Asset Bundle and Terraform pipelines to manage and deploy Databricks platform and workspace resources.
- Fluency in Python, experience with the Databricks Python SDK for performing workspace operations, and familiarity with PySpark and Delta Lake.
- Deep familiarity with Databricks APIs and with the Databricks CLI for provisioning workspace identities and filesystem resources, and for querying account- and workspace-level Users, Groups, and Service Principals.
- Strong understanding of security best practices and experience ensuring compliance with relevant regulatory frameworks.
- 3+ years of practical experience with Infrastructure-as-Code and CI/CD tools such as Terraform and GitHub Actions.
- 3+ years of experience working in support teams that are geographically distributed.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.