AI/ML Architect with Databricks

Hybrid in Los Angeles, CA, US • Posted 5 hours ago • Updated 5 hours ago
Contract Independent
Contract W2
50% Travel Required
Hybrid
$60 - $80/hr

Job Details

Skills

  • AI/ML Architecture
  • Databricks
  • AWS
  • PySpark
  • Spark SQL
  • Python
  • Delta Lake
  • Unity Catalog
  • MLflow
  • Databricks Workflows
  • Data Lakehouse Architecture
  • ETL/ELT Pipelines
  • Machine Learning Model Development
  • MLOps
  • Data Engineering
  • Distributed Data Processing
  • Time-Series Data Processing
  • Performance Optimization
  • Data Partitioning
  • Caching Strategies
  • Spark Tuning
  • S3
  • AWS IAM
  • AWS Glue Catalog
  • SQL
  • Pandas
  • NumPy
  • Scikit-learn
  • Data Analysis
  • CI/CD
  • GitHub Actions
  • GitLab CI
  • TensorFlow
  • PyTorch

Summary

Hello
Job Title: AI/ML Architect (Databricks & AWS)
Location: Los Angeles, CA (Hybrid)
Hire Type: FTE / Contract to Hire
Rate: $75/hr | Salary: $150K
Job Summary:
We are looking for an experienced AI/ML Architect with strong expertise in Databricks on AWS to design and implement scalable lakehouse data platforms and machine learning solutions. The role involves building high-performance data pipelines, optimizing large-scale datasets, and deploying end-to-end ML solutions using Python, PySpark, and tools from the Databricks ecosystem.
Key Responsibilities:
  • Design Databricks Lakehouse architectures on AWS using Delta Lake, S3, and Unity Catalog.
  • Build and optimize ETL/ELT pipelines using PySpark, SQL, and Databricks Workflows.
  • Develop and deploy ML models using Python, MLflow, and Databricks ML.
  • Handle multi-terabyte structured and time-series datasets in distributed environments.
  • Implement performance optimization strategies including partitioning, caching, Spark tuning, and file size optimization.
  • Integrate AWS services (S3, IAM, Glue) with Databricks-based data and ML pipelines.
  • Translate business requirements into scalable analytics and ML architectures while guiding technical teams.
Required Skills:
  • 10+ years in Data Engineering / ML Engineering / AI Architecture.
  • Strong expertise in Databricks, PySpark, Spark SQL, Delta Lake, MLflow, and Unity Catalog.
  • Proficiency in Python (pandas, NumPy, scikit-learn).
  • Experience working with large-scale distributed data systems and AWS cloud.
  • Strong architecture, problem-solving, and stakeholder communication skills.
Preferred:
  • Experience with MLOps, CI/CD pipelines, and AWS-native services.
  • Familiarity with TensorFlow or PyTorch.

Thanks & Regards,

Nikhil Kumar
+1
Sr. Technical Recruiter
Email id:
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 10114281
  • Position Id: 8904142
