Azure Data Engineer - Los Angeles, CA - Hybrid
Los Angeles, CA, US • Posted 11 hours ago • Updated 42 minutes ago

Tror
Job Details
Skills
- ETL
- SQL
- Python
- Azure
- Databricks
- PySpark
- Azure Data Engineer
- Data Warehouse
- Delta Lake
- Spark
Summary
Job Title: Azure Data Engineer
Location: Los Angeles, CA - Hybrid
Duration: Long Term Contract
Experience: 12 - 15 Years
Job Description:
We are seeking a highly skilled Azure Data Engineer with strong expertise in SQL, Python, data warehousing, and cloud ETL tools to join our data team. The ideal candidate will design, implement, and optimize large-scale data pipelines, ensuring scalability, reliability, and performance. This role involves working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.
Technical Skills:
- Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse architecture, table triggers, Delta Live Tables, Databricks Runtime, etc.)
- Proficiency in Azure Cloud Services.
- Solid understanding of Spark and PySpark for big-data processing.
- Experience with relational databases.
- Knowledge of Databricks Asset Bundles and GitLab.
Preferred Experience:
- Familiarity with Databricks Runtimes and advanced configurations.
- Knowledge of streaming frameworks like Spark Streaming.
- Experience in developing real-time data solutions.
Certifications:
Azure Data Engineer Associate or Databricks Certified Data Engineer Associate (optional).
Key Responsibilities:
Data Pipeline Development:
- Build and maintain scalable ETL/ELT pipelines using Databricks.
- Leverage PySpark/Spark and SQL to transform and process large datasets.
- Integrate data from multiple sources including Azure Blob Storage, ADLS and other relational/non-relational systems.
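The bullets above describe a standard extract-transform-load flow. As a rough illustration of the kind of transform step a candidate would write (sketched in plain Python rather than PySpark for brevity; the field names "amount" and "region" are hypothetical), it might look like:

```python
# Minimal sketch of an ETL transform step: drop malformed records and
# derive a new field. Field names here are hypothetical examples.

def transform(records):
    """Keep records with a positive amount and tag them by region."""
    out = []
    for rec in records:
        amount = rec.get("amount")
        if amount is None or amount <= 0:
            continue  # drop malformed or non-positive rows
        out.append({**rec, "is_west": rec.get("region") == "WEST"})
    return out

raw = [
    {"amount": 120.0, "region": "WEST"},
    {"amount": -5.0, "region": "EAST"},  # dropped: non-positive amount
    {"region": "EAST"},                  # dropped: missing amount
]
clean = transform(raw)
```

In PySpark the same logic would typically be expressed declaratively, e.g. with `DataFrame.filter` and `withColumn`, so Spark can distribute it across the cluster.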
Collaboration & Analysis:
- Work closely with multiple teams to prepare data for dashboards and BI tools.
- Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.
Performance & Optimization:
- Optimize Databricks workloads for cost efficiency and performance.
- Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
Governance & Security:
- Implement and manage data security, access controls and governance standards using Unity Catalog.
- Ensure compliance with organizational and regulatory data policies.
Deployment:
- Leverage Databricks Asset Bundles for seamless deployment of Databricks jobs, notebooks, and configurations across environments.
- Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.
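Databricks Asset Bundles are driven by a `databricks.yml` file at the project root, which declares the bundle, its deployment targets, and the jobs and notebooks to deploy. A minimal sketch (the bundle name, workspace host, and notebook path below are all hypothetical placeholders) might look like:

```yaml
# databricks.yml — hypothetical Asset Bundle sketch
bundle:
  name: example_etl  # placeholder bundle name

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://adb-0000000000000000.0.azuredatabricks.net  # placeholder

resources:
  jobs:
    nightly_etl:
      name: nightly_etl
      tasks:
        - task_key: transform
          notebook_task:
            notebook_path: ./notebooks/transform
```

Running `databricks bundle deploy -t dev` would then validate this configuration and push the job and notebook to the dev target.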
- Dice Id: 91135853
- Position Id: 2026-314
Company Info
TROR is an artificial intelligence consultancy specializing in developing powerful, customized AI solutions for business. With top AI experts, we take pride in providing the best cutting-edge AI consultancy. Our years of experience across various industries help us develop and implement bespoke AI solutions for businesses. Our on-demand AI products have helped over 100 companies drive transformational results.
The solutions we bring to your table meet the highest industry standards of quality, effectively and efficiently resolving your issues and optimizing the way you want to move forward in the market. Through our customer-centric approach, we ensure that we are always there for our valued customers, offering them satisfactory solutions for guaranteed results.

