Overview
On Site
Depends on Experience
Accepts corp-to-corp applications
Contract - W2
Skills
Databricks
JSON
PySpark
ETL
Azure
Job Details
Job Description:
- 8+ years of experience with Azure Databricks technologies.
- Good understanding of ETL, BI, and DW technologies.
- Hands-on experience with JSON or XML files as sources.
- Experience with Auto Loader for incremental loading and with explode options for flattening nested data.
- Experience loading Delta Live Tables.
- Experience with PySpark or Spark using DataFrames.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Collaborate with data scientists and analysts to understand data needs and build effective data workflows.
- Create and maintain data storage solutions, including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Use Azure Databricks to create and maintain ETL (Extract, Transform, Load) operations.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and reliability of the data.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Monitor and resolve data pipeline problems to guarantee data consistency and availability.
- Good communication skills, both oral and written.
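On the "explode options for flattening data" requirement above: PySpark's explode() emits one output row per element of an array column, turning nested JSON into flat rows. Since a Databricks cluster cannot be assumed here, this is a plain-Python sketch of that flattening; the order/items records are hypothetical sample data, not from the posting.

```python
# Plain-Python sketch of what PySpark's explode() does to an array column:
# each element of the nested "items" array becomes its own flat row.
# The order/items records below are hypothetical sample data.

def explode_items(records):
    """Flatten nested line-item arrays into one flat dict per item."""
    flat = []
    for rec in records:
        for item in rec["items"]:  # one output row per array element
            flat.append({"order_id": rec["order_id"],
                         "sku": item["sku"],
                         "qty": item["qty"]})
    return flat

orders = [
    {"order_id": 1, "items": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]},
    {"order_id": 2, "items": [{"sku": "C", "qty": 5}]},
]

rows = explode_items(orders)
# rows holds 3 flat records, one per nested line item
```

In PySpark itself the equivalent is df.select("order_id", explode("items").alias("item")) followed by selecting the struct fields (item.sku, item.qty).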
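On the Auto Loader requirement: the core idea is incremental ingestion, i.e. remembering which source files were already processed (a checkpoint) and picking up only newly arrived ones on each run. A minimal pure-Python sketch of that idea, under the assumption that file discovery is just a directory listing; the file names and the process() callback are hypothetical.

```python
# Sketch of the incremental-loading idea behind Databricks Auto Loader:
# keep a checkpoint of already-ingested files and process only new arrivals.
# File names and the process() callback are hypothetical.

def incremental_load(listing, checkpoint, process):
    """Process only files not yet recorded in the checkpoint set."""
    new_files = [f for f in listing if f not in checkpoint]
    for f in new_files:
        process(f)
        checkpoint.add(f)  # record the file only after it was processed
    return new_files

checkpoint = set()
processed = []

# First run: both files are new and get ingested.
incremental_load(["day1.json", "day2.json"], checkpoint, processed.append)
# Second run: only the newly arrived file is picked up.
incremental_load(["day1.json", "day2.json", "day3.json"], checkpoint, processed.append)
# processed == ["day1.json", "day2.json", "day3.json"]
```

On Databricks itself the managed equivalent is spark.readStream.format("cloudFiles") with a checkpoint location; Auto Loader tracks discovered files for you rather than requiring hand-rolled bookkeeping like the above.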
Thanks
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.