Overview
On Site
Depends on Experience
Contract - W2
Contract - Independent
Contract - 6 Month(s)
Unable to Provide Sponsorship
Skills
Databricks
ETL
ELT
Unity Catalog
lakehouse
cloud
Azure
AWS
GCP
data engineering
Python
PySpark
SQL
Scala
data security
compliance
governance
Power BI
Tableau
automation
DevOps
Terraform
GitHub
Jenkins
Data Engineer
PowerBI
Big Data
Spark
Kafka
Job Details
Job Description:
- Design, develop, and maintain robust end-to-end data pipelines for various data sources (e.g., databases, APIs, cloud storage).
- Extract, transform, and load (ETL) data using appropriate tools and technologies (e.g., SQL, Python, Databricks).
- Optimize data pipelines for performance, scalability, and reliability.
- Develop interactive and insightful Power BI dashboards and reports to visualize key business metrics and trends.
- Build and optimize Spark-based workloads for large-scale data processing.
Please look for candidates who have excellent abilities in the following areas, along with very good communication skills:
- SQL
- Python
- Databricks
- Power BI
- Big Data (specifically Spark and Kafka)
We will run a coding test on SQL and Python before taking the candidate to the client round.