Job Details
Databricks Data Engineer with PySpark & Fivetran
Location: Remote
Experience: 12+ years
Top Skills - Must Haves
Python
PySpark
Databricks
Fivetran
ETL
Top Skills' Details
1. Senior-level data engineer with end-to-end exposure to Databricks and a strong foundational understanding of the platform
2. Strong Python and PySpark experience
3. Strong written and oral communication
4. Experience with Fivetran for data ingestion preferred
Job Description
The client is growing their data engineering team to support a large backlog of work as well as a migration within Databricks. They are seeking 2-3 data engineers to help expand their Databricks usage. Day to day, these engineers will split their time between project work and occasional backlog items. The team will build pipelines and data sets (project-based work driven by requirements); other work may include ad hoc requests such as adding tables and fields or creating ingestion patterns. These engineers may also be involved in the Teradata migration. Most of the work will involve moving data into landing zones, making data available, and, in some cases, transforming it.
Additional Skills & Qualifications
Required skills:
- Fivetran highly preferred, covering both SaaS/cloud-based sources such as Salesforce and the HVR version for connecting to on-premises data sources
- Databricks (must be Databricks, not Snowflake), specifically an understanding of the medallion architecture and an overall foundational understanding of the platform
- Securing data with Unity Catalog and making data available in Databricks (see the sketch after this list)
- CI/CD in Databricks
- Solid Python and PySpark skills (must have)
- Agile
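
For context on the Databricks skills above, here is a minimal sketch of a bronze-to-silver medallion step secured with Unity Catalog, written in PySpark. All catalog, schema, table, path, and group names (main, bronze, silver, accounts, data-analysts) are hypothetical, and `spark` is the ambient session provided in a Databricks notebook, not something this role's codebase defines.

```python
from pyspark.sql import functions as F

# Bronze: ingest raw files (e.g., landed by Fivetran) into a Delta table as-is.
raw = (spark.read.format("json")
       .load("/Volumes/main/landing/salesforce_accounts"))  # hypothetical landing path
raw.write.format("delta").mode("append").saveAsTable("main.bronze.accounts")

# Silver: deduplicate and conform the bronze data, stamping ingestion time.
silver = (spark.table("main.bronze.accounts")
          .dropDuplicates(["Id"])
          .withColumn("ingested_at", F.current_timestamp()))
silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.accounts")

# Unity Catalog: make the silver table available to a (hypothetical) analyst group.
spark.sql("GRANT SELECT ON TABLE main.silver.accounts TO `data-analysts`")
```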