Overview
Hybrid
Depends on Experience
Contract - W2
Contract - 6 Month(s)
No Travel Required
Skills
Databricks
SQL Server
PySpark
Python
Agile
DevOps
Microsoft SSIS
SQL
Extract, Transform, Load (ETL)
Job Details
JOB: Sr. ETL Developer
LOCATION: Irving, TX 75062 (onsite Tues.-Thurs. required)
TYPE: 6+ month contract (long-term; could extend to 2+ years, with possible conversion to permanent in the future, though timing is not guaranteed)
INTERVIEWS: SQL screen with AM, then 1) a 30-minute Teams technical interview and 2) a 1-hour onsite interview including SQL coding and a Databricks technical discussion
REQUIREMENTS:
- 7+ years' experience in database engineering and ETL design, development, and maintenance
- Expertise in Databricks and SQL Server database development and ETL:
  - Knowledge of Databricks Delta Lake architecture, Unity Catalog, Notebooks, and Workflows
  - SQL in Databricks Notebooks and SSIS ETL package development
- Intermediate skillset in ADF for data pipelines required
- Advanced SQL skills required: complex queries, stored procedures, advanced window functions, CTEs, etc.
- PySpark or Python development preferred (not strictly required, but one of the two is strongly preferred)
- Proficiency in query tuning and optimization also required
- Experience in an Agile DataOps environment; familiarity with CI/CD pipelines for data engineering (this team uses Pulumi rather than Azure DevOps, which can be easily taught)
- Good written/verbal communication skills (the role is not heavily business-facing, but this is a small company and the developer will take part in meetings)
- Healthcare industry experience a bonus, not required; familiarity with the terminology and typical data files is preferred, but the role mostly deals with pharmaceutical transactional data (not much HIPAA data)
- Bachelor of Science in Computer Science, MIS, Analytics, or equivalent work experience
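As a hedged illustration of the "CTE plus window function" SQL skill level named above, the sketch below runs a representative query through Python's built-in sqlite3 module (SQLite 3.25+ supports window functions; the actual screen would use SQL Server or Databricks SQL). The table and data are hypothetical, loosely modeled on the pharmaceutical purchasing data the role describes.

```python
import sqlite3

# Hypothetical purchasing table (illustrative only; not from the job posting).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchases (facility TEXT, product TEXT, amount REAL);
INSERT INTO purchases VALUES
  ('Clinic A', 'Drug X', 120.0),
  ('Clinic A', 'Drug Y', 300.0),
  ('Clinic B', 'Drug X', 90.0),
  ('Clinic B', 'Drug Z', 450.0);
""")

# A CTE aggregates spend per facility/product; a window function then
# ranks products within each facility by total spend.
query = """
WITH totals AS (
    SELECT facility, product, SUM(amount) AS total
    FROM purchases
    GROUP BY facility, product
)
SELECT facility, product, total,
       RANK() OVER (PARTITION BY facility ORDER BY total DESC) AS spend_rank
FROM totals
ORDER BY facility, spend_rank;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)  # e.g. ('Clinic A', 'Drug Y', 300.0, 1)
```

The same CTE/window-function pattern carries over to SQL Server and Databricks SQL with only minor dialect differences.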
ENVIRONMENT:
- New Business Analysts are on the team, so this Developer will not be heavily business-facing (documenting project requirements, interacting with executives, etc.)
- Mostly dealing with pharmaceutical purchasing/transactional data from healthcare facilities
- Newer Databricks implementation currently running batch ETL, with plans for real-time data processing and eventually ML pipelines
- Databricks Data Lake architecture, with some legacy SQL Servers feeding data in via SSIS ETL packages and Azure Data Factory pipelines; new data pipelines are now being built in Databricks
- Will maintain legacy SSIS packages, but all new pipelines are built in Databricks using PySpark or SQL, plus some Azure Data Factory pipelines with Databricks Notebooks