Overview
On Site
140k - 170k
Full Time
Skills
Build Tools
ELT
Data Processing
Data Analysis
Educate
Artificial Intelligence
Business Analytics
Streaming
Extract
Transform
Load
Apache Spark
Python
Scala
Snowflake
Data Engineering
Machine Learning (ML)
Databricks
Workflow
Apache Airflow
Unstructured Data
Data Science
Orchestration
Docker
Kubernetes
Shell Scripting
Bash
Unix
Microsoft Windows
Shell
Automated Testing
Jenkins
GitHub
GitLab
Continuous Integration
Continuous Delivery
Amazon Web Services
Amazon S3
Amazon DynamoDB
Data Management
Data Quality
Life Insurance
Finance
SAP BASIS
Job Details
We are looking for a Senior Data Engineer to join one of the 100 Best Companies to Work For, with offices in the Chicago suburbs as well as in the city. The role sits within their Data Engineering 'Center of Excellence' and will give you the opportunity to impact and improve the way Data Engineering teams across the organization work.
As a Senior Data Engineer, you will use Python and Spark to build tools, frameworks, and services that teams across the organization rely on to deliver better ETL/ELT services. You will also work within a modern AWS and Databricks architecture, following and implementing current best practices.
Required Skills & Experience
- As a Senior contributor, your primary responsibility will be to implement highly efficient, reusable, and scalable data processing systems and pipelines in Databricks and Snowflake.
- Design and implement technical solutions and processes to ensure data reliability and accuracy.
- Develop data models and mappings and build new data assets required by users. Perform exploratory data analysis on existing products and datasets.
- Educate data engineering teams in adopting new data patterns and tools.
- Function as the subject matter expert (SME) in this area when engaging with our AI, Platform, and Business Analytics teams to build useful pipelines and data assets.
- Experience in batch and streaming ETL using Spark, Python, Scala, Snowflake, or Databricks for Data Engineering or Machine Learning workloads.
- Experience orchestrating and implementing pipelines with workflow tools like Databricks Workflows, Apache Airflow, or Luigi.
- Experience preparing structured and unstructured data for data science models.
- Experience with containerization and orchestration technologies (Docker, Kubernetes); experience with shell scripting in Bash, Unix, or Windows shell is preferred.
- Experience implementing CI/CD with automated testing in Jenkins, GitHub Actions, or GitLab CI/CD.
- Familiarity with AWS services including, but not limited to, Glue, Athena, Lambda, S3, and DynamoDB.
- Demonstrated experience implementing the data management life cycle, using data quality functions such as standardization, transformation, rationalization, linking, and matching.
Benefits
- Performance bonus eligible
- Medical, dental, vision, and life insurance plans
- Paid time off (PTO) and 6 company holidays per year
- Automatic 6% 401(k) company contribution each pay period
- Employee discounts, parental leave, 3:1 match on donations and tuition reimbursement
- A comprehensive set of emotional, financial, physical and social wellbeing programs
Applicants must be currently authorized to work in the US on a full-time basis now and in the future.
#LI-EM1
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.