Principal Duties and Responsibilities
Owns the design, development, and maintenance of scalable data models and leads projects to develop ongoing metrics, reports, analyses, dashboards, etc. to support analytical and business needs
Interfaces with the Data Platform team to extract, transform, and load data from a wide variety of data sources using AWS services and internal tools
Builds, optimizes, and delivers high-quality data sets to support administrative and academic data needs
Leads continuous improvement projects for ongoing reporting and analysis processes, automating or simplifying self-service support for customers
Translates business problem statements into data model requirements
Uses analytical and statistical rigor to answer business questions and drive business decisions in areas including, but not limited to, Advancement, HR, and Finance
Writes queries and outputs efficiently, drawing on in-depth knowledge of the data available in the area of expertise; pulls the needed data with standard query syntax, periodically identifies more advanced methods of query optimization, and converts raw data to make it analysis-ready
Recognizes, adopts and documents best practices in the development and support of data solutions
Troubleshoots operational quality issues pertaining to data processing and orchestration code written in SQL/Python
Reviews and audits existing jobs and queries
Recommends improvements to back-end sources for increased accuracy and simplicity
Other duties as assigned
Qualifications
Required
At least 10 years of data engineering experience, including programming, data modeling, warehousing, and building data pipelines
Experience with modern cloud data warehouses such as Snowflake and Redshift
Experience with data orchestration tools such as Airflow or Dagster
Experience with AWS technologies such as S3, AWS Glue, Lambda, and IAM roles and permissions
Experience in writing complex, highly-optimized SQL queries across large data sets
Experience in a programming or scripting language used for data processing (e.g., Python, PySpark, Java, Scala)
Experience developing, monitoring, and maintaining Extract Transform Load (ETL) and Extract Load Transform (ELT) data pipelines; experience ensuring data integrity
Preferred
Project management experience
Experience providing technical leadership and educating other engineers on data engineering best practices
Familiarity with BI tools and data science models