AWS Data Engineer (Hybrid)

Overview

Hybrid
Depends on Experience
Full Time
No Travel Required

Skills

Business Intelligence
BI
Databricks
SQL Queries
data pipelines
schema modeling
data processing systems
Microsoft Fabric
Snowflake
deploying data schemas
SAP
Terraform
data modeling
Python
data migration
ETL
AWS
API Integrations

Job Details

Opportunity to work for an industry leader known for its outstanding reputation and dynamic approach. Committed to excellence and innovation, and driven by the dedication and expertise of their talented employees, they were voted best in their industry in North America for 2023 and 2024.

Support the planning, design, and implementation of data structures in an AWS environment, incorporating internal and external data sources into a robust, scalable, and comprehensive data model that supports BI and analytics needs across the enterprise.

RESPONSIBILITIES

  • Collaborate with cross-functional teams to understand and define BI needs and translate them into data modeling solutions.
  • Develop and maintain scalable batch and real-time data pipelines, data schema designs, and dimensional data models in Databricks and AWS for all system data sources, API integrations, and data ingestion files from external sources.
  • Create data models that will support comprehensive data insights, BI tools, and other data science initiatives.
  • Design and implement data integration and data quality frameworks.
  • Create data models and ETL procedures with traceability, data lineage, and source control.
  • Ensure data pipelines are stable and that any data service interruptions are appropriately addressed.
  • Implement data monitoring best practices with trigger-based alerts for data processing KPIs and anomalies.
  • Decommission legacy pipelines after migration and archival of historical data.
  • Investigate and remediate data problems, performing and documenting thorough and complete root cause analyses.
  • Continually seek to optimize performance through database indexing, query optimization, stored procedures, etc.
  • Create and manage documentation for all implementations and systems maintenance.

BACKGROUND

  • Solid experience designing and managing data pipelines, schema modeling, and data processing systems.
  • Proficient in Python, with a track record of solving real-world data challenges.
  • Advanced SQL skills, including experience with database design, query optimization, and stored procedures.
  • Experience with Databricks (or similar tools like Microsoft Fabric, Snowflake, etc.) to drive scalable data solutions.
  • Experience developing and deploying data schemas that enable advanced reporting and business intelligence analytics.
  • Experience with SAP is a plus.
  • Experience with Terraform or other infrastructure-as-code tools is a plus.

RedRiver offers benefits including Major Medical, Dental, Vision, LTD, and 401(k). RedRiver Systems is an Equal Opportunity Employer.
