Senior Data Engineer - AWS / Redshift

Data engineering, Framework, Apache Spark, Looker, Statistics, Software, Streaming, Apache Kafka, Metadata management, Apache Hadoop, Data management, Apache NiFi, Design thinking, Python, Machine learning, Tableau, Data science, Architecture, Cloud, Engineering, Data warehouse, Data integration, Data visualization, MapReduce, Real-time, Database architecture, Amazon Web Services, Batch processing, ETL, Google Cloud, Automation, ProVision, schema, airflow, data pipelines, data plumbing
Full Time
Depends on Experience
Work from home not available
Travel not required

Job Description

2Bridge has been retained for a direct hire search to find 11 Data Engineers for our Detroit, Michigan-based e-commerce client. They are seeking multiple Senior Data Engineers to join their growing Data Team!

As a Senior Data Engineer, you'll own a problem from end to end and will be empowered to take the lead on technology and implementation while joining a rare hyper-growth company.

Our client offers a comprehensive package including base, bonus, paid relocation to Detroit, unlimited PTO, stocked kitchens, a casual and dynamic work environment, and the ability to work in a fast-growing, stable technology company.

Responsibilities:

  • Design and build mission-critical data pipelines with a highly scalable distributed architecture, including data ingestion (streaming, events, and batch), data integration, and data curation
  • Build and support reusable framework(s) to ingest, integrate and provision data
  • Automate end-to-end data pipelines with metadata, data quality checks, and auditing
  • Build and support a big data platform on the cloud
  • Define and implement automation of jobs and testing
  • Optimize the data pipeline to support ML workloads and use cases
  • Support mission-critical applications and near real-time data needs from the data platform
  • Capture and publish metadata and new data to subscribed users
  • Work collaboratively with product managers, data scientists, and business partners, and actively participate in design thinking sessions
  • Participate in design and code reviews
  • Motivate, coach, and serve as a role model and mentor for other development team members who leverage the platform

Qualifications:

  • BS/BA degree in Computer Science, Physics, Mathematics, Statistics, or another engineering discipline
  • 7+ years of experience in data warehouse/data lake technical architecture
  • Minimum 3 years of experience with Big Data and Big Data tools (Kafka, MapReduce, Spark or Python, Hadoop)
  • 3+ years' experience engineering in cloud environments (Google Cloud, AWS, Azure)
  • Experience with Database Architecture & Schema design
  • Strong familiarity with batch processing and workflow tools such as Airflow and NiFi
  • Ability to work independently with business partners and management to understand their needs and exceed expectations in delivering tools/solutions
  • Strong interpersonal, verbal and written communication skills with the ability to present complex technical & analytical concepts to an executive audience
  • A strong business mindset with customer obsession; ability to collaborate with business partners to identify needs and opportunities for improved data management and delivery
  • Experience with data visualization tools such as Tableau, Looker, or Power BI
  • Experience with Hadoop implementation

Dice Id: 90827581
Position Id: 6340443
Originally Posted: 2 months ago