Skills
Engineer, Access, MarTech, API, Scala, Python, AWS
Job Description
Sr. Data Engineer (Remote) in Fort Mill, South Carolina
Posted 11/14/22
THE TEAM YOU WILL BE JOINING:
- A proven joint venture backed by two industry leaders
- The largest health & wellbeing platform in the US, helping nearly 100 million people a month
- A company with a consistent commitment to continuous improvement and professional development
WHAT THEY OFFER YOU:
- 100% Remote opportunity
- Chance to join a growing team and make a hands-on impact across the organization
- Access to industry-leading MarTech and tech platforms and strategies; partner with some of the best leaders in digital media and technology
- Highly competitive compensation and total rewards package, including base pay, bonus, health insurance, 401(k), HSA, family leave, and adoption assistance
WHY THIS ROLE IS IMPORTANT:
- Provide technology ownership of data solutions for the projects the team is tasked with.
- Work with a cross-functional team of business analysts, architects, engineers, data analysts, and data scientists to define both business and technical requirements.
- Design and build data pipelines from various data sources to a target data warehouse, applying batch load strategies on leading cloud technologies (see the sketch after this list).
- Conceptualize and build infrastructure that allows data to be accessed and analyzed.
- Document database designs, including data models, metadata, ETL specifications, and process flows, for business data integration projects.
- Perform periodic code reviews and execute test plans to ensure data quality and integrity.
- Provide input into strategy, helping drive the team forward with delivery of value and technical acumen.
- Execute proofs of concept, where appropriate, to help improve our technical processes.
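As an illustration of the kind of batch pipeline described above, here is a minimal PySpark sketch: extract a daily partition of raw events from S3, apply basic transformations, and stage the result for loading into the warehouse. The bucket paths, column names, and job name are hypothetical placeholders, not details from this posting.

```python
# A minimal sketch of a batch ETL pipeline: read from S3, transform, stage for
# the warehouse. All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-batch-load").getOrCreate()

# Extract: read one day's partition of raw JSON events (hypothetical path).
orders = spark.read.json("s3://example-raw-bucket/orders/dt=2022-11-14/")

# Transform: basic cleansing plus a derived column.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("status").isNotNull())
    .withColumn("order_total_usd", F.col("quantity") * F.col("unit_price"))
)

# Load: stage curated Parquet to S3; in practice a Redshift COPY (or a
# Spark-Redshift connector) would complete the load into the warehouse.
cleaned.write.mode("overwrite").parquet(
    "s3://example-curated-bucket/orders/dt=2022-11-14/"
)

spark.stop()
```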
THE BACKGROUND THAT FITS:
- 4+ years of experience in the big data space.
- 3+ years of experience working on Spark (RDDs / DataFrames / Dataset API) using Scala/Python to build and maintain complex ETL pipelines.
- 2+ years of experience working on AWS (Kinesis / Kafka / S3 / Redshift).
- Experience in translating requirements into technical data solutions on a large scale.
- Able to research and troubleshoot potential issues presented by stakeholders within the data ecosystem.
- Experience with GitHub and CI/CD processes.
- Experience with compute technologies such as EMR and Databricks.
- Experience working with job orchestration tools (e.g., Airflow / AWS Step Functions); a minimal DAG sketch follows this list.
- Strong analytical and interpersonal skills.
- Passionate, highly motivated, and able to learn quickly.
- Able to work through ambiguity in a fast-paced, dynamically changing business environment.
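For context on the orchestration tooling named above, here is a minimal Airflow 2.x DAG sketch that schedules the batch job from the earlier example. The DAG id, schedule, and spark-submit path are hypothetical placeholders.

```python
# A minimal sketch of job orchestration with Airflow (assuming Airflow 2.x).
# DAG id, schedule, and the submit command are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_batch_load",
    start_date=datetime(2022, 11, 14),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the Spark batch job sketched earlier; on a managed platform this
    # might instead be an EMR step or a Databricks job-run operator.
    load_orders = BashOperator(
        task_id="load_orders",
        bash_command="spark-submit /opt/jobs/daily_orders_batch_load.py",
    )
```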
#LI-HF1