This Jobot Job is hosted by: Tori Bender
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $90,000 - $115,000 per year

A bit about us:
Our client is looking for a talented and enthusiastic Data Engineer to join their team. They are a global leader in cellular health, dedicated to promoting a sustainable lifestyle by delivering high-quality health products through a direct-selling distribution model. They offer first-to-market products that affect health at the genetic level.

Why join us?
They offer a competitive wage and excellent benefits package including 401(k), medical, dental, vision, life, disability, supplemental insurance, paid time off, and free company products.
Perks include working in a brand new, very cool office space (complete with an on-site gym / fitness center and company-provided snacks), coupled with great opportunities for personal and professional growth.

Job Details
The Data Engineer is responsible for implementing and documenting data sources and data lineage, as well as establishing methods and procedures for tracking data quality, completeness, redundancy, and improvement.
- Be a key player in the strategy and implementation of our key initiatives.
- Create, maintain, and update pipelines to move data from our source systems to our reporting data warehouse.
- Work as part of an agile team to architect, implement, and document data sources and data lineage.
- Implement best practices to optimize cloud infrastructure for pipelines and analytics.
- Help establish methods and procedures for tracking data quality, completeness, redundancy, and improvement.
- Collaborate with the team to develop and implement key components and define testing criteria that guarantee the fidelity and performance of the data architecture.
- Help develop and promote data management methodologies and standards.
Qualifications:

- 4-6 years of experience working with Python (key qualification)
- 5-10 years of experience with Microsoft SQL Server and/or Azure SQL Database
- Strong experience with GitHub
- Experience working with Apache Spark or Databricks using Scala, Java, or PySpark
- Experience with writing queries for Databricks Delta Lake
- The working knowledge and curiosity needed to debug and analyze complicated systems
- Experience ingesting data from APIs (especially in Python)
- Experience with business requirements analysis, entity relationship planning, database design, reporting structures, and so on
- Hands-on knowledge of repository tools, data modeling tools, data mapping tools, and data profiling tools
- Strong understanding of relational data structures, theories, principles, and practices
- Understanding of data warehouse architecture preferred but not required
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.