Overview
Skills
Job Details
Company Background
The Luxor team has built a range of solutions for Bitcoin mining and compute power, including a globally distributed mining pool, firmware, an ASIC brokerage desk, a data website, and a Hashrate derivatives market. Our product suite is growing rapidly, and as such we are looking to expand our product team. We fundamentally believe compute is going to be the world's most important commodity, and we are building the infrastructure to support this new market.
We are looking for a Data Engineer to join our collaborative, fast-moving team to work on one of the most rewarding projects in the mining and compute industry.
Basic Requirements
Experience building both streaming & batch data pipelines/ETL and familiarity with design principles.
Expertise in Python, PostgreSQL, and PL/pgSQL, including development and administration of large databases, with a focus on performance and production support in cloud-native deployments.
Experience with scalability, multi-region replication, and failover solutions.
Experience with data warehouse technologies (e.g., Trino, ClickHouse, Airflow).
Bachelor's degree (or its foreign degree equivalent) in Computer Science, Engineering, or a related technical discipline, or equivalent experience.
Deep understanding of programming and experience with at least one programming language.
English language proficiency.
Preferred Requirements
Knowledge of Kubernetes and Docker
4+ years of working experience in a relevant data field.
Knowledge of blockchain technology / mining pool industry.
Experience with agile development methodology.
Experience delivering and owning web-scale data systems in production.
Experience working with Kafka, preferably Redpanda and Redpanda Connect.
The Ideal Candidate:
Passionate about cryptocurrency and public-blockchain technologies.
Has an interest in creating an entirely new market with Hashrate (compute power) as a commodity.
Has an interest in thinking about and evolving the architecture of our software to keep it robust and maintainable.
Enjoys writing code and pushing the boundaries of what has been done so far.
Brings fun to the team but can also go down the rabbit hole to ship quality code on schedule.
Responsibilities
Build scalable and reliable data pipelines that provide accurate data feeds from internal and external systems.
Manage scalable, performant, cloud-deployed production relational and non-relational databases.
Collaborate on architecture definitions, always thinking of solutions that are scalable and secure.
Drive data systems to be as near real-time as possible.
Design, document, automate and execute test plans to ensure the quality of the datasets.
Participate in the process of generating and analyzing features.