Overview
On Site
USD 160,000.00 - 240,000.00 per year
Full Time
Skills
Real-time
High Availability
Distribution
Semantics
Apache Avro
Apache Mesos
MySQL
Extract, Transform, Load (ETL)
Finance
Debugging
Codecs
Collaboration
Cloud Computing
Orchestration
Java
Scala
Python
Testing
Computer Science
Mathematics
Open Source
Streaming
Performance Tuning
Redis
Caching
Analytics
Kubernetes
Apache Kafka
Apache Spark
Mentorship
Training
Job Details
Description & Requirements
The DataHub Engineering team is building a distributed platform to host, catalog, discover, and deliver financial datasets across Bloomberg. This platform powers batch analytics, real-time stream processing, and low-latency, high-availability data distribution - ensuring that high-quality data, the lifeblood of financial markets, is always accessible.
You will join the team that introduced the abstraction of a "dataset" and invented a schema language to formally define all data at Bloomberg, complete with schema evolution, versioning, and true point-in-time semantics. We were the first to introduce Kafka, Avro, a company-wide Dataset Schema Registry, Mesos, clustered MySQL, Vitess, and Spark-based ETL at Bloomberg. We are designing a new data-intensive platform that serves as the hub for financial datasets.
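To make the schema-evolution guarantee concrete, here is a minimal sketch in Python using the open-source fastavro library; the "Price" record and its fields are illustrative assumptions, not Bloomberg's actual schema language:

    # Avro schema evolution: a v2 reader schema adds a "currency" field with a
    # default, so records written under the v1 schema still resolve cleanly.
    import io
    from fastavro import parse_schema, schemaless_writer, schemaless_reader

    v1 = parse_schema({
        "type": "record", "name": "Price",
        "fields": [
            {"name": "ticker", "type": "string"},
            {"name": "px", "type": "double"},
        ],
    })
    v2 = parse_schema({
        "type": "record", "name": "Price",
        "fields": [
            {"name": "ticker", "type": "string"},
            {"name": "px", "type": "double"},
            {"name": "currency", "type": "string", "default": "USD"},  # new in v2
        ],
    })

    buf = io.BytesIO()
    schemaless_writer(buf, v1, {"ticker": "IBM", "px": 181.5})  # written with v1
    buf.seek(0)
    record = schemaless_reader(buf, v1, v2)  # read old bytes with the v2 schema
    print(record)  # {'ticker': 'IBM', 'px': 181.5, 'currency': 'USD'}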
You'll get to:
- Write software for Kafka-based data pipes for the company-wide Data Mesh (a minimal pipe is sketched after this list)
- Debug and diagnose intricate issues, including functional and performance regressions, in Apache Kafka, Apache Spark, data codecs, low-latency services, and streaming pipelines
- Collaborate and share extensively with fellow engineers
- Contribute to open source technologies like Spark or Iceberg
- Apply your expertise in building lakehouse architectures for large-scale data platforms
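For the first bullet, a minimal sketch of such a data pipe in Python, assuming the confluent-kafka client, JSON payloads, and hypothetical topic names (the real pipes' Avro codecs and schema-registry integration are omitted):

    # Consume from an input topic, transform each record, and produce to an
    # output topic. Topics, group id, and the transform are illustrative.
    import json
    from confluent_kafka import Consumer, Producer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "datapipe-sketch",
        "auto.offset.reset": "earliest",
    })
    producer = Producer({"bootstrap.servers": "localhost:9092"})
    consumer.subscribe(["raw-prices"])  # hypothetical input topic

    def transform(record: dict) -> dict:
        record["pipeline_version"] = 1  # illustrative enrichment step
        return record

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None or msg.error():
                continue
            out = transform(json.loads(msg.value()))
            producer.produce("clean-prices", json.dumps(out).encode("utf-8"))
            producer.poll(0)  # serve delivery callbacks
    finally:
        producer.flush()
        consumer.close()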
Our tech stack:
- Languages: Java, Python, Scala
- Frameworks/Tools: Spark, Kafka, Kubernetes
- Cloud-Native Stack: container orchestration, service mesh, distributed tracing
You'll need to have:
- 4+ years of professional experience programming in Java, Scala, or Python
- Expertise in Apache Kafka, Spark, Redis, and distributed systems
- Experience building and testing scalable and reliable data infrastructure
- A degree in Computer Science, Engineering, Mathematics, or a similar field of study, or equivalent work experience
We'd love to see:
- Any open-source contributions you have made to Kafka, Spark, streaming projects, etc.
- Experience with performance-optimization techniques in Iceberg, and with using Redis to cache expensive query results to improve application performance (see the sketch after this list)
- Experience with DuckDB for analytics on smaller datasets on Kubernetes (also sketched after this list)
- Production experience with Kubernetes (Helm, Operators, CRDs)
- Familiarity with Kafka, Spark, or lakehouse architectures
- A passion for reliability, scale, and mentoring others
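For the Redis bullet above, a minimal cache-aside sketch in Python using the redis-py client; expensive_query and the key scheme are hypothetical stand-ins:

    # Cache an expensive query result in Redis with a TTL; on a hit, skip the
    # slow path entirely.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def expensive_query(dataset: str) -> dict:
        # Placeholder for a slow scan or aggregation over the dataset.
        return {"dataset": dataset, "rows": 42}

    def cached_query(dataset: str, ttl_s: int = 300) -> dict:
        key = f"query-cache:{dataset}"
        hit = r.get(key)
        if hit is not None:
            return json.loads(hit)  # cache hit
        result = expensive_query(dataset)
        r.set(key, json.dumps(result), ex=ttl_s)  # cache miss: store with expiry
        return result

And for the DuckDB bullet, a sketch of an in-process aggregation over a Parquet file; the file and column names are illustrative:

    # DuckDB runs the query in-process; no cluster needed for smaller datasets.
    import duckdb

    daily_avg = duckdb.sql(
        "SELECT ticker, avg(px) AS avg_px FROM 'prices.parquet' "
        "GROUP BY ticker ORDER BY ticker"
    ).fetchall()
    print(daily_avg)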
Salary Range = 160,000.00 - 240,000.00 USD Annually + Benefits + Bonus
The referenced salary range is based on the Company's good faith belief at the time of posting. Actual compensation may vary based on factors such as geographic location, work experience, market conditions, education/training and skill level.
We offer one of the most comprehensive and generous benefits plans available, with a range of total rewards that may include merit increases, incentive compensation (exempt roles only), paid holidays, paid time off, medical, dental, vision, short- and long-term disability benefits, 401(k) + match, life insurance, and various wellness programs, among others. The Company does not provide benefits directly to contingent workers/contractors and interns.
Discover what makes Bloomberg unique - watch our podcast series for an inside look at our culture, values, and the people behind our success.