Overview
Full Time
Skills
Data Engineering
Data Warehouse
Analytical Skill
Artificial Intelligence
Data Science
Business Process
SAP BASIS
Business Rules
Trading
Python
Java
Scala
SQL
Software Engineering
Warehouse
Snowflake Schema
Apache Spark
Amazon Web Services
Orchestration
Docker
Web Scraping
Data Modeling
Performance Tuning
Job Details
Overview
We are looking for a hard-core software engineer who loves data to join our Data Engineering team and build world-class data solutions and applications that power crucial decisions throughout our hedge fund. The ideal candidate is an open-minded, deep thinker who is passionate about building systems at scale. You will enable a best-in-class data engineering practice; drive our approach to using data; develop data APIs, backend systems, web scraping platforms, and data models that serve the needs of data scientists and portfolio managers; and play an active role in building our data-driven culture.
Responsibilities
The data engineering team manages data pipelines, web scraping, the data warehouse, and data models across the organization. You'll join a team of smart, highly dedicated individuals to build a next-generation, greenfield data platform for our hedge fund.
As a Data Engineer, you will apply your technical expertise to build data models and pipelines that support a broad range of analytical, AI, and data science requirements across investment and trading teams. You will work with extended teams (including portfolio managers, data scientists, traders, and analysts) to evolve solutions as business processes and requirements grow. You will help design and implement frameworks to capture alpha from the data. You'll own problems end to end, and on an ongoing basis you'll improve the data by adding new sources, coding business rules, and producing new metrics that support investment and trading decisions.
Required Qualifications
Expert knowledge of Python (or Java or Scala) and proficiency with SQL.
3+ years of deep software engineering experience building production data pipelines and warehousing solutions.
Experience building scalable data pipelines using Snowflake, Spark, dbt, Iceberg, and AWS.
Knowledge of orchestration and container tools (e.g., Airflow, Dagster, Prefect, Docker, Kubernetes).
Experience building APIs and highly scalable web scraping platforms.
Strong understanding of data modeling, partitioning, and performance tuning.
Ability to communicate technical concepts to diverse audiences.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.