Azure Data Engineer - Capital Markets - Full-Time Hire

  • New York, NY
  • Posted 4 days ago | Updated 4 days ago

Overview

Hybrid
$175,000 - $200,000
Full Time

Skills

Azure
Python
PySpark
Capital Markets experience

Job Details

Azure Data Engineer with experience in Capital Markets

Salary: $175K-$200K (target)

Work Authorization: U.S. Citizen (USC)

Interview Process: Video

Location: Hybrid, NYC (Midtown). No relocation. Candidates must be onsite from day one and go into the office four times a week.


** PLEASE only send me candidates in the NY/NJ area.

****CANDIDATES MUST HAVE RECENT, EXTENSIVE EXPERIENCE INTEGRATING THIRD-PARTY DATA FEEDS INTO CAPITAL MARKETS TRADING PLATFORMS.

**We need a senior (10+ years) Azure Data Engineer with extensive experience working in Capital Markets and on actual trading platforms. This is a hands-on position integrating third-party data: the engineer will own the end-to-end lifecycle of market, alternative, and vendor data, from ingestion to production use, on our Azure + Databricks (PySpark) stack. Full responsibilities are detailed in the job description below.

Job Description:

The Role

Own the end-to-end lifecycle of market, alternative, and vendor data from ingestion to production use on our Azure + Databricks (PySpark) stack. The mandate spans three core functions:

  1. Platform & Infra: Build the Azure/Databricks backbone for scalable batch/stream workloads.
  2. Pipelines: Design, develop, and operate robust, testable Python/PySpark data pipelines.
  3. Support: Production support, monitoring, SLAs, and fast-turn ad-hoc needs for the PM/analyst pod.

This is a hands-on role for an engineer who enjoys ownership, polish, and speed.

What You'll Do

  • Design & build ingestion frameworks for multiple vendor data sources (APIs, SFTP, flat files, web endpoints), including schema evolution, PII handling, and resiliency/retry patterns (a minimal sketch follows this list).
  • Implement Databricks/PySpark transformations, Delta Lake/parquet storage patterns, and efficient table layouts/partitioning for downstream analytics.
  • Stand up and harden Azure services (e.g., Databricks, Storage, Key Vault; plus orchestration such as ADF/Jobs/Workflows) with IaC where practical.
  • Establish observability (logging/metrics, data quality checks, SLAs, alerts) and CI/CD for reproducible deployments.
  • Production support: on-call during market hours for critical pipelines; drive root-cause analysis and permanent fixes.
  • Partner with the PM and analyst to translate investment questions into data models, marts, and fast retrieval patterns.
  • Create lightweight internal tools or UIs as needed (JavaScript experience is a plus) to improve discovery/self-service.
  • Document datasets, lineage, contracts, and runbooks for durable team knowledge.
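
To make these responsibilities concrete, below is a minimal, hypothetical sketch (not from this posting) of one such ingestion step on Databricks: a retry wrapper around a vendor flat-file read, a simple data-quality gate, and an append to a partitioned Delta table with schema evolution enabled. All paths, column names, and helper names (VENDOR_LANDING_PATH, load_vendor_feed, trade_date) are illustrative assumptions.

import time

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

# Illustrative ADLS paths -- placeholders, not real locations.
VENDOR_LANDING_PATH = "abfss://raw@yourstorage.dfs.core.windows.net/vendor_x/"
DELTA_TABLE_PATH = "abfss://curated@yourstorage.dfs.core.windows.net/market_data/vendor_x"

def with_retries(fn, attempts=3, backoff_s=5):
    """Retry a flaky ingestion step with simple linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(backoff_s * attempt)

def load_vendor_feed(spark: SparkSession) -> DataFrame:
    # Vendor flat files land via SFTP drop; header row assumed for this example.
    return spark.read.option("header", "true").csv(VENDOR_LANDING_PATH)

def run(spark: SparkSession) -> None:
    df = with_retries(lambda: load_vendor_feed(spark))
    # Basic data-quality gate: fail fast on an empty or null-keyed batch.
    if df.limit(1).count() == 0 or df.filter(F.col("trade_date").isNull()).limit(1).count() > 0:
        raise ValueError("vendor_x batch failed data-quality checks")
    (df.withColumn("ingest_ts", F.current_timestamp())
       .write.format("delta")
       .mode("append")
       .option("mergeSchema", "true")   # tolerate vendor schema evolution
       .partitionBy("trade_date")       # partition layout for downstream analytics
       .save(DELTA_TABLE_PATH))

if __name__ == "__main__":
    run(SparkSession.builder.getOrCreate())

In practice the retry/backoff policy, quality checks, and partition keys would be tuned per vendor feed.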

What You'll Bring

  • 8-10+ years of hands-on Data Engineering (data engineer first, not primarily analytics).
  • Strong Python and PySpark in Databricks; excellent SQL.
  • Solid Azure experience (Databricks, storage, secrets, orchestration such as ADF/Jobs/Workflows).
  • Proven track record ingesting third-party/vendor data at scale with rigorous data quality controls.
  • Production mindset: testing, version control (Git), packaging, deployment, and monitoring (a test sketch follows this list).
  • Clear communication and urgency to support a PM/analyst pod with early start times.
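
As an illustration of the testing discipline called out above, here is a hypothetical sketch of a pytest unit test for a small PySpark transformation, run against a local SparkSession. The function and column names (dedupe_latest, instrument_id, ingest_ts) are invented for the example.

import pytest

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def dedupe_latest(df):
    """Keep only the most recent row per instrument (illustrative transformation)."""
    w = Window.partitionBy("instrument_id").orderBy(F.col("ingest_ts").desc())
    return (df.withColumn("rn", F.row_number().over(w))
              .filter("rn = 1")
              .drop("rn"))

@pytest.fixture(scope="module")
def spark():
    # Small local session so the test runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_latest_keeps_newest(spark):
    df = spark.createDataFrame(
        [("AAPL", 1, "old"), ("AAPL", 2, "new")],
        ["instrument_id", "ingest_ts", "payload"],
    )
    out = dedupe_latest(df).collect()
    assert len(out) == 1 and out[0]["payload"] == "new"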

Nice-to-haves: JavaScript for small internal tools, Delta Live Tables, Airflow, dbt, Terraform/Bicep.


About AGUH INC