Sr. Python Developer - Azure/Databricks - Banking/Capital Market

  • New York, NY

Overview

Hybrid
$70 - $80 per hour
Accepts corp to corp applications
Contract - W2
Contract - Independent

Skills

Python
Azure
Databricks
FastAPI
Pydantic
SQLAlchemy
PySpark

Job Details

Sr. Python Developer

Duration: 12+ Months

Required Location: Hybrid, Midtown New York City; on-site 3 days a week.

Interview Required: Video

A senior (12+ years of experience, 15+ preferred) Python backend developer with extensive experience in Banking or Capital Markets and with Azure Databricks. Candidates need strong proficiency in Python and Python web frameworks (FastAPI, Pydantic, SQLAlchemy/SQLModel), as well as demonstrated experience building RESTful APIs. They must be proficient in writing and optimizing PySpark jobs/notebooks for ETL and data transformation, and have experience with CI/CD for Databricks notebooks and jobs.

Job Description: We are seeking a hands-on Senior Backend Developer with over 10 years of experience, specializing in Python, to design, develop, and maintain high-performance web applications and data pipelines. The ideal candidate will have deep expertise in building RESTful APIs, working with modern Python frameworks, and developing robust ETL solutions on cloud platforms such as Azure. You will collaborate with cross-functional teams to implement new features and ensure seamless integration of backend components.

Key Responsibilities

Backend Development:

  • Design, develop, and maintain scalable and secure backend systems for web applications.
  • Build RESTful APIs using Python and modern frameworks (e.g., FastAPI), ensuring robust and maintainable code (see the sketch after this list).
  • Collaborate with front-end and DevOps teams to deliver end-to-end solutions.
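
For illustration, a minimal sketch of the kind of FastAPI/Pydantic endpoint this role involves. The Trade model and /trades routes are hypothetical examples, not part of any actual codebase for this position:

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="Example Trading API")

    class Trade(BaseModel):
        trade_id: int
        symbol: str
        quantity: int
        price: float

    _trades: dict[int, Trade] = {}  # in-memory store for illustration only

    @app.post("/trades", status_code=201)
    def create_trade(trade: Trade) -> Trade:
        # Pydantic validates field types before this body runs
        _trades[trade.trade_id] = trade
        return trade

    @app.get("/trades/{trade_id}")
    def get_trade(trade_id: int) -> Trade:
        if trade_id not in _trades:
            raise HTTPException(status_code=404, detail="Trade not found")
        return _trades[trade_id]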

Data Engineering & ETL:

  • Design, develop, and optimize ETL pipelines for data transformation and loading into databases or data warehouses (a PySpark sketch follows this list).
  • Automate data quality workflows using PySpark and Databricks to deliver clean, reliable data.
  • Build and orchestrate scalable ingestion processes using Azure Data Factory (ADF) and Databricks.
  • Integrate structured, semi-structured, and unstructured data sources into unified platforms.
  • Architect efficient data storage solutions leveraging relational and NoSQL databases for both real-time and historical analytics.
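
A minimal PySpark sketch of such a pipeline, assuming a Databricks runtime where spark is predefined; the storage path, table, and column names are illustrative only:

    from pyspark.sql import functions as F

    raw = (spark.read
           .option("header", "true")
           .csv("abfss://landing@examplestorage.dfs.core.windows.net/trades/"))

    cleaned = (raw
               .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
               .withColumn("price", F.col("price").cast("double"))
               .dropDuplicates(["trade_id"])
               .filter(F.col("price").isNotNull()))  # basic data quality gate

    # Persist as a Delta table, partitioned by date for downstream analytics
    (cleaned.write
     .format("delta")
     .mode("overwrite")
     .partitionBy("trade_date")
     .saveAsTable("bronze.trades_clean"))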

DevOps & Cloud Integration:

  • Work with Azure services (Functions, Logic Apps, Key Vault, ADF) for data orchestration and automation.
  • Implement CI/CD pipelines using Azure DevOps or similar platforms (see the sketch after this list).
  • Ensure version control best practices using Git or Azure DevOps.
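
As one concrete example, a CI/CD step might trigger a Databricks job through the Jobs 2.1 REST API. This is a hedged sketch: DATABRICKS_HOST, DATABRICKS_TOKEN, and JOB_ID are assumed to be injected by the pipeline (for example, from Azure Key Vault), and all names are illustrative:

    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<id>.azuredatabricks.net
    token = os.environ["DATABRICKS_TOKEN"]  # never hard-code secrets; pull from Key Vault
    job_id = int(os.environ["JOB_ID"])

    resp = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {token}"},
        json={"job_id": job_id},
        timeout=30,
    )
    resp.raise_for_status()
    print("Triggered run:", resp.json()["run_id"])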

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or equivalent experience.
  • 10+ years of professional experience as a Backend or Full Stack Developer.
  • Strong proficiency in Python and Python web frameworks (FastAPI, Pydantic, SQLAlchemy/SQLModel).
  • Demonstrated experience building RESTful APIs.
  • Advanced SQL skills with a proven track record of optimizing queries and database interactions.
  • Experience with Azure cloud services, especially Azure Data Factory and Databricks.
  • Practical knowledge of DevOps, build/release, and CI/CD processes.
  • Familiarity with version control systems (Git, Azure DevOps).
  • Excellent communication skills with the ability to thrive in a fast-paced, collaborative environment.

Skills

Python Backend Development:

  • Strong expertise in Python 3.x, with a focus on backend systems.
  • Experience with Python web frameworks (FastAPI, Flask, Django, or similar).
  • Data validation and serialization using Pydantic.
  • ORM experience (SQLAlchemy, SQLModel, or similar).
  • RESTful API design, implementation, and documentation (OpenAPI/Swagger).
  • Unit, integration, and end-to-end testing of APIs (pytest, unittest); see the testing sketch after this list.
  • Security best practices (authentication, authorization, API security).
  • Asynchronous programming with Python (async/await, asyncio).
  • Performance optimization, caching strategies, and error handling.
  • Experience with Docker and containerized backend deployments.
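
A self-contained sketch of the testing expectation above, using pytest with FastAPI's TestClient; the /health route is hypothetical:

    from fastapi import FastAPI
    from fastapi.testclient import TestClient

    app = FastAPI()

    @app.get("/health")
    def health() -> dict:
        return {"status": "ok"}

    client = TestClient(app)

    def test_health_returns_ok():
        resp = client.get("/health")
        assert resp.status_code == 200
        assert resp.json() == {"status": "ok"}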

Databricks & Data Engineering:

  • Experience with Databricks for large-scale data processing and analytics.
  • Proficient in writing and optimizing PySpark jobs/notebooks for ETL and data transformation.
  • Strong understanding of distributed computing concepts.
  • Working knowledge of data lake architectures and Delta Lake (see the upsert sketch after this list).
  • Building scalable data pipelines using Azure Data Factory and Databricks.
  • Automation of data quality checks, monitoring, and logging.
  • Integration with cloud data sources (Azure Blob, Data Lake Storage, SQL/NoSQL DBs).
  • Data modeling and schema design for analytical workloads.
  • Experience with CI/CD for Databricks notebooks and jobs.
  • Knowledge of workspace administration, cluster management, and job orchestration in Databricks.
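
A hedged sketch of an idempotent Delta Lake upsert of the sort these pipelines rely on, again assuming a Databricks runtime (spark and the delta package preinstalled); table and column names are illustrative:

    from delta.tables import DeltaTable

    updates = spark.read.table("bronze.trades_clean")

    target = DeltaTable.forName(spark, "silver.trades")
    (target.alias("t")
     .merge(updates.alias("u"), "t.trade_id = u.trade_id")
     .whenMatchedUpdateAll()      # refresh existing trades
     .whenNotMatchedInsertAll()   # insert new trades
     .execute())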