Position Description:
Client: Financial Services
Job Title: Cloud Data & API Engineer
Location: Chicago, IL 60604
Employment Type: Full-Time, Permanent
Interview Mode: In-Person
Work Mode: Hybrid (3 days in office, 2 days remote)
Note:
- Must be within drivable distance of the client location (Chicago, IL 60604).
- Only local candidates who are comfortable working in a W2 salaried role will be considered.
Job Description:
Our Financial Services client is seeking a Cloud Data & API Engineer with strong expertise in Microsoft Azure, .NET, and modern data platforms, along with a growing knowledge of AI/ML technologies. This role combines cloud-native backend development, data engineering, and AI integration to deliver scalable, intelligent applications. The position involves bridging traditional systems such as SQL Server with cloud-native services, including Databricks, Azure Data Lake, Kafka, and OpenAI services.
Key Responsibilities
- Design and develop RESTful and GraphQL APIs using .NET (C#) on Azure App Services, Azure Functions, or containerized environments.
- Build scalable data pipelines leveraging Databricks (PySpark, Delta Lake), Azure Data Factory, and Azure Data Lake Storage.
- Develop and support event-driven architectures using Kafka and Azure Event Hubs.
- Integrate data from diverse sources, including SQL Server, data warehouses, and streaming platforms.
- Create and deploy AI-enhanced services using Azure OpenAI, LLM frameworks (e.g., LangChain, Semantic Kernel), or Databricks ML.
- Collaborate with engineering, analytics, and DevOps teams to deliver production-grade, intelligent solutions.
- Ensure high performance, security, and reliability across API and data systems in the cloud.
Required Qualifications
- Experience in backend or data engineering roles.
- Strong proficiency in .NET Core / C# and API development (REST, GraphQL).
- Deep understanding of Azure cloud services, including Azure Data Lake, Data Factory, Synapse, App Services, Functions, and Event Hubs.
- Hands-on experience with Databricks and Apache Spark (PySpark or Scala).
- Proficiency with SQL Server for data modeling, querying, and optimization.
- Familiarity with Apache Kafka or Azure equivalents.
- Experience integrating or experimenting with AI/ML solutions such as Azure OpenAI, MLflow, or prompt engineering.
- Understanding of CI/CD, Docker, and infrastructure-as-code (Terraform/Bicep).
- Awareness of security best practices, including OAuth2, managed identities, and role-based access control (RBAC).
Preferred Skills
- Experience with LLM orchestration frameworks (e.g., LangChain, Semantic Kernel, Prompt Flow).
- Knowledge of Azure API Management, Logic Apps, or Service Bus.
- Familiarity with dbt, Airflow, or similar orchestration tools.
- Understanding of Responsible AI principles and AI governance in enterprise settings.
- Exposure to integrating on-premises systems with cloud platforms.
Why This Opportunity
- Contribute to shaping a forward-looking Azure + Databricks + AI platform.
- Work with a modern, dynamic tech stack to solve real-world business problems.
- Join a collaborative team that values innovation, continuous learning, and impactful delivery.