Data Engineer | New York, NY (Onsite day 1; hybrid, 3 days per week in office) | Long Term

  • New York, NY
  • Posted 3 days ago | Updated 3 days ago

Overview

Hybrid
$115,000 - $118,000
Full Time

Skills

Snowflake
Python
PySpark
SQL
Release Management
Databricks
Azure

Job Details

 

Job Title: Data Engineer

Location: New York, NY (Onsite day 1; hybrid, 3 days per week in office)

Job Type: Full Time

 

 

Job Description:

We are seeking a skilled Data Engineer with expertise in Snowflake, Python, PySpark, SQL, and release management to join our dynamic team. The ideal candidate will have a strong background in the banking domain and will be responsible for designing, developing, and maintaining robust data pipelines and systems to support our banking operations and analytics.

Responsibilities:

  • Design, develop, and maintain scalable and efficient data pipelines using Snowflake, PySpark, and SQL.
  • Write optimized and complex SQL queries to extract, transform, and load data.
  • Develop and implement data models, schemas, and architecture that support banking domain requirements.
  • Collaborate with data analysts, data scientists, and business stakeholders to gather data requirements.
  • Automate data workflows and ensure data quality, accuracy, and integrity.
  • Manage and coordinate release processes for data pipelines and analytics solutions.
  • Monitor, troubleshoot, and optimize the performance of data systems.
  • Ensure compliance with data governance, security, and privacy standards within the banking domain.
  • Maintain documentation of data architecture, pipelines, and processes.
  • Stay updated with the latest industry trends and incorporate best practices.

 

Requirements:

  • Proven experience as a Data Engineer or in a similar role with a focus on Snowflake, Python, PySpark, and SQL.
  • Strong understanding of data warehousing concepts and cloud data platforms, especially Snowflake.
  • Hands-on experience with release management, deployment, and version control practices.
  • Solid understanding of banking and financial services industry data and compliance requirements.
  • Proficiency in Python scripting and PySpark for data processing and automation.
  • Experience with ETL/ELT processes and tools.
  • Knowledge of data governance, security, and privacy standards.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration abilities.

Preferred, but not required:

  • Good knowledge of Azure and Databricks is highly preferred.
  • Knowledge of Apache Kafka or other streaming technologies.
  • Familiarity with DevOps practices and CI/CD pipelines.
  • Prior experience working in the banking or financial services industry.

 

 

Applicant Consent:
By submitting your application, you agree to ApTask's () and , and provide your consent to receive SMS and voice call communications regarding employment opportunities that match your resume and qualifications. You understand that your personal information will be used solely for recruitment purposes and that you can withdraw your consent at any time by contacting us at or . Message frequency may vary. Msg & data rates may apply.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.