Overview
Work Arrangement: Hybrid
Compensation: Depends on Experience
Contract Type: W2
Contract Duration: 12 Months
Travel: 25%
Skills
Data Engineering, Data Flow, Data Integration, Data Integrity, Data Lake, Data Processing, ETL, Azure, Apache Hive, Apache Kafka, Apache Spark, Batch Processing, Big Data, Cloud Computing, CI/CD, DevOps, Java, Jenkins, SQL, Scala, Python, GitLab, GCP
Job Details
Job Title: Data Engineer
Location: Jersey City, NJ
Contract: W2 Only (No C2C/1099)
Contract Duration: Long-Term (12 months)
Job Summary:
We are seeking a highly skilled Data Engineer with expertise in Azure and Google Cloud Platform (GCP) to support enterprise-level data initiatives in the financial services domain. The ideal candidate will design and implement scalable data pipelines, integrate diverse data sources, and enable both real-time and batch data processing. This role requires strong cloud experience, big data proficiency, and a collaborative mindset for working in a globally distributed team.
Key Responsibilities:
- Design, develop, and maintain efficient data pipelines in Azure and GCP environments.
- Automate data workflows for both real-time and batch processing.
- Integrate data from diverse sources and synchronize it across systems and platforms.
- Implement robust data quality checks, ensuring data integrity, consistency, and reliability.
- Build and manage infrastructure and DevOps pipelines to support data engineering workflows.
- Collaborate with analysts, developers, and subject matter experts to support trading and financial applications.
- Participate in Agile development cycles and contribute to continuous improvement initiatives.
Required Skills & Qualifications:
- 12+ years of experience in data engineering and cloud technologies.
- Strong hands-on experience with Azure data services: Data Factory, Microsoft Fabric, Data Lake Storage, SQL Database, Synapse Analytics, Data Explorer, and Purview.
- Experience with Azure identity and security tools such as Azure Key Vault (AKV) and User-Assigned Managed Identity (UAMI).
- Proficiency in ETL development and big data technologies: Spark, Kafka, and Hive.
- Experience with CI/CD tools such as GitLab, TeamCity, and Jenkins.
- Programming skills in Python, Scala, or Java.
- Demonstrated ability to apply data engineering principles in an Agile environment.
- Strong problem-solving and communication skills, with experience working in globally distributed teams.
Preferred Skills:
- Experience in the financial services or banking industry.
- Familiarity with mainframe data integration and legacy system modernization.
- Knowledge of GCP data services such as BigQuery, Dataflow, and Pub/Sub.
- Ability to adapt to frequent changes and meet tight deadlines.
- Innovative mindset to enhance data sharing and processing efficiency.
- Experience with distributed computing platforms and secure data handling.