Data Engineer
Hybrid • Posted 60+ days ago • Updated 14 days ago

Trilogy
Job Details
Skills
- Amazon Web Services
- Apache Hive
- Apache Hadoop
- Apache Spark
- Continuous Delivery
- Continuous Integration
- Data Governance
- Extract, Transform, Load
- Google Cloud Platform
- Workflow
- Version Control
- Git
- SQL
- Streaming
- Microsoft Azure
- Big Data
- PySpark
- Python
- HDFS
- Finance
- ETL
- Apache Iceberg
- Relational Datasets
- Oracle
- MongoDB
- S3
- Data Modeling
Summary
Data Engineer (Contract to Hire)
Introduction:
Join a growing data organization at the forefront of innovation in financial services. We are seeking talented data engineers to join one of our customers and help modernize and scale data capabilities across multiple lines of business, including commercial, retail, and wealth. You will work on high-impact projects that span both on-premises and cloud environments, supporting next-generation data pipelines, governance, and analytics. If you are passionate about building, optimizing, and delivering trusted data at scale, we want to hear from you.
Responsibilities:
- Proficiency in Python and PySpark is essential for building and optimizing data pipelines over large-scale datasets (a brief illustrative sketch follows this list).
- Experience with distributed computing environments.
- Familiarity with Big Data tooling, including Hadoop, Hive, and HDFS file formats, is critical.
- Strong communication skills: the role involves collaboration with cross-functional teams, so clear and effective communication is important.
- Background in data modeling and ETL development.
- Design and implement scalable data pipelines using Hadoop, Spark, and Hive.
- Build and maintain ETL/ELT frameworks for batch and streaming data.
- Collaborate with product teams to ingest, transform, and serve model-ready datasets.
- Optimize data workflows for performance and reliability.
- Ensure pipeline quality through validation, logging, and exception handling.
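As an illustration of the pipeline work described above (not part of the posting itself), here is a minimal PySpark batch ETL sketch: extract raw records, transform them, validate the output, and log or surface failures. All paths, column names, and the dataset are hypothetical.

```python
# Minimal PySpark batch ETL sketch (illustrative only; paths and columns are hypothetical).
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("transactions_etl")

spark = SparkSession.builder.appName("transactions-etl").getOrCreate()

try:
    # Extract: read raw transaction records from a (hypothetical) landing zone.
    raw = spark.read.parquet("s3://example-bucket/landing/transactions/")

    # Transform: normalize types, derive a daily partition column, drop duplicate records.
    cleaned = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .withColumn("txn_date", F.to_date("txn_timestamp"))
           .dropDuplicates(["txn_id"])
    )

    # Validate: fail fast if required fields are missing or the output is empty.
    null_ids = cleaned.filter(F.col("txn_id").isNull()).count()
    if null_ids > 0:
        raise ValueError(f"{null_ids} rows have a null txn_id")
    if cleaned.head(1) == []:
        raise ValueError("transform produced an empty dataset")

    # Load: write partitioned output for downstream consumers.
    (cleaned.write.mode("overwrite")
            .partitionBy("txn_date")
            .parquet("s3://example-bucket/curated/transactions/"))
    log.info("transactions ETL completed successfully")
except Exception:
    log.exception("transactions ETL failed")
    raise
finally:
    spark.stop()
```

The streaming half of the same responsibility would typically follow the same extract-transform-validate-load shape using Spark Structured Streaming with a checkpointed sink, and a production version would feed its validation results into the team's governance and observability tooling.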
Requirements:
Required Skills: Amazon Web Services, Apache Hive, Apache Hadoop, Apache Spark, Continuous Delivery, Continuous Integration, Data Governance, Extract, Transform, Load, Google Cloud Platform, Workflow, Version Control, Git, SQL, Streaming, Microsoft Azure, Big Data, PySpark, Python, HDFS, Finance, ETL
- Hadoop, Hive, Spark, SQL, Python.
- Experience with version control (Git) and CI/CD tools.
- Familiarity with modern data governance and observability practices.
- Cloud experience a plus (AWS, Azure, Google Cloud Platform).
- Dice Id: 91165373
- Position Id: 8750726
Company Info
Trilogy NextGen designs, builds and manages seamless digital experiences to empower your organization. With our unique blend of cutting-edge technologies and systems integration expertise, we craft complete solutions built on our industry-first ATSC 3.0 datacasting offering, private 5G networks, Wi-Fi, intelligent robotics and AI-driven analytics. Transform your digital future with innovative, secure, cost-effective solutions from Trilogy NextGen.

