Data Engineer (ETL)

Overview

Compensation information provided in the description
Full Time
Part Time
Accepts corp to corp applications
Contract - Independent
Contract - W2

Skills

Physical data model
Dimensional modeling
Data architecture
Java
Data Lake
System integration
Data migration
Data warehouse
Data marts
Data quality
Data validation
Performance tuning
Big data
Software deployment
Solution architecture
Amazon S3
Data
Extract, transform, load (ETL)
BFSI
Management
Databricks
Streaming
Apache Spark
Scala
Optimization
Leadership
Transformation
Microsoft Windows
ELT
SQL
RDBMS
Design
Strategy
Debugging
Amazon Web Services
Continuous integration
Continuous delivery
Apache Avro

Job Details

Data Engineer (ETL), Location: Santa Clara, Pay rate: $60/hr on C2C, End Client: Data BFSI
Job Description:
(Data Engineers) Trusted Computing project
6-8 years of IT experience focusing on enterprise data architecture and management
Experience in Conceptual/Logical/Physical Data Modeling; expertise in Relational and Dimensional Data Modeling
Experience with Databricks on-prem, Structured Streaming, Delta Lake concepts, and Delta Live Tables required
Experience with Spark, Scala, and Java programming
Data Lake concepts such as time travel, schema evolution, and optimization (see the time-travel sketch after this list)
Structured Streaming and Delta Live Tables with Databricks a bonus
Experience leading and architecting enterprise-wide initiatives, specifically system integration, data migration, transformation, data warehouse builds, data mart builds, and data lake implementation/support
Advanced understanding of streaming data pipelines and how they differ from batch systems (see the windowing sketch after this list)
Ability to formalize concepts of how to handle late data, defining windows, and data freshness
Advanced understanding of ETL and ELT, and of ETL/ELT tools such as Data Migration Service, etc.
Understanding of concepts and implementation strategies for different incremental data loads such as tumbling window, sliding window, high watermark, etc. (see the high-watermark sketch after this list)
Familiarity and/or expertise with Great Expectations or other data quality/data validation frameworks a bonus
Familiarity with concepts such as late data, defining windows, and how window definitions impact data freshness
Advanced SQL experience (joins, aggregation, windowing functions, Common Table Expressions, RDBMS schema design, performance optimization) (see the SQL sketch after this list)
Experience with indexing and partitioning strategies
Ability to debug, troubleshoot, design, and implement solutions to complex technical issues
Experience with large-scale, high-performance enterprise big data application deployment and solution architecture
Architecture experience in an AWS environment a bonus
Familiarity with AWS Lambda, specifically how to push and pull data and how to use AWS tools to view data, for processing massive data at scale a bonus
Experience with GitLab and CloudWatch, and the ability to write and maintain GitLab pipelines supporting CI/CD
Experience working with AWS Lambdas for configuration and optimization, and experience with S3
Familiarity with Schema Registry and message formats such as Avro, ORC, etc.
Ability to thrive in a team-based environment
Experience briefing the benefits and constraints of technology solutions to technology partners, stakeholders, team members, and senior levels of management

Skillset: Java, Scala, S3, Glue, Redshift
Location: Scottsdale
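
Illustrative sketches of several concepts named above follow; they are not the client's code. First, the time-travel sketch: a minimal Delta Lake example of time travel and schema evolution in Scala, assuming a hypothetical Delta table at s3://example-bucket/curated/orders and a hypothetical landing path.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object DeltaConcepts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("DeltaConcepts").getOrCreate()

    // Hypothetical table path, for illustration only.
    val path = "s3://example-bucket/curated/orders"

    // Time travel: read the table as it existed at an earlier version.
    val asOfV3: DataFrame = spark.read
      .format("delta")
      .option("versionAsOf", 3L)
      .load(path)
    asOfV3.show()

    // Schema evolution: columns present in the new batch but missing
    // from the table are added automatically on write.
    val newRows: DataFrame = spark.read.parquet("s3://example-bucket/landing/orders")
    newRows.write
      .format("delta")
      .mode("append")
      .option("mergeSchema", "true")
      .save(path)
  }
}
```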
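Next, the windowing sketch: a minimal Structured Streaming example of defining windows and handling late data with a watermark. It uses Spark's built-in rate source so it runs without external systems; the trigger values are placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object WindowedCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("WindowedCounts")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Built-in rate source: emits (timestamp, value) rows for testing.
    val events = spark.readStream
      .format("rate")
      .option("rowsPerSecond", 10)
      .load()

    // Accept events up to 10 minutes late, then finalize each window.
    // The watermark bounds state size and governs data freshness: a longer
    // watermark tolerates later data but delays final results.
    val counts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window($"timestamp", "5 minutes")) // tumbling 5-minute window
      .count()
    // A sliding window would be window($"timestamp", "10 minutes", "5 minutes").

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```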
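The high-watermark sketch: one common shape of an incremental load, tracking the latest change timestamp already loaded and pulling only newer source rows. The JDBC URL, paths, and column names are made up for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HighWatermarkLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("HighWatermarkLoad").getOrCreate()

    // Hypothetical locations, for illustration only.
    val targetPath = "s3://example-bucket/curated/orders"
    val jdbcUrl    = "jdbc:postgresql://source-db:5432/sales"

    // 1. High watermark: the latest change timestamp already in the target.
    val watermark = spark.read.format("delta").load(targetPath)
      .agg(max("updated_at"))
      .first()
      .getTimestamp(0)

    // 2. Pull only source rows changed after the watermark.
    val increment = spark.read.format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", s"(SELECT * FROM orders WHERE updated_at > '$watermark') src")
      .load()

    // 3. Append the increment (a Delta MERGE would handle upserts instead).
    increment.write.format("delta").mode("append").save(targetPath)
  }
}
```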
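Finally, the SQL sketch: a Common Table Expression combined with a window function, run through Spark SQL against a hypothetical orders table assumed to be registered in the session catalog.

```scala
import org.apache.spark.sql.SparkSession

object LatestPerCustomer {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("LatestPerCustomer").getOrCreate()

    // CTE ranks each customer's orders by recency; the outer query
    // keeps only the most recent order per customer.
    val latest = spark.sql(
      """
        |WITH ranked AS (
        |  SELECT customer_id,
        |         order_id,
        |         amount,
        |         ROW_NUMBER() OVER (
        |           PARTITION BY customer_id
        |           ORDER BY updated_at DESC
        |         ) AS rn
        |  FROM orders
        |)
        |SELECT customer_id, order_id, amount
        |FROM ranked
        |WHERE rn = 1
        |""".stripMargin)

    latest.show()
  }
}
```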

About Paramount Software Solutions, Inc