Overview
Hybrid: in office every other week
$50 - $70
Contract - W2
Contract - 12 Month(s)
No Travel Required
Skills
Data Engineer
Data Warehouse
SQL
ETL
Hadoop
Data Analysis
Scripting
Job Details
Job Title: Data Engineer
Location: Richmond, VA
Duration: 12-month contract, with intent to extend contingent on project need/performance
Schedule: Hybrid, in office every other week
Job Description: Qualifications
1. Minimum 5 years of working experience as a Data Analyst
2. Strong understanding of data warehousing (dimensional modeling, ETL, etc.) and RDBMS concepts
3. Minimum 5 years of working experience with SQL, stored procedures, and table design
4. Minimum 5 years of working experience in SQL query optimization and ETL data-loading performance
5. Minimum 5 years of working experience with the Snowflake cloud data warehouse
6. Minimum 5 years of working experience with R/Python and Spark
7. Experience as a Data Engineer on Hadoop platforms with components such as Hive, Kafka, NiFi, and Spark is a big plus
8. Minimum 2 years of working experience in shell scripting
9. Experience with real-time streaming technologies is a plus
10. Experience deploying machine learning models and automating processes in production is a plus
11. Experience with cloud technologies (AWS, Azure, Google Cloud Platform) is a big plus
12. Experience with the Talend ETL tool is a big plus
Responsibilities
Responsibilities include, but are not limited to:
Analyzing large data sets to identify trends, patterns, and insights.
Designing, developing, and maintaining secure, consistent, and reliable ETL solutions supporting critical business processes across the various business units.
Ensuring data solutions comply with enterprise security standards.
Developing and performing tests to validate data flows and preparing ETL processes to meet complex business requirements.
Coordinating with various teams to ensure that jobs are designed and developed to meet support standards and best practices before migration into the production environment.
Defining and capturing metadata and rules associated with ETL processes.
Working in complex multi-platform environments on multiple project assignments.
Using strategies such as indexing and partitioning to fine-tune the data warehouse and big data environments, improving query response time and scalability.
Effectively communicating findings to both technical and non-technical audiences.
Assisting the production support team in resolving production job failures, data issues, and performance-tuning requests.
Supporting ETL development and processes, which may require weekend or off-hours work.