Overview
Remote
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - 6 month(s)
No Travel Required
Skills
SQL
Hadoop
Hive
data pipelines
HIPAA
ETL workflows
DWH/ETL process
Teradata knowledge
Data management & Shell Scripting
Big Data environments (e.g., Spark)
Job Details
Job description:
Required skills: Teradata, SQL, DWH/ETL processes, data management, and shell scripting.
Job Responsibilities:
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Design, develop, and optimize complex SQL queries and scripts for data extraction and reporting.
- Work extensively with Teradata for querying and performance tuning.
- Analyze large datasets in Big Data environments (e.g., Hadoop, Hive, Spark).
- Design and support ETL workflows, data pipelines, and processes to ensure data integrity and accuracy.
- Create and maintain data models and DWH structures to support reporting and analytics.
- Interpret and present data findings to technical and non-technical stakeholders.
- Work with Healthcare data (e.g., claims, eligibility, EMR, EHR) ensuring compliance with regulatory requirements like HIPAA.
- Perform data quality checks, validation, and profiling.
- Document technical processes, data flow diagrams, and lineage.