Overview
On Site
$40 per hour
Accepts corp-to-corp applications
Contract - W2
Contract - Independent
Contract - months contract
Skills
Python
SQL
AWS
Hadoop
Spark
Apache NiFi
Job Details
Job Summary:
- Develop and maintain ETL/ELT pipelines and data workflows for data warehousing projects.
- Process and transform large datasets using SQL, Hadoop, Spark, and Python.
- Support integration of structured and unstructured data from multiple sources.
- Collaborate with senior engineers to apply best practices in coding, performance tuning, and data quality.
- Gain experience in real-time/streaming data ingestion using Apache NiFi.
- Contribute to cloud platform projects (AWS, Azure, or Google Cloud Platform).
- Document data workflows, data lineage, and provide operational support for data systems.
- Utilize strong analytical and debugging skills to resolve data issues.
- Work collaboratively within an agile team environment.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.