Overview
Hybrid (3 days)
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Skills
PySpark
Databricks
Python
SQL
data pipelines
ETL
data engineer
Job Details
Key Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark on Databricks.
- Write efficient, optimized Python and SQL code for data processing, transformation, and analysis.
- Collaborate with business and technology teams to gather requirements and deliver data solutions.
- Work with large-scale datasets to ensure high performance and reliability.
- Troubleshoot and optimize existing pipelines and workflows.
- Follow best practices for data security, governance, and quality.
Required Skills & Experience
- 5-6 years of professional experience as a Data Engineer.
- Strong programming skills in Python.
- Hands-on experience with PySpark and Spark on Databricks.
- Advanced SQL skills for querying, performance tuning, and optimization.
- Experience working in a cloud environment (Azure, AWS, or Google Cloud Platform).
- Strong problem-solving and communication skills.
Nice to Have
- Experience with Azure Data Factory, Delta Lake, or Snowflake.
- Knowledge of data modeling and data warehousing concepts.
- Familiarity with CI/CD pipelines and version control (Git).
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.