Overview
- Hybrid: 3 days onsite
- Rate: 50 - 59
- Contract - W2, 12 months
- No Travel Required
- Unable to Provide Sponsorship
Skills
Big Data, Apache Spark, Data Mapping, Data Integration, Storage, PySpark, Kafka, Data Warehouse, Databricks
Job Details
Responsibilities
- Collaborate with stakeholders to understand data requirements and design, develop, and maintain complex ETL processes.
- Implement at least two end-to-end Data Warehouse projects.
- Create documentation for data integration processes and data diagrams.
- Design and maintain data models, including schema design and optimization.
- Build and manage automated data pipelines ensuring data quality and consistency.
- Work on Data Warehouse platforms, including data mapping, ETL, data load, and transformation.
- Apply strong Data Warehousing and Dimensional Data Modeling skills.
- Utilize Python/Scala and Spark (including PySpark) for big data processing.
- Work with SQL for data querying and manipulation.
- Leverage Azure technologies such as Synapse, Data Factory, Blob Storage, and ADLS (nice to have).
- Use Azure DevOps for CI/CD or equivalent tools (nice to have).
- Work with Databricks and/or Snowflake.
About You
- Degree in Data Science, Statistics, Computer Science, or related field (or equivalent experience).
- 6+ years of experience as a Data Engineer.
- Hands-on experience with the Azure cloud platform and with building ETL processes using Azure Data Factory.
- Proficiency in Python, PySpark, and SQL.
- Experience with big data processing and analytics using Azure Databricks.