Azure Data Engineer

  • Spring, TX
  • Posted 36 days ago | Updated 9 days ago

Overview

Hybrid
$65 - $78 per hour
Contract - W2
Contract - 12 Month(s)
No Travel Required

Skills

Azure
SQL
Data Factory
Synapse
Databricks
Datalake
Data lakehouse
CI/CD
DevOps
Power BI
DBT
Data Build Tool
Delta Lake
Parquet
Spark Pool

Job Details

Hybrid:

Tuesday, Wednesday, Thursday -- On-Site

Monday, Friday -- Remote 

Location: Spring, TX 

Contract: ~12 months (Potential extension) 

 

Job Summary:

The Data Engineer will support the design, development, and maintenance of data pipelines and data processing systems, using technologies such as Azure, Spark SQL, SQL Server, Azure Data Factory, Azure Databricks, Azure DevOps, Data Build Tool (DBT), Git, Power BI, and Azure Synapse Analytics. The role is responsible for ensuring the availability, reliability, and scalability of the data infrastructure and for supporting the organization's data products within the company.

Duties/Responsibilities:

  1. Data pipeline development: Collaborate with the data engineering team to develop and maintain data pipelines, ETL processes, and data integration workflows using Azure Data Factory, Azure Synapse Analytics, Spark SQL, and other relevant technologies. Ensure the timely and accurate movement of data between systems.
  2. Data processing and transformation: Develop, implement, and maintain data processing and transformation logic using Azure Synapse Analytics, Spark SQL, or Databricks. Extract insights from raw data and transform it into meaningful, structured formats in our data products. Strong DBT (Data Build Tool) experience is a plus, as you may be involved in implementing and managing DBT workflows.
  3. Azure service utilization: Work with Azure services such as SQL Server, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and DBT to build and manage data infrastructure components. Leverage the capabilities of these services to ensure efficient and scalable data processing.
  4. Version control and collaboration: Utilize Azure DevOps and Git for version control and collaborate effectively with the team to manage code repositories and ensure proper documentation and knowledge sharing.
  5. Azure DevOps integration: Assist in integrating data engineering workflows with Azure DevOps for continuous integration, continuous deployment, and automated testing. Contribute to the implementation of CI/CD pipelines for data engineering projects.
  6. Data modeling, visualization, and reporting: Exposure to Power BI or similar reporting tools, with the ability to collaborate with cross-functional teams to gather requirements and translate them into optimal data models (dimensional modeling experience required).
  7. Troubleshooting and support: Assist in identifying and resolving data pipeline issues, bottlenecks, and data quality problems. Provide support in investigating and troubleshooting data-related incidents.
  8. Continuous learning and growth: Stay updated with the latest advancements in data engineering technologies and tools. Continuously enhance your knowledge of Azure services and data engineering best practices.

 

Required Skills/Abilities:

  • 5-7 years of experience in data engineering or related roles.
  • Hands-on experience with Azure services: Data Factory, Synapse, and Databricks.
  • Proficiency in SQL programming, including SQL Server and MPP systems.
  • Familiarity with Spark SQL or Databricks for data processing.
  • Experience in DevOps practices for CI/CD and testing in data & analytics teams.
  • Experience with Power BI for data visualization.
  • Proven experience in Azure Synapse Analytics, Synapse Pipelines, or Databricks.
  • Strong problem-solving skills and attention to detail.
  • Effective communication and teamwork skills.
  • Self-motivated with a passion for learning in data engineering.
  • In-depth understanding of data warehousing concepts and data modeling.
  • Proficiency in data pipeline development and data integration.
  • Solid understanding of source control (preferably Git or Azure DevOps) and branching techniques.
  • Strong analytical skills for complex data sets and issue troubleshooting.

 

 

Desired Skills/Abilities:

  • Experience with Azure
  • Familiarity with Delta Lake core and Parquet files
  • Experience with DBT (Data Build Tool)
  • Working understanding of Spark Pool and/or Python

 

Education and Experience:

At least 5-7 years of related experience required.

 
