Overview
Remote
$60 - $68 per hour
Contract - W2
Contract - 12 Months
Skills
Data Engineering
ETL
Azure
Azure Synapse
Azure Databricks
Microsoft Fabric
Python
PySpark
SQL
Azure SQL Database
Stakeholder Management
Communication
Offshoring
Job Details
Title: Senior Data Engineer
Location: Remote (PST time zone)
Contract (W2 only)
Requirements:
- Programming Skills: Strong hands-on expertise in Spark, Python, PySpark, and SQL.
- Big Data and Analytics: Knowledge of big data technologies such as Azure Databricks and Azure Synapse.
- Cloud Data Engineering Concepts: Demonstrated knowledge of the Medallion Architecture and common ETL patterns, including ingestion frameworks (see the sketch after this list).
- Performance tuning techniques and best practices: Understanding of performance analysis and system architecture is essential.
- Cloud Data Platform: Preferably Microsoft Fabric, Azure Synapse, or Azure Databricks; experience with a comparable cloud data platform is acceptable.
- Data Modeling Skills: Strong knowledge of dimensional modeling, semantic modeling, and standard data modeling patterns used in analytical systems.
- Data Management and Storage: Proficiency with Azure SQL Database, Azure Data Lake Storage, Azure Cosmos DB, Azure Blob Storage, etc.
- Data Integration and ETL: Extensive experience with Azure Data Factory for data integration and ETL processes.
- Analytical Skills: Strong analytical and problem-solving skills.
- Problem-Solving & Technical Leadership Skills: Ability to identify, design, and implement improvements that drive optimal performance.
- Leadership & Collaboration: Experience leading onshore and offshore teams, fostering collaboration, and driving high-performance engineering culture.
- Stakeholder Management: Strong communication skills, with experience working closely with business and technical stakeholders to align on requirements.
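To give a flavor of the Medallion Architecture and ingestion patterns named above, here is a minimal PySpark sketch of a bronze-to-silver ingestion step. All paths, table names, and columns are hypothetical, and Delta Lake support is assumed (as it is on Databricks, Synapse, and Fabric Spark); this is an illustration, not part of the role's actual codebase.

# Minimal bronze -> silver ingestion sketch in the Medallion pattern.
# Paths, tables, and columns are hypothetical; Delta Lake is assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-ingest").getOrCreate()

# Bronze: land raw JSON as-is, stamping each row with ingestion metadata.
bronze = (
    spark.read.json("/landing/orders/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/bronze/orders")

# Silver: deduplicate and conform bronze data for downstream modeling.
silver = (
    spark.read.format("delta").load("/bronze/orders")
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").save("/silver/orders")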
Responsibilities:
- Team Leadership: Lead onshore and offshore data engineering teams, providing expert guidance and collaborating with business stakeholders.
- Design and Build Data Pipelines: Develop and manage modern data pipelines and data streams using PySpark and Azure Data Factory.
- Database Management: Develop and maintain databases, data systems, and processing systems.
- Data Transformation: Transform complex raw data into actionable business insights using PySpark (a sketch follows this list).
- Technical Guidance: Collaborate with stakeholders and teams to assist with data-related technical issues.
- Data Architecture: Ensure data architecture supports business requirements and scalability.
- Big Data Solutions: Utilize Databricks or Synapse for big data processing and analytics.
- Process Improvements: Identify, design, and implement process improvements, such as automating manual processes and optimizing data delivery.
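As an illustration of the PySpark transformation work referenced above, here is a short silver-to-gold aggregation sketch that turns conformed order data into a business-level metric. The schema, paths, and metric names are assumptions for illustration only.

# Silver -> gold aggregation sketch (hypothetical schema and paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gold-aggregates").getOrCreate()

orders = spark.read.format("delta").load("/silver/orders")

# Gold: daily revenue per region, shaped for direct BI consumption.
daily_revenue = (
    orders.groupBy("region", "order_date")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)
daily_revenue.write.format("delta").mode("overwrite").save("/gold/daily_revenue")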