Overview
Remote
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Skills
Agile
Analytics
Apache Spark
Business Intelligence
Cloud Computing
Collaboration
Data Analysis
Data Architecture
Data Engineering
Documentation
ERwin
Data Governance
Data Modeling
Data Profiling
Data Warehouse
Databricks
Enterprise Architecture
Functional Analysis
Mapping
Metadata Management
Microsoft Azure
Scrum
Semantics
Shell Scripting
Snowflake Schema
Streaming
Technical Drafting
Microsoft Power BI
Microsoft SQL Server
Python
Quality Assurance
SQL
Transact-SQL
Unstructured Data
Visualization
PySpark
Job Details
Role: Sr. Data Engineer
Location: Remote
Contract: 12+ Months
Job Summary:
- We are seeking a highly skilled and motivated Data Engineer/Analyst to join our dynamic data architecture and analytics team.
- The ideal candidate will possess strong hands-on experience with PySpark, T-SQL, and modern cloud-based data platforms such as Azure Databricks and Snowflake.
- This role demands end-to-end involvement across data analysis, transformation logic, functional data modeling, and lineage documentation.
- A strong understanding of data warehousing principles and Medallion architecture is essential.
Key Responsibilities:
- Develop and maintain comprehensive data lineage documentation, SQL logic mapping, and technical design specifications to support handoffs to Data Engineering and Data Governance teams.
- Collaborate closely with Data Architects and Analysts to ensure data models align with enterprise architecture and business objectives.
- Design and implement efficient, reusable, and well-documented code in Python, T-SQL, and shell scripts.
- Partner with Business Intelligence SMEs and stakeholders to gather and define functional and business-level data definitions.
- Perform detailed data profiling, validation, and quality assurance across structured and unstructured data sources.
- Participate in data discovery and functional analysis to inform design decisions and support solution delivery.
Required Skills and Experience:
- Minimum 5 years of experience in data engineering, analytics, or a related field.
- Proven expertise in T-SQL, PySpark, dbt, and Databricks (including Spark Structured Streaming).
- Hands-on experience with SQL Server and cloud platforms such as Azure, Microsoft Fabric, or Snowflake.
- Working knowledge of data modeling tools such as Erwin Data Modeler.
- Familiarity with Power BI or equivalent BI/visualization tools.
- Solid understanding of data governance, metadata management, and data lineage principles.
- Experience working within Agile/Scrum delivery frameworks.
- Exposure to enterprise data warehousing methodologies such as Data Vault 2.0 or Kimball.
- Ability to translate business needs into technical solutions through structured analysis.
Preferred Qualifications (Nice to Have):
- Experience supporting Medallion architecture-based data platforms.
- Understanding of semantic layer design and data product delivery in modern BI environments.
- Exposure to data cataloging tools and metadata repositories.