Job Description:
We are looking for a skilled Python Data Engineer with hands-on experience in Azure, PySpark, and Databricks, specifically within the insurance domain. This is a long-term, hybrid contract position requiring a mix of remote and onsite work in either Hartford, CT or Charlotte, NC. The ideal candidate will build, optimize, and maintain data pipelines and support data-driven decision-making across the organization.
Note: We can only consider local candidates who are available for a face-to-face interview in Hartford, CT.
Key Responsibilities:
Design and develop scalable, robust data pipelines using PySpark and Python.
Leverage Azure Data Services (e.g., Azure Data Factory, Azure Data Lake, Azure Synapse) for data integration and transformation.
Utilize Databricks for distributed data processing, data wrangling, and advanced analytics.
Ensure data quality, integrity, and compliance with data governance and security policies.
Collaborate with cross-functional teams, including business analysts, data scientists, and application developers.
Participate in performance tuning, troubleshooting, and optimization of data workflows.
Translate business requirements, particularly those specific to the insurance industry, into technical specifications.
Develop and maintain documentation for data pipelines and architecture.