Job Description
Role: Data Architect, Onsite (USA), Level 7
Job Summary:
Looking for a Data Architect (onsite) who will lead the design and maintenance of the data infrastructure that enables storage, access,
and analysis across business and client needs. They define how data is collected, structured, integrated, and governed to ensure
scalability, security, and performance.
Years of experience needed: 15+ years in Data Engineering in the insurance domain
Responsibilities:
Key responsibilities include modeling database structures, selecting technologies, setting governance policies, and working closely
with engineers (including client teams), analysts, and compliance teams. They play a critical role in supporting business intelligence
and regulatory initiatives.
Defining data architecture strategies, frameworks, and models
Designing and optimizing databases, warehouses, and data lakes
Ensuring data structures meet business and compliance needs
Collaborating with data engineers and analysts on pipeline architecture
Overseeing metadata management, catalogs, and lineage tracking
Ensuring data integrity, scalability, and performance
Selecting appropriate storage and cloud platforms (e.g., Snowflake, AWS, Azure, BigQuery, Redshift)
Supporting data governance and access control policies
Reviewing existing systems for improvement or migration
Documenting technical standards and architectural decisions
Aligning system design with strategic goals and governance
Technical Skills:
Experience designing, implementing, and managing data analytics using Databricks in the insurance domain.
Proven experience developing and implementing ETL pipelines from various data sources using Databricks on AWS.
Design and implement scalable ETL pipelines using Databricks to process and transform data from multiple sources.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality
data solutions.
Optimize data workflows and ensure data quality and integrity throughout the ETL process.
Monitor and troubleshoot data pipeline performance, implementing improvements as necessary.
Work with cloud technologies, specifically AWS, to manage data storage and processing resources effectively.
Document data engineering processes, architecture, and best practices to ensure knowledge sharing within the team.
Stay updated with the latest trends and technologies in data engineering and cloud computing.
Strong proficiency in Python, PySpark, and SQL, with hands-on experience developing data pipelines.
Data modeling and data lineage, with awareness of canonical data model implementation
Experience in Medallion Architecture implementation
Experience working in the insurance domain
Experience with JIRA
Mandatory Skills:
Strong expertise in Databricks, including the ability to develop and optimize ETL pipelines.
Proven experience with AWS cloud services, particularly in data storage and processing.
Solid understanding of data modeling, data warehousing, and data integration techniques.
Proficiency in programming languages such as Python or Scala for data manipulation and transformation.