Location: Dallas, TX (onsite; local candidates preferred)
Day-to-Day Job Duties:
Design high-quality domain (canonical) data models and consumption-layer dimensional models, ensuring compliance with NLG data architecture and governance standards.
Collaborate with business and technical stakeholders to define, document, and govern how data flows across the organization, supporting analytics, reporting, operational use cases, and advanced data science initiatives.
Work under the guidance of NLG Core Data Architects to create data models for the different layers (Domain and Consumption) of our enterprise data platform hosted in Databricks on Azure.
Collaborate with Application Data SMEs to understand the complete structure and business definition of the source data.
Collaborate with business analysts and business analytics SMEs to understand the Bus Matrix and the associated requirements definitions for the data needs.
Create and maintain data models using the ER/Studio tool, complete with DDL generation.
Define end-to-end data lineage (source-to-target, or S2T) and document it on Confluence pages per NLG standards.
Generate SQL query snippets that explain the data transformation logic.
Profile and analyze source data to ensure data quality and recommend data refinement and cleansing methods.
Perform necessary reviews and obtain sign-offs prior to delivery to the DEV and QA teams.
Perform handoff walkthroughs of model and S2T to the DEV team and participate in design review sessions for any refinements as necessary.
Perform handoff walkthroughs of model and S2T to the QA team and participate in defect triage sessions as necessary.
Build SQL queries for end consumption business views in alignment with the business requirements.
Work with business UAT teams to clarify data model and business view questions.
Provide any other necessary support through the engineering build, QA, and UAT phases.
Basic Qualifications:
7-10 years of experience in data architecture, data modeling, or enterprise analytics.
Strong experience with canonical/domain modeling, dimensional modeling, and semantic modeling.
Proficiency in documenting data lineage and metadata (e.g., Purview, Collibra, Alation, or native Databricks capabilities).
Minimum 5 years of experience working with data modeling tools (e.g., Erwin, ER/Studio, or similar).
At least 3 years of experience with life insurance, annuities, policy administration, claims, actuarial, or related financial services data is highly desirable.
Understanding of ACORD data standards, product hierarchies, distribution channels, customer/party models, and regulatory reporting will be an added advantage.
Prior conceptual understanding of industry data models such as IBM IIW, Teradata ILDM, or Oracle OIDF will be helpful.
Proficiency in writing SQL queries is a must.
Experience working with Azure Databricks (Delta Lake, Unity Catalog, DBFS, SQL endpoints) will be an added advantage.
Familiarity with Data Mesh, Kimball, Inmon, Data Vault, and Lakehouse modeling patterns.
Excellent communication skills and the ability to translate complex concepts for non-technical audiences.
Ability to lead architecture discussions and influence stakeholders.
Comfortable working in agile delivery environments.
Strong documentation habits and detail orientation.
Travel: None.
Degree: BA or BS degree with a STEM major is preferred. Relevant advanced degree is a plus.
Nice to Have:
Experience with data contracts, API modeling, or event-driven architecture will be helpful.
Experience working with Alteryx and/or Tableau will be an added advantage.
Experience with SSIS will be helpful.
Knowledge of Python will be helpful.
Best Regards,
Govinda Rajulu M | Sr. Talent Acquisition Specialist