Data Architect

Overview

On Site
$140,000 - $160,000
Full Time

Skills

Snowflake
SQL
ETL
PySpark
Qlik Replicate
Azure
AWS

Job Details

Position Summary:

We are seeking an experienced, hands-on Data Architect to lead the design, implementation, and optimization of modern, scalable data architecture solutions. The ideal candidate will have a strong foundation in distributed databases, data integration, cloud-based platforms, and modern data tools. In this role, you will collaborate closely with engineering, DevOps, and analytics teams to support enterprise-wide data initiatives.

Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline.
  • 7+ years of experience in data architecture, engineering, or related technical roles.
  • Proven hands-on experience with Snowflake and modern data warehouses.
  • Strong knowledge and experience with distributed SQL databases such as YugabyteDB.
  • Proficiency in ETL development, especially using Qlik Replicate or similar tools.
  • Solid experience in relational data modeling and data normalization techniques.
  • Strong understanding of data governance, security, and compliance practices.
  • Experience working with cloud platforms (e.g., AWS, Azure, or Google Cloud Platform) for data solutions.
  • Strong programming skills in Python and Java.
  • Experience with API integrations for data ingestion and exposure.
  • Familiarity with CI/CD practices and collaboration with DevOps teams.
  • Experience using Sigma or similar reporting and analytics tools.

Preferred / Nice-to-Have Skills:

  • Experience with Rocket ETL or similar ETL tools.
  • Hands-on experience with PySpark or distributed data processing frameworks.
  • Exposure to AI/ML workflows and tools such as Snowflake ML.
  • Knowledge of Infrastructure as Code (IaC) tools like Terraform.
  • Experience with version control systems and agile methodologies.

Key Responsibilities:

  • Architect and implement scalable, secure, and high-performance data platforms using modern technologies.
  • Design end-to-end data architecture solutions to support real-time, near-real-time, and batch data processing needs.
  • Lead data modeling efforts including normalization, relational design, and support for distributed SQL databases.
  • Integrate diverse data sources including internal systems, external APIs, and third-party feeds.
  • Develop and manage ETL/ELT pipelines, including near-real-time replication solutions using tools like Qlik Replicate.
  • Apply best practices in data governance, metadata management, and data quality frameworks.
  • Collaborate with DevOps teams to build and maintain automated CI/CD pipelines for data workloads.
  • Optimize cloud-based data solutions leveraging platforms such as Snowflake and YugabyteDB.
  • Work with API-based data ingestion and delivery for both internal and external applications.
  • Support reporting and dashboarding using tools like Sigma.
  • Contribute to the development of data-intensive applications using Python and Java.