Solutions Architect

Overview

Hybrid
Depends on Experience
Full Time

Skills

Amazon Web Services
Cloud Computing
Data Analysis
Data Governance
Data Lake
Data Modeling
Data Stewardship
Regulatory Compliance
Solution Architecture
Kubernetes
DevOps

Job Details

The Role

We are seeking a hands-on Solutions Architect to join our Data and Analytics Platform team, driving the design and implementation of our next-generation cloud-based, streaming Data Mesh platform. This role is pivotal to enabling teams to create scalable, data-driven products and advancing our application modernization strategy.

Our Vision

We believe decentralized data is the future, with Data Mesh as the key to unlocking its potential. Our vision is a world where data is treated as a product, ownership is distributed to foster innovation, and domain-oriented, decentralized infrastructure drives agility and collaboration.

The Streaming Data Mesh will transform data into a strategic asset. Through well-defined data domains and business ownership, we aim to:

  • Empower teams with self-serve data infrastructure.
  • Treat Data Domains as first-class products.
  • Shift responsibility from centralized teams to foster accountability and innovation.

By embracing platform thinking and adhering to the four pillars of Data Mesh (Data Ownership, Data as a Product, Self-Service Infrastructure, and Federated Governance), we will create a seamless experience for creating, discovering, and utilizing trusted, connected, and real-time data.

If you are passionate about shaping the future of data through cutting-edge architectures, fostering a culture of data stewardship, and enabling decentralized innovation, we'd love to hear from you!

How You'll Help Take Us There

  • Drive the Data Management Strategy & Architecture: Work with the Head of the Data Platform & Mesh to define the self-service automation, data catalogue, classifications, and other functionality needed across the organisation.
  • Contribute to the Data Mesh Architecture: Collaborate with the Head of the Data Platform to refine and enhance the defined Data Mesh architecture, ensuring its scalability, performance, and alignment with evolving business and technical needs.
  • Enable Decentralized Data Ownership: Define and implement processes and infrastructure to support domain-oriented data ownership, fostering accountability and innovation across data domains.
  • Champion Data as a Product: Collaborate with domain teams to treat data as a product, developing reusable, high-quality, and trusted datasets that meet the needs of diverse stakeholders.
  • Build Self-Service Infrastructure: Design and deploy self-serve tools and platforms that empower teams to create, discover, and manage data independently, reducing reliance on centralized engineering teams.
  • Implement Federated Governance: Establish and enforce standards, policies, and best practices to ensure security, compliance, and interoperability across the Data Mesh while maintaining agility.
  • Collaborate Across Teams: Work closely with engineering, data governance, and analytics teams to integrate the Data Mesh with existing systems, ensuring seamless data sharing and usability.

What We're Looking For

  • 8+ years of hands-on Java engineering experience, with 3+ years in data architecture, solutions architecture, or similar roles, and a strong focus on cloud-based data platforms (still hands-on).
  • Proven experience contributing to or leading the implementation of Data Mesh, Data Lake, or modern data platforms in a complex, enterprise-scale environment.
  • Strong knowledge of data modeling, APIs, and integration patterns.
  • Good understanding of product management, agile principles, and development methodologies, with the ability to support agile teams by advising on opportunities, impact, and risks.
  • Experience writing technical proposals, such as RFCs, for peer review and discussion.
  • Hands-on expertise in designing streaming data systems (e.g., Kafka, Flink, KStreams) and distributed data architectures (Spark, ETL, EDA).
  • Proficiency in cloud platforms (AWS), including their services (e.g., AWS Glue, EMR, S3, Lambda).
  • Familiarity with DevOps practices, including CI/CD pipelines, Infrastructure as Code (IaC), and containerization (e.g., Kubernetes, Docker).
  • Ability to work effectively in a team environment and lead cross-functional teams.
  • BS/MS degree in Computer Science, Engineering, or a related subject.