Full-time job - Data Architect @ Issaquah, WA (/EAD)

Overview

On Site
Full Time
100% Travel

Skills

Data Acquisition
Unstructured Data
Real-time
Data Architecture
Storage
Workflow
Data Quality
Performance Tuning
Optimization
Migration
Legacy Systems
Python
Apache Spark
Analytics
Google Cloud Platform
Google Cloud
GCS
Data Flow
Change Data Capture
Apache Beam
Data Processing
JSON
Messaging
DevOps
GitHub
Terraform
Scalability
Scripting
Shell
Perl
RESTful
Data Modeling
Data Integration
Extract, Transform, Load (ETL)
SQL
Management
Cloud Computing
Database
Continuous Integration
Continuous Delivery
Data Warehouse
Distributed Computing
Conflict Resolution
Problem Solving
Analytical Skills
Communication
Collaboration
Teamwork

Job Details

Data Architect

Location: Issaquah, WA (onsite)

Full-time position

Client: Costco

UST is searching for a Data Architect who will play a role in designing, developing, and implementing data pipelines and data integration solutions using Python and Google Cloud Platform services.

The opportunity:

Collaborate with cross-functional teams to understand data requirements and design data solutions that meet business needs.

Develop, construct, test, and maintain data acquisition pipelines for large volumes of structured and unstructured data, including both batch and real-time processing.

Develop and maintain data pipelines and ETL processes using Python (an illustrative sketch follows this list).

Design, build, and optimize data models and data architecture for efficient data processing and storage.

Implement data integration and data transformation workflows to ensure data quality and consistency.

Monitor and troubleshoot data pipelines to ensure data availability and reliability.

Conduct performance tuning and optimization of data processing systems for improved efficiency and scalability.

This position description identifies the responsibilities and tasks typically associated with the performance of the position. Other relevant essential functions may be required.
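
By way of illustration only, the kind of Python and Apache Beam pipeline work described above might look like the minimal sketch below: reading raw JSON from GCS, dropping malformed records, and loading the results into BigQuery. The project, bucket, and table names are hypothetical placeholders, not details of the actual engagement.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_record(line):
    # Parse one JSON line; skip malformed records (in practice these would be
    # routed to a dead-letter output for data-quality review).
    try:
        yield json.loads(line)
    except json.JSONDecodeError:
        return

def run():
    # "DirectRunner" executes locally; switch to "DataflowRunner" to run on GCP.
    options = PipelineOptions(
        runner="DirectRunner",
        project="example-project",              # hypothetical project id
        region="us-west1",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromGCS" >> beam.io.ReadFromText("gs://example-bucket/raw/*.json")
            | "ParseJSON" >> beam.FlatMap(parse_record)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",   # hypothetical dataset.table
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )

if __name__ == "__main__":
    run()

The same pipeline shape extends to streaming by swapping the GCS source for a Pub/Sub subscription, which is one reason Beam suits mixed batch and real-time workloads.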

What you need:

Working experience as a Data Engineer.

Experience migrating large-scale applications from legacy systems to modern architectures.

Strong programming skills in Python and experience with Apache Spark for data processing and analytics.

Experience with Google Cloud Platform services such as GCS, Dataflow, Cloud Functions, Cloud Composer, Cloud Scheduler, Datastream (CDC), Pub/Sub, BigQuery, and Dataproc, along with Apache Beam for batch and stream data processing.

Experience developing JSON messaging structures for integration with various applications.

Experience applying DevOps and CI/CD practices (GitHub, Terraform) to ensure the reliability and scalability of data pipelines.

Experience with scripting languages such as Shell and Perl.

Experience designing and building ingestion pipelines using REST APIs (see the illustrative sketch after this list).

Experience with data modeling, data integration, and ETL processes.

Strong knowledge of SQL and database systems.

Familiarity with managing cloud-native databases.

Understanding of security integration in CI/CD pipelines.

Understanding of data warehousing concepts and best practices.

Proficiency in working with large-scale data sets and distributed computing frameworks.

Strong problem-solving and analytical skills.

Excellent communication and teamwork abilities.
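
By way of illustration only, the REST ingestion and JSON messaging requirements above might be approached along the lines of the minimal sketch below: pulling paginated records from a REST endpoint and publishing each one as a JSON message to a Pub/Sub topic. The endpoint URL, project, topic, and field names are hypothetical placeholders.

import json

import requests
from google.cloud import pubsub_v1

def ingest(page_url, project_id, topic_id):
    # Publish every record returned by the REST endpoint as a JSON message.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)

    while page_url:
        resp = requests.get(page_url, timeout=30)
        resp.raise_for_status()
        payload = resp.json()

        for record in payload.get("items", []):
            # A simple JSON envelope; downstream consumers agree on this structure.
            message = json.dumps({"source": "orders-api", "data": record})
            future = publisher.publish(topic_path, message.encode("utf-8"))
            future.result()  # block until Pub/Sub accepts the message

        # Follow pagination until the API stops returning a next page.
        page_url = payload.get("next_page")

if __name__ == "__main__":
    ingest(
        "https://api.example.com/v1/orders",  # hypothetical REST endpoint
        project_id="example-project",
        topic_id="orders-raw",
    )

Decoupling ingestion from processing through Pub/Sub in this way keeps the REST-facing code small and lets downstream Dataflow or BigQuery consumers scale independently.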
