Data Architect / Principal Data Engineer - Locals to OH

Overview

On Site
Depends on Experience
Contract - W2

Skills

Agile
Amazon Web Services
Apache Kafka
Big Data
Business Intelligence
Cloud Computing
Collaboration
Communication
Computer Science
Conflict Resolution
Continuous Delivery
Continuous Integration
Data Governance
Data Processing
Data Quality
Data Warehouse
Datastage
Decision-making
Extract, Transform, Load (ETL)
FSO
Finance
Fraud
GitHub
Google Cloud Platform
IBM
IBM InfoSphere DataStage
Java
Jenkins
Management
Microsoft Azure
Migration
Modeling
PostgreSQL
Problem Solving
Programming Languages
Python
Real-time
Relational Databases
SAFe
SAS/SQL
SQL
Screening
ServiceNow
Snowflake
Use Cases
Version Control

Job Details

Title: Data Architect / Principal Data Engineer

Location: Madisonville Office Building, 5001 Kingsley Drive, Cincinnati, OH 45227

Duration: 12+ Months Contract

TECHNICAL SKILLS

Must Have

  • Business Intelligence (BI) - Data Engineering
  • Elevate & DBT
  • ETL (DataStage)
  • Kafka
  • Snowflake

Nice To Have

  • CI/CD (Jenkins/MettleCI)
  • Experience in Financial Institutions or other highly regulated industries
  • GitHub
  • Scripting (Python, SAS, SQL, Java)

JOB DESCRIPTION

We are seeking an experienced Data Architect/Engineer to modernize and build new data integrations and data product offerings in support of Disputes Operations. The role covers data input and consumption from Disputes platforms (ServiceNow FSO, AdjustmentHub, and NetReveal) and other related data sources/systems to meet operational, fraud, regulatory, and financial needs across the organization.

As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data products that enable data-driven decision-making across the organization. You will collaborate closely with data scientists, analysts, product managers, and software engineers to transform raw data into reliable, accessible, and actionable insights. Your work will involve developing data pipelines, modeling data, ensuring data quality, and implementing best practices in data governance and architecture.


Key Responsibilities:

  • Design and implement robust, scalable data pipelines and ETL processes.
  • Develop and maintain data models and data products that support business needs.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality, integrity, and security across all data products.
  • Monitor and optimize performance of data systems and infrastructure.
  • Advocate for data best practices and contribute to the evolution of the data platform.


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Proven experience with relational and non-relational databases (e.g. SQL, PostgreSQL).
  • Proficiency in programming languages such as Python and Java, and with data pipeline tools (e.g. DBT).
  • Experience with cloud platforms (e.g. AWS, Azure, Google Cloud Platform) and modern data warehouses (e.g. Snowflake).
  • Hands-on expertise in IBM DataStage for building and managing ETL processes.
  • Knowledge of big data technologies (preferably Kafka) is a plus.
  • Experience with GitHub for version control and team collaboration.
  • Exposure to Jenkins/MettleCI for CI/CD pipeline development and maintenance.
  • Excellent problem-solving and communication skills.

Additional Information:

  • Onsite - Cincinnati, OH
  • Part of an Agile Squad (SAFe, Scrum)


About Javen Technologies, Inc