DATA STREAMING ENGINEER

Overview

Hybrid
Depends on Experience
Contract - Independent
Contract - W2
Contract - 12 Month(s)
No Travel Required
Unable to Provide Sponsorship

Skills

Streaming
Kafka

Job Details

Data Streaming Engineer

Client: Cardinal Glass 

Location: Hybrid (must work 3 days/week onsite in Eden Prairie, MN) 

Local Consultants Only

Position: Data Streaming Engineer 

Job Type: Contract 

Estimated Duration: 12+ Months

W2, 1099, or Own Corp Only

We need someone at the mid-to-senior level. This is a 12-month contract to start, with potential for extension and/or conversion down the line.

Hybrid (3 days per week on-site until Summer 2026, followed by 4 days per week on-site).

Team Overview + Business & Technology Initiatives:

Glass tempering is a heat treatment process that strengthens glass by inducing internal stresses. Before tempering, the glass is cut to its final dimensions and its edges are smoothed. The glass is then heated to near its softening point, and high-pressure air jets cool it rapidly. This causes the outer surfaces to contract while the inner core remains hot, creating compressive stress on the surface and tensile stress inside: the key to tempered glass's strength. Tempered glass is tested for strength, flatness, and optical quality. It should break into small, blunt fragments if shattered, minimizing injury risk.

Our newly formed Advanced Technology Group is focused on streamlining the glass tempering process to enhance product quality while reducing costs. By leveraging newly developed proprietary technology, we aim to optimize furnace operations—specifically by minimizing heat time and improving time management—through detailed analysis of upstream and in-furnace process flows. These insights will be correlated with defect patterns observed as glass exits the furnace.

This initiative is powered by data collection from Operational Technology (OT) equipment throughout the production line and is intended to serve as the foundation for expanding automation across additional, non-tempering systems within the glass fabrication process.

Role Summary:

We're looking for a Data Streaming Engineer who can bridge the worlds of industrial automation and modern data engineering. This is a unique, high-impact role: you will capture real-time data from IIoT systems and help transform it into usable, AI-ready pipelines.

Key Responsibilities:

  • Build and maintain data streaming solutions using Kafka, NATS.io, or similar technologies (see the sketch after this list).
  • Capture and process IIoT data, working with both standard device outputs and custom integrations with manufacturers.
  • Build and manage real-time data flows using tools like PySpark and Databricks.
  • Collaborate with Data Scientists to curate high-quality AI training datasets.
  • Ensure high reliability and low latency in data capture and processing.
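
As a rough illustration of this capture-and-stream work, here is a minimal sketch that publishes device readings to a Kafka topic. It assumes a local broker and the confluent-kafka Python client; the topic name, payload fields, and read_sensor() helper are hypothetical stand-ins for the proprietary equipment integrations:

import json
import time

from confluent_kafka import Producer  # pip install confluent-kafka

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker address

def delivery_report(err, msg):
    # Surfaces per-message delivery failures; reliability matters on the line.
    if err is not None:
        print(f"Delivery failed: {err}")

def read_sensor():
    # Hypothetical placeholder for a real device integration
    # (OPC UA, MQTT, or a vendor-specific API).
    return {"line": "temper-1", "zone": 3, "temp_c": 621.4, "ts": time.time()}

for _ in range(100):                      # bounded loop to keep the sketch finite
    reading = read_sensor()
    producer.produce(
        "furnace.telemetry",              # hypothetical topic name
        key=reading["line"],
        value=json.dumps(reading),
        callback=delivery_report,
    )
    producer.poll(0)                      # serve delivery callbacks
    time.sleep(0.1)                       # ~10 Hz capture rate, illustrative only

producer.flush()                          # block until all messages are delivered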

Top Required / Must-Have Skills:

  • Data streaming & management; data capture & queuing to document storage (Kafka, NATS, S3, or similar modern technologies; a second sketch follows this list)
  • Strong NoSQL & Spark experience (Azure Synapse, Databricks, etc.)
  • Adept at data analysis: able to learn proprietary data sets and create learning mechanisms
  • Real-time data processing expertise, including the ability to set up such systems
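
Continuing the same hypothetical example, this PySpark Structured Streaming sketch covers the queuing-to-document-storage path. The topic, schema, and S3 paths are illustrative, and it assumes Spark was launched with the Kafka connector package on the classpath:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, IntegerType, StringType, StructType

spark = SparkSession.builder.appName("furnace-telemetry-sink").getOrCreate()

# Hypothetical payload schema matching the producer sketch above.
schema = (
    StructType()
    .add("line", StringType())
    .add("zone", IntegerType())
    .add("temp_c", DoubleType())
    .add("ts", DoubleType())
)

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker address
    .option("subscribe", "furnace.telemetry")             # hypothetical topic
    .load()
)

# Kafka delivers bytes; cast to string and parse the JSON payload.
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("r"))
    .select("r.*")
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/furnace-telemetry/")          # illustrative
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/")  # illustrative
    .trigger(processingTime="10 seconds")
    .start()
)
query.awaitTermination()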

Preferred Experience:

  • Background in manufacturing, process automation, or industrial IoT.
  • Deep understanding of edge data collection, device integration, and stream processing.
  • Strong DevOps skills related to data pipelines.

Additional Notes:

  • Extremely hands-on role
  • Most machines are proprietary 
  • Somebody is going to need to dig into manuals 
  • Goal is to have sub-second response time; real-time data processing to feed real-time AI systems