Software Development Engineer 3

Overview

On Site
$92
Contract - W2

Skills

Accessibility
Amazon Web Services
Analytics
Apache Airflow
Apache Flink
Apache Kafka
Apache Spark
Automated Testing
Backbone.js
C++
Cloud Computing
Collaboration
Computer Science
Continuous Delivery
Continuous Integration
DaaS
Data Domain
Data Quality
Decision-making
DevOps
Docker
Elasticsearch
IT Management
Java
KPI
Kubernetes
Legacy Systems
Machine Learning (ML)
Manufacturing
Manufacturing Engineering
Marketing
Mathematics
Mentorship
Physics
Product Development
Programming Languages
Python
RDBMS
Real-time
Robotics
SQL
Scala
Semantics
Snowflake Schema
Software Design
Sprint
Streaming
Systems Design
Terraform
Use Cases
Warehouse

Job Details

We're looking for a Senior Software Engineer who is passionate about building a modern, scalable Data as a Service (DaaS) platform that empowers Intuitive's Digital products and supports over 2,000 engineers across the organization. In this role, you will own and evolve critical components of our real-time and micro-batch data pipelines that power product development, internal tools, and key metrics and insights for engineering, manufacturing, marketing, enterprise analytics, and business teams.

Your work will focus on enabling high-throughput, low-latency data delivery through streaming pipelines, dynamic transformations, and API-based access to data, driving discoverability, accessibility, and actionable insights. You will help define the architecture and engineering practices that support self-service analytics and operational decision-making at scale.

As a catalyst for change, you will be at the forefront of reimagining how engineering teams consume and interact with data. Long-term success in this role means building robust, efficient systems and replacing legacy processes with modern solutions that allow teams to move faster, with greater confidence and autonomy.

Responsibilities:

Design and build scalable, distributed Data as a Service (DaaS) systems that ingest, process, and serve data from robotics, manufacturing, engineering, and clinical sources in real-time and batch modes

Develop and maintain robust APIs, data services, and tooling to provide internal teams with secure, efficient, and intuitive access to high-quality data

Partner with engineering, analytics, and business stakeholders to evolve data contracts and models that support emerging use cases and ensure semantic consistency

Implement CI/CD practices for data services, including automated testing for data quality, service reliability, and schema evolution

Champion a self-service data culture by building discoverable, well-documented data products and guiding teams toward empowered, autonomous data access

Act as a technical leader within the data domain, driving best practices, mentoring teammates, and continuously improving how data is produced, shared, and consumed across the organization

Key Skills & Experience:

Solid quantitative background in Computer Science, Engineering, Physics, or Math, or 8-10+ years of hands-on experience in a technically demanding role

Proficient in at least two major programming languages such as Python, Go, Scala, C++, or Java, with a strong understanding of software design and architecture

Deep knowledge of SQL and understanding of relational database internals and performance

Proven experience building data pipelines and working with distributed systems using technologies like Apache Spark, Kafka, Elasticsearch, Snowflake, and Airflow

Strong collaborator who actively contributes to code reviews, system design discussions, sprint planning, and KPI evaluations to drive team excellence and technical quality

A bachelor's or master's degree in Computer Science, Information Technology, or a related field is required.

Bonus Points:

Experience working on Data Platform or Infrastructure Engineering teams

Hands-on experience with AWS, Docker, Kubernetes, Kafka, Elasticsearch, Apache Airflow, Snowflake, and Terraform

Familiarity with CI/CD best practices for DataOps and deployment automation

Experience developing with Docker and deploying into Kubernetes-based environments

About the Data Services Team:

We are a rapidly growing organization of Software and Data Engineers with a strong DevOps culture, focused on building a next-generation Data as a Service (DaaS) platform. We are seeking a strong technical lead who is passionate about designing scalable, real-time data systems that deliver high-quality, trusted data across the company.

Our platform leverages technologies such as Apache Kafka for real-time data ingestion, Apache Flink for stream processing and transformation, Snowflake for scalable warehousing, and AWS as our cloud backbone. We focus on serving data via APIs and event streams to enable on-demand access, analytics, and machine learning across product and engineering teams.

This is a newly formed group with a mandate to drive meaningful change by enabling fast, reliable, and consistent access to data. We aim to eliminate organizational silos and foster a strong culture of collaboration and ownership. If you see a high-leverage solution to a long-standing data problem, we support tearing down legacy systems and building the right solution, provided you have a strong plan and the drive to make it happen.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.