Hello,
Hope you're doing well!
Please find the requirement below. If you are comfortable with the requirement, please reply with your updated resume or call me back at
Position :: Senior Software Engineer
Location :: Sunnyvale, CA (hybrid schedule) (Local Only)
Duration :: 12+ Months (Contract)
Job Description:
Core Focus of the Role
We are looking for a candidate with hands-on Informatica-to-Airflow migration experience, along with ETL conversion or Airflow-based automation. The hiring manager is also looking for Airflow, Python-based data migration, and CI/CD automation experience. This project needs someone who can start immediately and handle code-level migration from Informatica to Python in Airflow.
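For context, "code-level migration from Informatica to Python in Airflow" typically means rewriting Informatica mappings (Filter, Expression, and similar transformations) as Python callables that Airflow schedules. A minimal, hypothetical sketch of one such converted transformation (all table and field names are illustrative, not from this posting):

```python
# Hypothetical sketch: an Informatica Filter + Expression transformation
# rewritten as a plain Python function, as it might run inside an Airflow task.
# The "orders" shape and field names (status, amount, fx_rate) are illustrative.

def transform_orders(rows):
    """Drop cancelled orders and derive a USD amount column."""
    out = []
    for row in rows:
        if row["status"] == "CANCELLED":
            continue  # equivalent of an Informatica Filter transformation
        enriched = dict(row)
        # equivalent of an Informatica Expression transformation
        enriched["amount_usd"] = round(row["amount"] * row["fx_rate"], 2)
        out.append(enriched)
    return out

rows = [
    {"id": 1, "status": "OPEN", "amount": 10.0, "fx_rate": 1.1},
    {"id": 2, "status": "CANCELLED", "amount": 5.0, "fx_rate": 1.1},
]
print(transform_orders(rows))  # one surviving row, with amount_usd added
```

In an actual migration, a function like this would be wrapped as an Airflow task (for example, via `PythonOperator` or the TaskFlow API) so that scheduling, retries, and dependencies move from Informatica workflows to Airflow DAGs.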
Build and evolve a modern, scalable Data-as-a-Service (DaaS) platform that supports over 2,000 engineers across Intuitive.
Design real-time and micro-batch data pipelines for ingestion, transformation, and serving of high-volume data from robotics, manufacturing, engineering, and clinical systems.
Develop APIs, data services, and internal tools that enable secure, efficient, and intuitive access to trusted data.
Modernize legacy systems - replacing them with high-throughput, low-latency, self-service data infrastructure.
Serve as a technical leader, defining data architecture, best practices, and mentoring teammates.
Technical Stack & Skills
Programming: Expertise in Python, Go, Scala, C++, or Java (at least two).
Data Systems: Hands-on experience with Apache Spark, Kafka, Elasticsearch, Snowflake, Airflow, and AWS.
DevOps / Infrastructure: Familiarity with Docker, Kubernetes, and Terraform.
DataOps: Knowledge of CI/CD, automated testing for data quality, and schema evolution.
SQL Expertise: Strong understanding of database internals, performance tuning, and query optimization.
Key Responsibilities
Design and deploy distributed data pipelines and services to deliver analytics-ready data.
Partner with engineering, analytics, and business teams to define data contracts, models, and semantics.
Implement testing, monitoring, and continuous deployment for reliable data delivery.
Drive a self-service data culture through documentation, discoverability, and collaboration.
Lead technical discussions, code reviews, and cross-functional initiatives to improve data systems.
Ideal Candidate Profile
8-10+ years of experience in software or data engineering roles with a strong systems and architecture background.
Skilled in building scalable, distributed, and resilient data infrastructure.
Strong collaborator and technical mentor who contributes to team culture and engineering excellence.
Background in Computer Science, Engineering, Physics, or Mathematics (Bachelor's or Master's degree).
About the Data Services Team
A new, rapidly growing team with a mission to create a next-generation DaaS platform.
Works with Apache Kafka, Flink, Snowflake, and AWS to deliver real-time data as APIs and event streams.
Focused on collaboration, ownership, and modernization - tearing down silos and replacing outdated systems.
Ideal for engineers who are hands-on, forward-thinking, and passionate about data infrastructure innovation.
Your work will focus on enabling high-throughput, low-latency data delivery through streaming pipelines, dynamic transformations, and APIs, making data accessible and insights actionable. You will help define the architecture and engineering practices that support self-service analytics and operational decision-making at scale.
As a catalyst for change, you will be at the forefront of reimagining how engineering teams consume and interact with data. Long-term success in this role means building robust, efficient systems and replacing legacy processes with modern solutions that allow teams to move faster, with greater confidence and autonomy.
Responsibilities:
Design and build scalable, distributed Data-as-a-Service pipelines that ingest, process, and serve data from robotics, manufacturing, engineering, and clinical sources in real-time and batch modes
Develop and maintain robust APIs, data services, and tooling to provide internal teams with secure, efficient, and intuitive access to high-quality data
Partner with engineering, analytics, and business stakeholders to evolve data contracts and models that support emerging use cases and ensure semantic consistency
Implement CI/CD practices for data services, including automated testing for data quality, service reliability, and schema evolution
Champion a self-service data culture by building discoverable, well-documented data products and guiding teams toward empowered, autonomous data access
Act as a technical leader within the data domain, driving best practices, mentoring teammates, and continuously improving how data is produced, shared, and consumed across the organization
Key Skills & Experience:
Solid quantitative background in Computer Science, Engineering, Physics, Math, or 8-10+ years of hands-on experience in a technically demanding role
Proficient in at least two major programming languages such as Python, Go, Scala, C++, or Java, with a strong understanding of software design and architecture
Deep knowledge of SQL and understanding of relational database internals and performance
Proven experience building data pipelines and working with distributed systems using technologies like Apache Spark, Kafka, Elasticsearch, Snowflake, and Airflow
Strong collaborator who actively contributes to code reviews, system design discussions, sprint planning, and KPI evaluations to drive team excellence and technical quality
Minimum of a bachelor's or master's degree in Computer Science, Information Technology, or a related field.
Bonus Points:
Experience working on Data Platform or Infrastructure Engineering teams
Hands-on experience with AWS, Docker, Kubernetes, Kafka, Elasticsearch, Apache Airflow, Snowflake, and Terraform
Familiarity with CI/CD best practices for DataOps and deployment automation
Experience developing with Docker and deploying into Kubernetes-based environments
About the Data Services Team:
We are a rapidly growing organization made up of Software Engineering, Data Engineering, and DevOps teams with a strong DevOps culture, focused on building a next-generation Data-as-a-Service (DaaS) platform.
We are seeking a strong technical lead who is passionate about designing scalable, real-time data systems that deliver high-quality, trusted data across the company.
Our platform leverages technologies such as Apache Kafka for real-time data ingestion, Apache Flink for stream processing and transformation, Snowflake for scalable warehousing, and AWS as our cloud backbone.
We focus on serving data via APIs and event streams to enable on-demand access, analytics, and machine learning across product and engineering teams.
This is a newly formed group with a mandate to drive meaningful change by enabling fast, reliable, and consistent access to data. We aim to eliminate organizational silos and foster a strong culture of collaboration and ownership. If you see a high-leverage solution to a long-standing data problem, we support tearing down legacy systems and building the right solution provided you have a strong plan and the drive to make it happen.
Thanks & Regards
Vinay Kumar
Senior Technical Recruiter
VISION INFOTECH INC
Phone:
Email:
368 Main Street, Ste #3, Melrose, MA 02176
E-Verified Company