Job Details
A leading global technology company is seeking a Software Data Engineer to design and maintain scalable ELT pipelines, working across the full data lifecycle from ingestion to reporting. The ideal candidate will collaborate closely with cross-functional teams, leveraging tools like Airflow, DBT, Snowflake, and Python to deliver high-quality, well-documented data solutions. If you're passionate about building robust data infrastructure and enabling data-driven decisions at scale, we invite you to apply!
Job Responsibilities:
Design, build, and maintain scalable ELT pipelines using SQL and Python.
Work across the full data lifecycle from ingestion and transformation to model deployment and reporting.
Collaborate with data scientists, engineers, and product managers to deliver clean, reliable, and well-documented data.
Implement and manage workflows using Airflow, while ensuring traceability and version control via GitHub.
Support transformation logic and data modeling using DBT, with data housed primarily in Snowflake.
Use Jupyter Notebooks and ad-hoc analysis to support business questions and drive actionable insights.
Build tools to monitor, validate, and test data pipelines, ensuring high availability and quality.
Contribute to automation efforts, improving the team's efficiency and reducing manual work.
Provide occasional support for urgent data reporting needs.
Engage constructively with both technical and non-technical colleagues to ensure data solutions align with business goals.
Key Qualifications:
2-5 years of experience in data engineering, software engineering, or data analytics roles.
Proficient in SQL and Python; comfortable with Bash or shell scripting.
Hands-on experience with modern data tooling:
Spark for large-scale data processing
Airflow for workflow orchestration
Snowflake and DBT for data transformation and modeling
AWS S3 for data storage and movement
Docker and Kubernetes for containerization and deployment workflows
Jupyter Notebooks for collaborative data exploration and documentation
Familiarity with Git-based CI/CD pipelines and collaborative code development.
Solid understanding of data warehousing, data modeling, and working with big data ecosystems.
Foundational knowledge of statistics, including mean, median, standard deviation, and variance.
Strong problem-solving skills with the ability to break down complex issues into manageable components.
Committed to good software engineering practices such as testing, documentation, and code quality checks.
Able to clearly communicate technical concepts to both technical peers and non-technical stakeholders.
Familiarity with battery systems or electrical engineering is a plus, but not required.
Education:
MS or Ph.D. in Computer Science, Software Engineering, Statistics, Electrical Engineering, Battery Engineering, or related technical field.
Certifications in Six Sigma (CSSBB) or Quality Engineering (CQE) are a plus but not required.
Type: Contract
Duration: 9 months (with a possibility to extend up to 18 months)
Work Location: Cupertino, CA (Hybrid)
Pay Range: $53.00 - $68.00 (DOE)