Job Description: Staff Data Engineer, Enterprise Data Operations
Location: Westminster, CO (Corporate Headquarters)
About Trimble
Trimble is an industrial technology leader transforming the way the world works by delivering solutions that connect the physical and digital worlds. Core technologies in positioning, modeling, connectivity, and data analytics enable customers to improve productivity, quality, safety, transparency, and sustainability. From purpose-built products to enterprise lifecycle solutions, Trimble is transforming industries such as agriculture, construction, geospatial, and transportation.
For more information about Trimble (NASDAQ: TRMB), visit: www.trimble.com
The Opportunity
The Enterprise Data Operations team is the central nervous system of Trimble's data ecosystem. Our mission is to empower business units across the globe with trusted, high-quality data to drive analytics, business intelligence, and strategic decision-making. We are responsible for the architecture, development, and governance of the enterprise data platform that powers Trimble's growth.
We are seeking a Staff Data Engineer to join our team at our corporate headquarters in Westminster, Colorado. This is a pivotal, senior-level role for a technical thought leader passionate about building robust, scalable, and elegant data solutions. You will not just be building pipelines; you will be setting the technical direction, mentoring engineers, and solving our most complex data challenges. As a technical expert, you will be instrumental in designing and delivering the next generation of our enterprise data products.
What You'll Do (Key Responsibilities)
- Data Architecture & Strategy: Lead the design and evolution of our enterprise data platform, ensuring it is scalable, reliable, and secure. Champion and implement best practices in data architecture, data modeling, and data engineering.
- Enterprise Data Product Delivery: Architect, build, and optimize complex, large-scale ETL/ELT data pipelines from a wide variety of source systems using modern big data technologies on cloud platforms (AWS, Azure, Google Cloud Platform).
- Technical Leadership: Act as a technical thought leader and subject matter expert for data engineering within the organization. Mentor and guide junior and mid-level data engineers, fostering a culture of technical excellence and innovation through code reviews, design sessions, and knowledge sharing.
- Development & Optimization: Develop robust, reusable data processing frameworks and components. Write clean, high-quality, and maintainable code in Python and SQL. Profile and tune data processing jobs to improve performance and reduce cost.
- Cross-Functional Collaboration: Partner closely with data scientists, BI developers, data analysts, and business stakeholders to understand their data needs and translate complex business requirements into scalable technical solutions.
- Operational Excellence: Drive automation in data quality, data governance, and platform operations. Troubleshoot and resolve complex data integrity and performance issues across the enterprise data landscape.
What You'll Bring (Required Qualifications)
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related technical field.
- 8+ years of progressive experience in data engineering, software engineering, or a related role, with a demonstrated track record of delivering complex, enterprise-scale data products.
- Expert-level proficiency in SQL and at least one programming language, preferably Python.
- Extensive hands-on experience with a major cloud platform (AWS, Azure, or Google Cloud Platform) and its data services (e.g., S3, Redshift, Glue, Lambda; ADLS, Synapse, Data Factory; BigQuery, Cloud Storage).
- Deep expertise with modern big data processing frameworks like Apache Spark.
- Proven experience designing and building large-scale data warehouses and data lakes from the ground up, with a deep understanding of data modeling techniques (e.g., Kimball, Inmon, Data Vault).
- Demonstrated ability to lead technical projects, influence architecture decisions, and mentor other engineers.
- Excellent problem-solving skills and the ability to navigate ambiguity in a fast-paced environment.
What We'd Love to See (Preferred Qualifications)
- Master's degree in Computer Science or a related field.
- Experience with modern data stack technologies and orchestration tools such as dbt, Airflow, or Prefect.
- Experience with streaming data technologies like Kafka, Kinesis, or Spark Streaming.
- Knowledge of modern data architecture concepts like Data Mesh or Data Fabric.
- Experience with Infrastructure as Code (e.g., Terraform, CloudFormation) and CI/CD best practices for data pipelines.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Relevant cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer).