Senior Data Analytics Engineer (Data Sciences)

Overview

Remote
On Site
Full Time

Skills

Research and Development
Life Sciences
Productivity
Business Analytics
Data Analysis
Modeling
Project Management
Financial Reporting
Enterprise Resource Planning
Financial Software
Leadership
Decision-making
Documentation
Version Control
Automated Testing
Extract, Transform, Load (ETL)
Data Warehouse
Forecasting
Profitability Analysis
IT Management
Mentorship
Pandas
PySpark
Collaboration
Finance
Data Governance
Data Architecture
Microsoft
Orchestration
Customer Engagement
Statistics
Computer Science
Mathematics
Data Science
Process Improvement
Python
SQL
Data Engineering
Databricks
Microsoft Power BI
Tableau
SAS
Analytics
Communication
Management
Reporting
Agile
Scrum
Apache Spark
Testing

Job Details

Work Schedule
Standard (Mon-Fri)

Environmental Conditions
Office

Job Description

When you join us at Thermo Fisher Scientific, you'll be part of an inquisitive team that shares your passion for exploration and discovery. We have revenues of more than $40 billion and make the largest R&D investment in the industry. This gives our people the resources and opportunities to make significant contributions to the world.

If you are passionate about engineering and how it can drive decision-making within a world-leading life science company, then this is the role for you!

As a key member of our team, you will collaborate across all divisions and functions. You will develop, build, and maintain analytics and data solutions for customers with various technical backgrounds. Your work will generate data insights to improve efficiency, productivity, and revenue. The ideal candidate will possess strong business analytics and communication skills, experience with data analytics and reporting tools, ETL processes, modeling knowledge, and solid project management abilities.

Key Responsibilities:

  • Lead the architecture and implementation of data pipelines and Microsoft Fabric-based data models to enable unified financial reporting and analytics.

  • Build, review, and optimize ETL processes developed in Python, ensuring efficient data ingestion, transformation, and quality across diverse data sources (ERP, financial systems, operational platforms).
  • Partner with Finance leadership to define business requirements, translate them into robust technical builds, and deliver actionable insights for decision-making.
  • Establish and implement data engineering standards, including coding guidelines, documentation, version control, and automated testing for ETL pipelines.
  • Develop, test, and deploy scalable data models using Lakehouse architectures and Fabric Data Warehouses to support forecasting, planning, and profitability analysis.
  • Provide technical leadership and mentorship to data engineers and analysts, encouraging skill development in Fabric, Power BI, and modern Python data frameworks (Pandas, PySpark, SQLAlchemy).
  • Collaborate multi-functionally with Finance, IT, and Data Governance teams to ensure alignment with enterprise data architecture and security policies.
  • Keep up to date with developments in Microsoft Fabric, Power BI, Python, and data orchestration tools, and suggest their strategic implementation.
  • Run multiple concurrent initiatives, leading all aspects of planning, prioritization, communication, and customer engagement.

Requirements/Qualifications:

  • Bachelor's degree or equivalent experience in a quantitative field such as Statistics, Computer Science, Mathematics, Data Science, or a related field; master's degree or equivalent experience preferred
  • Proven experience in a data engineering or data science role with dynamic responsibilities and scope
  • Experience building models and analyzing large, complex data sets to identify opportunities for revenue and/or process improvement within an organization
  • Technical proficiency in Python and SQL
  • Proficiency in data engineering and reporting platforms (Databricks, Power BI, Tableau, SAS Analytics)
  • Strong interpersonal skills with outstanding verbal and written communication
  • Proven track record of achieving desired results without direct-report authority
  • Knowledge of Agile/Scrum methodology is a plus
  • Hands-on experience with a distributed computing framework, such as Spark
  • Command of statistical topics, including distributions, hypothesis testing, and experiment design
  • Minimal travel required (approximately 10%)