Overview
Hybrid
$110k - $150k
Full Time
Skills
Product Innovation
Decision-making
Computer Science
Information Systems
Data Engineering
ELT
PostgreSQL
Cosmos DB
Python
Java
Extract, Transform, Load (ETL)
API
GraphQL
Snowflake Schema
Apache Hudi
Microsoft Purview
Apache Ranger
Terraform
Databricks
Machine Learning (ML)
Streaming
Jupyter
Monetization
Data Governance
Cloud Computing
Microservices
Real-time
Data Storage
SQL
NoSQL
Data Lake
Management
Collaboration
Apache Kafka
Microsoft Azure
Messaging
DevSecOps
Regulatory Compliance
Continuous Integration
Continuous Delivery
Workflow
Job Details
A fast-growing and innovative workforce travel technology company based in Scottsdale, Arizona, is seeking a highly skilled Senior Data Engineer to join its expanding Data Platform & Engineering team.
This hybrid role includes regular in-office collaboration with up to 20% flexibility to work from home.
We're looking for an experienced data engineer with a passion for building scalable, real-time data solutions that support product innovation and data-driven decision-making across the organization.
Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 5+ years of hands-on experience in data engineering or a closely related role.
- Experience designing and maintaining real-time and batch data pipelines using modern ETL/ELT frameworks.
- Deep knowledge of SQL, NoSQL, and hybrid data storage solutions, including PostgreSQL, Cosmos DB, and Data Lakes (e.g., Azure Data Lake, Delta Lake, Iceberg).
- Strong proficiency in Python, Java, and/or Go for data pipeline and API development.
- Skilled in working with event-driven architectures, including Azure Event Hubs, Azure Service Bus, and Apache Kafka.
- Experience with API development (REST, GraphQL, gRPC) to support Data-as-a-Product initiatives.
- Comfortable working with Azure and Apache data platforms (e.g., Databricks, Microsoft Fabric, Snowflake, Apache Hudi).
- Understanding of data governance, lineage, and compliance using tools like Microsoft Purview, OpenLineage, or Apache Ranger.
- Familiarity with Infrastructure as Code (IaC) practices using Bicep, Terraform, or CloudFormation.
- Experience supporting machine learning workflows with Azure ML, Databricks ML, or MLflow.
- Hands-on experience with real-time data streaming and notebooks (e.g., Jupyter, Synapse).
- Knowledge of data monetization and self-serve data platforms.
- Exposure to federated data governance models.
Responsibilities:
- Design and build scalable, cloud-native data infrastructure that integrates with microservices.
- Develop and optimize real-time and batch data pipelines for ingestion, transformation, and delivery.
- Implement data storage strategies across SQL, NoSQL, and Data Lake technologies.
- Build and manage secure, documented data APIs that enable self-service access for internal and external users.
- Collaborate with product and business teams to define and deliver reliable data products.
- Implement event-driven architectures using Kafka or Azure messaging services.
- Ensure data quality, security, lineage, and observability across all pipelines.
- Work with DevSecOps teams to integrate security and compliance into CI/CD workflows.