Staff Data Engineer - Publishing Platform - Remote

Remote • Posted 14 hours ago • Updated 2 hours ago
Contract W2
$70-90/hr

Job Details

Skills

  • Publishing
  • Scalability
  • Promotions
  • Data Warehouse
  • Payments
  • Instrumentation
  • A/B Testing
  • Performance Metrics
  • Clarity
  • Privacy
  • Onboarding
  • Mentorship
  • Computer Science
  • Information Systems
  • Data Engineering
  • Big Data
  • Programming Languages
  • Python
  • Scala
  • SQL
  • Golang
  • Databricks
  • Apache Spark
  • Orchestration
  • Management
  • Warehouse
  • Cloud Computing
  • Amazon Web Services
  • Google Cloud Platform
  • Data Modeling
  • Analytical Skill
  • Use Cases
  • Streaming
  • Apache Kafka
  • Amazon Kinesis
  • Data Quality
  • Testing
  • Version Control
  • Workflow
  • Collaboration
  • Analytics
  • Machine Learning (ML)
  • Data Management
  • Meta-data Management
  • Data Architecture

Summary

You'll work closely with Data Scientists, ML Engineers, and Product partners to transform experimental models into robust, production-grade data pipelines, ensuring performance, scalability, and measurable business impact.

This role focuses on building and maintaining high-quality datasets and pipelines that fuel recommendation systems within player platforms, supporting data-driven personalization across store content, promotions, and more. We are looking for an individual who operates with a high degree of autonomy and proactively anticipates and mitigates risks. You'll also contribute to modernizing our data systems and improving the team's engineering practices, data modeling approaches, and observability standards.

Responsibilities
Design, build, and maintain scalable data pipelines for structured and semi-structured data that support analytics, machine learning models, and player-facing systems.
Implement efficient, reliable data models and transformations within Riot's central game data warehouse, with a focus on Medallion architecture, accuracy, performance, and long-term maintainability.
Develop and productionize pipelines to ingest, transform, and serve data for systems such as Client, payments, content delivery, and store recommendations, including instrumentation to support A/B testing and key performance metrics.
Collaborate with Data Scientists, Machine Learning Engineers, and Software Engineers to ensure data quality, schema clarity, and smooth integration into downstream systems.
Diagnose and resolve issues in data pipelines; optimize for reliability, performance, and cost efficiency; and enhance observability across workflows.
Apply privacy, security, and responsible data use guidelines when building or accessing behavioral datasets (e.g., GDPR, CCPA, internal governance policies).
Document data models, pipelines, data contracts, and SLAs to ensure transparency and alignment across teams.
Contribute to team engineering practices, including coding standards, testing strategies, and operational best practices.
Participate in on-call rotations, perform code reviews, and support onboarding and mentorship of junior engineers.
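The Medallion architecture named in the responsibilities layers data as bronze (raw ingested records), silver (cleansed and typed), and gold (business-level aggregates). A minimal illustrative sketch in plain Python, with hypothetical record fields not taken from the posting:

```python
# Illustrative Medallion-style layering: bronze -> silver -> gold.
# Field names and values are hypothetical examples.

raw_events = [  # "bronze": raw, append-only ingested records
    {"player_id": "p1", "event": "purchase", "amount": "9.99"},
    {"player_id": "p1", "event": "purchase", "amount": "4.99"},
    {"player_id": "p2", "event": "purchase", "amount": "bad"},  # malformed row
]

def to_silver(rows):
    """Cleanse and type-cast; skip rows that fail validation."""
    out = []
    for r in rows:
        try:
            out.append({"player_id": r["player_id"],
                        "event": r["event"],
                        "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine these for review
    return out

def to_gold(rows):
    """Aggregate to a business-level metric: total spend per player."""
    totals = {}
    for r in rows:
        if r["event"] == "purchase":
            totals[r["player_id"]] = totals.get(r["player_id"], 0.0) + r["amount"]
    return totals

silver = to_silver(raw_events)
gold = to_gold(silver)
```

In a production setting these layers would typically be Delta Lake tables transformed by Spark jobs on Databricks, as the qualifications describe; the structure of the transformations is the same.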

Required Qualifications
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field.
5-7+ years of hands-on experience in data engineering, with a focus on building and maintaining scalable pipelines in production environments.
Strong proficiency in big data tools and programming languages such as Python, Scala, Spark, SQL, and optionally Go (Golang).
Hands-on experience with Databricks for building and operating scalable data pipelines (e.g., Spark jobs, Delta Lake).
Experience with orchestration and workflow tools (e.g., Airflow, Dagster, or Prefect).
Strong experience with dbt as a framework for modular data modeling, transformations, testing, and lineage management in the warehouse.
Familiarity with cloud-based data infrastructure, particularly AWS or Google Cloud Platform.
Solid understanding of Medallion architecture, schema design, and data modeling principles for analytical and operational use cases.
Exposure to streaming data pipelines or event-driven ingestion using technologies like Kafka, Kinesis, or Pub/Sub.
Working knowledge of data quality, testing, version control, and observability best practices in modern data workflows.
Strong collaboration skills with the ability to communicate effectively across engineering, analytics, and product teams.
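The orchestration tools listed above (Airflow, Dagster, Prefect) all express a pipeline as a dependency graph of tasks and execute it in topological order. A minimal sketch of that idea using only the standard library, with hypothetical task names:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the set of tasks it
# depends on, the way an Airflow or Dagster DAG declares upstreams.
dag = {
    "ingest_events": set(),
    "build_silver": {"ingest_events"},
    "build_gold": {"build_silver"},
    "refresh_recommendations": {"build_gold"},
}

# static_order() yields tasks so every task runs after its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
# ['ingest_events', 'build_silver', 'build_gold', 'refresh_recommendations']
```

A real scheduler adds retries, backfills, and parallel execution of independent branches on top of this ordering, but the dependency declaration is the core abstraction these tools share.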

Desired Qualifications
Experience supporting personalization, recommendations, or player-facing ML systems.
Exposure to feature stores, ML data pipelines, or online/offline data management patterns.
Knowledge of event-based or contextual recommendation signals (user behavior, session data, content metadata).
Interest in contributing to data architecture and standards as an emerging craft leader.
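The "contextual recommendation signals" mentioned above are typically derived features computed from behavioral events. A deliberately simple illustrative example (event shape and field names are hypothetical): deriving a player's dominant content category from session data.

```python
from collections import Counter

# Hypothetical session events; in practice these would arrive via a
# streaming pipeline (Kafka, Kinesis) rather than an in-memory list.
session_events = [
    {"player_id": "p1", "category": "skins"},
    {"player_id": "p1", "category": "skins"},
    {"player_id": "p1", "category": "bundles"},
]

def top_category(events):
    """Return the most frequently viewed content category in a session."""
    counts = Counter(e["category"] for e in events)
    return counts.most_common(1)[0][0]

print(top_category(session_events))  # skins
```

Signals like this would be materialized into a feature store or gold-layer table for the recommendation models to consume.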
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: cxbcsi
  • Position Id: Job44542
