Senior Data Engineer

Overview

Remote
Depends on Experience
Full Time
No Travel Required

Skills

Agile
Amazon DynamoDB
Amazon Redshift
Amazon Web Services
Analytical Skill
Analytics
Apache Airflow
Apache Kafka
Apache Spark
Big Data
Business Intelligence
Cloud Computing
Collaboration
Communication
Computer Science
Continuous Delivery
Continuous Integration
Data Architecture
Data Engineering
Data Governance
Data Lake
Data Modeling
Data Processing
Data Quality
Data Security
Data Warehouse
Docker
ELT
Extract, Transform, Load (ETL)
Git
Informatica
Information Systems
Mentorship
Microsoft Azure
Microsoft Power BI
MongoDB
MySQL
NoSQL
PostgreSQL
Python
Real-time
Regulatory Compliance
SQL
Scala
Shell
Snowflake Schema
Tableau
Technical Drafting
Terraform
Unstructured Data

Job Details

Job Title: Senior Data Engineer (Remote)

Experience Required: 10+ Years


Job Summary:

We are seeking an experienced and highly skilled Senior Data Engineer with a strong background in designing, building, and maintaining robust, scalable, and high-performance data pipelines and data architecture. The ideal candidate has at least 10 years of experience in data engineering, cloud data platforms, ETL/ELT, and big data ecosystems. This role is fully remote and demands strong collaboration, communication, and analytical skills to partner with cross-functional teams.


Key Responsibilities:

  • Design and develop scalable and maintainable data pipelines to support data ingestion, processing, and analytics.

  • Architect and implement data lake and data warehouse solutions (e.g., AWS Redshift, Snowflake, BigQuery).

  • Build ETL/ELT processes for structured and unstructured data using tools such as Apache Airflow, dbt, Informatica, or custom pipelines.

  • Optimize data systems and ensure data quality, integrity, and reliability.

  • Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver effective solutions.

  • Ensure security, compliance, and governance in all data operations.

  • Implement data monitoring, logging, and alerting systems to ensure data pipelines run efficiently.

  • Mentor junior data engineers and participate in code reviews and technical design discussions.


Required Skills:

  • Languages: Python, SQL, Scala (nice to have), Shell scripting

  • Cloud Platforms: AWS (preferred), Azure, or Google Cloud Platform

  • Data Tools: Apache Spark, Kafka, Airflow, Snowflake, Redshift, Glue, EMR

  • Databases: PostgreSQL, MySQL, NoSQL (MongoDB, DynamoDB)

  • Data Warehousing & Lakehouse: Snowflake, Delta Lake, Redshift

  • DevOps/DataOps: Docker, Git, CI/CD pipelines, Terraform (plus)

  • Other: Excellent understanding of data modeling, data governance, and data security best practices


Preferred Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.

  • Experience with real-time data processing and stream processing platforms.

  • Strong communication skills and experience working with globally distributed teams.

  • Familiarity with BI tools like Power BI, Tableau, or Looker is a plus.

  • Experience with agile development methodologies.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.