Sr Data Engineer

Overview

On Site
Depends on Experience
Contract - W2
Contract - Independent

Skills

API
Amazon Web Services
Amazon DynamoDB
Amazon Kinesis
Amazon S3
Apache Airflow
Apache Cassandra
Apache Kafka
Apache Spark
Big Data
Cloud Computing
Cloud Security
Data Governance
Data Engineering
Continuous Integration
Continuous Delivery
Databricks
Docker
Database
Data Processing
Data Modeling
Data Lake
IoT
Extract, Transform, Load (ETL)
Microsoft Azure
MongoDB
NoSQL
Python
RDBMS
RESTful
Point Of Sale
PostgreSQL
SQL
Terraform

Job Details

Role: Sr Data Engineer

Location: Irving, TX (5 Days Onsite)

W2 Contract Position

JD:

We are currently looking for a Sr Data Engineer to join our team and work with our
Engineering, Product, Support, and Customer Success teams, with responsibility for keeping
our platform and services running at full steam. For this role, you'll need a good
understanding of how each team works and how they interact with one another.

Responsibilities
Design and build scalable real-time and batch data pipelines to support store
operations, including POS transactions, inventory updates, and device logs.
Lead integration of store systems (handheld devices, IoT sensors, store APIs) with
centralized cloud-based data platforms.
Develop efficient MongoDB schemas and queries to support transactional and
analytical workloads.
Ensure data reliability, observability, and latency optimization across all processing
stages.
Implement and maintain infrastructure-as-code, CI/CD pipelines, and automated
deployment workflows.
Work collaboratively with cross-functional teams in engineering, product, store
operations, and analytics to define data requirements and deliver scalable solutions.
Establish and enforce data governance, access control, and compliance aligned with
internal security policies and industry regulations (e.g., PCI-DSS).
Mentor junior engineers and contribute to architectural reviews, standards, and
technical roadmaps.
Key Technologies & Stack
We are looking for candidates with proven expertise in the following technologies and
platforms:
Strong hands-on experience with AWS services, particularly Lambda, Kinesis, Glue, S3,
Step Functions, CloudWatch, and IAM, to build and manage scalable, cloud-native data
pipelines.

Proficiency in using Amazon S3 as a central data lake and Apache Spark (via EMR or
Glue) for distributed data processing at scale.
Advanced programming skills in Python, with the ability to develop robust and reusable
ETL components.
Experience in orchestrating workflows using Apache Airflow or AWS MWAA, as well as
event-driven state machines with Step Functions.
Knowledge of containerization and infrastructure automation using Docker, Terraform,
and GitHub Actions as part of CI/CD workflows.
Strong background in monitoring and observability using tools like CloudWatch,
Datadog, or Prometheus/Grafana.
Experience integrating with external systems and services using RESTful APIs and gRPC
protocols.
Solid understanding of cloud security and compliance, with working knowledge of IAM
policies, CloudTrail auditing, and encryption standards for data at rest and in transit.
Hands-on experience with SQL technologies.
4+ years of experience building data workflows and big data systems.
Must have 2+ years of experience with Azure cloud and Databricks setup.
Must have 4+ years of experience in Spark-based data pipeline development.
Must have exposure to API development.
4+ years of experience in any relational database (Oracle/Postgres).
2+ years of experience in any NoSQL databases (Cassandra/MongoDB/DynamoDB).
4+ years of experience in any cloud services (AWS, Azure, Google Cloud Platform).
Must have experience with messaging technologies such as Kafka or RabbitMQ.
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of experience in data engineering, preferably in the retail or logistics domain.
Experience designing and operating production-grade data pipelines on AWS.
Strong understanding of data modeling concepts (document, dimensional, normalized).
Excellent problem-solving skills and ability to work in a fast-paced, distributed team.


About Innorev Technologies, Inc