AWS Data Engineer (Utility Industry Data Experience)

Overview

Location: Remote
Compensation: Depends on Experience
Contract: Independent, 6 Month(s)

Skills

Data Engineering
SQL, Python, Scala
Data transformation and processing
Utility industry data: meter data, customer data, grid/asset data, work management, outage data
CIM standards and utility integration frameworks
AWS storage & processing: S3, Glue, Redshift, Athena, EMR
AWS streaming: Kinesis, MSK, Lambda
Step Functions
ETL/ELT workflow design and optimization
Batch and streaming data pipelines for real-time analytics
Scalability, performance, and fault tolerance
Data architecture: data lakes, data warehouses, lakehouses
SQL query, data model, and pipeline performance tuning
Cloud-native resources: compute, storage, networking
Storage strategies: partitioning, indexing, schema design
Data integration: databases, APIs, IoT, third-party systems

Job Details

Title: AWS Data Engineer (Utility Industry Data Experience)
Location: Remote
Duration: 6-month contract

We are seeking an AWS Data Engineer to design, build, and optimize large-scale data pipelines and analytics solutions on Amazon Web Services (AWS). The role involves building the infrastructure and pipelines that enable organizations to collect, store, process, and analyze large volumes of structured and unstructured data efficiently and securely. The Data Engineer owns the end-to-end data lifecycle, from ingestion and transformation to storage and delivery for analytics, machine learning, and operational systems, and ensures data is reliable, high-quality, scalable, and accessible to business and technical stakeholders.

The ideal candidate will have strong expertise in cloud-based data engineering, hands-on experience with AWS native services, and a solid understanding of data lake, data warehouse, and real-time streaming architectures.

Responsibilities:
Design, build, and optimize ETL/ELT workflows to ingest data from multiple sources (e.g., S3, Redshift, Lake Formation, Glue, Lambda).
Implement data cleansing, enrichment, and standardization processes.
Automate batch and streaming data pipelines for real-time analytics. Build solutions for both streaming (Kinesis, MSK, Lambda) and batch processing (Glue, EMR, Step Functions).
Ensure pipelines are optimized for scalability, performance, and fault tolerance.
Optimize SQL queries, data models, and pipeline performance.
Ensure efficient use of cloud-native resources (compute, storage, networking).
Design and implement data architecture across data lakes, data warehouses, and lakehouses.
Optimize data storage strategies (partitioning, indexing, schema design).
Implement data integration from diverse sources (databases, APIs, IoT, third-party systems).
Work with Data Scientists, Analysts, and BI developers to deliver clean, well-structured data.
Document data assets and processes for discoverability.
Train existing core staff who will maintain the infrastructure and pipelines.

Required Education & Experience:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
5+ years of experience in data engineering roles.
Proficiency in SQL, Python, or Scala for data transformation and processing.
Experience with utility industry data.
Strong understanding of utility data domains: meter data, customer data, grid/asset data, work management, outage data.
Familiarity with CIM standards and utility integration frameworks.
Working knowledge of AWS services such as:
Storage & Processing: S3, Glue, Redshift, Athena, EMR
Streaming: Kinesis, MSK, Lambda
