Overview
On Site
BASED ON EXPERIENCE
Contract - W2
Contract - Independent
Contract - 3+ mo(s)
Skills
DATA ENGINEER
HADOOP
MONGODB
REDIS
DYNAMODB
APACHE CASSANDRA
NEPTUNE
AWS
Job Details
Data Engineer / Software Engineer III
Location: Maryland Heights, MO
Position Overview:
We are seeking an experienced Data Engineer to join our team. The ideal candidate will have a strong background in developing and maintaining data pipelines, working with big data platforms, and migrating data from on-premises solutions to cloud environments. This role is pivotal in ensuring that our data infrastructure is robust, scalable, and efficient, meeting the evolving needs of our business.
Key Responsibilities:
Data Extraction and Processing: Extract data from multiple sources using Spark and Scala based on business requirements.
Data Management: Maintain data within a data lake and distribute it to relevant teams through APIs or Hive.
Pipeline Development: Build and manage data pipelines to support various business units.
Platform Utilization: Utilize Hadoop as the core big data platform, specifically working with Cloudera clusters.
Performance Monitoring: Monitor the performance of live data pipelines, address issues by creating user stories, and implement performance improvements.
Framework Utilization and Development: Utilize existing frameworks for established business cases; develop new pipelines from scratch for new business cases.
Project Involvement:
Cloud Migration: Lead and participate in the migration of data from on-premises systems to AWS cloud infrastructure.
Customer Data Projects: Handle extensive customer-related data, such as billing information and network issue tracking, to proactively address and resolve customer service issues.
Technician Data Projects: Manage and track technician-related information, including GPS tracking of field agents to optimize their routes and performance.
Must-Have Skillsets:
Data Pipeline Development: 5+ years of experience in developing data pipelines and API calls using SQL, Scala, Spark, and Java.
Big Data Experience: Proficient in utilizing Hadoop for big data operations.
Automation Scripting: Experience in shell scripting for automation purposes.
Cloud Expertise: Hands-on experience with AWS, specifically in migrating data from on-premises to the cloud and managing cloud-based data systems.
NoSQL Databases: Proficient in working with NoSQL databases such as MongoDB, Redis, DynamoDB, Apache Cassandra, and Amazon Neptune.
#INDDEN