Data Engineer II

Overview

On Site
$66 - $68/hr
Contract - Independent
Contract - W2
Contract - 12+ mo(s)

Skills

DATA ENGINEER
BIG DATA ENGINEER
SOFTWARE ENGINEERING
ETL
SPARK
AWS
AMAZON WEB SERVICES
HADOOP
HIVE
HDFS
KAFKA
SQS
LAMBDA
DYNAMODB
CASSANDRA

Job Details

Pay rate: $66.00 - $68.00/hr.

Responsibilities:
  • Develop various facets of data capture, data processing, storage and distribution
  • Understand and apply AWS standard methodologies and products (compute, storage, databases)
  • Translate marketing concepts/requirements into functional specifications
  • Write clean, maintainable and well-tested code
  • Propose new ways of doing things and contribute to the system architecture
  • Manage ETL of data between Group entities and third-party solutions
  • Create and maintain functional utilities (SPAs) that provide point solutions to scale our marketing operations
  • Develop scalable and highly performant distributed systems with everything this entails (availability, monitoring, resiliency)
  • Communicate and document solutions and design decisions
  • Work with business collaborators, analytics, and senior leadership to define and scope solutions that the marketing team can leverage to efficiently scale out our marketing operations

Qualifications:
  • Bachelor's degree, or equivalent related professional experience
  • 5+ years of development experience, particularly using marketing acquisition technologies to deliver automation across multiple channels and drive operational efficiencies
  • 4+ years of experience with programming languages such as PHP, Python, or Java
  • Experience building data pipelines from multiple sources including APIs, CSV, event streams, NoSQL, etc. using distributed data frameworks
  • Experience with different aspects of data systems, including database design, data ingestion, data modeling, unit testing, performance optimization, SQL, etc.
  • Demonstrable history of building on and leveraging AWS
  • Experience in batch and/or stream processing (using Spark) and streaming systems/queues such as Kafka or SQS
  • Daily practice of agile methods, including use of sprints, backlogs, and user stories
  • Experience with the AWS ecosystem or other big data technologies such as EC2, S3, Redshift, Batch, AppFlow
  • AWS: EC2, S3, Lambda, DynamoDB, Cassandra, SQL
  • Hadoop, Hive, HDFS, Spark, other big data technologies
  • Understand, analyze, design, develop, and implement RESTful services and APIs

Pay Transparency: The typical base pay for this role across the U.S. is: $66.00 - $68.00 /hr. Final offer amounts, within the base pay set forth above, are determined by factors including your relevant skills, education and experience and the benefits package you select. Full-time employees are eligible to select from different benefits packages. Packages may include medical, dental, and vision benefits, 10 paid days off, 401(k) plan participation, commuter benefits and life and disability insurance.

For information about our collection, use, and disclosure of applicants' personal information, as well as applicants' rights over their personal information, please see our Privacy Policy.

Aditi Consulting LLC uses AI technology to engage candidates during the sourcing process. AI technology is used to gather data only and does not replace human-based decision making in employment decisions. By applying for this position, you agree to Aditi's use of AI technology, including calls from an AI Voice Recruiter.

#AditiConsulting