Data Engineer

Amazon EMR, Amazon S3, SQL, Python, AWS, Amazon RDS, Analytics, Ansible, Apache Avro, Apache Kafka, Apache Maven, Apache Spark, Architecture, Cloud, Continuous Integration, Data Warehouse, DevOps, ETL, Microservices, NoSQL
Contract W2, Contract Independent, Contract Corp-To-Corp, 12 Months
$70 - $100
Work from home available. Travel not required.

Job Description

Must be eligible for conversion without sponsorship (USC, GC or GC-EAD)

Must have: Spark, Python, AWS

Candidates should be willing to take a Glider assessment test prior to submission.

Data Engineer
Responsibilities

  • Develop sustainable, data-driven solutions with current and next-generation data technologies to meet the needs of our organization and business customers
  • Help develop solutions for streaming, real-time, and search-driven analytics
  • Must have a firm understanding of delivering large-scale data solutions and SDLC best practices
  • Transform complex analytical models into scalable, production-ready solutions
  • Utilize programming languages such as Java, Scala, and Python
  • Manage the development pipeline of distributed-computing Big Data applications using open-source frameworks such as Apache Spark and Apache Kafka (with Scala) on AWS, and cloud-based data warehousing services such as Snowflake
  • Leverage DevOps techniques and practices such as continuous integration, continuous deployment, test automation, build automation, and test-driven development to enable rapid delivery of working code, using tools such as Jenkins, Maven, Nexus, Terraform, Git, and Docker

Basic qualifications

  • Bachelor's degree
  • At least 5 years of experience with the Software Development Life Cycle (SDLC)
  • At least 3 years of experience working on a big data platform
  • At least 2 years of experience working with unstructured datasets
  • At least 2 years of experience developing microservices: Python, Java, or Scala
  • At least 1 year of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
  • At least 1 year of experience in cloud technologies: AWS, Docker, Ansible, or Terraform
  • At least 1 year of Agile experience
  • At least 1 year of experience with a streaming data platform including Apache Kafka and Spark

Preferred qualifications

  • 1+ years of experience with Identity & Access Management, including familiarity with principles like least privilege & role-based access control
  • Understanding of microservices architecture & RESTful web service frameworks
  • 1+ years of experience with JSON, Parquet, or Avro formats
  • 1+ years of experience with RDS, NoSQL, or graph databases
  • 1+ years of experience working with AWS platforms, services, and component technologies, including S3, RDS, and Amazon EMR

Posted By

Akash Nirmal

Dice Id: 10110693c
Position Id: 6471124
Originally Posted: 3 months ago
