Hadoop Big Data Engineer - ONLY W2 - ONLY DC, MD, VA

Overview

Hybrid
Depends on Experience
Contract - W2
Contract - 12 Month(s)

Skills

Apache Hadoop
Apache Hive
Apache Kafka
Apache Spark
Apache Oozie
Apache Sqoop
Amazon Redshift
Amazon S3
Amazon Web Services
Big Data
Data Integration
Python
PostgreSQL
Java

Job Details

------------------ ONLY W2 ------------------

***************Candidates must reside in Maryland, Washington, DC, or Virginia****************

Title: Hadoop Big Data Engineer

Location: Reston, VA 20191

Duration: 12-Month Contract-to-Hire

Job Description:

Terms of Employment

  • W2 Contract-to-Hire, 12 months
  • This is a hybrid schedule based in Reston, VA; candidates must reside in Maryland, Washington, DC, or Virginia.

Responsibilities

  • Works closely with Architects, Product Owners, Scrum Masters, and Value Stream Managers to provide insight into delivering business value and meeting objectives.
  • Decomposes functional and technical requirements into project activities and tasks and provides conceptual design, prototype, and test cycles appropriate to a chosen technical solution.
  • Identifies technical risks and develops mitigation strategies; introduces and recommends industry best practices and standards for the project.
  • Evaluates and assists in the selection and procurement of hardware and software technologies and serves as a mentor for junior developers.
  • Works with stakeholders as well as technical and analytical counterparts to define constraints and develop requirements and concept of operations documentation.

Required Skills & Experience

  • Bachelor's Degree in Information Technology or Computer Science.
  • 10 years of software design and development, software test and evaluation, and software requirements management.
  • 7+ years of strong programming experience with Java, Python, or Scala.
  • 3+ years of experience working on data integration projects using Hadoop MapReduce, Sqoop, Oozie, Hive, Spark, and other related Big Data technologies.
  • 2+ years of experience on AWS, preferably leveraging services such as Lambda, S3, Redshift, and Glue.
  • Experience building Kafka-based data ingestion/retrieval programs (a minimal sketch follows this list).
  • Experience tuning Hadoop/Spark/Hive parameters for optimal performance (see the tuning sketch after this list).
  • Strong SQL query writing and data analysis skills
  • Excellent shell scripting experience.
  • Rigor in high code quality, automated testing, and other engineering best practices; ability to write reusable code components.
  • Knowledge of cloud technologies (e.g., AWS, Azure).
  • Knowledge of database technologies (e.g., cloud databases, SQL, Oracle, MongoDB, PostgreSQL).
  • Fundamental knowledge of software engineering best practices, agile methodologies, and CI/CD pipelines.
  • Knowledge of test-first practices including Test-Driven Development (TDD) for unit tests and Behavior-Driven Development (BDD) for automated acceptance tests.
  • Strong experience with deployment, continuous integration, continuous testing, and continuous delivery processes, with expertise in CI/CD tools and frameworks.
  • Experience deploying global applications and managing configuration.
  • Experience reviewing the work of other developers and providing feedback.
  • Ability to communicate technical requirements to all levels of expertise.
  • Proficient in establishing and maintaining good working relationships.
  • Knowledge and understanding of the software development life cycle (SDLC).
  • Proficient with integrating complex and/or existing systems.
  • Knowledge of programming languages (e.g., JavaScript, C, Python).
  • Excellent communication skills, both written and verbal.
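As a minimal sketch of the Kafka-based ingestion work named above, the snippet below reads JSON messages from a topic and lands them in S3 in fixed-size batches. The broker address, topic name ("events"), consumer group, and bucket ("my-data-lake") are illustrative placeholders, not details from this posting.

```python
import json

import boto3
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker; replace with real connection details.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers=["localhost:9092"],
    group_id="ingestion-demo",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:  # flush every 500 messages
        s3.put_object(
            Bucket="my-data-lake",  # placeholder bucket name
            Key=f"landing/offset-{message.offset}.json",
            Body=json.dumps(batch).encode("utf-8"),
        )
        batch = []
```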
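Similarly, a small illustration of the kind of Spark/Hive parameter tuning the posting asks about. The values below are common starting points only; real settings depend on cluster size and data volume, so treat every number here as an assumption.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    # Size shuffle parallelism to the data volume instead of the default 200.
    .config("spark.sql.shuffle.partitions", "400")
    # Let adaptive query execution coalesce small shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    # Raise the broadcast-join threshold so small dimension tables skip shuffles.
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    # Executor sizing is normally set per cluster; shown here for completeness.
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    .enableHiveSupport()  # read and write Hive tables through the metastore
    .getOrCreate()
)
```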

Sincerely,

Preetam Raj

Lead Technical Recruiter

nTech Workforce Inc.

D: EXT: 726

E:
