Spark Developer - Remote / Telecommute

Spark, Python, SQL
Contract W2, Contract Independent, Contract Corp-To-Corp, 12 Months
Depends on Experience

Job Description

Job Title: Spark Data Engineer - Remote / Telecommute



  • Work with the data engineering team to define and develop data ingestion, validation, transformation, and data engineering code.
  • Develop open-source platform components using Spark, Scala, Java, Oozie, Hive, and related technologies.
  • Document code artifacts and participate in developing user documentation and run books
  • Troubleshoot deployment to various environments and provide test support.
  • Participate in design sessions, demos, prototype sessions, testing, and training workshops with business users and other IT associates.


  • 3+ years of experience developing large-scale data processing, data storage, and data distribution systems
  • 3+ years of experience with large Hadoop projects using Spark and Python, including the Spark DataFrame and Dataset APIs with Spark SQL, as well as RDDs and Scala function literals and closures
  • Hands-on experience with Hadoop, Hive, Sqoop, Oozie, and HDFS; strong SQL skills
  • Experience with ELT/ETL development, patterns, and tooling; experience with ETL tools (Informatica, Talend) preferred
  • Experience with AWS and cloud environments, including S3 object storage, EC2, RDS, and Redshift
  • Experience with SQL on RDBMS platforms including Postgres and MySQL
  • Experience with Linux environments (RHEL or CentOS preferred)
  • Experience with various IDEs and code repositories, as well as unit testing frameworks
  • Experience with build tools such as Maven
  • Fundamental knowledge of distributed data processing systems and storage mechanisms
  • Ability to produce high-quality work products under pressure and within deadlines
  • Strong communication and collaboration skills
  • 5+ years of experience working in large multi-vendor environments with multiple teams on a project
  • 5+ years of experience working with complex Big Data environments
  • 5+ years of experience with JIRA, GitHub/Git, and other code management toolsets
  • Bachelor's degree in Computer Science or a related field

  • Certification in Spark, AWS, or another cloud platform


Dice Id : ittb
Position Id : 7083938
Originally Posted : 2 months ago
