Senior Big Data Engineer

  • Iselin, NJ
  • Posted 60+ days ago | Updated 5 hours ago

Overview

On Site
Accepts corp to corp applications
Contract - W2
Contract - 6 month(s)

Skills

Big Data

Job Details

Job title: Senior Big Data Engineer

Location: New Jersey (Hybrid)

Duration: 6+ months

Job Description:

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. As a Senior Data Engineer, you will play a crucial role in designing, implementing, and maintaining data pipelines and infrastructure for our big data projects. Your expertise in Java, Python, Spark cluster management, data science, big data, and REST API development, along with knowledge of Databricks and Delta Lake, will be essential in driving the success of our data initiatives.

Responsibilities:

  • Design, develop, and implement scalable data pipelines and ETL processes using Java, Python, and Spark.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and design efficient solutions.
  • Manage and optimize Spark clusters to ensure high performance and reliability.
  • Perform data exploration, data cleaning, and data transformation tasks to prepare data for analysis and modeling.
  • Develop and maintain data models and schemas to support data integration and analysis.
  • Implement data quality and validation checks to ensure accuracy and consistency of data.
  • Utilize REST API development skills to create and integrate data services and endpoints for seamless data access and consumption.
  • Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues.
  • Stay updated with the latest technologies and trends in big data, data engineering, data science, and REST API development, and provide recommendations for process improvements.
  • Mentor and guide junior team members, providing technical leadership and sharing best practices.

Qualifications:

  • Master's degree in Computer Science, Data Science, or a related field.
  • Minimum of 3 years of professional experience in data engineering, working with Java, Python, Spark, and big data technologies.
  • Strong programming skills in Java and Python, with expertise in building scalable and maintainable code.
  • Proven experience in Spark cluster management, optimization, and performance tuning.
  • Solid understanding of data science concepts and experience working with data scientists and analysts.
  • Proficiency in SQL and experience with relational databases and analytical data stores (e.g., Snowflake, Delta Lake tables).
  • Experience in designing and developing REST APIs using frameworks such as Flask or Spring.
  • Familiarity with cloud-based data platforms (e.g., Azure).
  • Experience with data warehousing concepts and tools (e.g., Snowflake, BigQuery) is a plus.
  • Strong problem-solving and analytical skills, with the ability to tackle complex data engineering challenges.
  • Excellent communication and collaboration skills, with the ability to work effectively in a team-oriented environment.

About VeridianTech