Data Architect (Spark/Scala) - Day 1 Onsite - Fulltime Only

  • Jersey City, NJ
  • Posted 37 days ago | Updated 1 day ago

Overview

On Site
Depends on Experience
Full Time

Skills

Apache Spark
Scala
HDFS
Hive
Impala
HBase
Big Data
SQL

Job Details

Hi,

We are Photon, one of the world's largest Digital Platform Engineering companies, providing a combination of Strategy Consulting, Creative Design, and Technology Services to a wide range of customers.

We work with 40% of the Fortune 100 companies, and we have a repertoire of niche products and experiences that we have designed and built to fully empower businesses' digital transformation.


Job Description:

Data Architect

Jersey City NJ / Tampa FL / Irving TX

Day 1 Onsite

Fulltime Only

Key Responsibilities:
  • Architect and design large-scale, distributed big data solutions using Spark, Scala, and related big data technologies to handle high-volume data processing and analytics.
  • Optimize and tune Spark applications for better performance on large-scale data sets.
  • Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions.
  • Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions.
  • Design and implement high-performance data processing and analytics solutions.
  • Ensure data integrity, accuracy, and security across all processing tasks.
  • Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies.
  • Implement version control and CI/CD pipelines for Spark applications.
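To illustrate the kind of Spark optimization and pipeline work described above, here is a minimal, hypothetical Spark/Scala sketch (all paths, table names, and configuration values are illustrative, not part of this role's actual codebase):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

// Hypothetical job: paths and column names are illustrative only.
object EventAggregator {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-aggregator")
      // Tuning knobs of the kind the role calls for: shuffle parallelism
      // and adaptive query execution for skewed, large-scale data sets.
      .config("spark.sql.shuffle.partitions", "400")
      .config("spark.sql.adaptive.enabled", "true")
      .getOrCreate()

    import spark.implicits._

    // Read raw events from HDFS (Parquet) and cache the hot working set.
    val events = spark.read.parquet("hdfs:///data/raw/events")
      .persist(StorageLevel.MEMORY_AND_DISK)

    // Aggregate, then write back partitioned for downstream Hive/Impala reads.
    events
      .groupBy($"event_date", $"event_type")
      .count()
      .write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/event_counts")

    spark.stop()
  }
}
```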
Required Skills & Experience:
  • 15+ years of experience in application development.
  • Strong hands-on experience with Apache Spark, Scala, and Spark SQL for distributed data processing.
  • Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop.
  • Familiarity with other big data technologies, including Apache Kafka, Flume, Oozie, and NiFi.
  • Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data.
  • Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL.
  • Knowledge of data warehousing concepts, dimensional modeling, and data lakes.
  • Ability to troubleshoot and optimize Spark and Cloudera platform performance.
  • Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab).
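As a sketch of the Spark SQL and Hive skills listed above, the following hypothetical ETL fragment shows a Spark job querying a Hive table and writing a curated table back through the shared metastore (the `sales` database and table names are assumptions for illustration):

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: database and table names are hypothetical.
object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .enableHiveSupport() // read/write tables in the Hive metastore
      .getOrCreate()

    // Spark SQL against a Hive table; the resulting curated table is
    // also queryable by Impala via the same metastore.
    val dailyRevenue = spark.sql(
      """SELECT order_date, SUM(amount) AS revenue
        |FROM sales.orders
        |WHERE order_status = 'COMPLETED'
        |GROUP BY order_date""".stripMargin)

    dailyRevenue.write
      .mode("overwrite")
      .saveAsTable("sales.daily_revenue")

    spark.stop()
  }
}
```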
