Big Data Developer with Hadoop

Overview

On Site
$80,000 - $90,000
Full Time

Skills

Amazon Web Services
Apache Hadoop
Apache Hive
Caching
Big Data
Cloudera Impala
Couchbase
Data Modeling
Data Processing
Data Security
Google Cloud Platform

Job Details

We are seeking a Big Data Developer with Hadoop to join our team in Tampa, FL (onsite from day 1; hybrid schedule, 3 days per week in the office).

Our challenge

We are looking for a skilled Big Data Developer with expertise in Hadoop and related big data technologies to join our dynamic team. The ideal candidate will have hands-on experience with Hadoop ecosystem components such as Phoenix, Hue, and Impala, as well as caching solutions like Redis or Couchbase. The candidate will be responsible for designing, developing, and optimizing large-scale data processing solutions, ensuring the performance, scalability, and reliability of big data systems.

Responsibilities:

  • Design, develop, and implement data processing pipelines using Hadoop ecosystem components such as HDFS, Hive, Impala, and Phoenix.
  • Develop and optimize SQL-based queries using Impala, Phoenix, and other big data query engines.
  • Build and maintain dashboards, reports, and data access interfaces using Hue.
  • Implement caching solutions using Redis, Couchbase, or similar technologies to improve data retrieval times and system performance.
  • Collaborate with data analysts and data scientists to understand data requirements and deliver scalable solutions.
  • Tune and optimize big data queries and workloads for performance and efficiency.
  • Ensure data quality, consistency, and security across big data platforms.
  • Monitor and troubleshoot data pipelines and storage solutions to ensure availability and reliability.
  • Stay updated with the latest developments and best practices in big data technologies and implement enhancements as needed.
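One responsibility above is adding a caching layer (Redis, Couchbase, or similar) in front of slower big-data query engines to improve retrieval times. A common approach is the cache-aside pattern: check the cache first, and on a miss query the slow store and populate the cache with a TTL. The sketch below is a minimal illustration only; it uses an in-memory dict as a stand-in for Redis/Couchbase, and `slow_lookup` is a hypothetical placeholder for an Impala or Phoenix query, not part of this role's actual stack.

```python
import time

class CacheAside:
    """Minimal cache-aside sketch: check the cache first, fall back to the
    slow store on a miss, then populate the cache with a TTL.
    A plain dict stands in for Redis or Couchbase."""

    def __init__(self, loader, ttl_seconds=300):
        self._loader = loader   # function that hits the slow store (e.g. a query engine)
        self._ttl = ttl_seconds
        self._store = {}        # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.time() < expires_at:
                return value            # cache hit
            del self._store[key]        # expired entry: evict and reload
        value = self._loader(key)       # cache miss: query the slow store
        self._store[key] = (value, time.time() + self._ttl)
        return value

# Usage sketch with a hypothetical slow lookup:
def slow_lookup(key):
    return key.upper()

cache = CacheAside(slow_lookup, ttl_seconds=60)
first = cache.get("tampa")   # miss: loader is called
second = cache.get("tampa")  # hit: served from the in-memory store
```

With a real Redis deployment, the dict would be replaced by `GET`/`SETEX` calls against the Redis server, but the control flow stays the same.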

Requirements:

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 8+ years of hands-on experience in big data development, specifically with Hadoop ecosystem components.
  • Strong expertise in Hive, Impala, Phoenix, and Hue for data query and management.
  • Experience with caching technologies such as Redis, Couchbase, or similar.
  • Proficiency in SQL and scripting languages such as Python, Java, or Shell.
  • Experience in data modeling, ETL processes, and performance tuning.
  • Familiarity with data security, access controls, and data governance.
  • Knowledge of cloud platforms like AWS, Azure, or Google Cloud Platform is a plus.
  • Strong problem-solving skills and ability to work in a fast-paced environment.

About Laiba Technologies LLC