Hybrid in Charlotte, North Carolina • Today
We are seeking a highly experienced Big Data Engineer with a minimum of 8 years in the industry to work on enterprise-grade data processing solutions. The role demands deep expertise in the Hadoop ecosystem, Apache Spark, and cloud-based big data services.

Key Responsibilities:
- Design, develop, and maintain large-scale distributed data processing pipelines.
- Work with massive data sets using Spark, Hadoop, and related frameworks.
- Implement efficient and secure ingestion frameworks using Kafka, N
Contract
Up to $70