Job Details
Job/Role Description: Senior Big Data Developer with Apache Spark
Location: Phoenix, Arizona, USA
Employment Type: Contractor/Full-time
Key Responsibilities:
i. Data pipeline development: Design, develop, and maintain large-scale, distributed data pipelines using Apache Spark.
ii. Performance optimization: Implement best practices for optimizing Spark jobs for performance and scalability.
iii. Integration: Work with diverse data sources, including HDFS, NoSQL databases, relational databases, and cloud storage.
iv. Collaboration: Partner with data scientists, analysts, and stakeholders to understand requirements and deliver data solutions.
v. Real-time processing: Develop real-time data streaming applications using Spark Streaming or similar technologies.
vi. Code quality: Write clean, maintainable, and reusable code, following best practices for version control, testing, and documentation.
vii. Troubleshooting: Identify and resolve issues in Spark jobs and data pipelines.
Required Skills and Qualifications:
i. Experience: 8 to 12 years in software development, with a focus on big data solutions; 3+ years of hands-on experience in Apache Spark (batch and streaming).
ii. Technical Skills: Proficiency in programming languages such as Scala, Python, or Java; strong understanding of distributed computing principles; experience with big data ecosystems (Hadoop, HDFS, Hive, Kafka).