About the job
We are a team based out of San Francisco that partners with business lines across the organization to deliver big data and advanced analytics products and solutions.
In this role, you will have the opportunity to contribute to several high-quality data solutions and enhance your technical skills across many disciplines.
Responsibilities:
- Hands-on development focused on creating big data and analytics solutions
- Code mission-critical components
- Analyze business and functional requirements and contribute to the overall solution
- Participate in design reviews and provide input on design recommendations
- Participate in project planning sessions with project managers, business analysts, and team members
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or Information Management
- 8+ years of relevant work experience
- Professional experience designing, creating, and maintaining scalable data pipelines
- Hands-on experience deploying and working with the Hadoop ecosystem (HDFS, YARN, MapReduce, Spark, Hive, Impala), preferably on the Cloudera stack (Kafka, Flume, HBase, Solr, Sqoop), with the ability to coach other members of the team
- Experience with pub/sub messaging (JMS, Kafka, etc.) and stream processing (Storm, Spark Streaming, etc.)
- Highly proficient in object-oriented programming; Java required, Python preferred
- Hands-on expertise with SQL and NoSQL data platforms
- Experience with UNIX shell scripting and commands
- Experience with version control (Git), issue tracking (Jira), and code reviews
- Proficient in Agile development practices