Software Development Engineer II / Hadoop
Location: O'Fallon, MO
Duration: 24 Months with possible extension
Team's main responsibility:
Enterprise Data Solutions - we handle batch processing (data intake into the data warehouse).
Culture of your team:
Highly collaborative, both within the team and across the broader data warehouse environment, which spans many platforms and administrators. There is constant collaboration with different groups, and a great deal depends on how we process data, so communication is important.
What a typical work day looks like:
We operate under the agile framework, with regular scrum meetings (daily or twice-weekly calls). The workload involves migrating off of an existing big data platform onto Hadoop or Oracle: taking existing batch jobs and rewriting them using NiFi on Hadoop, or with Oracle SQL and shell scripting.
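As a rough illustration of the batch-rewrite work described above, the sketch below walks through an extract-transform-load step using only shell tools. The file names, column layout, and business rule are assumptions for the example, not details from this role.

```shell
#!/bin/sh
# Illustrative sketch (assumptions, not posting details) of an
# extract -> transform -> load batch step done with plain shell tools.
set -eu

# Extract: a small sample source feed; in practice this would be a
# landed file or table export from the platform being migrated.
cat > /tmp/source_feed.csv <<'EOF'
txn_id,amount,status
1001,25.00,APPROVED
1002,13.50,DECLINED
1003,99.99,APPROVED
EOF

# Transform: keep only approved transactions and convert the amount
# to cents - the sort of step a NiFi flow or Oracle SQL job would do.
awk -F',' 'NR > 1 && $3 == "APPROVED" {
    printf "%s,%.0f\n", $1, $2 * 100
}' /tmp/source_feed.csv > /tmp/stage.csv

# Load: append staged rows to the warehouse-side target file
# (truncated first so this demo is repeatable).
: > /tmp/warehouse_target.csv
cat /tmp/stage.csv >> /tmp/warehouse_target.csv
```

In a real migration the transform would be expressed in NiFi processors or Oracle SQL, but the shape of the job - land, filter/normalize, load - stays the same.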
Years of experience:
Top 3 required technical skills:
Hadoop ecosystem (Hive, Impala, and Spark) - experience here is a plus
Solution-oriented mindset
What kind of access will this person need:
Network and Badge
The Data Warehouse program enables business insights for every transaction. Our vision is to provide a secure, scalable, resilient, high-performing, multi-tenant global data warehouse that allows consistent data and analytics experiences.
- Work closely with technical leads to define user stories
- Develop high quality, secure, scalable batch and near-real time solutions
- Contribute to performance tuning of ETL jobs and process improvements of existing jobs
- Assist with support issues by troubleshooting incidents and problem tickets
- Possess the aptitude to quickly learn and contribute to our complex platform
- Provide technical expertise to ensure effective delivery of software capabilities
- Work closely with other engineers as part of a high-functioning agile team to advance high-level strategies
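The near-real-time and ETL performance-tuning responsibilities above often come down to incremental loading: tracking a high-water mark so each run processes only new records instead of rescanning the full feed. A minimal shell sketch of that pattern, with hypothetical file names:

```shell
#!/bin/sh
# Illustrative incremental-load sketch (file names are assumptions):
# a state file records the last id processed (the high-water mark),
# so each run handles only new rows instead of a full rescan.
set -eu

FEED=/tmp/txn_feed.csv
STATE=/tmp/txn_feed.hwm
OUT=/tmp/txn_increment.csv

rm -f "$STATE"   # start clean so this demo is repeatable

# Sample feed: first column is a monotonically increasing id.
cat > "$FEED" <<'EOF'
1,10.00
2,20.00
3,30.00
4,40.00
EOF

# Read the previous mark, defaulting to 0 on the first run.
HWM=$(cat "$STATE" 2>/dev/null || echo 0)

# Select only rows above the mark.
awk -F',' -v hwm="$HWM" '$1 > hwm' "$FEED" > "$OUT"

# Advance the mark to the highest id just processed.
NEW_HWM=$(awk -F',' 'END { print $1 }' "$OUT")
if [ -n "$NEW_HWM" ]; then
    echo "$NEW_HWM" > "$STATE"
fi
```

A second run against an unchanged feed and preserved state file would select no rows at all, which is what keeps frequently scheduled jobs cheap.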
All about the candidate:
- Bachelor's degree in Information Technology, Computer Science or Management Information Systems or equivalent work experience.
- Thorough knowledge and understanding of software engineering concepts and methodologies
- Strong foundation in algorithms, data structures and core computer science
- Solid experience with RDBMS (e.g., Oracle)
- Demonstrated ability to write complex SQL/PL/SQL queries
- Evidence of strong Unix Shell scripting skills
- Experience working in the Cloudera Hadoop ecosystem (HDFS, Hive, Impala, Pig, Spark)
- Demonstrated ability to balance short term tactical decisions with long term strategy objectives
- Experience as an engineer within an agile team structure
- Strong analytical and excellent problem solving skills
- Strong communication skills - both verbal and written
- High-energy, detail-oriented and proactive with the ability to function under pressure in an independent environment.
- High degree of initiative and self-motivation with a willingness and ability to learn and take on challenging opportunities
- Ability to work as a member of a diverse and geographically distributed project team
- NiFi, messaging, streaming (Kafka)