Skills
- Hadoop
- Spark
- SQL
- Hive
- Data Integration
Job Description
Hadoop Developer
Location: Charlotte, NC (Day 1 onsite). Local candidates only; relocation candidates are not accepted.
Duration: 12+ months
Only W2 - No C2C
Mandatory skills:
- Hadoop development
- Spark
- Hive
- Data Integration
Good to have:
- Code branching
- GitHub
Design and build data services that deliver Strategic Enterprise Risk Management data:
• Design high-performing data models on big-data architecture as data services.
• Design and build a high-performing, scalable data pipeline platform using Hadoop, Apache Spark, MongoDB and object storage (a brief illustrative sketch follows this list).
• Design and build data services on container-based architectures such as Kubernetes and Docker.
• Partner with enterprise data teams such as Data Management & Insights and the Enterprise Data Environment (Data Lake) to identify the best place to source the data.
• Work with business analysts, development teams and project managers to gather requirements and business rules.
• Collaborate with source system and approved provisioning point (APP) teams, architects, data analysts and modelers to build scalable, performant data solutions.
• Work effectively in a hybrid environment where legacy ETL and data warehouse applications co-exist with new big-data applications.
• Work with infrastructure engineers and system administrators as appropriate in designing the big-data infrastructure.
• Work with DBAs in the Enterprise Database Management group to troubleshoot problems and optimize performance.
• Support ongoing data management efforts for the Development, QA and Production environments.
• Utilize a thorough understanding of available technology, tools, and existing designs.
• Leverage knowledge of industry trends to build best-in-class technology that provides competitive advantage.
• Act as an expert technical resource to programming staff in the program development, testing, and implementation process.
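For illustration only, a minimal PySpark sketch of the kind of pipeline described above: source a curated table from the data lake, apply a simple business rule, and persist the result as partitioned Parquet on object storage. The table name, column names and output path are placeholders, not details taken from this posting.

# Illustrative only: table name, columns and output path are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("risk-data-service-sketch")
    .enableHiveSupport()  # assumes a Hive metastore is available to the cluster
    .getOrCreate()
)

# Source a curated table from the enterprise data lake (hypothetical name).
exposures = spark.table("risk_lake.credit_exposures")

# Apply a simple business rule and aggregate to a reporting grain.
daily_exposure = (
    exposures
    .filter(F.col("status") == "ACTIVE")
    .groupBy("business_unit", "as_of_date")
    .agg(F.sum("exposure_amount").alias("total_exposure"))
)

# Persist as partitioned Parquet on object storage (placeholder path).
(
    daily_exposure.write
    .mode("overwrite")
    .partitionBy("as_of_date")
    .parquet("s3a://risk-data-services/curated/daily_exposure/")
)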
• 5+ years of application development and implementation experience
• 5+ years of experience delivering complex enterprise-wide information technology solutions
• 5+ years of ETL (Extract, Transform, Load) Programming experience
• 3+ years of reporting experience, analytics experience or a combination of both
• 4+ years of Hadoop development/programming experience
• 5+ years of operational risk or credit risk or compliance domain experience
• 5+ years of experience delivering ETL, data warehouse and data analytics capabilities on big-data architecture such as Hadoop
• 6+ years of Java or Python experience
• 5+ years of Agile experience
• 5+ years of design and development experience with columnar databases using Parquet or ORC file formats on Hadoop
• 5+ years of Apache Spark design and development experience using Scala, Java or Python with DataFrames or Resilient Distributed Datasets (RDDs)
• 2+ years of experience integrating with RESTful APIs (see the brief sketch below)
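Likewise for illustration, a short Python sketch of publishing curated records through a RESTful API, as referenced in the requirements. The endpoint URL, payload fields and bearer-token authentication are assumptions, not details from this posting.

# Illustrative only: the endpoint, payload shape and auth scheme are assumptions.
import requests

API_BASE = "https://data-services.example.com/api/v1"  # placeholder URL


def publish_exposure(record: dict, token: str) -> None:
    """POST a single curated record to a RESTful data service."""
    response = requests.post(
        f"{API_BASE}/exposures",
        json=record,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()  # surface 4xx/5xx errors to the caller


if __name__ == "__main__":
    publish_exposure(
        {"business_unit": "retail", "as_of_date": "2024-01-31", "total_exposure": 1250000.0},
        token="dummy-token",
    )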
Amazee Global Ventures Inc. is an equal opportunity employer. We will not discriminate, and will take all measures to ensure no discrimination, in employment, recruitment, advertisements for employment, compensation, termination, upgrading, promotions, and other conditions of employment against any employee or job applicant on the basis of race, color, gender, national origin, age, religion, creed, disability, veteran status, sexual orientation, gender identity or gender expression. We are committed to providing an inclusive and welcoming environment for all members of our staff, clients, volunteers, subcontractors, and vendors.