Big Data Engineer

Overview

On Site
$50
Contract - W2
Contract - 24 Month(s)

Skills

Big Data
Data Engineering
Data Processing
Data Warehouse
Apache Hive
Apache Spark
Apache Hadoop
Brand
Cloud Computing
Computer Science
Debugging
Design Software
Development Testing
Documentation
Google Cloud
Google Cloud Platform
Java
MapReduce
Process Flow
Production Support
PySpark
Python
Real-time
SQL
Shell Scripting
Software Development
Systems Analysis/Design
Systems Architecture
Testing
Unix
Writing

Job Details

Position: Big Data Engineer

Location: Phoenix, AZ

Must be on site from day 1.

Key Responsibilities:

* Design Data Engineering solutions on the big data ecosystem; develop custom applications and modify existing applications to meet distinct and changing business requirements.

* Hands-on coding, debugging, and documentation, working closely with the SRE team; provide post-implementation and ongoing support.

* Develop and design software applications, translating user needs into system architecture.

* Assess and validate application performance and the integration of component systems, and provide process flow diagrams.

* Test the engineering resilience of software and automation tools.

* Identify innovative ideas and proofs of concept to deliver against the existing and future needs of our customers.

Software Engineers who join our Loyalty Technology team will be assigned to one of several exciting teams developing a new, nimble, and modern loyalty platform that supports connecting with our customers where they are and how they choose to interact with American Express. Be part of an enthusiastic, high-performing technology team developing solutions to drive engagement and loyalty within our existing cardmember base and attract new customers to the Amex brand. The position will also play a critical role in partnering with other development teams, testing and quality, and production support to meet implementation dates and ensure a smooth transition throughout the development life cycle.

Minimum Qualifications: Master's degree in Computer Applications or equivalent, OR Bachelor's degree in Engineering or Computer Science or equivalent.

* Deep understanding of Hadoop and Spark architecture and their working principles.

* Deep understanding of data warehousing concepts.

* Ability to design and develop optimized data pipelines for batch and real-time data processing.

* 5+ years of software development experience.

* 5+ years' experience with Python or Java.

* Hands-on experience writing and understanding complex SQL (Hive/PySpark data frames) and optimizing joins while processing large volumes of data.

* 3+ years of hands-on experience working with MapReduce, Hive, and Spark (core, SQL, and PySpark).

* Hands-on experience with Google Cloud Platform (BigQuery, Dataproc, Cloud Composer).

* 3+ years of experience in UNIX shell scripting.

* Experience in the analysis, design, development, testing, and implementation of system applications.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

About MMB Global Tech LLC