Python/Cloud Full Stack Engineer

Overview

On Site
$69.50/hr - $78.31/hr
Full Time

Skills

Financial Services
Finance
Software Engineering
Apache Airflow
Big Data
Microservices
PySpark
Kubernetes
Amazon Web Services
Google Cloud
Google Cloud Platform
Microsoft Azure
Data Storage
HDFS
Python
Django
Computer Science
Shell Scripting
Machine Learning (ML)
Customization
GPU
Computer Hardware
Artificial Intelligence
Management
DevOps
Analytics
Extract
Transform
Load
Data Processing
Workflow
Docker
Optimization
Cloud Computing
Resource Management
Apache Spark
Apache Kafka
Amazon S3
Cloud Storage
Database

Job Details

Outstanding long-term contract opportunity! A well-known Financial Services Company is looking for a Python/Cloud Full Stack Engineer in Charlotte, NC (Hybrid).

Work with the brightest minds at one of the largest financial institutions in the world. This is a long-term contract opportunity that includes a competitive benefits package! Our client has been around for over 150 years and is continuously innovating in today's digital age. If you want to work for a company that is not only a household name, but also truly cares about satisfying customers' financial needs and helping people succeed financially, apply today.

Contract Duration: 12 Months

Required Skills & Experience
  • 7+ years of software engineering experience.
  • 3+ years working with Apache Spark and Apache Airflow for big data processing.
  • 3+ years of microservices development experience (e.g., with Django).
  • 3+ years of cloud development experience (Google Cloud preferred).
  • Proficiency in Spark frameworks (Python/PySpark).
  • Familiarity with Docker and Kubernetes concepts (e.g., pods, deployments, services, and images).
  • Hands-on experience with distributed systems, cloud platforms (AWS, Google Cloud Platform, Azure), and data storage solutions (e.g., S3, HDFS).
  • Programming: Strong coding skills in Python, Airflow, and Django.
  • Education: Bachelor's degree in Computer Science, Engineering, or a related field.
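The Docker concepts named in the requirements above can be illustrated with a minimal image definition. This is a generic sketch only; the base image, file names, and entry point are placeholders, not details from the posting.

```dockerfile
# Hypothetical minimal image for a Python data-processing service.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define the container's entry point.
COPY . .
CMD ["python", "main.py"]
```

An image built from a definition like this is what Kubernetes deployments reference when scheduling pods.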
Desired Skills & Experience
  • Experience with shell scripting.
  • AI/ML: Experience customizing GPU hardware for AI solutions.

What You Will Be Doing
  • Design, deploy, and manage scalable data processing solutions in a cloud-native environment.
  • Work closely with data scientists, software engineers, and DevOps teams to ensure robust, high-performance data pipelines and analytics platforms.
  • Data Pipeline Development: Design and implement large-scale data processing workflows using Apache Spark.
  • Container Development: Design and implement Docker images.
  • Optimization: Tune Spark jobs for performance, leveraging OpenShift/cloud resource management capabilities.
  • Integration: Integrate Spark with other data sources (e.g., Kafka, S3, cloud storage) and sinks (e.g., databases, data lakes).
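The pipeline work described above follows the extract-transform-load shape. A minimal sketch of that shape, using plain-Python stand-ins (in the role itself these steps would be Spark jobs reading from sources such as Kafka or S3 and writing to databases or data lakes; all function and field names here are illustrative placeholders):

```python
def extract(source):
    # Stand-in for reading raw records from a source such as Kafka or S3.
    return list(source)

def transform(records):
    # Stand-in for a Spark transformation: drop invalid rows, derive a field.
    return [
        {**r, "amount_cents": int(r["amount"] * 100)}
        for r in records
        if r.get("amount") is not None
    ]

def load(records, sink):
    # Stand-in for writing to a sink such as a database or data lake.
    sink.extend(records)
    return len(records)

raw = [{"id": 1, "amount": 12.5}, {"id": 2, "amount": None}]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
# loaded == 1; the invalid record (amount None) was filtered out.
```

In production each stage would be a distributed Spark operation orchestrated by an Airflow DAG, but the extract/transform/load boundaries stay the same.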
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

About Motion Recruitment Partners, LLC