Position- Lead Google Cloud Platform Cloud Engineer with Logistics experience, Local
Type- W2/Hybrid (3 days/week office)
Location- Charlotte, NC
Visa- GC
JOB DESCRIPTION
Lead Cloud Engineer - Apache Spark / Google Cloud Platform / Python / Microservices (S3)
Overview
This role supports the Model Risk Management platform used to run statistical risk models and large-scale data workloads. The engineer will help maintain and enhance a cloud-native platform used by statisticians and data scientists working with millions of customer accounts. The platform runs on Google Cloud Platform and supports distributed data processing using Apache Spark.
Experience with large-scale datasets, Google Cloud Platform services, Spark, and containerized microservices is essential. Experience in financial services is preferred but not required.
Key Responsibilities
Platform Engineering
- Deploy, configure, and maintain OpenShift clusters or Google Cloud Platform projects for Spark workloads
- Support platform capabilities for statistical model execution
Distributed Data Processing
- Design and implement large-scale data processing workflows using Apache Spark
- Tune Spark jobs using Kubernetes orchestration and auto-scaling
Application & Tooling
- Build Python/Django services and microservices supporting platform users
- Configure tooling and internal applications for data processing and model execution
Automation & CI/CD
- Build and maintain CI/CD pipelines using GitHub Actions, Sonar, Harness, Helm
Monitoring & Troubleshooting
- Monitor Spark jobs and cluster health using Prometheus, Grafana, and Google Cloud Platform tools
- Debug distributed systems and optimize resource utilization
Security & Compliance
- Implement RBAC and encryption for data in transit and at rest
Collaboration
- Work closely with cross-functional teams to define requirements and support deployments
Qualifications
Experience
- 10+ years with Apache Spark
- 4+ years Django development
- 2+ years creating/maintaining conda environments
- 4+ years with OpenShift/Kubernetes
- Experience working with Google Cloud Platform or another major cloud provider
Technical Skills
- Spark frameworks (PySpark, Scala, or Java)
- OpenShift/Kubernetes administration
- Docker container experience
- Python, Scala, or Java programming
- Experience with GitHub Actions, Helm, Harness
- Knowledge of distributed systems and cloud storage (S3, GCS, HDFS)
Education
- Bachelor's degree in Computer Science, Engineering, or related field
CORE STACK
- Apache Spark / PySpark (must have)
- Google Cloud Platform (strongly preferred)
- Kubernetes / OpenShift
- Python (Django + APIs)
- CI/CD: GitHub Actions, Helm, Harness
WHAT THIS ROLE REALLY IS
- Platform engineer + data + cloud + DevOps
- Backend-focused (APIs, workflows, clusters)
- Heavy debugging + optimization of distributed systems
KEY INITIATIVE (SPOTLIGHT CALL)
- Hadoop to Google Cloud Platform/OpenShift migration
- Build + support hybrid cloud platform (PyFarm)
- Ongoing platform enhancements post-migration
TOP SKILL PRIORITIES
- Spark at scale (real production use)
- Google Cloud Platform hands-on (NOT just exposure) or OpenShift hands-on
- Kubernetes/OpenShift
- Python + microservices
- Debugging + performance tuning
NICE TO HAVE (DIFFERENTIATOR)
- AI/LLM integration experience (building capabilities, not usage)
- GPU / platform-level AI exposure
- Hadoop migration experience
TEAM / ENVIRONMENT (SPOTLIGHT)
- Platform + App Dev team (this role)
- Works with Data + Support teams
TARGET CANDIDATES
- Platform Engineer (Data / ML Platform)
- Cloud Data Engineer (Spark-heavy)
- Big Data Engineer (with K8s + Google Cloud Platform)
Thanks & Regards
Shivam Rajpal
Team Lead- US IT Recruitment
Desk- X 142
Email-
LinkedIn-
Voto Consulting LLC- M/WBE Certified Company