Platform Engineer (Locals Preferred) - Cupertino, CA

Overview

On Site
$50 - $60
Contract - Independent
Contract - W2
Contract - 6 Month(s)

Skills

Google ADK

Job Details

Job Title: Platform Engineer
Location: Cupertino, CA (Local preferred)
Client: Apple

Job Summary:
We are seeking a highly skilled Platform Engineer with expertise in Python functional programming, experience using Google ADK tools, and a strong background in Trino cluster performance optimization. The ideal candidate will play a key role in managing and improving the performance of large-scale data infrastructure systems.

Key Responsibilities:
Design, develop, and maintain platform-level solutions supporting scalable and reliable infrastructure.
Write clean, efficient, and modular code in Python with an emphasis on functional programming paradigms (a brief illustrative sketch follows this list).
Utilize Google ADK tools to streamline development, deployment, and monitoring workflows.
Collaborate with data engineering teams to manage and optimize Trino clusters for performance, reliability, and scalability.
Identify bottlenecks, propose solutions, and implement improvements in data querying and processing.
Create automation scripts and tools to enhance operational efficiency and reduce manual tasks (see the automation sketch after this list).
Work closely with cross-functional teams to support platform upgrades, incident management, and performance tuning.
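To give a flavor of the functional style referenced in this list, here is a minimal sketch that summarizes hypothetical query-latency records with pure functions and functools.reduce; the QueryRun type, field names, and sample data are invented for illustration and are not part of the actual role.

```python
from dataclasses import dataclass
from functools import reduce

# Hypothetical, immutable record of a single query's runtime (illustrative only).
@dataclass(frozen=True)
class QueryRun:
    query_id: str
    latency_ms: float
    failed: bool

def successful(runs):
    """Pure filter: keep only runs that completed."""
    return [r for r in runs if not r.failed]

def mean_latency(runs):
    """Fold latencies with reduce instead of mutating an accumulator."""
    if not runs:
        return 0.0
    total = reduce(lambda acc, r: acc + r.latency_ms, runs, 0.0)
    return total / len(runs)

if __name__ == "__main__":
    sample = [
        QueryRun("q1", 120.0, False),
        QueryRun("q2", 340.5, False),
        QueryRun("q3", 0.0, True),
    ]
    print(f"mean latency: {mean_latency(successful(sample)):.1f} ms")
```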
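As a rough sketch of the kind of operational automation mentioned above, the snippet below polls a Trino coordinator's /v1/info endpoint and logs its status. The coordinator URL is a placeholder, exact response fields can vary by Trino version, and this is an illustrative check rather than a production health monitor.

```python
import logging
import requests  # third-party HTTP client

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("trino-healthcheck")

# Placeholder coordinator URL; a real deployment would take this from config.
COORDINATOR = "http://trino-coordinator.example.internal:8080"

def coordinator_status(base_url: str = COORDINATOR, timeout: float = 5.0) -> dict:
    """Fetch the coordinator's /v1/info document (field names may vary by version)."""
    resp = requests.get(f"{base_url}/v1/info", timeout=timeout)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    info = coordinator_status()
    if info.get("starting", False):
        log.warning("coordinator is still starting: %s", info)
    else:
        log.info("coordinator is up, reported version: %s", info.get("nodeVersion"))
```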
Required Skills:
Proven experience in software or platform engineering roles.
Strong proficiency in Python, particularly functional programming techniques.
Hands-on experience with Google ADK (Android Development Kit) or similar tools.
Deep understanding of distributed query engines, especially Trino (formerly PrestoSQL); a short tuning sketch follows this list.
Proven expertise in performance tuning, resource optimization, and cluster management.
Experience working in a Linux-based environment with CI/CD pipelines and monitoring tools.
Strong analytical and debugging skills, with the ability to troubleshoot complex system issues.
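As a hedged example of the Trino tuning work described above, the sketch below uses the open-source trino Python client to run EXPLAIN ANALYZE so per-stage CPU, memory, and row counts can be inspected. The host, user, catalog, schema, and table name are placeholders, and details should be confirmed against the cluster's actual Trino version.

```python
import trino  # pip install trino

# Placeholder connection details; real values come from the cluster being tuned.
conn = trino.dbapi.connect(
    host="trino-coordinator.example.internal",
    port=8080,
    user="platform-engineer",
    catalog="hive",
    schema="default",
)

cur = conn.cursor()

# EXPLAIN ANALYZE executes the query and returns the plan annotated with
# observed runtime statistics per stage -- a common starting point for
# spotting skewed joins or oversized table scans.
cur.execute("EXPLAIN ANALYZE SELECT event_type, count(*) FROM events GROUP BY event_type")
for (line,) in cur.fetchall():
    print(line)
```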
Preferred Qualifications:
Experience working in a product-based company is a strong plus.
Familiarity with containerization (Docker) and orchestration (Kubernetes).
Exposure to big data technologies such as Spark, Kafka, Hive, or Hadoop.
Prior experience with large-scale, high-availability systems.