Core Responsibilities
Manage the complete data lifecycle including ingestion, validation, processing, storage, and delivery
Develop scalable ingestion services using infrastructure-as-code principles
Design and maintain well-structured RESTful and gRPC APIs
Partner with vendors and operations teams to troubleshoot and resolve edge cases in the field
Maintain high system reliability, performance, and observability
Safeguard data integrity through robust testing and real-time monitoring
Drive CI/CD best practices to enhance scalability and system uptime
Required Qualifications
4+ years of experience as a software or data engineer
Proven expertise in building and maintaining data pipelines (e.g., Kafka, Airflow, Debezium)
Proficient with cloud-native infrastructure and tools (e.g., AWS, Google Cloud Platform, Terraform)
Strong knowledge of both SQL and NoSQL databases, including schema design and performance tuning
Skilled in backend programming languages such as Go, Java, or similar
Experience designing and consuming REST and/or gRPC APIs
Strong product intuition with the ability to collaborate directly with users and vendors
Exceptional communication and problem-solving skills
Previous startup experience with a high degree of autonomy and cross-functional work
Must be located in or open to relocating to Los Angeles for an on-site role
Preferred Qualifications
Experience working with real-time, event-driven architectures (e.g., MQTT, WebSockets)
Background in sectors such as aviation, mobility, IoT, logistics, or fleet telemetry
Prior experience at an early-stage, venture-backed startup
Familiarity with GPS, sensor networks, or telematics data systems