Job Title: Perception Engineer
Location: Detroit, MI (5 days a week, onsite)
Rate: As per market standard
Client: Mitsubishi Electric
Job Description:
Design and implement advanced perception algorithms for autonomous vehicles using LiDAR, cameras, radar, and GNSS.
Develop and optimize sensor fusion techniques to combine data from multiple sensors, improving the accuracy and reliability of perception systems.
Create algorithms for object detection, tracking, semantic segmentation, and classification from 3D point clouds (LiDAR) and camera data.
Develop sensor calibration techniques (intrinsic and extrinsic) and coordinate transformations between sensors.
Develop robust perception algorithms that maintain performance in adverse weather conditions such as rain, snow, fog, and low-light scenarios.
Participate in real-time system design and optimization to meet the stringent performance requirements of autonomous driving.
Work with ROS2 for integration and deployment of perception algorithms.
Develop, test, and deploy machine learning models for perception tasks such as object detection and tracking.
Collaborate with cross-functional teams to integrate perception algorithms into larger autonomous systems.
Stay up to date with industry trends and emerging technologies to innovate and improve perception systems.
What You Will Bring:
3+ years of experience in sensor calibration, multi-sensor fusion, or related domains.
Strong foundation in linear algebra, 3D geometry, coordinate frames, quaternions, probability, Bayesian filtering, and data association.
Hands-on experience with intrinsic and extrinsic calibration of LiDAR, cameras, and radar, including geometric calibration, coordinate transforms, and sensor synchronization.
Proven experience with perception algorithms for autonomous systems, particularly those built on LiDAR, camera, radar, GNSS, or other sensor modalities.
Deep understanding of LiDAR technology, point cloud data structures, and processing techniques; experience with PCL or Open3D.
Proficiency in sensor fusion for combining data from LiDAR, camera, radar, and GNSS, including handling time synchronization and motion distortion.
Solid background in computer vision techniques; experience with OpenCV and object detection models such as YOLO, Faster R-CNN, or SSD.
Experience with deep learning frameworks (TensorFlow or PyTorch) for object detection and tracking tasks.
Hands-on experience with multi-object tracking algorithms and estimators such as SORT, DeepSORT, Kalman filters, UKF, IMM, or JPDA.
Strong programming skills in C++ and Python; familiarity with geometric optimization libraries.
Familiarity with ROS2 for perception-based autonomous systems development.
Experience with parallel computing for real-time performance optimization (e.g., CUDA, OpenCL).