JOB SUMMARY
The Calibration/Localization Algorithm Engineer will be responsible for developing and optimizing algorithms related to surround view camera calibration, perspective transformations, and Visual SLAM (Simultaneous Localization and Mapping). This role requires strong expertise in computer vision, image processing, and mathematical modeling to improve the precision and reliability of our advanced driver assistance systems.
JOB FUNCTIONS (ESSENTIAL)
Develop and enhance calibration algorithms for multi-camera systems.
Implement procedures to correct lens distortion and camera misalignment.
Develop algorithms for converting 2D images into 3D models and back to 2D projections.
Apply mathematical techniques such as homography and projective transformations.
Design and implement Visual SLAM algorithms for real-time localization and mapping.
Utilize techniques for feature extraction, matching, and tracking.
Integrate SLAM algorithms with data from other sensors (e.g., LiDAR, IMU) to enhance system performance.
Continuously improve algorithm software performance to ensure low latency and high accuracy in real-time, safety-critical automotive environments.
Troubleshoot and resolve technical issues related to sensor data processing.
Validate algorithms using real-world data and simulations.
Work closely with hardware, software, and vehicle control engineers to integrate perception algorithms into Advanced Driver Assistance Systems (ADAS) and other safety features.
Ensure all developed perception algorithms and safety systems comply with automotive industry safety standards, such as ISO 26262 for functional safety.
Education/Experience
At least 5 years of hands-on experience in computer vision, image processing, and machine learning.
Proficiency in programming languages such as C/C++ and Python.
Experience with camera calibration techniques and tools (e.g., OpenCV, ROS).
Strong understanding of perspective transformations and 3D reconstruction methods.
Familiarity with Visual SLAM frameworks (e.g., ORB-SLAM, LSD-SLAM, RTAB-Map).
Technical Knowledge
Experience with automotive ADAS and sensor fusion.
Knowledge of deep learning techniques for computer vision applications.
Experience with real-time processing and optimization techniques.
Experience with machine learning algorithm development for embedded ECUs.
Strong background in mathematics and signal/image/video processing.
Experience with version control (Git) and requirements management tools (e.g., PTC Integrity).
Knowledge of structured problem-solving (8D) methods and/or techniques.