ACRS 2025
Conference Management System

Adaptive Sensor Fusion of LiDAR and Stereo Camera for Robust Autonomous Navigation in Outdoor Environments
Kenta Ishizuka (a*), Akito Arai (a), Arata Nagasaka (a), Kazuyuki Hashimoto (b), Shotaro Kobayashi (b), Masafumi Nakagawa (a)

a) Shibaura Institute of Technology, Japan
*ah21014[at]shibaura-it.ac.jp
b) Watanabe Engineering Co., Ltd, Japan


Abstract

Autonomous vehicles are widely promoted as a solution to reduce traffic accidents and improve logistics efficiency. However, many technical challenges remain before they can be put into practical use, such as improving the accuracy of self-position estimation and object recognition. Sensors installed in autonomous vehicles must be cost-effective while delivering high accuracy and real-time performance. Although previous research has achieved high-precision sensing, continuous output increases data processing and power-consumption demands. High-accuracy recognition using LiDAR and stereo cameras has been reported, but most approaches require significant computational resources, such as GPU processing, which hinders real-time performance. On the other hand, integrating data from different types of sensors is considered effective for 3D measurement and navigation under changing weather conditions; however, performance depends largely on the integration method. This study proposes a method for dynamically optimizing the output of LiDAR and stereo cameras based on environmental conditions to improve measurement performance in both sunny and rainy weather. In addition, the proposed method was applied to LIO-SAM, which combines non-repetitive scanning LiDAR with visual simultaneous localization and mapping (Visual SLAM) using a stereo camera, and its effectiveness was evaluated. However, sufficient self-position estimation accuracy was not achieved. Frame integration, horizontal plane estimation, and mask processing using reflection intensity values were attempted, but improvements remained limited. Future work will focus on developing point cloud correction techniques specific to non-repetitive scanning LiDAR and refining frame-to-frame interpolation using velocity estimation.
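The abstract does not give implementation details, but the idea of dynamically weighting LiDAR and stereo-camera output by environmental conditions can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `fuse_depths`, the linear degradation factor, and the `rain_level` parameter are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of condition-weighted depth fusion; NOT the authors' method.
import numpy as np

def fuse_depths(lidar_depth, stereo_depth, rain_level):
    """Blend per-pixel depth maps from LiDAR and a stereo camera.

    rain_level in [0, 1]: 0 = clear, 1 = heavy rain. LiDAR returns tend to
    degrade in rain, so its weight is reduced as rain_level rises and the
    complementary weight goes to the stereo estimate (the 0.7 degradation
    factor is an arbitrary illustrative choice). NaN marks pixels a sensor
    failed to measure.
    """
    w_lidar = 1.0 - 0.7 * rain_level
    w_stereo = 1.0 - w_lidar
    lidar_ok = ~np.isnan(lidar_depth)
    stereo_ok = ~np.isnan(stereo_depth)
    fused = np.full_like(lidar_depth, np.nan)
    both = lidar_ok & stereo_ok
    fused[both] = w_lidar * lidar_depth[both] + w_stereo * stereo_depth[both]
    only_lidar = lidar_ok & ~stereo_ok
    only_stereo = stereo_ok & ~lidar_ok
    fused[only_lidar] = lidar_depth[only_lidar]
    fused[only_stereo] = stereo_depth[only_stereo]
    return fused
```

In clear weather (`rain_level=0`) the fusion reduces to the LiDAR estimate wherever LiDAR has a return, falling back to stereo elsewhere; in heavy rain the stereo estimate dominates. A real system would derive the weights from measured cues (return intensity, point density, image contrast) rather than a single scalar.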

Keywords: sensor fusion, LiDAR, stereo camera, LIO-SAM, Visual SLAM, autonomous vehicle

Topic: Topic B: Applications of Remote Sensing

Corresponding Author: Kenta Ishizuka

