Enhancing Visual SLAM Accuracy in Low-Light Environments via Image Enhancement Techniques
Yusuke Ito (a), Masayuki Matsuoka (a*)
a) Department of Information Engineering, Mie University
1577 Kurima-machiya, Tsu, Mie 514-8507, Japan
*matsuoka[at]info.mie-u.ac.jp
Abstract
The advancement of autonomous driving technologies has intensified the need for accurate self-localization and real-time environmental mapping. Simultaneous Localization and Mapping (SLAM) addresses this need by enabling robots or vehicles to build a map of an unknown environment while simultaneously determining their position within it. Among the various types of SLAM, Visual SLAM (VSLAM) is notable for its low-cost implementation with monocular or stereo cameras. However, VSLAM systems degrade severely in low-light environments, where the number of detectable visual features declines significantly, leading to reduced localization accuracy and potential tracking failure. This research aims to improve VSLAM performance under such challenging lighting conditions by integrating image enhancement modules into the VSLAM pipeline. In particular, we explore deep learning-based enhancement techniques, including those using Generative Adversarial Networks (GANs), to preprocess input images and improve feature visibility. We use the Oxford RobotCar Dataset, which includes sequences captured under varying illumination conditions, and compare conventional VSLAM with enhanced versions that incorporate different image enhancement methods. Performance is evaluated on two primary metrics: the number of extracted feature points and the computational time required for processing. Experimental results demonstrate that the enhanced VSLAM systems consistently outperform the baseline in low-light environments, extracting more feature points without significant degradation in execution speed. These findings suggest that image enhancement is a viable means of improving VSLAM robustness in visually degraded settings. The results of this study have potential applications in autonomous vehicles operating at night, as well as in robotic vision systems deployed in low-visibility environments such as tunnels, disaster zones, and outer space. Future work will focus on optimizing these systems for real-time performance and further improving localization accuracy by incorporating deep learning techniques into the feature extraction and matching processes.
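To make the evaluation protocol concrete, the sketch below counts ORB feature points on a raw low-light frame and on its enhanced counterpart, and times the enhancement step, mirroring the two metrics used in this study. It uses CLAHE contrast enhancement purely as an illustrative stand-in for the GAN-based enhancers evaluated in the paper; the file name and parameter values are hypothetical and serve only to show how an enhancement module slots in front of VSLAM feature extraction.

```python
# Minimal sketch: count detectable features on a low-light frame before and
# after enhancement, and time the enhancement step. CLAHE stands in here for
# the learned (e.g., GAN-based) enhancers described in the abstract; the
# input path and parameters are illustrative.
import time

import cv2


def count_orb_features(gray, n_features=2000):
    """Detect ORB keypoints, the feature type used by many VSLAM front ends."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints = orb.detect(gray, None)
    return len(keypoints)


def enhance_low_light(gray):
    """Placeholder enhancement: CLAHE contrast stretching on the luminance."""
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return clahe.apply(gray)


if __name__ == "__main__":
    # Hypothetical low-light frame, e.g., from an Oxford RobotCar night sequence.
    gray = cv2.imread("night_frame.png", cv2.IMREAD_GRAYSCALE)

    t0 = time.perf_counter()
    enhanced = enhance_low_light(gray)
    enhance_ms = (time.perf_counter() - t0) * 1000.0

    print(f"features (raw):      {count_orb_features(gray)}")
    print(f"features (enhanced): {count_orb_features(enhanced)}")
    print(f"enhancement time:    {enhance_ms:.1f} ms per frame")
```

A learned enhancer would replace enhance_low_light with a network forward pass; the comparison logic, feature counting, and per-frame timing remain the same.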