Automated Road Marking Extraction from High-Resolution Aerial Imagery Using Deep Learning Techniques
Tee-Ann Teo, Ting-Ni Chen
National Yang Ming Chiao Tung University, Taiwan
Abstract
With the rapid growth of intelligent transportation systems and autonomous driving, accurate segmentation of road markings is critical for vehicle localization, navigation, and control. Classical methods based on hand-crafted features or lightweight CNNs, although efficient, are vulnerable to shadows, surface wear, and illumination changes, limiting their robustness in real-world scenes. To address these issues, this study adopts YOLOv11-seg, a supervised instance-segmentation framework, for automated extraction of diverse road markings from high-resolution aerial imagery. We construct a dataset with ten road-marking categories and train the model end-to-end with data augmentation designed for thin, elongated targets. Model performance is evaluated using per-class precision, recall, and F1-score, together with overall micro, macro, and weighted averages. The model achieves high overall precision (macro = 0.962, weighted = 0.967) and competitive F1-scores (macro = 0.907, weighted = 0.899). Classes with compact or well-bounded shapes (e.g., Bike Crossing ID, Stop Waiting Zone) exhibit the strongest F1-scores, while elongated or visually fragmented markings (e.g., Crosswalk, Painted Island) show lower recall, indicating missed instances under occlusion or heavy wear. These findings suggest that a properly trained YOLOv11-seg model offers a practical and accurate solution for large-scale road-marking mapping from aerial imagery. Future work will focus on class-balanced sampling, boundary-aware loss functions, multi-scale tiling at higher input resolutions, and morphology-guided post-processing to further improve recall for thin, discontinuous markings.
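As an illustration of the training setup summarized above, the sketch below shows how a YOLO11 segmentation model could be fine-tuned on such a dataset with the Ultralytics API; the dataset configuration file (road_markings.yaml), image size, and augmentation values are hypothetical placeholders rather than the settings used in this study.

```python
# Minimal sketch (not the authors' code): fine-tuning a YOLO11 instance-
# segmentation model on a road-marking dataset via the Ultralytics API.
from ultralytics import YOLO

# Pretrained YOLO11 segmentation weights.
model = YOLO("yolo11n-seg.pt")

# "road_markings.yaml" is a hypothetical dataset config listing the ten
# road-marking classes; the hyperparameter values are illustrative only.
model.train(
    data="road_markings.yaml",
    imgsz=1024,      # larger tiles help preserve thin, elongated markings
    epochs=100,
    degrees=180,     # full rotation: aerial views have no canonical orientation
    scale=0.5,
    fliplr=0.5,
    mosaic=1.0,
)

# Validation reports per-class precision/recall on the held-out split.
metrics = model.val()
```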
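To make the difference between the reported macro and weighted averages concrete, the following sketch computes per-class and averaged precision, recall, and F1 with scikit-learn; the label arrays are hypothetical stand-ins for matched predictions and ground truth, not the study's data.

```python
# Minimal sketch: micro/macro/weighted averaging of per-class metrics with
# scikit-learn. The labels below are hypothetical examples.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["Crosswalk", "Crosswalk", "Painted Island", "Bike Crossing ID", "Stop Waiting Zone"]
y_pred = ["Crosswalk", "Painted Island", "Painted Island", "Bike Crossing ID", "Stop Waiting Zone"]

# Macro averaging treats every class equally; weighted averaging scales each
# class by its support, so frequent classes dominate the overall score.
for avg in ("micro", "macro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg:>8}: precision={p:.3f}  recall={r:.3f}  f1={f1:.3f}")
```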
Keywords: Road markings, Deep Learning, Semantic Segmentation, Aerial Image
Topic: Topic C: Emerging Technologies in Remote Sensing