:: Abstract List ::

Page 9 (abstracts 241 to 270 of 351)
241 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-285 |
Towards Fully Automated 3D Reconstruction Using UAV LiDAR and Deep Learning
Calvin Wijaya (*), Ruli Andaru, Harintaka, Catur Aries Rokhmana
Department of Geodetic Engineering, Faculty of Engineering, Universitas Gadjah Mada, Jl. Grafika Bulaksumur No.2, Sleman, Indonesia
*calvin.wijaya[at]ugm.ac.id
Abstract
Digital twin technology has emerged as a significant research trend in recent years, offering enhanced capabilities by integrating traditional 2D geospatial data with 3D models and real-time sensor data. This integration enables a more detailed and dynamic representation of physical environments, supporting better-informed decision-making processes. A critical component in the development of a digital twin is the reconstruction of accurate and detailed 3D models. However, this process remains a major challenge due to its complexity and time-consuming nature, from data acquisition to model generation. This research presents and evaluates an automated workflow for 3D model reconstruction, starting from data acquisition using Unmanned Aerial Vehicle (UAV)-based LiDAR systems. The study focuses on the Universitas Gadjah Mada campus area as a case study. The input data consists of UAV-acquired point clouds and high-resolution aerial imagery. We employ a combination of deep learning techniques and geometric processing to preprocess the point cloud data. Specifically, the Dynamic Graph Convolutional Neural Network (DGCNN) is used to classify point clouds into semantic categories such as ground, buildings, and vegetation. Using the classified point cloud data and extracted building outlines, the workflow reconstructs 3D building models automatically. The Geoflow algorithm is applied to generate the final 3D campus model in Level of Detail (LOD) 2.2, encoded in the CityJSON format. This format adheres to Open Geospatial Consortium (OGC) standards, ensuring data interoperability and usability in smart city and digital twin applications. The results demonstrate a streamlined and efficient approach for producing semantically rich 3D city models with minimal manual intervention.
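As an illustration of the target encoding (not the authors' output): a minimal CityJSON skeleton for a single LOD 2.2 building. All identifiers, vertices, and the version string are invented for illustration; consult the CityJSON specification for the full schema.

```python
import json

# Minimal CityJSON skeleton for one LOD 2.2 building.
# Ids, vertices, and values are illustrative, not from the paper.
city_model = {
    "type": "CityJSON",
    "version": "1.1",
    "transform": {"scale": [0.001, 0.001, 0.001], "translate": [0.0, 0.0, 0.0]},
    "CityObjects": {
        "bldg-001": {
            "type": "Building",
            "geometry": [
                {
                    "type": "Solid",
                    "lod": "2.2",
                    # One illustrative face (shell -> surface -> ring of
                    # vertex indices); a real solid needs a closed shell.
                    "boundaries": [[[[0, 1, 2, 3]]]],
                }
            ],
        }
    },
    # Integer vertices are de-quantized via "transform" on read.
    "vertices": [[0, 0, 0], [1000, 0, 0], [1000, 1000, 0], [0, 1000, 0]],
}

decoded = json.loads(json.dumps(city_model))
print(decoded["CityObjects"]["bldg-001"]["geometry"][0]["lod"])
```

Encoding the LOD as the string "2.2" (rather than a number) is what lets CityJSON distinguish the refined LOD levels such as 2.2 that the workflow produces.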
Keywords: 3D Reconstruction, 3D City, Digital Twin, LiDAR, Sustainable Cities
Corresponding Author: Calvin Wijaya

242 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-33 |
Traffic Sign Mapping in Cambodia with Deep Learning and GNSS
Sophal Ratitya (1*) and Mitsuharu Tokunaga (2)
1) Graduate Student, Department of Civil and Environmental Engineering, Kanazawa Institute of Technology, Japan
2) Professor, Department of Civil and Environmental Engineering, Kanazawa Institute of Technology, Japan
*titan.titya[at]gmail.com
Abstract
Traffic signs are essential road features that help ensure safety. However, mapping these objects can be a time-consuming and tedious task. This study explores the use of deep learning and computer vision to improve the efficiency of traffic sign mapping in Cambodia. A traffic sign dataset was created from selected frames captured by a GoPro 12 action camera mounted on top of a car. These images were then labelled using image annotation software. Next, the traffic sign detection and classification models were trained using the pretrained YOLOv8 model in the Ultralytics framework. Ultralytics tracking was used to maintain a unique ID for each sign in the video, allowing us to capture two frames as the object crosses a designated line. To determine each frame's coordinates, we synchronize the video start time with Global Navigation Satellite System (GNSS) time. The next step is to calculate the detected sign's real-world coordinates. This involves finding the camera-to-GNSS offset based on camera height and field of view. The Haversine formula is used to find the distance between the camera positions in the frame pair. The objects' pixel width in both frames is obtained from the detection model. The camera-to-object distance is calculated from the camera-position distance and the object's pixel width, while the angle of the object relative to the camera's line of sight is also measured. Finally, coordinates are calculated based on distance and angle. The experiments indicate that the approach accurately detects and identifies the coordinates of traffic signs with a mean absolute error of less than 5 meters. Consequently, the mapping process becomes easier and more time-efficient. Furthermore, an inventory can be easily built and updated frequently, facilitating efficient road asset management.
While the system's accuracy depends mainly on the GNSS signal, it can be further improved using high-precision GNSS integrated with other sensors or a real-time kinematic (RTK) positioning application. This research contributes to remote sensing by reducing the time needed for object mapping and by effectively integrating deep learning, computer vision, and GNSS for enhanced data acquisition.
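The Haversine distance and the distance-plus-bearing positioning step described above can be sketched as follows. This is not the authors' implementation; the coordinates, offset distance, and bearing are invented for illustration.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def offset_position(lat, lon, distance_m, bearing_deg):
    """Project a point distance_m along bearing_deg (flat-Earth
    approximation, adequate for offsets of a few tens of metres)."""
    b = math.radians(bearing_deg)
    dlat = (distance_m * math.cos(b)) / EARTH_RADIUS_M
    dlon = (distance_m * math.sin(b)) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

# Camera baseline between the two frames of a pair (illustrative coords):
d = haversine_m(11.5620, 104.9150, 11.5621, 104.9150)
print(round(d, 1))  # ~11.1 m for one ten-thousandth of a degree of latitude
```

Given the camera baseline from GNSS and the sign's distance and angle from the detection geometry, `offset_position` places the sign in world coordinates.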
Keywords: Computer Vision, Deep Learning, GNSS, Traffic Sign Mapping, YOLOv8
Corresponding Author: SOPHAL RATITYA

243 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-41 |
Deep Learning-Based Early Detection of Harmful Algal Blooms in Jakarta Bay Using High Resolution Satellite Imagery
Zahra Z. A. (a), Hanafie A. (b), Semedi B. (c) and Chang K.T. (a*)
a) Dept. of Civil Eng. and Environmental Informatics, Minghsin Uni. of Science and Technology, Taiwan.
b) Strong Engineering Consulting Co., Shalu Township, Taichung County, Taiwan.
c) Sekolah Pascasarjana, Universitas Brawijaya, Indonesia.
Abstract
Over the past two decades, Jakarta Bay has experienced recurring harmful algal bloom events, with a noticeable increase in their frequency, intensity, and duration in recent years. These blooms have resulted in considerable ecological degradation, including widespread fish mortality, and have negatively affected local fisheries, tourism, and other coastal economic sectors. The intensification of such events is widely attributed to escalating anthropogenic pressures, including urban runoff and industrial effluents from surrounding areas. In response to this growing concern, the present study proposes an early detection framework that integrates high resolution satellite imagery with deep learning techniques. Multispectral data from Sentinel-3A and Sentinel-3B Ocean and Land Colour Instruments, acquired during the early months of 2019, particularly in March, were utilised. The analysis focused on spectral bands sensitive to chlorophyll and phytoplankton concentrations, specifically Band 4 at 490 nm, Band 6 at 560 nm, Band 8 at 665 nm, Band 10 at 681.25 nm, Band 11 at 708.75 nm, and Band 17 at 865 nm. Ground truth data from Hawkeye sensors collected during the same period were employed for validation. A convolutional neural network was developed to extract and classify spatial and spectral features associated with harmful algal bloom presence. The input data underwent rigorous pre-processing, including atmospheric correction and spatial alignment. The proposed model demonstrated robust predictive performance, achieving a classification accuracy exceeding 90%, with high precision, recall, and F1 score. These findings underscore the potential of combining artificial intelligence and satellite-based Earth observation to enable timely and accurate monitoring of harmful algal blooms. This approach offers a scalable and operationally viable tool for supporting proactive coastal management in ecologically and economically vulnerable marine environments such as Jakarta Bay.
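For reference, the classification metrics reported above have their standard definitions (a reminder, not values or formulas from the paper), where TP, FP, and FN denote true positives, false positives, and false negatives:

```latex
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
```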
Keywords: convolutional neural network, harmful algal bloom, chlorophyll detection, Jakarta Bay
Corresponding Author: ZALFA AFIFAH ZAHRA

244 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-307 |
Integrated UAV-Satellite Remote Sensing for High-Precision Spatial Analysis of Production and Physiological Health in Kappaphycus Aquaculture, Indonesia
Nurjannah Nurdin (a,c,d*), Evangelos (b), Agus Aris (c,d), M. Akbar AS (d), Laurent Barille (b)
(a) Department of Marine Science, Marine Science and Fisheries Faculty, Hasanuddin University, Makassar, 90245. Indonesia
*nurjannahnurdin[at]unhas.ac.id
(b) Institut des Substances et Organismes de la Mer (ISOMer), Nantes Universite, UR 2160, F-44000 Nantes, France.
(c) Department of Remote Sensing and Geographic Information System, Vocational Faculty, Hasanuddin University, Makassar 90245. Indonesia
(d) Research and Development Center for Marine, Coast, and Small Islands, Hasanuddin University, Makassar 90245. Indonesia
Abstract
Climate change and seasonal fluctuations have become critical determinants of seaweed farming success in Indonesian coastal regions, particularly in South Sulawesi. The primary cultivated species, Kappaphycus alvarezii, is of high economic value because of its carrageenan content, which is widely used in the food industry and has other applications. Although cultivation practices are relatively simple and require low capital investment, production is frequently disrupted by shifting environmental conditions, disease outbreaks, and pest infestations. Ice-ice disease, which causes tissue damage and depigmentation, is a major cause of yield reduction worldwide. This study investigated the relationship between seasonal variability (west monsoon, east monsoon, and two transitional phases) and the growth dynamics of K. alvarezii while assessing the potential of high-resolution remote sensing for health monitoring. Environmental parameters, such as sea surface temperature, salinity, and nutrient concentration, were derived from satellite imagery. At finer scales, field observations were conducted using a multispectral UAV (DJI Phantom 4 RTK D-GPS) equipped with five spectral bands spanning the visible and near-infrared ranges. Machine learning algorithms were applied to correlate spectral reflectance with biometric traits and carrageenan content and to detect early color changes as indicators of biological stress. The findings revealed distinct seasonal patterns influencing productivity and disease vulnerability, with certain transitional periods triggering notable declines in crop quality. A predictive model that integrates geospatial and climate datasets from 2019 to 2024 successfully mapped spatial production patterns and potential stress hotspots.
This approach demonstrates that combining satellite data, UAV-based monitoring, and AI-driven analysis provides an effective early warning system, optimizes harvest timing, and mitigates economic losses associated with the disease. Beyond improving operational efficiency, this strategy strengthens aquaculture resilience to climate change and supports sustainable coastal-management practices.
Keywords: UAV, Satellite Data, Precision, Seaweed Aquaculture, Physiological Health
Corresponding Author: Nurjannah Nurdin

245 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-53 |
Research on the Construction Method of 3D Tree Model Based on Laser Point Cloud and Tree Radar
Zhang Ruiling (a*), Wang Muzi (a), Dong Youqiang (a), Wang Xinhao (a), Yin Keru (a)
(a)School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, China
* 1108140522004[at]stu.bucea.edu.cn
Abstract
Traditional tree morphology modeling struggles to quantify the degradation of mechanical properties caused by internal cavitation decay and cannot characterize the resulting anomalies in mass distribution and stiffness attenuation. To address this, this study proposes a three-dimensional modeling method based on multi-source fusion of laser point cloud and tree radar data. First, based on a high-precision leaf-off point cloud, a KD-Tree spatial index was used to accelerate nearest-neighbor search and the Dijkstra algorithm was applied to extract the tree skeleton line; segmented cylinder fitting of the branches was then realized through cross-section centroid positioning and geometric parameter calculation. Second, the tree radar reconstructs a two-dimensional cross-sectional image at the corresponding height by exploiting the slower propagation speed and stronger attenuation of electromagnetic waves in decayed regions, revealing the distribution difference between internal decay cavities and intact wood. Image segmentation and density-weighted centroid estimation were applied to the cross-sectional image to accurately identify the true centroid of the cavitation area. Finally, through height-hierarchical and directional registration, the cavitation data were mapped onto the 3D model, the centroid offset vector caused by cavitation was calculated, and the corresponding local volume was deducted from the model to correct its mass distribution. Experiments were carried out on a variety of tree species, and the results show that the proposed method can synchronously integrate external geometric features and internal decay information, providing a high-precision structural basis for tree mechanics simulation and risk assessment.
Compared with traditional modeling methods, the proposed method more faithfully reflects the influence of mass distribution and local defects on the overall structure, and shows good adaptability and potential for wider application.
Keywords: laser point clouds, tree radar, 3D modeling, data fusion
Corresponding Author: Wang Muzi

246 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-64 |
Enhancing Unpaved Road Condition Monitoring in Uganda Using Smartphone Imagery and Deep Learning
Gerald Obalim (a*), Mitsuharu Tokunaga (b)
a) Research Student, Department of Civil and Environmental Engineering, Graduate School of Engineering, Kanazawa Institute of Technology, Japan
*geraldobalim[at]gmail.com
b) Professor, Department of Civil and Environmental Engineering, Graduate School of Engineering, Kanazawa Institute of Technology, Japan
Abstract
Developing countries such as Uganda face significant challenges in maintaining vast networks of unpaved roads due to resource constraints, lack of real-time data, and manual inspection inefficiencies. Of Uganda's estimated 150,000 km road network, less than 15% is paved, making the need for management solutions especially urgent. This study explores a low-cost, scalable framework for unpaved road condition monitoring using smartphone imagery and deep learning. Focusing on district roads in Northern Uganda, geo-tagged images were captured along selected road sections and annotated using the Visual Geometry Group (VGG) image annotator. A total of 360 images were sorted and divided into training and validation sets. The images were used to train and evaluate an Ultralytics YOLOv8 object detection model capable of identifying visible defects such as potholes and surface erosion. Detected damages were quantified and visualized using QGIS, with road segments classified according to Uganda's established road distress rating guidelines. Initial results demonstrate a strong correlation between the model's output and conventional field assessments by experienced road engineers, offering a faster and more objective alternative to manual inspection. This not only enhances consistency and decision-making but also enables more strategic deployment of maintenance resources. Shadow interference from roadside vegetation occasionally affected detection accuracy, suggesting potential benefits from future shadow-removal processing. The study introduces a foundational dataset for Uganda's unpaved roads and establishes a transferable framework for similar applications in other low-resource contexts.
Keywords: Deep learning, road condition monitoring, smartphone imagery, unpaved roads
Corresponding Author: Gerald Obalim

247 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-83 |
Click, Segment, Learn! Using SAM to Explore Remote Sensing Imagery
Muhammad Azzam A.W. (1*), Az-Azira A.A. (1), Syalini M.S. (1), Siti Nor Afzan A.H. (1), Norhayati C.M. (1), Siti Masayu Y. (1) and Mohd Aizat Hisyam I. (1)
(1) Researcher, ICT Development & Geoinformatics Division, Malaysian Space Agency, Malaysia
Abstract
Remote sensing images with high resolution are increasingly essential worldwide for tracking changes in land surfaces, monitoring urban expansion, and studying environmental aspects. However, extracting useful information from these images remains difficult because landscapes are complex and varied, image quality differs between scenes, and manually labeling data is time-consuming and labor-intensive. Moreover, existing segmentation methods often lack flexibility across different environments, making these techniques hard to scale, both in practical applications and for newcomers learning AI-driven remote sensing. In response to these challenges, this study explores integrating the Segment Anything Model (SAM), a recent vision foundation model, into remote sensing workflows. We explore how SAM could improve segmentation accuracy for complex landscapes over Malaysia while simplifying the process so that users, even beginners, can interactively engage in what we call a 'Click, Segment, Learn' workflow. This intuitive approach allows users to simply click on areas of interest, watch SAM automatically segment features, and learn from the outputs to better understand geospatial patterns. By applying SAM to urban, agricultural, and coastal datasets from Peninsular Malaysia, this study demonstrates how prompt-driven segmentation using zero-shot and interactive modes can reduce dependency on large annotated datasets and extensive technical expertise. Preliminary results show that SAM outperforms conventional deep learning-based methods in segmenting key features such as built-up areas, road networks, coastal areas, and vegetation. Furthermore, this approach holds promise as a foundational platform bridging complex AI methods with practical geospatial applications, supporting national planning, environmental monitoring, and disaster response efforts, while simultaneously serving as a valuable educational resource for beginners engaging with AI in remote sensing.
Keywords: computer vision, SAM, remote sensing, Malaysia
Corresponding Author: MUHAMMAD AZZAM A WAHAB

248 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-345 |
Integration of Machine Learning, Remote Sensing, and WebGIS for Landslide Hazard Potential Monitoring
Yanuarsyah I., Hidayat J., Setiawan I., Agus S.B.
Ibn Khaldun University of Bogor, Pakuan University, LSP MAPIN, IPB University
Abstract
This study integrates four case studies based on remote sensing and GIS in Indonesia, covering landslide susceptibility modeling using Random Forest, biomass estimation, spatial analysis of landslide hazard, and development of an interactive WebGIS. Each study employed different data sources, such as Sentinel imagery, Landsat 8 OLI, elevation models, and field survey data, with analytical methods including machine learning classification, vegetation index regression, spatial analysis scoring, and web mapping applications. The integration aims to build a unified framework for landslide hazard potential monitoring accessible to stakeholders in real time. Results indicate that the combination of machine learning and GIS improves disaster prediction accuracy and environmental information quality. Landslide susceptibility modeling was evaluated by AUC, with slope gradient and rainfall as the most influential variables. Biomass estimation used NDVI as a key predictor. Landslide hazard analysis identified high-risk zones near rivers and low-lying areas, while the WebGIS successfully delivered interactive thematic maps for easier information access. The proposed integration framework supports the National Geospatial Data Infrastructure and promotes the use of AI, UAV, and remote sensing data for evidence-based policy making.
Keywords: Machine Learning, Random Forest, WebGIS, Remote Sensing, Multi-Hazard, NGDI
Corresponding Author: Iksal Yanuarsyah

249 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-92 |
Assessing the Potential of Neural Radiance Fields for UAV-Based DSM Generation: A Preliminary Comparison with Photogrammetry
Farhan Ardianzaf Putra (a*), Cheng-Hsin Li (b), Chao-Hung Lin (c), Jiann-Yeou Rau (c)
a) PhD Student, Department of Geomatics, National Cheng Kung University, Taiwan
*farhanardianzaf[at]gmail.com
b) Master Student, Department of Geomatics, National Cheng Kung University, Taiwan
c) Professor, Department of Geomatics, National Cheng Kung University, Taiwan
Abstract
Digital Surface Models (DSMs) are fundamental products in geospatial analysis, urban planning, and environmental monitoring. Traditionally, DSMs are derived from aerial imagery through photogrammetry, which performs effectively in well-textured urban areas by reconstructing dense and accurate 3D surfaces. However, photogrammetric methods often encounter challenges in areas with reflective materials, dense vegetation, or low-texture surfaces, where feature detection and matching become unreliable. Recent advances in deep learning-based 3D reconstruction, particularly Neural Radiance Fields (NeRF) and its variants, offer data-driven alternatives capable of learning volumetric scene representations from sparse UAV imagery. This study explores the feasibility of generating DSMs directly from UAV images using several NeRF-based models and compares these preliminary results against those produced by conventional photogrammetric workflows. Initial findings suggest that while photogrammetry currently achieves higher geometric accuracy and completeness, certain NeRF-based models demonstrate promising potential in retaining surface detail and structural coherence, particularly in complex scenes. Notably, one model based on an improved NeRF architecture produced denser point clouds and finer representation of building edges, indicating a pathway for further enhancement. Although these results remain preliminary, they highlight the potential for deep learning-based methods to complement or augment traditional photogrammetric techniques, especially under limited data conditions or where traditional methods face constraints. Continued research and optimisation are expected to narrow the performance gap, offering more robust and efficient solutions for DSM generation from UAV imagery.
Keywords: 3D Reconstruction, Digital Surface Model, Neural Radiance Fields, Photogrammetry, UAV Imagery
Corresponding Author: Farhan Ardianzaf Putra

250 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-359 |
Evaluating Satellite Gridded Precipitation Errors in the Sungai Sarawak Basin: A Triple Collocation Approach
Mohd Nadzri Md Reba (1,2*), Azalea Kamellia Abdullah (2), Mazlan Hashim (1,2), Mohd Rizaludin Mahmud (1,2), Wan Anwar Nadir Wan Ahmad (2)
1) Geoscience and Digital Earth Centre (INSTeG), Research Institute for Sustainable Environment (RISE), Universiti Teknologi Malaysia, Johor, Malaysia
2) Faculty of Built Environment and Surveying, Universiti Teknologi Malaysia, Johor, Malaysia
*nadzri[at]utm.my
Abstract
Satellite Precipitation Estimates (SPE) offer a valuable alternative to traditional ground measurements by providing synoptic coverage and enhancing rainfall estimation accuracy in Malaysia. To ensure confidence in the selection of the most suitable precipitation repository, validation of these rain estimates is crucial. However, conventional validation methods using rain gauges become ineffective when the quality of the reference data deteriorates, particularly in areas with sparse stations and poor coverage. In critical situations where the reference data is unreliable or possesses low accuracy, equivalent measurement parameters can serve as validation agents due to their similarity to other precipitation estimates. Past studies have shown an overestimation of rainfall by gridded Satellite Precipitation Estimates (SPEs) like CHIRPS and PERSIANN-CDR. However, the influence of spatial and temporal variability on these errors has been less thoroughly examined. This study evaluates the error estimates of CHIRPS, PERSIANN-CDR, and ERA5 precipitation datasets against rain-gauge station measurements over a 30-year period in the Sungai Sarawak basin. To achieve this, a Triple Collocation (TC) analysis was employed. To ensure spatial consistency for intercomparison, data from different spatial resolutions were interpolated using the Inverse Distance Weighting (IDW) method, with a 5-km grid, corresponding to the native CHIRPS resolution, being adopted. Temporal differences between the datasets were minimized by calculating monthly summed precipitation estimates within these 5-km grids. The error variance of simultaneously observed monthly precipitation estimates from the four datasets was formulated, and the signal-to-noise ratio (SNR) and relative rank performance were subsequently analyzed. This study highlights the effectiveness of TC in evaluating errors across various SPE origins. Rain gauges consistently ranked highest among the collocated datasets.
CHIRPS demonstrated superior performance, attributed to its development incorporating adjustments from multiple data sources. PERSIANN exhibited higher error covariance, while ERA5 showed underestimation in seasonal analysis. TC proves capable of assessing errors in both ungauged and sparsely gauged areas. Accurate error estimation is vital for drought and flood analysis, and for water resource management to address climate change impacts in tropical regions.
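The classical triple collocation estimator the study builds on can be sketched as follows: for three collocated series whose errors are mutually independent and zero-mean, each error variance follows from the pairwise covariances. The series below are synthetic stand-ins, not the CHIRPS/PERSIANN-CDR/ERA5 data.

```python
import random

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

def tc_error_variances(x, y, z):
    """Error variance of each of three collocated datasets, assuming
    independent zero-mean errors: var_err(x) = Cxx - Cxy*Cxz/Cyz, etc."""
    return (
        cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z),
        cov(y, y) - cov(x, y) * cov(y, z) / cov(x, z),
        cov(z, z) - cov(x, z) * cov(y, z) / cov(x, y),
    )

random.seed(42)
truth = [random.uniform(50, 400) for _ in range(20000)]  # "true" monthly rain (mm)
gauge = [t + random.gauss(0, 10) for t in truth]   # low-noise reference
spe_a = [t + random.gauss(0, 30) for t in truth]   # moderate-noise SPE
spe_b = [t + random.gauss(0, 50) for t in truth]   # high-noise SPE
ex, ey, ez = tc_error_variances(gauge, spe_a, spe_b)
print(ex, ey, ez)  # ≈ 100, 900, 2500 (the squared noise levels)
```

Note how TC recovers each dataset's error variance without ever seeing the truth series, which is why it works in ungauged or sparsely gauged basins.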
Keywords: CHIRPS, PERSIANN-CDR, ERA5, Triple Collocation
Corresponding Author: Mohd Nadzri Md Reba

251 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-108 |
Development of a Method for Estimating Height from LiDAR Data
Chizuka Fujishima (a*), Junichi Susaki (b), Yoshie Ishii (c)
a) Student, Graduate School of Engineering, Kyoto University, Japan
*fujishima.chizuka.24t[at]st.kyoto-u.ac.jp
b) Professor, Graduate School of Engineering, Kyoto University, Japan
c) Assistant Professor, Graduate School of Engineering, Kyoto University, Japan
Abstract
Currently, optical satellites are mainly used to observe land cover and topography. They provide full-color imagery and high-spatial-resolution data, but problems remain, including limits on the high-precision use of 3D maps and potential errors of several meters in ground heights estimated under forest canopy. To address these issues, altimeter LiDAR satellites with full-waveform LiDAR are now being developed. Full-waveform LiDAR continuously acquires the reflected LiDAR intensity and records it as a waveform. In addition, coordinated observation by small commercial optical systems and altimeter LiDAR satellites is expected to enable the generation of state-of-the-art 3D terrain information. For the practical use of altimeter LiDAR satellites, this study develops a method to estimate tree height from 3D point cloud data.
Based on the methods adopted in existing LiDAR missions, we propose a method to estimate tree height from a reflected waveform created from point cloud data. Also, considering the correlation between waveform and point cloud data, we propose a method to estimate tree height directly from point cloud data. A key feature of this study is the assumption that the point cloud comprises two types of points, ground points and vegetation points, each following a separate distribution. The estimated heights are validated by comparing them to true values. The minimum RMSE was 2.20 m for the waveform-based estimation and 0.31 m for the point cloud-based estimation. Especially in flat areas, most values could be estimated with an error of less than 1.00 m. In addition, the accuracy of point cloud separation had a significant impact on the estimation accuracy. Future tasks are clarifying the relationship between the reflected waveform and point cloud data and developing a method for creating continuous maps with optical images through deep learning.
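One way to picture the point-cloud-based idea: once returns are separated into ground and vegetation classes, tree height is the difference between a robust top-of-vegetation percentile and the ground level. This is a hedged sketch, not the authors' estimator; the percentile choices and elevations are invented.

```python
# Canopy-height sketch: assumes points are already classified into
# ground and vegetation (the two-distribution assumption above).

def percentile(values, q):
    """Linear-interpolation percentile of a list (q in [0, 100])."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def tree_height(ground_z, veg_z, top_q=99):
    """Robust top of vegetation minus median ground elevation."""
    return percentile(veg_z, top_q) - percentile(ground_z, 50)

ground = [102.0, 102.1, 101.9, 102.05, 101.95]       # ground returns (m)
canopy = [110.2, 114.8, 117.5, 118.9, 119.6, 119.9]  # vegetation returns (m)
print(round(tree_height(ground, canopy), 2))
```

Using a high percentile rather than the maximum damps the influence of stray high returns, which matters because, as the abstract notes, separation accuracy drives estimation accuracy.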
Keywords: LiDAR, Point Cloud, Height Estimation
Corresponding Author: Chizuka Fujishima

252 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-365 |
Investigation of the accuracy of WorldView-2 HD satellite imagery for large-scale mapping in Indonesia
Soni Darmawan, Rika Hernawati, Rizka Awwaludin Kamil
Department of Geodetic Engineering, Institut Teknologi Nasional Bandung, Indonesia
Abstract
This study aims to investigate the accuracy variation of WorldView-2 HD 15 cm satellite imagery orthorectified using the Rational Polynomial Coefficients with Ground Control Points method in four types of areas, flat, undulating, homogeneous, and heterogeneous, in Cirebon City, West Java. The methodology includes image preprocessing; orthorectification performed with distributed ground control points and supported by the national digital elevation model (DEMNAS); and accuracy evaluation. Accuracy was evaluated by comparing the orthorectified imagery with RTK GNSS field measurements, focusing on point positions as independent check points (ICPs) assessed using the Root Mean Square Error (RMSE). The results show that for the flat area the accuracy meets the 1:1,000 scale standard at level 3, and for all area types in the study area the accuracy meets the 1:5,000 scale standard at level 1.
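The ICP evaluation step can be sketched as below: horizontal (radial) RMSE against RTK GNSS, with the CE90 circular-error conversion commonly used in Indonesian base-map accuracy assessment (CE90 = 1.5175 × RMSEr for circular normal errors). The coordinates are invented for illustration; this is not the study's data.

```python
import math

def rmse_r(measured, reference):
    """Horizontal (radial) RMSE from (x, y) pairs, in the input units."""
    sq = [
        (mx - rx) ** 2 + (my - ry) ** 2
        for (mx, my), (rx, ry) in zip(measured, reference)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical ICP coordinates (m): orthoimage-derived vs RTK GNSS.
icp_img = [(100.05, 200.02), (150.10, 250.06), (199.96, 299.95)]
icp_gnss = [(100.00, 200.00), (150.00, 250.00), (200.00, 300.00)]
r = rmse_r(icp_img, icp_gnss)
ce90 = 1.5175 * r  # circular error at 90% confidence
print(round(r, 3), round(ce90, 3))  # ≈ 0.083 0.126
```

The CE90 value is then compared against the tolerance table for the target map scale and accuracy level.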
Keywords: Orthorectification, WorldView-2 HD, RMSE, spatial accuracy, large-scale mapping
Corresponding Author: Soni Darmawan

253 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-377 |
Spatio-Temporal Dynamics of Precipitation Anomalies in Southeast Asia: ENSO Influence and Machine Learning-Based Prediction
Inuwa S.S. (1*), Dimyati M. (1), Masita D.M.M. (1) and Hafid S. (1)
Department of Geography, Faculty of Mathematics and Natural Sciences, Universitas Indonesia
Abstract
Keywords: CHIRPS, ENSO, Google Earth Engine, Machine learning, Precipitation anomaly
Corresponding Author: Inuwa sani Sani

254 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-141 |
Enhancing UAV Photogrammetry-Derived Shallow Water Bathymetry Accuracy Through Regression-Based Refraction Correction
I GD Yudha Partama, I Gede Gegiranang Wiryadi, I Dewa Gede Agung Pandawana
Universitas Mahasaraswati Denpasar
Abstract
Accurate shallow-water bathymetry is essential for coastal planning, habitat conservation, and disaster mitigation. Uncrewed Aerial Vehicle (UAV) photogrammetry offers a cost-effective method to generate high-resolution Digital Surface Models (DSM) in nearshore environments. However, optical distortions, particularly from light refraction at the water surface, introduce significant depth errors. This study evaluates the effectiveness of four regression-based models, Simple Linear Regression (SLR), Polynomial Regression, Generalized Additive Models (GAM), and Support Vector Regression (SVR), in correcting refraction-induced errors in UAV-derived DSMs. Ground-truth depth data were collected using Real-Time Kinematic GPS (RTK-GPS) and used to train each model, with DSM elevation as the predictor variable. A k-fold cross-validation approach was applied to assess model robustness, and performance was evaluated using Root Mean Square Error (RMSE) and Mean Error (ME). Results show that GAM achieved the lowest RMSE (0.261 m) and the smallest ME (-0.0063 m), indicating high accuracy and low bias. SLR performed comparably (RMSE = 0.262 m, ME = -0.0139 m), validating its utility as a simple yet reliable model. SVR also showed good performance (RMSE = 0.275 m), though with slightly higher bias (ME = -0.051 m). Polynomial Regression performed the poorest (RMSE = 0.736 m), suggesting its limited ability to model the complexity of refractive distortion. Spatial visualization of the corrected depth rasters confirmed the quantitative findings, with GAM and SVR producing more realistic bathymetric patterns. The study highlights the potential of non-linear and machine learning-based models, particularly GAM and SVR, to enhance the accuracy of UAV-based bathymetry in optically shallow coastal zones. These methods offer scalable, low-cost solutions for improving nearshore depth mapping and support informed coastal management decisions.
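The SLR variant of the correction can be sketched as follows: fit true (RTK-GPS) depth against apparent DSM-derived depth, then apply the fitted line to new apparent depths. The data are synthetic; the physical expectation is that refraction makes water look shallower by roughly the refractive index of water (~1.34), which the fitted slope recovers here by construction.

```python
# Simple-linear-regression refraction correction sketch (synthetic data).

def fit_slr(x, y):
    """Ordinary least squares y = a + b*x on plain lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - b * mx, b

apparent = [0.50, 0.90, 1.30, 1.70, 2.10]    # DSM-derived depths (m)
true_depth = [0.67, 1.21, 1.74, 2.28, 2.81]  # RTK-GPS depths (m)
a, b = fit_slr(apparent, true_depth)
corrected = [a + b * d for d in apparent]    # refraction-corrected depths
print(round(b, 2))  # slope near the refractive index of water, ~1.34
```

GAM and SVR play the same role with more flexible function classes, which is why they can capture residual non-linearity that a single slope cannot.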
Keywords: UAV-photogrammetry, bathymetry, regression model, coastal mapping, refraction correction
| Corresponding Author (I GD Yudha Partama)
255 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-151 |
Monitoring Landslide Progression in Leyte, Philippines using Sentinel-2 Imagery and AI-Based Semantic Segmentation Bernadette Anne B. Recto (a*), Raymond Freth A. Lagria (a), Jude Vito C. Agapito (a), Likha G. Minimo (b,c)
a) Department of Industrial Engineering and Operations Research, University of the Philippines Diliman, Quezon City, Philippines
*bbrecto[at]up.edu.ph
b) Science and Society Program, University of the Philippines Diliman, Quezon City, Philippines
c) University of the Philippines Resilience Institute, Quezon City, Philippines
Abstract
Mountainous regions are often devastated by landslides, especially following triggering events such as intense rainfall, earthquakes, and volcanic activity. While numerous studies have applied artificial intelligence (AI) for detecting landslides immediately after such events, few have focused on monitoring their spatial and temporal progression over time. This study highlights the potential of AI-based semantic segmentation to monitor landslide progression using multitemporal Sentinel-2 imagery, following the impact of Tropical Storm Agaton in the province of Leyte, Philippines, in April 2022. Sentinel-2 Level 2A images captured immediately after the event, as well as one month, three months, six months, and one year later, were acquired and clipped to the municipality of Abuyog in Leyte. From each image, the Red, Green, Blue, and Near-Infrared (NIR) bands, along with the computed Normalized Difference Vegetation Index (NDVI), were extracted and stacked with the elevation and slope values derived from an Interferometric Synthetic Aperture Radar (IFSAR) Digital Terrain Model (DTM) to create multiband inputs for analysis. A U-Net model, trained on labeled landslide polygons validated by experts, was then utilized to detect the extent and progression of landslide-affected areas across sequential satellite images captured over time. The model demonstrated consistent segmentation performance across all dates, with F1-scores ranging from 0.684 to 0.821. The results show subtle spatial progression of landslides in certain areas between images taken immediately after the typhoon and those captured one month later, likely due to factors such as prolonged rainfall and terrain instability. In contrast, early signs of vegetation recovery become apparent in some regions between six months and one year after the event.
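The input-stacking step described above (spectral bands, NDVI, and terrain layers combined into one multiband array) can be sketched in a few lines of NumPy; the array sizes and value ranges are illustrative:

```python
import numpy as np

# Toy 4-band Sentinel-2-style patch (Red, Green, Blue, NIR), reflectance 0-1.
rng = np.random.default_rng(1)
red, green, blue, nir = rng.uniform(0.01, 0.5, (4, 64, 64))

# NDVI = (NIR - Red) / (NIR + Red), the vegetation index used in the abstract.
ndvi = (nir - red) / (nir + red + 1e-9)

# Stand-ins for the IFSAR-DTM-derived terrain layers.
elevation = rng.uniform(0, 500, (64, 64))
slope = rng.uniform(0, 45, (64, 64))

# Stack everything into one multiband input for the U-Net, channels-first (C, H, W).
multiband = np.stack([red, green, blue, nir, ndvi, elevation, slope])
print(multiband.shape)
```

In practice the terrain layers would be resampled to the Sentinel-2 grid and all channels normalized before training.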
This study provides a starting point for further research on post-disaster recovery monitoring and the identification of areas at risk of secondary landslides, offering practical value to local government units in planning and decision-making.
Keywords: Landslide; Change Detection; Sentinel-2; Artificial Intelligence; U-Net
| Corresponding Author (Bernadette Anne Recto)
256 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-162 |
Evaluation of CNN-Based Regression Models for Automated SNR Estimation of High-Resolution Satellite Imagery Hongjun Youn(a), Jihyeon Lim(b), and Taejung Kim(b*)
a) Program in Smart City Engineering, Inha University
100 Inha-ro, Incheon 22212, Republic of Korea
b) Dept. of Geoinformatic Engineering, Inha University
100 Inha-ro, Incheon 22212, Republic of Korea
*tezid[at]inha.ac.kr
Abstract
With the growing use of satellite imagery, the need for quantitative image quality assessment has become more pronounced. Signal-to-Noise Ratio (SNR) quantifies the ratio of useful signal power to noise power in an image and serves as a key metric for assessing image quality. In particular, high-resolution satellite imagery can exhibit SNR variation due to factors such as atmospheric conditions and sensor degradation, so there is a strong need for automated evaluation of SNR per image. Traditional SNR calculation methods rely on statistical analysis or high-pass filters within regions with uniform Digital Number (DN) values, often combined with manual operations. These approaches are limited by the difficulty of applying them to images with complex textures or boundaries. Furthermore, manual evaluation lacks consistency and scalability. This study aims to analyze the applicability of a Convolutional Neural Network (CNN)-based regression approach for automatic quantitative estimation of SNR in satellite imagery. To this end, homogeneous regions were selected from high-resolution images and augmented with varying levels of artificial Gaussian noise to construct a training dataset. Regression models utilizing existing CNN architectures were then trained and evaluated. All CNN models were pretrained on ImageNet before fine-tuning. The performance of CNN regression models was compared across major architectures, including DenseNet-121, ResNet-50, and EfficientNet-B0. The experimental results showed that DenseNet-121 achieved high predictive accuracy with an RMSE of 7.87 and an R^2 of 0.82. In several homogeneous regions such as ocean and bare land, the model produced stable estimations, indicating that it captured underlying SNR characteristics beyond local intensity or contrast-based cues. Compared to existing statistics-based SNR estimation methods, the proposed regression model maintained precision for different noise levels and demonstrated its applicability in complex scenes.
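The dataset-construction step (homogeneous patches plus controlled Gaussian noise, with the target SNR serving as the regression label) can be sketched as follows; the patch size, DN value, and SNR levels are illustrative assumptions:

```python
import numpy as np

def add_noise_with_snr(patch, snr_db, rng):
    """Add Gaussian noise scaled so the patch has the requested SNR in dB."""
    signal_power = np.mean(patch.astype(np.float64) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return patch + rng.normal(0.0, np.sqrt(noise_power), patch.shape)

rng = np.random.default_rng(0)
# Homogeneous-region stand-in (e.g., ocean or bare land at a constant DN).
clean = np.full((128, 128), 100.0)

# Augment with several noise levels; the SNR value becomes the regression label.
dataset = [(add_noise_with_snr(clean, snr, rng), snr) for snr in (10, 20, 30, 40)]
for patch, label in dataset:
    measured = 10 * np.log10(np.mean(clean**2) / np.var(patch - clean))
    print(f"label {label} dB -> measured {measured:.1f} dB")
```

The CNN would then be trained to regress the label from the noisy patch alone, without access to the clean reference.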
Keywords: SNR; Convolutional Neural Network; Natural target-based assessment; Satellite image quality; Quality assessment
| Corresponding Author (Hongjun Youn)
257 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-163 |
Automated Road Marking Extraction from High-Resolution Aerial Imagery Using Deep Learning Techniques Tee-Ann Teo, Ting-Ni Chen
National Yang Ming Chiao Tung University, Taiwan
Abstract
With the rapid growth of intelligent transportation systems and autonomous driving, accurate segmentation of road markings is critical for vehicle localization, navigation, and control. Classical methods based on hand-crafted features or lightweight CNNs, although efficient, are vulnerable to shadows, surface wear, and illumination changes, limiting their robustness in real-world scenes. To address these issues, this study adopts YOLOv11-seg, a supervised instance-segmentation framework, for automated extraction of diverse road markings from high-resolution aerial imagery. We therefore create a dataset with ten road-marking categories and train the model end-to-end with data augmentation designed for thin, elongated targets. Model performance is evaluated using precision, recall, and F1-score for each class, with overall micro, macro, and weighted averages. The model achieves high precision overall (macro = 0.962, weighted = 0.967), with competitive F1-scores (macro = 0.907, weighted = 0.899). Classes with compact or well-bounded shapes (e.g., Bike Crossing ID, Stop Waiting Zone) exhibit the strongest F1-scores, while elongated or visually fragmented markings (e.g., Crosswalk, Painted Island) show lower recall, indicating missed instances under occlusion or heavy wear. These findings suggest that a properly trained YOLOv11-seg model offers a practical and accurate solution for large-scale road-marking mapping from aerial imagery. Future work will focus on class-balanced sampling, boundary-aware loss functions, multi-scale tiling at higher input resolutions, and morphology-guided post-processing to further improve recall for thin, discontinuous markings.
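The micro/macro/weighted averaging used in the evaluation can be reproduced with scikit-learn; the three-class toy labels below stand in for the study's ten marking categories:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Toy ground-truth and predicted classes for ten marking instances across
# three classes (stand-ins for the study's ten road-marking categories).
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 2])

results = {}
for avg in ("micro", "macro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=avg)
    results[avg] = (p, r, f1)
    print(f"{avg}: precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```

Micro averaging pools all instances (so micro-F1 equals overall accuracy in multiclass settings), macro averages per-class scores equally, and weighted averaging weights each class by its support.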
Keywords: Road markings, Deep Learning, Semantic Segmentation, Aerial Image
| Corresponding Author (Tee-Ann Teo)
258 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-175 |
Re-evaluating Urban Flood Causality: Integrating InSAR Time-Series and Multi-Sensor Satellite Observations for the 2024 Makassar Flood Agustan Agustan1,2*, Mukhsan Putra Hatta3, Ilham Alimuddin3, Takeo ITO1
1* Earthquake and Volcano Research Center, Nagoya University, JAPAN
2 National Research and Innovation Agency (BRIN) - Jakarta, INDONESIA
3 Universitas Hasanuddin, INDONESIA
Abstract
Urban flooding in coastal Southeast Asia is often attributed to land subsidence driven by excessive groundwater extraction and natural compaction. However, such assumptions require critical evaluation using robust geospatial evidence. This study investigates the causes of a major flood event that occurred between 8 and 20 December 2024 in Makassar, South Sulawesi, Indonesia. While local narratives linked the flood to accelerated land subsidence, our multi-sensor satellite analysis suggests a more complex hydrological context. We integrated time-series InSAR data processed using MintPy with Sentinel-1 radar composites and Sentinel-2 optical indices (NDWI, NDVI, moisture index, and SWIR) to evaluate the spatial dynamics of land deformation and surface moisture. The InSAR analysis shows that while some areas in southeastern Makassar exhibit long-term subsidence trends of up to -20 cm over six years, no significant acceleration or abrupt displacement was observed prior to or during the flood period. This undermines the hypothesis that sudden subsidence triggered the inundation. Conversely, Sentinel-1 pre- and post-event imagery reveals extensive backscatter anomalies in low-lying urban and peri-urban areas, indicating widespread surface water presence consistent with flood patterns. The Sentinel-2 NDWI and moisture index maps highlight pre-existing hydrologically vulnerable zones, especially near poorly connected irrigation and drainage networks. These findings point to systemic water mismanagement, rather than tectonic or anthropogenic ground failure, as the primary flood driver. Our results emphasize the importance of multi-sensor remote sensing approaches in disentangling overlapping geohazards in rapidly urbanizing deltaic cities. The study calls for a rethinking of flood attribution in policymaking and advocates for integrated spatial diagnostics in hydrological infrastructure planning.
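One of the Sentinel-2 indices used above, NDWI, is a simple band ratio; a minimal sketch on two toy reflectance values (the McFeeters green/NIR formulation is assumed here, as the abstract does not specify which variant was used):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); values > 0 suggest water."""
    return (green - nir) / (green + nir + 1e-9)

# Two toy Sentinel-2 reflectance pixels: open water reflects more green than
# NIR, while vegetated land does the opposite.
water = ndwi(green=0.30, nir=0.05)
land = ndwi(green=0.10, nir=0.40)
print(f"water NDWI = {water:.2f}, land NDWI = {land:.2f}")
```

Applied per pixel over the whole scene, thresholding this index yields the hydrologically vulnerable-zone maps referred to above.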
Keywords: flood, subsidence, Makassar, InSAR, time-series
| Corresponding Author (Agustan Agustan)
259 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-184 |
Model-based Boat Recognition for Urban River Navigation using Waterborne LiDAR and Scan Matching Kazuki Ohira(a*), Tetsu Yamaguchi(a), Nobuaki Kubo(b), Etsuro Shimizu(b), Masafumi Nakagawa(a)
a) Shibaura Institute of Technology, Japan
*ah20091[at]shibaura-it.ac.jp
b) Tokyo University of Marine Science and Technology, Japan
Abstract
In recent years, the development of autonomous boats utilizing communication and sensing technologies has been actively pursued worldwide. However, collisions remain frequent among small boats such as fishing boats. Compared to large ships, such as tankers and container ships, most small boats lack advanced navigational aids and instead rely on visual navigation by their crews. The prolonged hours required for manual operation, such as maneuvering and monitoring the surroundings, place a significant burden on small boat operators. In addition, the Tokyo Metropolitan Government has begun using rivers as commuter routes to alleviate traffic congestion caused by population concentration. However, river transportation presents technical challenges, including narrow channels and numerous obstacles. Therefore, obstacle avoidance functions are required for autonomous boats. Existing methods for boat collision avoidance include position sharing using GNSS and image-based object detection using deep learning techniques such as Faster-RCNN. However, GNSS-based position sharing is unavailable in non-GNSS positioning environments such as under bridges. In addition, deep learning-based image processing requires large amounts of pre-collected training data featuring boats from various orientations to achieve reliable detection. Therefore, we propose a model-based method for detecting boats from a moving boat with LiDAR and scan matching, to recognize surrounding moving boats and avoid collisions automatically. Moreover, we evaluated our methodology through experiments using waterborne LiDAR mounted on a boat in urban rivers.
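The scan-matching component rests on estimating a rigid transform between point sets; a minimal NumPy sketch of the closed-form SVD (Kabsch) alignment step used inside pipelines such as ICP, on synthetic 2D points (not the authors' implementation):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    via SVD: the closed-form step inside one ICP iteration."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: force det(R) = +1 so the result is a proper rotation.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Toy 2D "scan": random points standing in for LiDAR returns, rotated 30
# degrees and shifted, as if seen from a moved boat.
rng = np.random.default_rng(0)
scan = rng.uniform(-1, 1, (50, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = scan @ R_true.T + np.array([2.0, -1.0])

R_est, t_est = rigid_align(scan, moved)
print(np.allclose(R_est, R_true))
```

Full ICP alternates this alignment step with nearest-neighbour correspondence search; with known correspondences, as here, one step recovers the motion exactly.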
Keywords: SLAM, scan matching, LiDAR, autonomous boats, urban rivers
| Corresponding Author (Ohira Kazuki)
260 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-186 |
Simulation-based Assessment of Marker Placement for Point Cloud Acquisition using a Lunar Survey Rover Tomoki Sugihara(a*), Rikako Shigefuji(a), Masafumi Nakagawa(a), Masanori Takigawa(b), Keitaro Kitamura(b), Taizo Kobayashi(c)
a) Shibaura Institute of Technology, Japan
*ah20086[at]shibaura-it.ac.jp
b) Asia Air Survey Co., Ltd., Japan
c) Ritsumeikan University, Japan
Abstract
The Ministry of Land, Infrastructure, Transport and Tourism (MLIT) has been promoting a project to develop innovative unmanned construction technologies for use on the lunar surface. Unmanned surveying and remote construction technologies are essential for ground surveying in the initial phase of a lunar base construction project. However, the extreme temperature changes and space radiation in the lunar environment make conventional surveying using a total station difficult. Moreover, conventional Simultaneous Localization and Mapping using LiDAR (LiDAR-SLAM) is suitable for 3D measurement in a non-GNSS environment, but it is not suitable for lunar surfaces due to the regolith's poor geometric features. Therefore, we proposed a methodology that uses spherical markers as landmarks with LiDAR-SLAM to improve self-position estimation performance. However, the design of the marker placement has not yet been discussed. Therefore, we conducted an experiment on spherical marker arrangement for LiDAR-SLAM to evaluate its relative accuracy. In this study, we evaluated the validity of the marker placement and simulated the reduction in the number of markers by using data acquired in a lunar surface simulation field and a robot simulator that reconstructed the experimental field. We acquired point clouds of the lunar surface simulation field using LiDAR mounted on the prototype rover. In addition, we selected Webots as the robot simulator. Through our experiments, we applied a multivariate analysis to quantitative variables, such as the relative distance and angle between the spherical markers and the LiDAR. We also proposed a methodology to evaluate marker placement planning.
Keywords: LiDAR-SLAM, robot simulator, unmanned surveying
| Corresponding Author (TOMOKI SUGIHARA)
261 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-192 |
Automated Glacial Lake Mapping in the Himalayas: An Ensemble Multi-Sensor Approach with Random Forest and High-Resolution PlanetScope Imagery Bhawna Pathak , Ankit Singh, Dericks P. Shukla
Dexter Lab, School of Civil and Environmental Engineering, Indian Institute of Technology (IIT) Mandi, 175005, Himachal, India.
Abstract
Glacial lakes are critical indicators of climate change and present significant Glacial Lake Outburst Flood (GLOF) risks in high-mountain areas. Accurate and automated mapping and monitoring of these dynamic features is crucial but challenged by complex terrain, persistent cloud cover, and spectral ambiguities. This study introduces an automated method for detecting glacial lakes in Northwestern Himalaya. This method leverages multi-source remote sensing data and a robust Random Forest (RF) classifier.
Our approach introduces a classification method that integrates an ensemble of Sentinel-1 Synthetic Aperture Radar (SAR), Sentinel-2 Multi-spectral Instrument (MSI), SRTM Digital Elevation Model (DEM), and high-resolution 3-meter PlanetScope optical imagery. This data fusion offers exceptional detail necessary for accurately defining glacial lake boundaries.
The RF model, trained on an augmented dataset, tackles the problem of misclassifications involving streams and wet surfaces, and it demonstrates impressive results. The model achieved an overall accuracy of 94.44%, along with precision, recall, and F1-scores of 0.95, 0.97, and 0.96, respectively, and an AUC-ROC score of 0.983.
This method demonstrates clear advantages over existing approaches. Deep learning models like GLNet require large amounts of labeled training data, high computational resources, and specialized GPU infrastructure. On the other hand, our machine learning model (RF) offers comparable performance without such intensive requirements, making it more accessible and practical for broader glaciological applications. It effectively deals with common issues such as misclassifications of shadows, supraglacial melt ponds, and streams. Furthermore, the approach is temporally transferable and can be adapted for multi-temporal analysis of glacial lake dynamics. This offers valuable insights into long-term monitoring and integration into early warning systems and GLOF risk reduction frameworks. This scalable and interpretable workflow provides a practical alternative to deep learning models, supporting high-resolution glacial lake inventories.
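As a rough illustration of the RF classification stage, the sketch below trains a Random Forest on synthetic per-pixel features and reports AUC-ROC; the feature set and distributions are invented stand-ins for the fused SAR/optical/DEM stack, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-pixel features standing in for the fused stack:
# [SAR backscatter (dB), NDWI, elevation (m)]. Lakes tend to show low
# backscatter and high NDWI; elevation is uninformative noise here.
n = 2000
is_lake = rng.integers(0, 2, n).astype(bool)
sar = np.where(is_lake, rng.normal(-18, 2, n), rng.normal(-8, 3, n))
ndwi = np.where(is_lake, rng.normal(0.4, 0.15, n), rng.normal(-0.2, 0.15, n))
elev = rng.normal(4500, 300, n)
X = np.column_stack([sar, ndwi, elev])

X_tr, X_te, y_tr, y_te = train_test_split(X, is_lake, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC-ROC = {auc:.3f}")
```

In the actual workflow each pixel would carry many more channels (SAR polarizations, MSI bands, PlanetScope reflectances, DEM derivatives), but the train/predict/score structure is the same.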
Keywords: Glacial lakes, GLOF, automated detection, Random Forest.
| Corresponding Author (Bhawna Pathak)
262 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-204 |
A Study on Improving Water Body Detection Accuracy in CAS500-1 Satellite Imagery Using Deep Learning SeoJin Kong(a), Wonwoo Seo(a), SooAhm Rhee(a*)
(a) Image Eng. Research Center, 3DLabs Co. Ltd, Republic of Korea,
(*)ahmkun[at]3dlabs.co.kr
Abstract
Efficient management and continuous monitoring of water resources are essential for agriculture, urban planning, disaster response, and other sectors. Accordingly, the demand for automated water body detection techniques is steadily increasing. CAS500-1, a high-resolution satellite developed in Korea, provides Analysis Ready Data (ARD), including surface reflectance images and additional information such as water body and cloud masks. The water body mask is currently created through manual digitization, which is time-consuming, costly, and limited in reflecting temporal changes. This study aims to automate and improve the accuracy of water body detection in CAS500-1 satellite imagery through deep learning models. To this end, two deep learning models for semantic segmentation, U-Net and HRNet, were applied and their performance was compared, and the resulting water body masks were evaluated against existing labeled data. The results showed high performance: U-Net achieved an F1-score of 0.95 and IoU of 0.91, while HRNet achieved an F1-score of 0.92 and IoU of 0.86. Notably, the models were able to distinguish small objects such as ships and bridges from water bodies, even when such details were absent in the label data. This study overcomes the limitations of the existing manual method and enables automated detection of water bodies using high-resolution satellite imagery. It facilitates continuous and precise monitoring of surface water areas, and is expected to contribute meaningfully to decision-making processes related to water resource utilization.
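The F1 and IoU figures reported above are straightforward to compute from binary masks; a minimal sketch on a toy 4x4 example:

```python
import numpy as np

def iou_f1(pred, truth):
    """Binary-mask IoU and F1 (Dice), the metrics reported in the abstract."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

# Toy 4x4 water masks: the prediction covers 3 of 4 true water pixels
# and adds 1 false positive.
truth = np.zeros((4, 4), bool); truth[0, :] = True
pred = np.zeros((4, 4), bool); pred[0, :3] = True; pred[1, 0] = True

iou, f1 = iou_f1(pred, truth)
print(iou, f1)
```

Here tp = 3, fp = 1, fn = 1, giving IoU = 3/5 = 0.6 and F1 = 6/8 = 0.75; note F1 is always at least IoU for the same masks.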
Keywords: ARD, CAS500-1, Deep Learning, Semantic Segmentation, Water body detection
| Corresponding Author (Seo Jin Kong)
263 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-209 |
Enhancing Land Cover Classification Accuracy in Cloud-Prone Tropical Regions Using Majority Filtering and Google Earth Engine Yastika, P.E. 1*, Sudipa I.N.1, Gunantara I.M.O.2 and Karmadi K.A.3
1. Regional and Rural Planning, Universitas Mahasaraswati Denpasar, Denpasar, Indonesia
2. Environmental Sciences, Udayana University, Denpasar, Indonesia
3. Environmental Engineering, Universitas Mahasaraswati Denpasar, Denpasar, Indonesia
Abstract
As a region develops, its land use patterns become increasingly dynamic. To support sustainable development and minimize conflicts, accurate and timely land use data is essential, particularly in the form of regional-scale land cover maps. Satellite imagery is commonly used for this purpose due to its capability to efficiently cover large areas. However, in tropical regions, cloud cover often interferes with the quality of optical imagery. This study applies majority filtering to enhance land cover classification accuracy. A total of 831 Sentinel-2 images from 2019 to 2024, covering the Badung-Denpasar region in Bali, were processed using Google Earth Engine cloud computing. An initial classification using the Random Forest algorithm was conducted in a time-series framework. The application of a majority filter improved the overall classification accuracy to 85%, particularly in areas frequently affected by cloud-related distortions. Additionally, the filter helped to smooth class boundaries and reduce classification noise, resulting in more coherent and reliable mapping outputs. By addressing the limitations of optical imagery in cloudy regions, this research offers a simple yet effective improvement for land cover mapping that can be applied in various planning and environmental monitoring efforts.
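The majority-filter step can be implemented directly on a class-label raster; a pure-NumPy sketch with a 3x3 window (the window size is an assumption, as the abstract does not state it):

```python
import numpy as np

def majority_filter(labels, n_classes):
    """3x3 majority (mode) filter on a class-label raster: each pixel takes
    the most frequent class in its neighborhood, suppressing salt-and-pepper
    classification noise."""
    padded = np.pad(labels, 1, mode="edge")
    h, w = labels.shape
    counts = np.zeros((n_classes, h, w), dtype=int)
    for dy in range(3):
        for dx in range(3):
            window = padded[dy:dy + h, dx:dx + w]
            for c in range(n_classes):
                counts[c] += window == c   # tally each class per neighborhood
    return counts.argmax(axis=0)

# Toy map: a lone "urban" (1) pixel inside "forest" (0) gets smoothed away.
labels = np.zeros((5, 5), dtype=int)
labels[2, 2] = 1
print(majority_filter(labels, n_classes=2))
```

This is the same idea as the smoothing applied in the study's Google Earth Engine workflow; there it would run as a focal-mode operation over the classified image.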
Keywords: Google Earth Engine, Land Cover Classification, Majority Filtering, Random Forest, Sentinel-2
| Corresponding Author (Putu Edi Yastika)
264 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-227 |
Improving Remote Sensing Change Detection via Spatial Autocorrelation Regularization and Momentum Orthogonalization Rahmat Faisal
ESRI Indonesia
Abstract
Change detection in remote sensing is vital for monitoring environmental shifts, urban expansion, and disaster impacts. Accurately identifying spatial changes over time is critical for effective policy development, sustainable resource allocation, and responsive early warning systems. However, many existing deep learning models treat change detection as an independent pixel-wise classification problem, failing to account for the spatial correlations embedded in geospatial imagery. This often leads to outputs that are noisy, fragmented, and spatially inconsistent.
To overcome these challenges, we introduce a novel deep learning framework that explicitly incorporates spatial autocorrelation into the training process. Our approach enhances the loss function with a regularization term derived from Moran's I statistic and spatial neighborhood smoothness, guiding the model to preserve local spatial structures and produce more coherent prediction maps.
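The Moran's I statistic mentioned above can be computed directly on a 2D field; a pure-NumPy sketch of the standard global statistic with rook-adjacency weights (this illustrates the quantity being regularized, not the authors' loss-term code):

```python
import numpy as np

def morans_i(grid):
    """Global Moran's I with rook (4-neighbour) binary weights on a 2D field;
    values near +1 indicate strong positive spatial autocorrelation."""
    x = grid - grid.mean()
    num = 0.0
    W = 0
    # Sum w_ij * x_i * x_j over horizontal and vertical neighbour pairs;
    # each unordered pair is counted twice, matching symmetric weights.
    for a, b in ((x[:, :-1], x[:, 1:]), (x[:-1, :], x[1:, :])):
        num += 2 * np.sum(a * b)
        W += 2 * a.size
    return (grid.size / W) * num / np.sum(x**2)

rng = np.random.default_rng(0)
noise = rng.normal(size=(32, 32))                                   # spatially random
smooth = np.add.outer(np.arange(32), np.arange(32)).astype(float)   # smooth gradient
print(morans_i(noise), morans_i(smooth))
```

Random noise yields a value near 0 while a smooth gradient yields a value near 1, which is why penalizing low Moran's I on prediction maps pushes the network toward spatially coherent outputs.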
In addition, we employ the MuOn optimizer (short for Momentum Orthogonalized by Newton-Schulz), a cutting-edge optimization method that improves training dynamics by orthogonalizing the momentum vector with respect to the gradient direction. This process reduces redundant updates, enhances gradient diversity, and accelerates convergence, which is especially advantageous in high-dimensional remote sensing models.
By combining spatial autocorrelation-aware regularization with the MuOn optimizer, our framework delivers improved spatial coherence and classification accuracy. This makes it a robust, efficient, and interpretable solution for high-resolution remote sensing change detection.
Keywords: Change Detection, Spatial Autocorrelation, Momentum Orthogonalization
| Corresponding Author (Rahmat Faisal)
265 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-229 |
Identification of Geothermal Manifestations in Java Island Based on Satellite Data Images Using Random Forest Classification Rasta Faisal
Padjadjaran University
Abstract
Java Island has significant geothermal potential, but conventional exploration is often constrained by high costs and difficult site accessibility. This research aims to develop an efficient, automated workflow for mapping geothermal manifestations across Java Island. The proposed method integrates Landsat-9 satellite imagery, the Google Earth Engine (GEE) cloud computing platform, and the Random Forest classification algorithm. Using Land Surface Temperature (LST) and the Normalized Difference Vegetation Index (NDVI) as the main predictor parameters, the workflow is designed to identify the distribution of potentially prospective areas. This integrated approach is expected to serve as an effective and accurate tool to support preliminary geothermal exploration at the regional scale.
Keywords: Please Just Try to Submit This Sample Abstract
| Corresponding Author (Rasta Faisal)
266 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-235 |
Comparison of Histogram Matching Preprocessing Methods for Generating Natural GOCI-II Full Disk Images Kim, S., Song, S. and Rhee, S.*
3D Labs Co., Ltd. Incheon, Republic of Korea
*ahmkun[at]3dlabs.co.kr
Abstract
Full disk imagery is significant in that it enables observation of atmospheric and oceanic changes on a global scale. It is particularly important for time-series analysis, which supports various applications such as cloud tracking and ocean current monitoring through continuous image acquisition. Accordingly, satellites such as the GOES series (USA), Himawari series (Japan), and Meteosat series (EU) continuously provide full disk imagery and derived products. Korea also contributes with its GEO-KOMPSAT-2A and 2B satellites, which provide global coverage for atmospheric and oceanic monitoring. Similar to conventional full disk systems, the GOCI-II payload aboard the GK-2B satellite captures slot-by-slot images that must be mosaicked into a single full disk image. However, this sequential acquisition introduces time differences between slots, resulting in pixel value imbalances that can degrade analytical accuracy. In addition, brightness range inconsistencies between slots can lead to visually unnatural mosaicked images. In this paper, we aimed to generate visually natural full disk images by using histogram matching in the image matching process. In this process, we performed preprocessing procedures such as changing the matching order, removing cloud regions, and stretching the image, and compared the results. Experimental results showed that the most visually natural full disk images were produced by adjusting the histogram offsets of cloud-unfiltered input images to the reference image and performing matching in slot number order. These results indicate potential for use as baseline data in global monitoring systems and time-series pattern analysis. However, since histogram matching directly modifies pixel values, further validation is necessary to ensure the reliability of time-series analyses based on these processed images.
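Histogram matching between slots is essentially quantile mapping of one slot's pixel distribution onto a reference slot's; a pure-NumPy sketch on synthetic brightness distributions (the distributions are illustrative, not GOCI-II data):

```python
import numpy as np

def match_histogram(source, reference):
    """Quantile mapping: remap source pixel values so their cumulative
    distribution matches the reference slot's distribution."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference value at the same quantile.
    matched = np.interp(s_cdf, r_cdf, r_vals)[s_idx]
    return matched.reshape(source.shape)

rng = np.random.default_rng(0)
# Two adjacent "slots" imaged at different times: similar scene statistics
# but with a brightness offset between them.
reference = rng.normal(120, 15, (100, 100))
source = rng.normal(90, 15, (100, 100))

matched = match_histogram(source, reference)
print(source.mean(), matched.mean(), reference.mean())
```

As the abstract cautions, because this directly rewrites pixel values, radiometric products derived from matched slots need separate validation.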
Keywords: GOCI-II, Full disk mosaicking, Histogram matching, Ocean satellite, Image processing
| Corresponding Author (Seunghee Kim)
267 |
Topic C: Emerging Technologies in Remote Sensing |
ABS-237 |
Development of An Automated Satellite Image Collection and Processing System for Image Utilization Song, S.H. (a), Jeong, Y.J. (a), Kim, S.W. (a), Jeong, S.W. (a), Kim, T.J. (b*)
a) 3D Labs Co., Ltd
b) Department of Geoinformatic Engineering, Inha University
*tezid[at]inha.ac.kr
Abstract
As the value of and demand for satellite imagery continue to grow, non-traditional and non-expert users, such as local government officials and practitioners in existing industries, are becoming more interested in utilizing satellite images. However, recent interviews with these users revealed that several improvements were required for wider utilization. They felt that the use of satellite images required professional knowledge of remote sensing and image processing, and that searching for and collecting images over their region of interest (ROI) were specialized and difficult tasks.
We consider the automation of image collection to be a key component for mitigating these difficulties and enabling rapid and user-friendly analysis. For these reasons, we have developed a system that handles the entire workflow, from image collection to processing, for various satellite imagery. In this paper, we describe the features developed for the automated collection of satellite imagery. The automated collection process allows users to define search criteria such as an ROI, date, image type, and cloud coverage. It then provides a list of matching satellite images. Users can review metadata and spatial coverage for each image and select the desired data. The selected data are downloaded automatically, a process that may take several hours. After the download completes, the data are saved to a database and passed to follow-up processing steps. We implemented automated collection for satellite images provided through pre-defined API (Application Programming Interface) protocols, such as Landsat and Sentinel images, or through official Web pages, such as CAS500-1 images. This functionality is expected to support a wide range of applications, including time-series analysis, AI-based object detection, and change monitoring, and to contribute to the establishment of a reliable and efficient image processing framework.
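The search-and-select workflow described above can be sketched as a simple catalog filter; all class names, fields, and scene IDs below are hypothetical illustrations, not the system's actual API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SceneRecord:
    scene_id: str
    sensor: str
    acquired: date
    cloud_cover: float   # percent
    bbox: tuple          # (min_lon, min_lat, max_lon, max_lat)

def intersects(bbox, roi):
    """True when two (min_lon, min_lat, max_lon, max_lat) boxes overlap."""
    return not (bbox[2] < roi[0] or roi[2] < bbox[0]
                or bbox[3] < roi[1] or roi[3] < bbox[1])

def search(catalog, roi, start, end, sensor, max_cloud):
    """Filter a scene catalog by sensor, date window, cloud cover, and ROI,
    mirroring the search criteria described in the abstract."""
    return [s for s in catalog
            if s.sensor == sensor
            and start <= s.acquired <= end
            and s.cloud_cover <= max_cloud
            and intersects(s.bbox, roi)]

catalog = [
    SceneRecord("S2A_0001", "Sentinel-2", date(2024, 5, 1), 12.0, (126.0, 37.0, 127.0, 38.0)),
    SceneRecord("S2A_0002", "Sentinel-2", date(2024, 5, 9), 65.0, (126.0, 37.0, 127.0, 38.0)),
    SceneRecord("LC09_0003", "Landsat-9", date(2024, 5, 3), 5.0, (126.0, 37.0, 127.0, 38.0)),
]
hits = search(catalog, roi=(126.5, 37.2, 126.9, 37.8),
              start=date(2024, 4, 1), end=date(2024, 6, 1),
              sensor="Sentinel-2", max_cloud=20.0)
print([s.scene_id for s in hits])
```

In the deployed system the catalog rows would come from provider APIs (or scraped listing pages for CAS500-1), with the download and database-ingest steps following the same selection.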
Keywords: Satellite Image, Automated Collection, Image Utilization
| Corresponding Author (Seunghwan Song)
268 |
Topic D: Geospatial Data Integration |
ABS-259 |
Development of a Horizontally Rotating 3D LiDAR System for Control Point Surveying in Lunar Environments using a Rover Harada Amane(a), Tomoki Sugihara(a), Rikako Shigefuji(a), Masanori Takigawa(b), Keitaro Kitamura(b),Takahiro Hiramatu(b), Tomowo Ohga(b), Hisatoshi Sano(b), Taizo Kobayashi(c) ,Masafumi Nakagawa(a)
a)Shibaura Institute of Technology
b)Asia Air Survey Co., Ltd.
c) Ritsumeikan University
Abstract
Robust surveying technologies are necessary for lunar development due to extreme environments, high transportation costs, and the need for accurate construction. Conventional image measurements struggle with regolith-covered surfaces, and the lack of GNSS complicates LiDAR-based SLAM. Therefore, we have developed a LiDAR-SfM/MVS method that combines LiDAR-SLAM with spherical markers for control point surveying and SfM/MVS for dense point cloud generation. This method uses a turntable-mounted 3D-LiDAR and multidirectional cameras. Previous experiments using a horizontal scanning 3D-LiDAR mounted on a rover showed that the narrow vertical angle range of laser scanning prevents the rover from climbing inclined surfaces when no vertical objects exist in the measured environment. To address this issue, we developed a terrestrial-LiDAR-like 3D measurement system consisting of a vertical scanning 3D-LiDAR mounted on a horizontal turntable. We also developed a self-calibration method based on point cloud matching with the iterative closest point algorithm to estimate the internal orientation parameters, consisting of the line-of-sight offset angle, the rolling distortion angle, and the rotation axis offset distance. Experiments conducted on a lunar-simulated terrain confirmed that the proposed method achieved a spherical marker fitting accuracy of 0.01 m or less for center position estimation. Registration errors against total station survey results averaged 0.0121 m, and sequential LiDAR point cloud registration showed an average residual of 0.0271 m, which is consistent with the LiDAR's 0.03 m ranging accuracy. The LiDAR-SfM/MVS system generated dense point clouds with a point spacing of 0.01 m or less, as well as an accuracy of 0.03 m (RMSE) in registration between LiDAR and SfM/MVS point clouds. These results confirm that the proposed method meets the required measurement accuracy of 0.10 m for unmanned construction. Future work will focus on hardening the sensor system for the lunar environment.
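The spherical-marker center estimation underlying the 0.01 m fitting accuracy can be posed as a linear least-squares problem; a minimal NumPy sketch on synthetic LiDAR returns (the marker position, radius, and noise level are illustrative assumptions):

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: |p - c|^2 = r^2 rearranges to
    2*p.c + (r^2 - |c|^2) = |p|^2, which is linear in [c, r^2 - |c|^2]."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Toy LiDAR returns on a 0.1 m spherical marker centred at (2, 1, 0.5),
# with 1 mm Gaussian ranging noise.
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
points = np.array([2.0, 1.0, 0.5]) + 0.1 * d + rng.normal(0, 0.001, (200, 3))

center, radius = fit_sphere(points)
print(center, radius)
```

In practice a real scan only sees the marker hemisphere facing the LiDAR, which degrades conditioning; the linear fit above is often used as the initialization for a robust non-linear refinement.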
Keywords: lunar surveying, LiDAR, SLAM, SfM/MVS
| Corresponding Author (Amane Harada)
269 |
Topic D: Geospatial Data Integration |
ABS-8 |
Mapping Potential Saltern Areas Using Remote Sensing and Geospatial Analysis of Physical and Climatic Parameters for Sustainable and Innovative Salt Production
Rodel T. Utrera(1*), Nathaniel R. Alibuyog(2), Julius Jonar L. Butay(3), Joemel G. Agreda(4), Nadine Sharinette R. Bravo(1), Lord Ian R. Galano(1)
1) Research Directorate, Mariano Marcos State University
2) College of Engineering, Mariano Marcos State University
3) Planning Directorate, Mariano Marcos State University
4) College of Computing and Information Sciences, Mariano Marcos State University
*rtutrera[at]mmsu.edu.ph
Abstract
This study maps potential saltern sites in Region 1, Philippines, by integrating remote sensing and geospatial technologies to identify optimal locations for sustainable and innovative salt production. A Geographic Information System (GIS)-based approach was employed, utilizing remotely sensed data, such as land cover, digital elevation models (DEMs), and satellite-derived environmental variables, to assess site suitability based on physical and environmental parameters. Key physical factors analyzed included land cover classification, topography, slope, soil type, and proximity to coastal and inland water sources. Climatic parameters such as rainfall, temperature, wind speed, and relative humidity were also examined to determine ideal conditions for natural salt crystallization and evaporation.
Remote sensing significantly enhanced the spatial analysis process, allowing for the efficient development of high-resolution suitability maps. For validation, the study employed both field observations and high-resolution drone imagery to verify the accuracy of the mapped outputs. Drone-based aerial surveys provided up-to-date, site-specific visual data that supported ground truthing and improved the spatial resolution and reliability of the validation process.
The results identified priority areas that meet the environmental and logistical requirements for saltern development. This integrated methodology offers a scalable framework for local governments and stakeholders, facilitating data-driven decision-making in support of artisanal salt production and regional development. The study underscores the value of combining satellite remote sensing, GIS, and drone-based validation to enhance spatial planning and contribute to the sustainable growth of the salt industry.
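The multi-criteria suitability mapping described above can be sketched as a weighted raster overlay. The criterion rasters, reclassification scale, and weights below are hypothetical placeholders; the abstract does not publish the study's actual factor weights or thresholds.

```python
import numpy as np

# Hypothetical 2x2 criterion rasters, reclassified to a 1 (unsuitable)
# to 5 (highly suitable) scale, standing in for slope, rainfall, and
# distance-to-coast layers derived from DEMs and satellite data.
slope      = np.array([[5, 4], [2, 1]], dtype=float)
rainfall   = np.array([[4, 4], [3, 2]], dtype=float)
coast_dist = np.array([[5, 3], [4, 1]], dtype=float)

# Hypothetical weights (summing to 1); the study's weighting is not given.
weights = {"slope": 0.4, "rainfall": 0.3, "coast": 0.3}

# Weighted overlay: cell-wise weighted sum of the reclassified criteria.
suitability = (weights["slope"] * slope
               + weights["rainfall"] * rainfall
               + weights["coast"] * coast_dist)

# Threshold into a binary priority-area mask (cutoff is illustrative).
priority = suitability >= 4.0
```

In a real workflow each raster would be read from georeferenced data (e.g. with rasterio) and resampled to a common grid before overlay; the arithmetic is unchanged.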
Keywords: Remote sensing, Geospatial analysis, Saltern site suitability, Digital elevation model, Sustainable salt production, Region 1 Philippines, GIS, Climatological parameters
Corresponding Author (Rodel Tolosa Utrera)
|
270 |
Topic D: Geospatial Data Integration |
ABS-13 |
Predicting Landslide Susceptibility by Using Logistic Regression and Random Forest at Different Spatial Resolution in Taiwan
Shyue-Cherng Liaw(1), Uen-Hao Wang(2) and Wan-Jiun Chen(3)
1 Professor, Department of Geography, National Taiwan Normal University, Taiwan
2 Assistant Researcher, Forest Management Division, Taiwan Forest Research Institute, Taiwan
3 Professor, Institute of Natural Resource and Environmental Management, National Taipei University, Taiwan
Abstract
This study aims to compare the effectiveness of logistic regression and random forest methods in predicting landslide susceptibility, using four spatial resolutions for analysis. The research site is located in the Lioukuei Experimental Forest, Kaohsiung City. Landslide susceptibility models were developed and analyzed based on land cover maps from August 2009 (post-Typhoon Morakot) and January 2024. The candidate predictors included variables identified in previous studies, while this research uniquely incorporates canopy structural indices derived from airborne LiDAR, resulting in a total of 14 variables used for model construction. Results indicate that the random forest model outperforms the logistic regression model across all spatial scales, with the 10 m model achieving optimal performance. The 10 m model validation shows an AUC of 0.929 and an accuracy of 85.74%, demonstrating excellent predictive discrimination. Furthermore, in the 10 m random forest model, four canopy structure indicators (Canopy Cover Ratio, Mean Top-of-the-Canopy, Variance of Canopy Height, and Canopy Volume Ratio) rank as the top variables, confirming their significant contribution to landslide susceptibility prediction by effectively integrating forest structure characteristics. This study provides valuable insights into spatial scale selection and LiDAR data application for landslide susceptibility modeling and offers a scientific basis for forest management and disaster prevention strategies.
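The model comparison described above (random forest vs. logistic regression, scored by AUC, with feature importances ranking the canopy indices) can be sketched as follows. This is a generic scikit-learn sketch on synthetic data standing in for the 14 terrain and canopy predictors, not the study's actual data or hyperparameters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 14 predictors, binary landslide / no-landslide label.
X, y = make_classification(n_samples=2000, n_features=14,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Compare discrimination by AUC, as the study does.
auc_rf = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])

# Rank predictors by impurity-based importance; in the study, the four
# canopy structure indices rank at the top.
top4 = np.argsort(rf.feature_importances_)[::-1][:4]
```

The same pattern would be repeated per spatial resolution (e.g. the 10 m grid) to compare scales.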
Keywords: machine learning, airborne LiDAR, canopy structure indices, Lioukuei Experimental Forest.
Corresponding Author (Shyue-Cherng Liaw)
|