
Search results for: lidar

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="lidar"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 107</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: lidar</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">107</span> Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sa%C5%A1o%20Pe%C4%8Dnik">Sašo Pečnik</a>, <a href="https://publications.waset.org/abstracts/search?q=Borut%20%C5%BDalik"> Borut Žalik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a real-time visualization technique and filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized in layers by the classification attribute saved within LiDAR data sets. We explain the used data structure and data management, which enables real-time presentation of layered LiDAR data. Real-time visualization is achieved with LOD optimization based on the distance from the observer without loss of quality. The filtering process is done in two steps and is entirely executed on the GPU and implemented using programmable shaders. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=filtering" title="filtering">filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=graphics" title=" graphics"> graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=level-of-details" title=" level-of-details"> level-of-details</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20visualization" title=" real-time visualization"> real-time visualization</a> </p> <a href="https://publications.waset.org/abstracts/16857/real-time-visualization-using-gpu-accelerated-filtering-of-lidar-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">308</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">106</span> Obstacle Classification Method Based on 2D LIDAR Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Moohyun%20Lee">Moohyun Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Soojung%20Hur"> Soojung Hur</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongwan%20Park"> Yongwan Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper is proposed a method uses only LIDAR system to classification an obstacle and determine its type by establishing database for classifying obstacles based on LIDAR. The existing LIDAR system, in determining the recognition of obstruction in an autonomous vehicle, has an advantage in terms of accuracy and shorter recognition time. However, it was difficult to determine the type of obstacle and therefore accurate path planning based on the type of obstacle was not possible. In order to overcome this problem, a method of classifying obstacle type based on existing LIDAR and using the width of obstacle materials was proposed. However, width measurement was not sufficient to improve accuracy. In this research, the width data was used to do the first classification; database for LIDAR intensity data by four major obstacle materials on the road were created; comparison is made to the LIDAR intensity data of actual obstacle materials; and determine the obstacle type by finding the one with highest similarity values. An experiment using an actual autonomous vehicle under real environment shows that data declined in quality in comparison to 3D LIDAR and it was possible to classify obstacle materials using 2D LIDAR. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=obstacle" title="obstacle">obstacle</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=database" title=" database"> database</a>, <a href="https://publications.waset.org/abstracts/search?q=LIDAR" title=" LIDAR"> LIDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity" title=" intensity"> intensity</a> </p> <a href="https://publications.waset.org/abstracts/11838/obstacle-classification-method-based-on-2d-lidar-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11838.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">105</span> Optical Parametric Oscillators Lidar Sounding of Trace Atmospheric Gases in the 3-4 µm Spectral Range </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Olga%20V.%20Kharchenko">Olga V. Kharchenko</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Applicability of a KTA crystal-based laser system with optical parametric oscillators (OPO) generation to lidar sounding of the atmosphere in the spectral range 3&ndash;4 &micro;m is studied in this work. A technique based on differential absorption lidar (DIAL) method and differential optical absorption spectroscopy (DOAS) is developed for lidar sounding of trace atmospheric gases (TAG). The DIAL-DOAS technique is tested to estimate its efficiency for lidar sounding of atmospheric trace gases. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=atmosphere" title="atmosphere">atmosphere</a>, <a href="https://publications.waset.org/abstracts/search?q=lidar%20sounding" title=" lidar sounding"> lidar sounding</a>, <a href="https://publications.waset.org/abstracts/search?q=DIAL" title=" DIAL"> DIAL</a>, <a href="https://publications.waset.org/abstracts/search?q=DOAS" title=" DOAS"> DOAS</a>, <a href="https://publications.waset.org/abstracts/search?q=trace%20gases" title=" trace gases"> trace gases</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20crystal" title=" nonlinear crystal"> nonlinear crystal</a> </p> <a href="https://publications.waset.org/abstracts/46707/optical-parametric-oscillators-lidar-sounding-of-trace-atmospheric-gases-in-the-3-4-m-spectral-range" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46707.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">104</span> Advancing Horizons: Standardized Future Trends in LiDAR and Remote Sensing Technologies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Spoorthi%20Sripad">Spoorthi Sripad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rapid advancements in LiDAR (Light Detection and Ranging) technology, coupled with the synergy of remote sensing, have revolutionized Earth observation methodologies. This paper delves into the transformative impact of integrated LiDAR and remote sensing systems. Focusing on miniaturization, cost reduction, and improved resolution, the study explores the evolving landscape of terrestrial and aquatic environmental monitoring. The integration of multi-wavelength and dual-mode LiDAR systems, alongside collaborative efforts with other remote sensing technologies, presents a comprehensive approach. The paper highlights the pivotal role of LiDAR in environmental assessment, urban planning, and infrastructure development. As the amalgamation of LiDAR and remote sensing reshapes Earth observation, this research anticipates a paradigm shift in our understanding of dynamic planetary processes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title="LiDAR">LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=earth%20observation" title=" earth observation"> earth observation</a>, <a href="https://publications.waset.org/abstracts/search?q=advancements" title=" advancements"> advancements</a>, <a href="https://publications.waset.org/abstracts/search?q=integration" title=" integration"> integration</a>, <a href="https://publications.waset.org/abstracts/search?q=environmental%20monitoring" title=" environmental monitoring"> environmental monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-wavelength" title=" multi-wavelength"> multi-wavelength</a>, <a href="https://publications.waset.org/abstracts/search?q=dual-mode" title=" dual-mode"> dual-mode</a>, <a href="https://publications.waset.org/abstracts/search?q=technology" title=" technology"> technology</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20planning" title=" urban planning"> urban planning</a>, <a href="https://publications.waset.org/abstracts/search?q=infrastructure" title=" infrastructure"> infrastructure</a>, <a href="https://publications.waset.org/abstracts/search?q=resolution" title=" resolution"> resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=miniaturization" title=" miniaturization"> miniaturization</a> </p> <a href="https://publications.waset.org/abstracts/179167/advancing-horizons-standardized-future-trends-in-lidar-and-remote-sensing-technologies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">103</span> Application of Remote Sensing Technique on the Monitoring of Mine Eco-Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haidong%20Li">Haidong Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Weishou%20Shen"> Weishou Shen</a>, <a href="https://publications.waset.org/abstracts/search?q=Guoping%20Lv"> Guoping Lv</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Wang"> Tao Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aiming to overcome the limitation of the application of traditional remote sensing (RS) technique in the mine eco-environmental monitoring, in this paper, we first classified the eco-environmental damages caused by mining activities and then introduced the principle, classification and characteristics of the Light Detection and Ranging (LiDAR) technique. The potentiality of LiDAR technique in the mine eco-environmental monitoring was analyzed, particularly in extracting vertical structure parameters of vegetation, through comparing the feasibility and applicability of traditional RS method and LiDAR technique in monitoring different types of indicators. 
The application of LiDAR to extracting typical mine indicators, such as land destruction in mining areas, damage to ecological integrity, and natural soil erosion, is also examined. The results show that LiDAR can monitor most mine eco-environmental indicators with higher accuracy than traditional RS techniques; more precisely, the applicability of LiDAR to each indicator depends on the accuracy requirements of the monitoring task. For large mines, LiDAR three-dimensional point cloud data can serve as a complementary data source to optical RS, and airborne/satellite LiDAR can meet the demand for extracting vertical structure parameters of vegetation over large areas.
Keywords: LiDAR, mine, ecological damage, monitoring, traditional remote sensing technique
PDF: https://publications.waset.org/abstracts/65821.pdf (Downloads: 397)

102. Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images
Authors: Edgardo V. Gubatanga Jr., Mark Joshua Salvacion
Abstract: Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may lead to missing aerial photographs, leaving areas with terrain information but no imagery. Intensity images can be derived from LiDAR data, but they are only grayscale images. A deep learning model is developed to create a complex function, in the form of a deep neural network, relating the pixel values of LiDAR-derived intensity images and true-color images. This function can then be used to predict the true-color images of an area from its intensity images. The predicted true-color images do not necessarily need to be accurate compared to the real world; they are only intended to look realistic so that they can be used as base maps.
Keywords: aerial LiDAR, colorization, deep learning, intensity images
PDF: https://publications.waset.org/abstracts/94116.pdf (Downloads: 166)
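The abstract does not specify an architecture, so the following is only a minimal PyTorch sketch of the idea: a small fully convolutional network trained to map 1-channel intensity tiles to 3-channel RGB tiles from co-registered aerial photos. Layer sizes, the L1 loss, and tile shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class IntensityColorizer(nn.Module):
    """Minimal fully convolutional net mapping a 1-channel intensity image
    to a 3-channel RGB prediction (a stand-in for the paper's deep model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1), nn.Sigmoid(),   # RGB in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

model = IntensityColorizer()
intensity = torch.rand(1, 1, 128, 128)           # LiDAR-derived intensity tile
rgb_true = torch.rand(1, 3, 128, 128)            # co-registered aerial photo tile
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):                               # training loop stub
    loss = nn.functional.l1_loss(model(intensity), rgb_true)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```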
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20LiDAR" title="aerial LiDAR">aerial LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=colorization" title=" colorization"> colorization</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20images" title=" intensity images"> intensity images</a> </p> <a href="https://publications.waset.org/abstracts/94116/application-of-deep-learning-in-colorization-of-lidar-derived-intensity-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94116.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">101</span> DIAL Measurements of Vertical Distribution of Ozone at the Siberian Lidar Station in Tomsk</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Oleg%20A.%20Romanovskii">Oleg A. Romanovskii</a>, <a href="https://publications.waset.org/abstracts/search?q=Vladimir%20D.%20Burlakov"> Vladimir D. Burlakov</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergey%20I.%20Dolgii"> Sergey I. Dolgii</a>, <a href="https://publications.waset.org/abstracts/search?q=Olga%20V.%20Kharchenko"> Olga V. Kharchenko</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexey%20A.%20Nevzorov"> Alexey A. Nevzorov</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexey%20V.%20Nevzorov"> Alexey V. Nevzorov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents the results of DIAL measurements of the vertical ozone distribution. The ozone lidar operate as part of the measurement complex at Siberian Lidar Station (SLS) of V.E. Zuev Institute of Atmospheric Optics SB RAS, Tomsk (56.5&ordm;N; 85.0&ordm;E) and designed for study of the vertical ozone distribution in the upper troposphere&ndash;lower stratosphere. Most suitable wavelengths for measurements of ozone profiles are selected. We present an algorithm for retrieval of vertical distribution of ozone with temperature and aerosol correction during DIAL lidar sounding of the atmosphere. The temperature correction of ozone absorption coefficients is introduced in the software to reduce the retrieval errors. Results of lidar measurement at wavelengths of 299 and 341 nm agree with model estimates, which point to acceptable accuracy of ozone sounding in the 6&ndash;18 km altitude range. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lidar" title="lidar">lidar</a>, <a href="https://publications.waset.org/abstracts/search?q=ozone%20distribution" title=" ozone distribution"> ozone distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=atmosphere" title=" atmosphere"> atmosphere</a>, <a href="https://publications.waset.org/abstracts/search?q=DIAL" title=" DIAL"> DIAL</a> </p> <a href="https://publications.waset.org/abstracts/46524/dial-measurements-of-vertical-distribution-of-ozone-at-the-siberian-lidar-station-in-tomsk" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46524.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">100</span> Challenges and Opportunities: One Stop Processing for the Automation of Indonesian Large-Scale Topographic Base Map Using Airborne LiDAR Data </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elyta%20Widyaningrum">Elyta Widyaningrum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The LiDAR data acquisition has been recognizable as one of the fastest solution to provide the basis data for topographic base mapping in Indonesia. The challenges to accelerate the provision of large-scale topographic base maps as a development plan basis gives the opportunity to implement the automated scheme in the map production process. The one stop processing will also contribute to accelerate the map provision especially to conform with the Indonesian fundamental spatial data catalog derived from ISO 19110 and geospatial database integration. Thus, the automated LiDAR classification, DTM generation and feature extraction will be conducted in one GIS-software environment to form all layers of topographic base maps. The quality of automated topographic base map will be assessed and analyzed based on its completeness, correctness, contiguity, consistency and possible customization. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automation" title="automation">automation</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS%20environment" title=" GIS environment"> GIS environment</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR%20processing" title=" LiDAR processing"> LiDAR processing</a>, <a href="https://publications.waset.org/abstracts/search?q=map%20quality" title=" map quality"> map quality</a> </p> <a href="https://publications.waset.org/abstracts/60469/challenges-and-opportunities-one-stop-processing-for-the-automation-of-indonesian-large-scale-topographic-base-map-using-airborne-lidar-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60469.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">99</span> 3D Building Model Utilizing Airborne LiDAR Dataset and Terrestrial Photographic Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Jasmee">J. Jasmee</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Roslina"> I. Roslina</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Mohammed%20Yaziz%20%26%20A.H%20Juazer%20Rizal"> A. Mohammed Yaziz &amp; A.H Juazer Rizal </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The need of an effective building information collection method is vital to support a diversity of land development activities. At present, advances in remote sensing such as airborne LiDAR (Light Detection and Ranging) is an established technology for building information collection, location, and elevation of the reflecting laser points towards the construction of 3D building models. In this study, LiDAR datasets and terrestrial photographic images of buildings towards the construction of 3D building models is explored. It is found that, the quantitative accuracy of the constructed 3D building model, namely in the horizontal and vertical components were ± 0.31m (RMSEx,y) and ± 0.145m (RMSEz) respectively. The accuracies were computed based on sixty nine (69) horizontal and twenty (20) vertical surveyed points. As for the qualitative assessment, it is shown that the appearance of the 3D building model is adequate to support the requirements of LOD3 presentation based on the OGC (Open Geospatial Consortium) standard CityGML. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LiDAR%20datasets" title="LiDAR datasets">LiDAR datasets</a>, <a href="https://publications.waset.org/abstracts/search?q=DSM" title=" DSM"> DSM</a>, <a href="https://publications.waset.org/abstracts/search?q=DTM" title=" DTM"> DTM</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20building%20models" title=" 3D building models"> 3D building models</a> </p> <a href="https://publications.waset.org/abstracts/13620/3d-building-model-utilizing-airborne-lidar-dataset-and-terrestrial-photographic-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">98</span> Submarine Topography and Beach Survey of Gang-Neung Port in South Korea, Using Multi-Beam Echo Sounder and Shipborne Mobile Light Detection and Ranging System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Won%20Hyuck%20Kim">Won Hyuck Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang%20Hwan%20Kim"> Chang Hwan Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyun%20Wook%20Kim"> Hyun Wook Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Myoung%20Hoon%20Lee"> Myoung Hoon Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Chan%20Hong%20Park"> Chan Hong Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyeon%20Yeong%20Park"> Hyeon Yeong Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We conducted submarine topography & beach survey from December 2015 and January 2016 using multi-beam echo sounder EM3001(Kongsberg corporation) & Shipborne Mobile LiDAR System. Our survey area were the Anmok beach in Gangneung, South Korea. We made Shipborne Mobile LiDAR System for these survey. Shipborne Mobile LiDAR System includes LiDAR (RIEGL LMS-420i), IMU ((Inertial Measurement Unit, MAGUS Inertial+) and RTKGNSS (Real Time Kinematic Global Navigation Satellite System, LEIAC GS 15 GS25) for beach's measurement, LiDAR's motion compensation & precise position. Shipborne Mobile LiDAR System scans beach on the movable vessel using the laser. We mounted Shipborne Mobile LiDAR System on the top of the vessel. Before beach survey, we conducted eight circles IMU calibration survey for stabilizing heading of IMU. This exploration should be as close as possible to the beach. But our vessel could not come closer to the beach because of latency objects in the water. At the same time, we conduct submarine topography survey using multi-beam echo sounder EM3001. A multi-beam echo sounder is a device observing and recording the submarine topography using sound wave. We mounted multi-beam echo sounder on left side of the vessel. We were equipped with a motion sensor, DGNSS (Differential Global Navigation Satellite System), and SV (Sound velocity) sensor for the vessel's motion compensation, vessel's position, and the velocity of sound of seawater. Shipborne Mobile LiDAR System was able to reduce the consuming time of beach survey rather than previous conventional methods of beach survey. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anmok" title="Anmok">Anmok</a>, <a href="https://publications.waset.org/abstracts/search?q=beach%20survey" title=" beach survey"> beach survey</a>, <a href="https://publications.waset.org/abstracts/search?q=Shipborne%20Mobile%20LiDAR%20System" title=" Shipborne Mobile LiDAR System"> Shipborne Mobile LiDAR System</a>, <a href="https://publications.waset.org/abstracts/search?q=submarine%20topography" title=" submarine topography"> submarine topography</a> </p> <a href="https://publications.waset.org/abstracts/65092/submarine-topography-and-beach-survey-of-gang-neung-port-in-south-korea-using-multi-beam-echo-sounder-and-shipborne-mobile-light-detection-and-ranging-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65092.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">429</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">97</span> LiDAR Based Real Time Multiple Vehicle Detection and Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhongzhen%20Luo">Zhongzhen Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeid%20Habibi"> Saeid Habibi</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20v.%20Mohrenschildt"> Martin v. Mohrenschildt</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Self-driving vehicle require a high level of situational awareness in order to maneuver safely when driving in real world condition. This paper presents a LiDAR based real time perception system that is able to process sensor raw data for multiple target detection and tracking in dynamic environment. The proposed algorithm is nonparametric and deterministic that is no assumptions and priori knowledge are needed from the input data and no initializations are required. Additionally, the proposed method is working on the three-dimensional data directly generated by LiDAR while not scarifying the rich information contained in the domain of 3D. Moreover, a fast and efficient for real time clustering algorithm is applied based on a radially bounded nearest neighbor (RBNN). Hungarian algorithm procedure and adaptive Kalman filtering are used for data association and tracking algorithm. The proposed algorithm is able to run in real time with average run time of 70ms per frame. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lidar" title="lidar">lidar</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/43729/lidar-based-real-time-multiple-vehicle-detection-and-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43729.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">96</span> Cracks Detection and Measurement Using VLP-16 LiDAR and Intel Depth Camera D435 in Real-Time</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xinwen%20Zhu">Xinwen Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xingguang%20Li"> Xingguang Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Sun%20Yi"> Sun Yi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Crack is one of the most common damages in buildings, bridges, roads and so on, which may pose safety hazards. However, cracks frequently happen in structures of various materials. Traditional methods of manual detection and measurement, which are known as subjective, time-consuming, and labor-intensive, are gradually unable to meet the needs of modern development. In addition, crack detection and measurement need be safe considering space limitations and danger. Intelligent crack detection has become necessary research. In this paper, an efficient method for crack detection and quantification using a 3D sensor, LiDAR, and depth camera is proposed. This method works even in a dark environment, which is usual in real-world applications. The LiDAR rapidly spins to scan the surrounding environment and discover cracks through lasers thousands of times per second, providing a rich, 3D point cloud in real-time. The LiDAR provides quite accurate depth information. The precision of the distance of each point can be determined within around  ±3 cm accuracy, and not only it is good for getting a precise distance, but it also allows us to see far of over 100m going with the top range models. But the accuracy is still large for some high precision structures of material. To make the depth of crack is much more accurate, the depth camera is in need. The cracks are scanned by the depth camera at the same time. Finally, all data from LiDAR and Depth cameras are analyzed, and the size of the cracks can be quantified successfully. The comparison shows that the minimum and mean absolute percentage error between measured and calculated width are about 2.22% and 6.27%, respectively. The experiments and results are presented in this paper. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title="LiDAR">LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20camera" title=" depth camera"> depth camera</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=detection%20and%20%20measurement" title=" detection and measurement "> detection and measurement </a> </p> <a href="https://publications.waset.org/abstracts/127081/cracks-detection-and-measurement-using-vlp-16-lidar-and-intel-depth-camera-d435-in-real-time" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127081.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">224</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">95</span> Satellite LiDAR-Based Digital Terrain Model Correction using Gaussian Process Regression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keisuke%20Takahata">Keisuke Takahata</a>, <a href="https://publications.waset.org/abstracts/search?q=Hiroshi%20Suetsugu"> Hiroshi Suetsugu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forest height is an important parameter for forest biomass estimation, and precise elevation data is essential for accurate forest height estimation. There are several globally or nationally available digital elevation models (DEMs) like SRTM and ASTER. However, its accuracy is reported to be low particularly in mountainous areas where there are closed canopy or steep slope. Recently, space-borne LiDAR, such as the Global Ecosystem Dynamics Investigation (GEDI), have started to provide sparse but accurate ground elevation and canopy height estimates. Several studies have reported the high degree of accuracy in their elevation products on their exact footprints, while it is not clear how this sparse information can be used for wider area. In this study, we developed a digital terrain model correction algorithm by spatially interpolating the difference between existing DEMs and GEDI elevation products by using Gaussian Process (GP) regression model. The result shows that our GP-based methodology can reduce the mean bias of the elevation data from 3.7m to 0.3m when we use airborne LiDAR-derived elevation information as ground truth. Our algorithm is also capable of quantifying the elevation data uncertainty, which is critical requirement for biomass inventory. Upcoming satellite-LiDAR missions, like MOLI (Multi-footprint Observation Lidar and Imager), are expected to contribute to the more accurate digital terrain model generation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20terrain%20model" title="digital terrain model">digital terrain model</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20LiDAR" title=" satellite LiDAR"> satellite LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=gaussian%20processes" title=" gaussian processes"> gaussian processes</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty%20quantification" title=" uncertainty quantification"> uncertainty quantification</a> </p> <a href="https://publications.waset.org/abstracts/148360/satellite-lidar-based-digital-terrain-model-correction-using-gaussian-process-regression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148360.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">94</span> Open Source, Open Hardware Ground Truth for Visual Odometry and Simultaneous Localization and Mapping Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janusz%20Bedkowski">Janusz Bedkowski</a>, <a href="https://publications.waset.org/abstracts/search?q=Grzegorz%20Kisala"> Grzegorz Kisala</a>, <a href="https://publications.waset.org/abstracts/search?q=Michal%20Wlasiuk"> Michal Wlasiuk</a>, <a href="https://publications.waset.org/abstracts/search?q=Piotr%20Pokorski"> Piotr Pokorski</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ground-truth data is essential for VO (Visual Odometry) and SLAM (Simultaneous Localization and Mapping) quantitative evaluation using e.g. ATE (Absolute Trajectory Error) and RPE (Relative Pose Error). Many open-access data sets provide raw and ground-truth data for benchmark purposes. The issue appears when one would like to validate Visual Odometry and/or SLAM approaches on data captured using the device for which the algorithm is targeted for example mobile phone and disseminate data for other researchers. For this reason, we propose an open source, open hardware groundtruth system that provides an accurate and precise trajectory with a 3D point cloud. It is based on LiDAR Livox Mid-360 with a non-repetitive scanning pattern, on-board Raspberry Pi 4B computer, battery and software for off-line calculations (camera to LiDAR calibration, LiDAR odometry, SLAM, georeferencing). We show how this system can be used for the evaluation of various the state of the art algorithms (Stella SLAM, ORB SLAM3, DSO) in typical indoor monocular VO/SLAM. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SLAM" title="SLAM">SLAM</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20truth" title=" ground truth"> ground truth</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20odometry" title=" visual odometry"> visual odometry</a>, <a href="https://publications.waset.org/abstracts/search?q=mapping" title=" mapping"> mapping</a> </p> <a href="https://publications.waset.org/abstracts/187389/open-source-open-hardware-ground-truth-for-visual-odometry-and-simultaneous-localization-and-mapping-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187389.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">69</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">93</span> Investigating the Vehicle-Bicyclists Conflicts using LIDAR Sensor Technology at Signalized Intersections</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alireza%20Ansariyar">Alireza Ansariyar</a>, <a href="https://publications.waset.org/abstracts/search?q=Mansoureh%20Jeihani"> Mansoureh Jeihani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Light Detection and Ranging (LiDAR) sensors are capable of recording traffic data including the number of passing vehicles and bicyclists, the speed of vehicles and bicyclists, and the number of conflicts among both road users. In order to collect real-time traffic data and investigate the safety of different road users, a LiDAR sensor was installed at Cold Spring Ln – Hillen Rd intersection in Baltimore City. The frequency and severity of collected real-time conflicts were analyzed and the results highlighted that 122 conflicts were recorded over a 10-month time interval from May 2022 to February 2023. By using an innovative image-processing algorithm, a new safety Measure of Effectiveness (MOE) was proposed to recognize the critical zones for bicyclists entering each zone. Considering the trajectory of conflicts, the results of the analysis demonstrated that conflicts in the northern approach (zone N) are more frequent and severe. Additionally, sunny weather is more likely to cause severe vehicle-bike conflicts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LiDAR%20sensor" title="LiDAR sensor">LiDAR sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20encroachment%20time%20threshold%20%28PET%29" title=" post encroachment time threshold (PET)"> post encroachment time threshold (PET)</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle-bike%20conflicts" title=" vehicle-bike conflicts"> vehicle-bike conflicts</a>, <a href="https://publications.waset.org/abstracts/search?q=a%20measure%20of%20effectiveness%20%28MOE%29" title=" a measure of effectiveness (MOE)"> a measure of effectiveness (MOE)</a>, <a href="https://publications.waset.org/abstracts/search?q=weather%20condition" title=" weather condition"> weather condition</a> </p> <a href="https://publications.waset.org/abstracts/166804/investigating-the-vehicle-bicyclists-conflicts-using-lidar-sensor-technology-at-signalized-intersections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166804.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">236</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">92</span> Real Time Lidar and Radar High-Level Fusion for Obstacle Detection and Tracking with Evaluation on a Ground Truth</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hatem%20Hajri">Hatem Hajri</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed-Cherif%20Rahal"> Mohamed-Cherif Rahal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Both Lidars and Radars are sensors for obstacle detection. While Lidars are very accurate on obstacles positions and less accurate on their velocities, Radars are more precise on obstacles velocities and less precise on their positions. Sensor fusion between Lidar and Radar aims at improving obstacle detection using advantages of the two sensors. The present paper proposes a real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour standard filter (GNN). This algorithm is implemented and embedded in an automative vehicle as a component generated by a real-time multisensor software. The benefits of data fusion comparing with the use of a single sensor are illustrated through several tracking scenarios (on a highway and on a bend) and using real-time kinematic sensors mounted on the ego and tracked vehicles as a ground truth. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ground%20truth" title="ground truth">ground truth</a>, <a href="https://publications.waset.org/abstracts/search?q=Hungarian%20algorithm" title=" Hungarian algorithm"> Hungarian algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=lidar%20Radar%20data%20fusion" title=" lidar Radar data fusion"> lidar Radar data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20nearest%20neighbor%20filter" title=" global nearest neighbor filter"> global nearest neighbor filter</a> </p> <a href="https://publications.waset.org/abstracts/95451/real-time-lidar-and-radar-high-level-fusion-for-obstacle-detection-and-tracking-with-evaluation-on-a-ground-truth" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">171</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">91</span> Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohkammad%20Nur%20Cahyadi">Mohkammad Nur Cahyadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Imam%20Wahyu%20Farid"> Imam Wahyu Farid</a>, <a href="https://publications.waset.org/abstracts/search?q=Ronny%20Mardianto"> Ronny Mardianto</a>, <a href="https://publications.waset.org/abstracts/search?q=Agung%20Budi%20Cahyono"> Agung Budi Cahyono</a>, <a href="https://publications.waset.org/abstracts/search?q=Eko%20Yuli%20Handoko"> Eko Yuli Handoko</a>, <a href="https://publications.waset.org/abstracts/search?q=Daud%20Wahyu%20Imani"> Daud Wahyu Imani</a>, <a href="https://publications.waset.org/abstracts/search?q=Arizal%20Bawazir"> Arizal Bawazir</a>, <a href="https://publications.waset.org/abstracts/search?q=Luki%20Adi%20Triawan"> Luki Adi Triawan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Unmanned aerial vehicle (UAV) technology has cost efficiency and data retrieval time advantages. Using technologies such as UAV, GNSS, and LiDAR will later be combined into one of the newest technologies to cover each other's deficiencies. This integration system aims to increase the accuracy of calculating the volume of the land stockpile of PT. Garam (Salt Company). The use of UAV applications to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. LiDAR can classify based on the number of image acquisitions processed in the software, utilizing photogrammetry and structural science principles from Motion point cloud technology. LiDAR can perform data acquisition that enables the creation of point clouds, three-dimensional models, Digital Surface Models, Contours, and orthomosaics with high accuracy. LiDAR has a drawback in the form of coordinate data positions that have local references. Therefore, researchers use GNSS, LiDAR, and drone multi-sensor technology to map the stockpile of salt on open land and warehouses every year, carried out by PT. 
Research with LiDAR needs to be combined with UAV to overcome data acquisition limitations, because the scanner only passes along the right and left sides of the object, particularly when applied to a salt stockpile. The UAV is flown to assist data acquisition with wide coverage, with the help of an integrated 200-gram LiDAR system, so that the flying angle can be kept optimal during the flight. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain fairly accurate data at a more economical price. As a survey tool, LiDAR is a low-priced device, around 999 USD, that can produce detailed data; to minimize operational costs further, surveyors can use low-cost LiDAR, GNSS, and UAV at a price of around 638 USD. The data generated by this sensor is a visualization of an object's shape in three dimensions. This study combines low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates latitude and longitude coordinates, yielding X, Y, and Z values to help georeference the detected object. The LiDAR detects objects, including the height of the entire environment at the location, and the resulting data are calibrated with pitch, roll, and yaw to get the vertical height of the existing contours. The experiment was conducted on the roof of a building with a radius of approximately 30 meters.
Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour
PDF: https://publications.waset.org/abstracts/159891.pdf (Downloads: 94)

90. Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract: Maintaining public safety and network reliability are the core objectives of all electricity distributors globally.
For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk-mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
PDF: https://publications.waset.org/abstracts/156067.pdf (Downloads: 99)
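A hedged sketch of the least-squares catenary fit, reduced to 2D: given conductor points already projected into the vertical plane of a span, SciPy's curve_fit recovers the catenary parameters. The paper fits full 3D catenaries per cluster; the span length, sag parameter, and noise here are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, x0, z0, c):
    """z(x) = z0 + c * (cosh((x - x0) / c) - 1): lowest point at (x0, z0),
    parameter c controls the sag."""
    return z0 + c * (np.cosh((x - x0) / c) - 1.0)

# Synthetic conductor points in the vertical plane between two poles,
# with noise standing in for LiDAR capture variation.
rng = np.random.default_rng(5)
x = np.linspace(0, 120, 80)
z = catenary(x, 60.0, 15.0, 300.0) + rng.normal(0, 0.05, x.size)

params, _ = curve_fit(catenary, x, z, p0=(x.mean(), z.min(), 100.0))
print(params)                                    # ~ [60, 15, 300]
```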
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">89</span> Geomechanical Technologies for Assessing Three-Dimensional Stability of Underground Excavations Utilizing Remote-Sensing, Finite Element Analysis, and Scientific Visualization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kwang%20Chun">Kwang Chun</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20Kemeny"> John Kemeny</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Light detection and ranging (LiDAR) has been a prevalent remote-sensing technology applied in geological fields due to its high precision and ease of use. One of the major applications is to use the detailed geometrical information of underground structures as a basis for generating a three-dimensional numerical model that can be used in a geotechnical stability analysis such as FEM or DEM. To date, however, straightforward techniques for reconstructing the numerical model from scanned data of underground structures have not been well established or tested. In this paper, we propose a comprehensive approach integrating all of the various processes, from LiDAR scanning to finite element numerical analysis. The study focuses on converting LiDAR 3D point clouds of geologic structures containing complex surface geometries into a finite element model. This methodology has been applied to Kartchner Caverns in Arizona, where detailed underground and surface point clouds can be used for the analysis of underground stability. Numerical simulations were performed using the finite element code Abaqus and presented with the 3D visualization solution ParaView. The results are useful in studying the stability of all types of underground excavations, including underground mining and tunneling. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=finite%20element%20analysis" title="finite element analysis">finite element analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=remote-sensing" title=" remote-sensing"> remote-sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=scientific%20visualization" title=" scientific visualization"> scientific visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=underground%20stability" title=" underground stability"> underground stability</a> </p> <a href="https://publications.waset.org/abstracts/105946/geomechanical-technologies-for-assessing-three-dimensional-stability-of-underground-excavations-utilizing-remote-sensing-finite-element-analysis-and-scientific-visualization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105946.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">88</span> Boundary Alert System for Powered Wheelchair in Confined Area Training</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tsoi%20Kim%20Ming">Tsoi Kim Ming</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20King%20Pong"> Yu King Pong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: With a powered wheelchair, patients can travel more easily and conveniently. However, some patients suffer from other difficulties, such as visual impairment, cognitive disorders, or psychological issues, which make them unable to control a powered wheelchair safely. Purpose: Those patients are therefore required to complete comprehensive driving training with therapists in a confined area, which simulates the narrow paths of daily life.
During the training, therapists give a series of driving instructions to patients and may not notice when a patient crosses the boundary of the area. To facilitate the training, a device is needed that warns patients during training. Method: We adopt LIDAR for distance sensing from the center of the confined area. We then program the LIDAR with the linear geometry of each side of the area, and the LIDAR senses the location of the wheelchair continuously. Once the wheelchair is driven out of the boundary, an audio alert is given to the patient. Result: Prompted by the audio alert during driving training, patients pay attention to the particular driving situation and learn how to stay within the boundary in similar situations next time. Conclusion: Rather than being instructed only by the therapist, patients actively attend to the driving situation, so the LIDAR facilitates powered wheelchair training. After training, they are able to control the powered wheelchair safely when facing difficult and narrow paths in real life. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PWC" title="PWC">PWC</a>, <a href="https://publications.waset.org/abstracts/search?q=training" title=" training"> training</a>, <a href="https://publications.waset.org/abstracts/search?q=rehab" title=" rehab"> rehab</a>, <a href="https://publications.waset.org/abstracts/search?q=AT" title=" AT"> AT</a> </p> <a href="https://publications.waset.org/abstracts/159535/boundary-alert-system-for-powered-wheelchair-in-confined-area-training" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div>
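<p class="card-text">A minimal sketch of the boundary check just described, assuming a rectangular training area centred on the LIDAR; the area size, function names, and alert stub are illustrative assumptions, not the authors' system:</p> <pre><code>
# Sketch: fire an audio alert when the tracked wheelchair position,
# derived from one LIDAR return, leaves the rectangular training area.
import math

HALF_WIDTH, HALF_LENGTH = 1.5, 2.0   # metres, assumed area size

def wheelchair_position(angle_rad, range_m):
    """Convert one LIDAR return (bearing, range) to x/y about the centre."""
    return range_m * math.cos(angle_rad), range_m * math.sin(angle_rad)

def outside_boundary(x, y):
    return abs(x) > HALF_WIDTH or abs(y) > HALF_LENGTH

def on_scan(angle_rad, range_m, play_alert):
    x, y = wheelchair_position(angle_rad, range_m)
    if outside_boundary(x, y):
        play_alert()          # e.g. a buzzer or speaker driver call

on_scan(0.3, 2.4, lambda: print("out of boundary - audio alert"))
</code></pre>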
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">87</span> Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaobo%20Liu">Xiaobo Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Rakesh%20Mishra"> Rakesh Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Yun%20Zhang"> Yun Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Continuous monitoring of forest canopy height over large areas is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they carry no height information. Hence, integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model to predict the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate prediction performance, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that the 2018 mean absolute error (MAE) of the RFR model is 2.93 m and of the CNN model 1.71 m, while the 2021 MAE of the RFR model is 3.35 m and of the CNN model 3.78 m. These results demonstrate the feasibility of using the RFR and CNN models developed in this research to predict large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=forest%20canopy%20height" title=" forest canopy height"> forest canopy height</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=Sentinel-2" title=" Sentinel-2"> Sentinel-2</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest%20regression" title=" random forest regression"> random forest regression</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/161534/monitoring-large-coverage-forest-canopy-height-by-integrating-lidar-and-sentinel-2-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161534.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div>
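<p class="card-text">The RFR variant of this workflow can be sketched briefly. A minimal example, assuming scikit-learn and random arrays standing in for the co-registered Sentinel-2 band values and LiDAR-derived canopy heights (illustrative, not the study's code):</p> <pre><code>
# Sketch: train a random forest to map per-pixel Sentinel-2 band values
# to the LiDAR canopy height, then report MAE on held-out pixels.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

n_pixels, n_bands = 10_000, 10          # e.g. ten Sentinel-2 bands at 10 m
rng = np.random.default_rng(42)
X = rng.random((n_pixels, n_bands))     # per-pixel band values
y = 25.0 * X[:, 0] + rng.normal(0, 1, n_pixels)  # stand-in LiDAR CHM (m)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
rfr.fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, rfr.predict(X_te)):.2f} m")
</code></pre>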
href="https://publications.waset.org/abstracts/search?q=Ronaldo%20Alberto"> Ronaldo Alberto</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forest inventories are essential to assess the composition, structure and distribution of forest vegetation that can be used as baseline information for management decisions. Classical forest inventory is labor intensive and time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory would improve and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR derived data in extracting high accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manual, Pangasinan. Forest resources extraction was carried out using LAS tools, GIS, Envi and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) in .bat scripts followed by the generation of 17 composite bands to be used in the extraction of forest classification covers using ENVI 4.8 and GIS software. The Diameter in Breast Height (DBH), Above Ground Biomass (AGB) and Carbon Stock (CS) were estimated for each classified forest cover and Tree Count Extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% Forest Cover, which is relatively much higher as compared to the 10% canopy cover requirement. On the extracted canopy height, 80% of the tree&rsquo;s height ranges from 12 m to 17 m. CS of the three forest covers based on the AGB were: 20819.59 kg/20x20 m for closed broadleaf, 8609.82 kg/20x20 m for broadleaf plantation and 15545.57 kg/20x20m for open broadleaf. Average tree counts for the tree forest plantation was 413 trees/ha. As such, the forest of San Manuel has high percent forest cover and high CS. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=carbon%20stock" title="carbon stock">carbon stock</a>, <a href="https://publications.waset.org/abstracts/search?q=forest%20inventory" title=" forest inventory"> forest inventory</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20count" title=" tree count"> tree count</a> </p> <a href="https://publications.waset.org/abstracts/71998/extraction-of-forest-plantation-resources-in-selected-forest-of-san-manuel-pangasinan-philippines-using-lidar-data-for-forest-status-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">85</span> Multimedia Container for Autonomous Car</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janusz%20Bobulski">Janusz Bobulski</a>, <a href="https://publications.waset.org/abstracts/search?q=Mariusz%20Kubanek"> Mariusz Kubanek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main goal of the research is to develop a multimedia container structure containing three types of images: RGB, lidar and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating and saving this type of file and for restoring it. It will also be necessary to develop a method of data synchronization from lidar and RGB cameras as well as infrared. This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. Autonomous cars are increasingly breaking into our consciousness. No one seems to have any doubts that self-driving cars are the future of motoring. Manufacturers promise that moving the first of them to showrooms is the prospect of the next few years. Many experts believe that creating a network of communicating autonomous cars will be able to completely eliminate accidents. However, to make this possible, it is necessary to develop effective methods of detection of objects around the moving vehicle. In bad weather conditions, this task is difficult on the basis of the RGB(red, green, blue) image. Therefore, in such situations, you should be supported by information from other sources, such as lidar or infrared cameras. The problem is the different data formats that individual types of devices return. In addition to these differences, there is a problem with the synchronization of these data and the formatting of this data. The goal of the project is to develop a file structure that could be containing a different type of data. This type of file is calling a multimedia container. A multimedia container is a container that contains many data streams, which allows you to store complete multimedia material in one file. Among the data streams located in such a container should be indicated streams of images, films, sounds, subtitles, as well as additional information, i.e., metadata. 
As shown by preliminary studies, combining RGB and infrared images with lidar data allows for easier data analysis. With this approach, the distance to an object can be displayed in a color photo, information that can be very useful for drivers and for systems in autonomous cars. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=an%20autonomous%20car" title="an autonomous car">an autonomous car</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=lidar" title=" lidar"> lidar</a>, <a href="https://publications.waset.org/abstracts/search?q=obstacle%20detection" title=" obstacle detection"> obstacle detection</a> </p> <a href="https://publications.waset.org/abstracts/133088/multimedia-container-for-autonomous-car" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133088.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">226</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">84</span> Topographic Mapping of Farmland by Integration of Multiple Sensors on Board Low-Altitude Unmanned Aerial System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mengmeng%20Du">Mengmeng Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Noboru%20Noguchi"> Noboru Noguchi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hiroshi%20Okamoto"> Hiroshi Okamoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Noriko%20Kobayashi"> Noriko Kobayashi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a topographic mapping system with time-saving and simplicity advantages, based on the integration of Light Detection and Ranging (LiDAR) data and Post Processing Kinematic Global Positioning System (PPK GPS) data. The system uses a low-altitude Unmanned Aerial Vehicle (UAV) as a platform to conduct land surveys in a low-cost, efficient, and fully autonomous manner. An experiment was conducted in a small-scale sugarcane field in Queensland, Australia. LiDAR distance measurements, corrected using attitude information from a gyroscope, were synchronized with PPK GPS coordinates to generate precision topographic maps, which could be further utilized for applications such as precise land leveling and drainage management. The results indicated that the LiDAR distance measurements and PPK GPS altitudes reached a good accuracy of less than 0.015 m.
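<p class="card-text">The synchronization step described above can be sketched as simple timestamp interpolation: low-rate PPK GPS positions are interpolated to each LiDAR sample time before the terrain elevation is computed. A minimal example with synthetic timestamps and values (illustrative only, not the authors' processing chain):</p> <pre><code>
# Sketch: interpolate PPK GPS altitude to LiDAR sample times, then
# subtract the corrected range to obtain terrain elevation per sample.
import numpy as np

gps_t = np.array([0.0, 1.0, 2.0, 3.0])            # s, PPK GPS epochs
gps_alt = np.array([50.00, 50.02, 50.05, 50.03])  # m, antenna altitude

lidar_t = np.array([0.25, 0.75, 1.40, 2.90])      # s, LiDAR sample times
lidar_range = np.array([12.40, 12.38, 12.51, 12.47])  # m, corrected ranges

alt_at_lidar = np.interp(lidar_t, gps_t, gps_alt)
ground_elev = alt_at_lidar - lidar_range          # terrain elevation
print(ground_elev)
</code></pre>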
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=land%20survey" title="land survey">land survey</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20detection%20and%20ranging" title=" light detection and ranging"> light detection and ranging</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20processing%20kinematic%20global%20positioning%20system" title=" post processing kinematic global positioning system"> post processing kinematic global positioning system</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a>, <a href="https://publications.waset.org/abstracts/search?q=topographic%20map" title=" topographic map"> topographic map</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a> </p> <a href="https://publications.waset.org/abstracts/80276/topographic-mapping-of-farmland-by-integration-of-multiple-sensors-on-board-low-altitude-unmanned-aerial-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80276.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">236</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">83</span> Identification of Landslide Features Using Back-Propagation Neural Network on LiDAR Digital Elevation Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chia-Hao%20Chang">Chia-Hao Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Geng-Gui%20Wang"> Geng-Gui Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jee-Cheng%20Wu"> Jee-Cheng Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The prediction of a landslide is a difficult task because it requires a detailed study of past activities using a complete range of investigative methods to determine the changing condition. In this research, first step, LiDAR 1-meter by 1-meter resolution of digital elevation model (DEM) was used to generate six environmental factors of landslide. Then, back-propagation neural networks (BPNN) was adopted to identify scarp, landslide areas and non-landslide areas. The BPNN uses 6 environmental factors in input layer and 1 output layer. Moreover, 6 landslide areas are used as training areas and 4 landslide areas as test areas in the BPNN. The hidden layer is set to be 1 and 2; the hidden layer neurons are set to be 4, 5, 6, 7 and 8; the learning rates are set to be 0.01, 0.1 and 0.5. When using 1 hidden layer with 7 neurons and the learning rate sets to be 0.5, the result of Network training root mean square error is 0.001388. Finally, evaluation of BPNN classification accuracy by the confusion matrix shows that the overall accuracy can reach 94.4%, and the Kappa value is 0.7464. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20elevation%20model" title="digital elevation model">digital elevation model</a>, <a href="https://publications.waset.org/abstracts/search?q=DEM" title=" DEM"> DEM</a>, <a href="https://publications.waset.org/abstracts/search?q=environmental%20factors" title=" environmental factors"> environmental factors</a>, <a href="https://publications.waset.org/abstracts/search?q=back-propagation%20neural%20network" title=" back-propagation neural network"> back-propagation neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=BPNN" title=" BPNN"> BPNN</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR "> LiDAR </a> </p> <a href="https://publications.waset.org/abstracts/93322/identification-of-landslide-features-using-back-propagation-neural-network-on-lidar-digital-elevation-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93322.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">82</span> A Simple Approach to Establish Urban Energy Consumption Map Using the Combination of LiDAR and Thermal Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yu-Cheng%20Chen">Yu-Cheng Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Tzu-Ping%20Lin"> Tzu-Ping Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng-Yi%20Lin"> Feng-Yi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih-Yu%20Chen"> Chih-Yu Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the urban heat island effect caused by highly development of city, the heat stress increased in recent year rapidly. Resulting in a sharp raise of the energy used in urban area. The heat stress during summer time exacerbated the usage of air conditioning and electric equipment, which caused more energy consumption and anthropogenic heat. Therefore, an accurate and simple method to measure energy used in urban area can be helpful for the architectures and urban planners to develop better energy efficiency goals. This research applies the combination of airborne LiDAR data and thermal imager to provide an innovate method to estimate energy consumption. Owing to the high resolution of remote sensing data, the accurate current volume and total floor area and the surface temperature of building derived from LiDAR and thermal imager can be herein obtained to predict energy used. In the estimate process, the LiDAR data will be divided into four type of land cover which including building, road, vegetation, and other obstacles. In this study, the points belong to building were selected to overlay with the land use information; therefore, the energy consumption can be estimated precisely with the real value of total floor area and energy use index for different use of building. 
After validation with real energy use data from the government, the results show that taller buildings in highly developed areas, such as commercial districts, present higher energy consumption, caused by their large total floor area and greater anthropogenic heat. Furthermore, because surface temperature rises with the use of electrical equipment, this study also applies thermal images of buildings to find the hot spots of energy use and make the estimation method more complete. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urban%20heat%20island" title="urban heat island">urban heat island</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20planning" title=" urban planning"> urban planning</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20imager" title=" thermal imager"> thermal imager</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20consumption" title=" energy consumption"> energy consumption</a> </p> <a href="https://publications.waset.org/abstracts/81506/a-simple-approach-to-establish-urban-energy-consumption-map-using-the-combination-of-lidar-and-thermal-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81506.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">239</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">81</span> Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Varin">M. Varin</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20M.%20Dubois"> A. M. Dubois</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Gadbois-Langevin"> R. Gadbois-Langevin</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Chalghaf"> B. Chalghaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs makes it possible to generate high-precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous acquisition cost. The objective of the study was to assess the quality of stereo-derived canopy height models (CHMs) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHMs. Acquisition of multispectral images from an Unmanned Aerial Vehicle (UAV) was also carried out on a smaller part of the broadleaf study site.
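<p class="card-text">The accuracy assessment in this study compares stereo-derived CHMs against the LiDAR reference, as reported below. A minimal sketch of such an RMSE comparison over co-registered cells, using synthetic rasters as placeholders for the actual PCI Geomatica/Correlator3D outputs:</p> <pre><code>
# Sketch: RMSE between a stereo-derived CHM and the LiDAR reference CHM
# over co-registered cells, with nodata masked out.
import numpy as np

rng = np.random.default_rng(3)
chm_lidar = rng.uniform(0.0, 30.0, (500, 500))            # reference (m)
chm_stereo = chm_lidar + rng.normal(0.0, 1.3, chm_lidar.shape)
chm_stereo[rng.random(chm_stereo.shape) > 0.99] = np.nan  # simulated gaps

mask = ~np.isnan(chm_stereo)
rmse = np.sqrt(np.mean((chm_stereo[mask] - chm_lidar[mask]) ** 2))
print(f"RMSE: {rmse:.2f} m")
</code></pre>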
Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique that offered the best performance when compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to the satellite, LiDAR, and UAV CHMs, RMSEs were 2.8, 2.0, and 2.0 m, respectively. Further analysis of all these cases showed that RMSE is reduced when canopy cover is higher, when shadow and slopes are lower, and when clouds are distant from the analyzed site. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=very%20high%20spatial%20resolution" title="very high spatial resolution">very high spatial resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=WorlView-3" title=" WorldView-3"> WorldView-3</a>, <a href="https://publications.waset.org/abstracts/search?q=canopy%20height%20models" title=" canopy height models"> canopy height models</a>, <a href="https://publications.waset.org/abstracts/search?q=CHM" title=" CHM"> CHM</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV" title=" UAV"> UAV</a> </p> <a href="https://publications.waset.org/abstracts/121479/using-3d-satellite-imagery-to-generate-a-high-precision-canopy-height-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/121479.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">80</span> Geographical Data Visualization Using Video Games Technologies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nizar%20Karim%20Uribe-Orihuela">Nizar Karim Uribe-Orihuela</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernando%20Brambila-Paz"> Fernando Brambila-Paz</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivette%20Caldelas"> Ivette Caldelas</a>, <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20Montufar-Chaveznava"> Rodrigo Montufar-Chaveznava</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present the advances corresponding to the implementation of a strategy to visualize geographical data using a Software Development Kit (SDK) for video games.
We use multispectral images from the Landsat 7 platform and Light Detection and Ranging (LIDAR) data from the National Institute of Statistics and Geography of Mexico (INEGI). We select a place of interest from the Landsat imagery and apply some processing to the image (rotations, atmospheric correction, and enhancement). The resulting image serves as a grayscale color-map to be fused with the LIDAR data, which were selected using the same coordinates as the Landsat scene. The LIDAR data are translated to 8-bit raw data. Both images are fused in software developed using Unity (an SDK employed for video games). The resulting scene is then displayed and can be explored by moving around. The idea is that the software could be used by students of geology and geophysics at the Engineering School of the National University of Mexico. They can download the software and the images corresponding to a geological place of interest to a smartphone and virtually visit and explore the site with a virtual-reality visor such as Google Cardboard. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title="virtual reality">virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20technologies" title=" interactive technologies"> interactive technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=geographical%20data%20visualization" title=" geographical data visualization"> geographical data visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games%20technologies" title=" video games technologies"> video games technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=educational%20material" title=" educational material"> educational material</a> </p> <a href="https://publications.waset.org/abstracts/79894/geographical-data-visualization-using-video-games-technologies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79894.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">79</span> High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bilel%20Chalghaf">Bilel Chalghaf</a>, <a href="https://publications.waset.org/abstracts/search?q=Mathieu%20Varin"> Mathieu Varin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine-learning techniques. The main objective of this study is to map 11 individual tree species of tall trees (> 17 m) at the tree level, using an object-based approach, in the broadleaf forest of Kenauk Nature, Quebec.
For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large. 
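<p class="card-text">The semi-hierarchical hybrid approach described above can be sketched as a two-stage classifier: a first model separates broadleaf from conifer crowns, then a per-type SVM assigns the species. A minimal example, with random features standing in for the selected WorldView-3/LiDAR variables (illustrative, not the study's tuned models):</p> <pre><code>
# Sketch: level 1 predicts tree type (broadleaf/conifer); level 2 routes
# the object to a type-specific SVM for the species label.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.random((400, 7))                       # selected object features
tree_type = rng.integers(0, 2, 400)            # 0 broadleaf, 1 conifer
species = np.where(tree_type == 0,
                   rng.integers(0, 5, 400),    # five broadleaf species
                   rng.integers(5, 11, 400))   # six conifer species

level1 = KNeighborsClassifier(n_neighbors=5).fit(X, tree_type)
svm_broadleaf = SVC().fit(X[tree_type == 0], species[tree_type == 0])
svm_conifer = SVC().fit(X[tree_type == 1], species[tree_type == 1])

def classify(x):
    x = x.reshape(1, -1)
    if level1.predict(x)[0] == 0:
        return svm_broadleaf.predict(x)[0]
    return svm_conifer.predict(x)[0]

print(classify(X[0]))
</code></pre>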
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tree%20species" title="tree species">tree species</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based" title=" object-based"> object-based</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral" title=" multispectral"> multispectral</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=WorldView-3" title=" WorldView-3"> WorldView-3</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a> </p> <a href="https://publications.waset.org/abstracts/119023/high-resolution-satellite-imagery-and-lidar-data-for-object-based-tree-species-classification-in-quebec-canada" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119023.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">78</span> Multi Object Tracking for Predictive Collision Avoidance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bruk%20Gebregziabher">Bruk Gebregziabher</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The safe and efficient operation of Autonomous Mobile Robots (AMRs) in complex environments, such as manufacturing, logistics, and agriculture, necessitates accurate multiobject tracking and predictive collision avoidance. This paper presents algorithms and techniques for addressing these challenges using Lidar sensor data, emphasizing ensemble Kalman filter. The developed predictive collision avoidance algorithm employs the data provided by lidar sensors to track multiple objects and predict their velocities and future positions, enabling the AMR to navigate safely and effectively. A modification to the dynamic windowing approach is introduced to enhance the performance of the collision avoidance system. The overall system architecture encompasses object detection, multi-object tracking, and predictive collision avoidance control. The experimental results, obtained from both simulation and real-world data, demonstrate the effectiveness of the proposed methods in various scenarios, which lays the foundation for future research on global planners, other controllers, and the integration of additional sensors. This thesis contributes to the ongoing development of safe and efficient autonomous systems in complex and dynamic environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20mobile%20robots" title="autonomous mobile robots">autonomous mobile robots</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-object%20tracking" title=" multi-object tracking"> multi-object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=predictive%20collision%20avoidance" title=" predictive collision avoidance"> predictive collision avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20Kalman%20filter" title=" ensemble Kalman filter"> ensemble Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=lidar%20sensors" title=" lidar sensors"> lidar sensors</a> </p> <a href="https://publications.waset.org/abstracts/169056/multi-object-tracking-for-predictive-collision-avoidance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169056.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lidar&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lidar&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lidar&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=lidar&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
