<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: satellite imagery</title> <meta name="description" content="Search results for: satellite imagery"> <meta name="keywords" content="satellite imagery"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="satellite imagery" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div 
class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="satellite imagery"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 950</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: satellite imagery</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">950</span> Comparative Study of Accuracy of Land Cover/Land Use Mapping Using Medium Resolution Satellite Imagery: A Case Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20C.%20Paliwal">M. C. Paliwal</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20K.%20Jain"> A. K. Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20K.%20Katiyar"> S. K. Katiyar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accuracy assessment is very important in the classification of satellite imagery. To determine the accuracy of a classified image, the assumed-true reference data are usually derived from ground-truth observations collected with the Global Positioning System.
The classified satellite imagery and the ground truth data are then compared to determine the accuracy of the classification, and error matrices are prepared. Overall and individual accuracies are calculated using different methods. The study illustrates advanced classification and accuracy assessment of land use/land cover mapping using satellite imagery. IRS-1C LISS IV data were used for the classification. The satellite image was classified into fourteen classes, including water bodies, agricultural fields, forest land, urban settlement, barren land, and unclassified area. Classification and accuracy calculation were carried out using ERDAS Imagine software to identify the best method. The study is based on data collected within the Bhopal city boundaries in the state of Madhya Pradesh, India. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=resolution" title="resolution">resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy%20assessment" title=" accuracy assessment"> accuracy assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20use%20mapping" title=" land use mapping"> land use mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20truth%20data" title=" ground truth data"> ground truth data</a>, <a href="https://publications.waset.org/abstracts/search?q=error%20matrices" title=" error matrices"> error matrices</a> </p> <a href="https://publications.waset.org/abstracts/13294/comparative-study-of-accuracy-of-land-coverland-use-mapping-using-medium-resolution-satellite-imagery-a-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span 
class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">507</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">949</span> Satellite Imagery Classification Based on Deep Convolution Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhong%20Ma">Zhong Ma</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhuping%20Wang"> Zhuping Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Congxin%20Liu"> Congxin Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiangzeng%20Liu"> Xiangzeng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic-algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method was evaluated on a benchmark database. The results show that the proposed hyper-parameter search guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
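The genetic-algorithm hyper-parameter search described above can be sketched in a few lines. The search space, the fitness surrogate, and the GA settings below are hypothetical stand-ins (a real run would train and validate the DCNN to score each individual), not the paper's actual configuration:

```python
import random

# Hypothetical discrete search space; the paper's actual space is not listed here.
SPACE = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "n_inception_blocks": [2, 3, 4, 5],
    "filters": [32, 64, 96, 128],
}

def random_individual(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    # Stand-in for validation accuracy: a real run would train the DCNN with
    # these hyper-parameters and evaluate it on a held-out set.
    return (-abs(ind["learning_rate"] - 1e-3) * 100
            + ind["n_inception_blocks"] * 0.01
            + ind["filters"] * 0.0001)

def crossover(a, b, rng):
    # Uniform crossover: each gene comes from one of the two parents.
    return {k: rng.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rng, p=0.2):
    # With probability p, resample a gene from its allowed values.
    return {k: (rng.choice(SPACE[k]) if rng.random() < p else v)
            for k, v in ind.items()}

def ga_search(generations=20, pop_size=12, seed=0):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # truncation selection keeps the top half
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga_search()
```

Because the elite individuals are carried over unmutated, the best configuration found so far is never lost between generations.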
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery%20classification" title="satellite imagery classification">satellite imagery classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20convolution%20network" title=" deep convolution network"> deep convolution network</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=hyper-parameter%20optimization" title=" hyper-parameter optimization"> hyper-parameter optimization</a> </p> <a href="https://publications.waset.org/abstracts/44963/satellite-imagery-classification-based-on-deep-convolution-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">948</span> Multi-Temporal Cloud Detection and Removal in Satellite Imagery for Land Resources Investigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Feng%20Yin">Feng Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Clouds are inevitable contaminants in optical satellite imagery and prevent satellite imaging systems from acquiring a clear view of the Earth's surface. The presence of clouds in satellite imagery has negative influences on remote sensing investigations of land resources.
As a consequence, detecting the locations of clouds in satellite imagery is an essential preprocessing step, and removing the existing clouds is crucial for the application of the imagery. In this paper, a multi-temporal cloud detection and removal method for satellite imagery is proposed for use in large-scale land resource investigation. The proposed method is composed of four steps. First, cloud masks are generated for cloud-contaminated images by single-temporal cloud detection based on multiple spectral features. Second, a cloud-free reference image of the target areas is synthesized by weighted averaging of time-series images in which cloud pixels are ignored. Third, refined cloud detection results are obtained by multi-temporal analysis based on the reference image. Finally, the detected clouds are removed via multi-temporal linear regression. The results of a case application in Hubei province indicate that the proposed multi-temporal cloud detection and removal method is effective and promising for large-scale land resource investigation.
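The four steps above map naturally onto a small NumPy sketch. The thresholds, the single-band brightness test, and the toy scene below are illustrative placeholders, not the paper's actual spectral features or parameters:

```python
import numpy as np

def single_date_cloud_mask(img, blue_thresh=0.3):
    # Step 1 (illustrative threshold): clouds are bright in the visible
    # bands, so flag pixels whose first-band reflectance is high.
    return img[..., 0] > blue_thresh

def cloud_free_reference(stack, masks):
    # Step 2: per-pixel weighted average over time, ignoring cloudy pixels.
    weights = (~masks).astype(float)[..., None]
    return (stack * weights).sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-9)

def refine_mask(img, ref, diff_thresh=0.15):
    # Step 3: multi-temporal refinement; pixels deviating strongly from
    # the cloud-free reference are kept as cloud.
    return np.abs(img - ref).mean(axis=-1) > diff_thresh

def remove_clouds(img, ref, mask):
    # Step 4 (simplified): per-band linear regression of the image on the
    # reference over clear pixels, then predict the cloudy pixels.
    out = img.copy()
    for b in range(img.shape[-1]):
        slope, intercept = np.polyfit(ref[..., b][~mask], img[..., b][~mask], 1)
        out[..., b][mask] = slope * ref[..., b][mask] + intercept
    return out

# Toy demo: three dates of an 8x8, 3-band scene; date 0 has a bright cloud.
base = np.linspace(0.05, 0.2, 64).reshape(8, 8)
clear = np.stack([base] * 3, axis=-1)
stack = np.stack([clear.copy() for _ in range(3)])
stack[0, 2:4, 2:4, :] = 0.9  # synthetic cloud

masks = np.stack([single_date_cloud_mask(s) for s in stack])
ref = cloud_free_reference(stack, masks)
restored = remove_clouds(stack[0], ref, refine_mask(stack[0], ref))
```

In the toy demo the cloud-free dates fully determine the reference, so the regression fills the cloudy pixels back to the clear-sky values.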
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cloud%20detection" title="cloud detection">cloud detection</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud%20remove" title=" cloud remove"> cloud remove</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-temporal%20imagery" title=" multi-temporal imagery"> multi-temporal imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20resources%20investigation" title=" land resources investigation"> land resources investigation</a> </p> <a href="https://publications.waset.org/abstracts/90359/multi-temporal-cloud-detection-and-removal-in-satellite-imagery-for-land-resources-investigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">947</span> Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evans%20Belly">Evans Belly</a>, <a href="https://publications.waset.org/abstracts/search?q=Imdad%20Rizvi"> Imdad Rizvi</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20M.%20Kadam"> M. M. Kadam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Satellite imagery is one of the emerging technologies extensively utilized in various applications, such as the detection/extraction of man-made structures, monitoring of sensitive areas, and creation of graphic maps.
The main approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, building, and non-building regions (roads, vegetation, etc.) are investigated, with the main focus on building extraction. Once all the landscape regions are collected, a trimming process eliminates those arising from non-building objects. Finally, the label method is used to extract the building regions; the label method may be altered for more efficient building extraction. The images used for the analysis are acquired from sensors with a resolution of less than 1 meter (VHR). This method provides an efficient way to produce good results: the additional overhead of intermediate processing is eliminated without compromising output quality, reducing the processing steps and time required. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building%20detection" title="building detection">building detection</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20detection" title=" shadow detection"> shadow detection</a>, <a href="https://publications.waset.org/abstracts/search?q=landscape%20generation" title=" landscape generation"> landscape generation</a>, <a href="https://publications.waset.org/abstracts/search?q=label" title=" label"> label</a>, <a href="https://publications.waset.org/abstracts/search?q=partitioning" title=" partitioning"> partitioning</a>, <a href="https://publications.waset.org/abstracts/search?q=very%20high%20resolution%20%28VHR%29%20satellite%20imagery" title=" very high resolution (VHR) satellite imagery"> very high resolution (VHR) satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/76690/automatic-extraction-of-arbitrarily-shaped-buildings-from-vhr-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a 
href="https://publications.waset.org/abstracts/76690.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">314</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">946</span> Rainfall Estimation Using Himawari-8 Meteorological Satellite Imagery in Central Taiwan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chiang%20Wei">Chiang Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Hui-Chung%20Yeh"> Hui-Chung Yeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Yen-Chang%20Chen"> Yen-Chang Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this study is to estimate rainfall using the new-generation Himawari-8 meteorological satellite, which offers multi-band, high-bit-format imagery at high spatiotemporal resolution, together with ground rainfall data at the Chen-Yu-Lan watershed of the Joushuei River Basin (443.6 square kilometers) in Central Taiwan. Accurate and fine-scale rainfall information is essential in rugged terrain with high local variation for early warning of flood, landslide, and debris flow disasters. Ten-minute, 2-km pixel-based rainfall estimates for Typhoon Megi in 2016 and the meiyu event of June 1-4, 2017 were tested to demonstrate that the new-generation Himawari-8 meteorological satellite can capture rainfall variation in the rugged mountainous area at both fine and watershed scales. The results provide valuable rainfall information for early warning of future disasters.
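The abstract does not state the retrieval algorithm, but a classic way of relating geostationary IR imagery to rain rate is an exponential regression on cloud-top brightness temperature, such as the GOES auto-estimator of Vicente et al. (1998). The coefficients below are from that published curve and are shown only for illustration; a Himawari-8 retrieval calibrated against Taiwanese gauges would differ:

```python
import numpy as np

def auto_estimator_rain_rate(tb_kelvin):
    """Rain rate (mm/h) from IR cloud-top brightness temperature (K).

    Coefficients are the published GOES auto-estimator regression
    (Vicente et al., 1998), used here purely for illustration; they are
    not the calibration from the study described above.
    """
    tb = np.asarray(tb_kelvin, dtype=float)
    return 1.1183e11 * np.exp(-3.6382e-2 * tb ** 1.2)

# Colder cloud tops imply deeper convection and heavier rain.
rates = auto_estimator_rain_rate([200.0, 220.0, 240.0])
```

The curve is steeply decreasing in brightness temperature: very cold tops (around 200 K) map to intense rain, while warm tops (around 240 K) map to little or none.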
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=estimation" title="estimation">estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=Himawari-8" title=" Himawari-8"> Himawari-8</a>, <a href="https://publications.waset.org/abstracts/search?q=rainfall" title=" rainfall"> rainfall</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/93847/rainfall-estimation-using-himawari-8-meteorological-satellite-imagery-in-central-taiwan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">945</span> Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lucas%20Barbosa%20Da%20Silva">Lucas Barbosa Da Silva</a>, <a href="https://publications.waset.org/abstracts/search?q=Jun%20Okamoto%20Jr."> Jun Okamoto Jr.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Precision agriculture has long benefited from aerial imagery of crop fields. This important tool has allowed the identification of patterns in crop fields, generating useful information for production management. Reflectance intensity data in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area. Relations between the different light bands may generate even more detailed information.
Knowledge of the nutrient content in the soil, or in the crop during its growth, is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and, even when they do, it is not presented in a comprehensive or objective way. The challenges of implementing this technology thus range from sampling the imagery with aerial platforms, building a mosaic of the images to cover the entire crop field, extracting the reflectance information, and analyzing its relationship with the parameters of interest, to displaying the results in a manner that lets the farmer take the necessary decisions more objectively. In this work, an analysis of soil nutrient content based on image processing of satellite imagery is proposed, and its outputs are compared with a commercial laboratory's chemical analysis. Sources of satellite imagery are also compared to assess the feasibility, and the impacts, of using Google Earth data in this application versus imagery from satellites such as Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery, and finally, the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, soil parameters are estimated, namely the content of potassium, phosphorus, boron, and manganese, among others. The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of image resolution and sample time frame are discussed, allowing easy assessment of the results.
The final results show that easier and cheaper remote sensing and analysis methods are feasible alternatives that allow the small farmer, who has little access to technological and financial resources, to make more accurate decisions about soil nutrient management. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a>, <a href="https://publications.waset.org/abstracts/search?q=mosaic" title=" mosaic"> mosaic</a>, <a href="https://publications.waset.org/abstracts/search?q=soil" title=" soil"> soil</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrient%20content" title=" nutrient content"> nutrient content</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=aerial%20imagery" title=" aerial imagery"> aerial imagery</a> </p> <a href="https://publications.waset.org/abstracts/86336/estimation-of-soil-nutrient-content-using-google-earth-and-pleiades-satellite-imagery-for-small-farms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">944</span> Plot Scale Estimation of Crop Biophysical Parameters from High Resolution Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Shreedevi%20Moharana">Shreedevi Moharana</a>, <a href="https://publications.waset.org/abstracts/search?q=Subashisa%20Dutta"> Subashisa Dutta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study focuses on the estimation of crop biophysical parameters such as chlorophyll, nitrogen, and water stress at plot scale in crop fields. To achieve this, we used high-resolution LISS IV satellite imagery. A new methodology is proposed in this research work: the spectral shape function of the paddy crop is employed to identify the significant wavelengths sensitive to paddy crop parameters. From the shape functions, regression index models were established relating the critical wavelength to the minimum and maximum wavelengths of the multi-spectral high-resolution LISS IV data, and these functional relationships were utilized to develop the index models. From these index models, crop biophysical parameters were estimated and mapped from LISS IV imagery at plot scale at the crop field level. The results showed that the nitrogen content of the paddy crop varied from 2-8%, chlorophyll from 1.5-9%, and water content from 40-90%. It was observed that the variability in the rice agriculture system in India was purely a function of field topography.
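The index-model workflow (choose sensitive wavelengths, form a normalized-difference index, and regress the measured crop parameter on it) can be illustrated as follows. The band ranges and the calibration data are synthetic placeholders, not values from the study:

```python
import numpy as np

def norm_diff_index(band_a, band_b):
    # Normalized-difference index between two sensitive bands,
    # e.g. a near-infrared and a red band of LISS IV.
    return (band_a - band_b) / (band_a + band_b + 1e-12)

def fit_index_model(index_vals, measured_param):
    # Linear regression index model: parameter ~= slope * index + intercept.
    slope, intercept = np.polyfit(index_vals, measured_param, 1)
    return slope, intercept

# Hypothetical calibration plots: reflectances and measured nitrogen (%),
# scaled into the 2-9% range reported in the abstract.
rng = np.random.default_rng(42)
nir = rng.uniform(0.3, 0.6, 30)
red = rng.uniform(0.05, 0.15, 30)
idx = norm_diff_index(nir, red)
nitrogen = 2.0 + 7.0 * (idx - idx.min()) / (idx.max() - idx.min())

slope, intercept = fit_index_model(idx, nitrogen)
predicted = slope * idx + intercept  # map the index back to nitrogen content
```

Once the slope and intercept are calibrated against field measurements, applying the model per pixel turns an index image into a parameter map.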
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=crop%20parameters" title="crop parameters">crop parameters</a>, <a href="https://publications.waset.org/abstracts/search?q=index%20model" title=" index model"> index model</a>, <a href="https://publications.waset.org/abstracts/search?q=LISS%20IV%20imagery" title=" LISS IV imagery"> LISS IV imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=plot%20scale" title=" plot scale"> plot scale</a>, <a href="https://publications.waset.org/abstracts/search?q=shape%20function" title=" shape function"> shape function</a> </p> <a href="https://publications.waset.org/abstracts/89499/plot-scale-estimation-of-crop-biophysical-parameters-from-high-resolution-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">943</span> A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yongquan%20Zhao">Yongquan Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Bo%20Huang"> Bo Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spatial, Temporal, and Spectral Resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single satellite sensor can provide Earth observations with high STSR simultaneously because of hardware technology limitations.
Meanwhile, the demand for high STSR has been growing with the development of remote sensing applications. Although image fusion technology provides a feasible means to overcome the limitations of current Earth observation data, existing fusion technologies cannot enhance spatial, temporal, and spectral resolution simultaneously or provide a high enough level of improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR, which blends the high spatial resolution of the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution of the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution of the hyper-spectral image of Hyperion to produce high-STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in a suburban area of Beijing, China is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM can produce fused images with good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support studies that require high-STSR satellite imagery.
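The full HSTSFM is beyond a short snippet, but the heart of spatial-temporal fusion (predicting a fine-resolution image at a new date from one fine image and two coarse ones) is often introduced with a simple temporal-difference baseline, which STARFM-family methods then refine with spectral and spatial weighting. The sketch below is that generic baseline, not the paper's model:

```python
import numpy as np

def naive_spatiotemporal_fusion(fine_t1, coarse_t1, coarse_t2):
    # Predict the fine-resolution image at t2 by adding the temporal change
    # observed at coarse resolution to the fine image at t1. The coarse
    # images are assumed to be already resampled onto the fine grid.
    return fine_t1 + (coarse_t2 - coarse_t1)

# Toy check: if the scene brightens uniformly by 0.2 between t1 and t2,
# the prediction reproduces the true fine image at t2 exactly.
fine_t1 = np.linspace(0.1, 0.5, 16).reshape(4, 4)
coarse_t1 = np.full((4, 4), fine_t1.mean())
coarse_t2 = coarse_t1 + 0.2
pred = naive_spatiotemporal_fusion(fine_t1, coarse_t1, coarse_t2)
```

The baseline is exact only when the change is spatially uniform; weighting neighboring, spectrally similar pixels is what lets full fusion models handle land cover change.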
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hybrid%20spatial-temporal-spectral%20fusion" title="hybrid spatial-temporal-spectral fusion">hybrid spatial-temporal-spectral fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery" title=" high resolution synthetic imagery"> high resolution synthetic imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20square%20regression" title=" least square regression"> least square regression</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20transformation" title=" spectral transformation"> spectral transformation</a> </p> <a href="https://publications.waset.org/abstracts/74667/a-hybrid-image-fusion-model-for-generating-high-spatial-temporal-spectral-resolution-data-using-oli-modis-hyperion-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74667.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">235</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">942</span> Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Ayesha%20Fathima">Shaik Ayesha Fathima</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaik%20Noor%20Jahan"> Shaik Noor Jahan</a>, <a href="https://publications.waset.org/abstracts/search?q=Duvvada%20Rajeswara%20Rao"> Duvvada Rajeswara Rao</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Earth's environment and its evolution can be observed through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and climate change monitoring. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing it, feeding the processed data into the proposed algorithm, and analyzing the result. Algorithms used in satellite imagery classification include U-Net, Random Forest, DeepLabv3, CNN, ANN, and ResNet. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification, with the DeepGlobe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments.
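Atrous (dilated) convolution, the building block of DeepLabv3 mentioned above, widens the receptive field by spacing the kernel taps `rate` samples apart without adding parameters. A minimal 1-D NumPy illustration of the idea (deep learning frameworks apply the same principle in 2-D via a dilation argument):

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    # 'Valid' dilated convolution: taps are spaced `rate` apart, so a
    # k-tap kernel covers a receptive field of (k - 1) * rate + 1 samples.
    k = len(kernel)
    span = (k - 1) * rate + 1
    out = np.empty(len(signal) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * signal[i + j * rate] for j in range(k))
    return out

x = np.arange(10, dtype=float)        # [0, 1, ..., 9]
k = np.array([1.0, 1.0, 1.0])

dense = atrous_conv1d(x, k, rate=1)   # ordinary convolution, receptive field 3
dilated = atrous_conv1d(x, k, rate=2) # same 3 weights, receptive field 5
```

With rate 1 each output sums three adjacent samples; with rate 2 the same three weights span five samples, which is how DeepLabv3's parallel atrous rates capture context at several scales at once.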
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=area%20calculation" title="area calculation">area calculation</a>, <a href="https://publications.waset.org/abstracts/search?q=atrous%20convolution" title=" atrous convolution"> atrous convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20globe%20land%20cover%20classification" title=" deep globe land cover classification"> deep globe land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deepLabv3" title=" deepLabv3"> deepLabv3</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20cover%20classification" title=" land cover classification"> land cover classification</a>, <a href="https://publications.waset.org/abstracts/search?q=resnet%2050" title=" resnet 50"> resnet 50</a> </p> <a href="https://publications.waset.org/abstracts/147677/classification-of-land-cover-usage-from-satellite-images-using-deep-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">941</span> A Novel Spectral Index for Automatic Shadow Detection in Urban Mapping Based on WorldView-2 Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaveh%20Shahi">Kaveh Shahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Helmi%20Z.%20M.%20Shafri"> Helmi Z. M. 
Shafri</a>, <a href="https://publications.waset.org/abstracts/search?q=Ebrahim%20Taherzadeh"> Ebrahim Taherzadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In remote sensing, shadows cause problems in many applications, such as change detection and classification. Shadows are cast by elevated objects and can directly degrade the accuracy of extracted information, so detecting them is especially important in high-spatial-resolution urban imagery, where they pose a significant problem. This paper focuses on automatic shadow detection based on a new spectral index for multispectral imagery, the Shadow Detection Index (SDI). The new index was tested on different areas of WorldView-2 images, and the results demonstrated that it has strong potential to extract shadows effectively and automatically. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spectral%20index" title="spectral index">spectral index</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20detection" title=" shadow detection"> shadow detection</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing%20images" title=" remote sensing images"> remote sensing images</a>, <a href="https://publications.waset.org/abstracts/search?q=World-View%202" title=" World-View 2"> World-View 2</a> </p> <a href="https://publications.waset.org/abstracts/13500/a-novel-spectral-index-for-automatic-shadow-detection-in-urban-mapping-based-on-worldview-2-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13500.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">940</span> Modeling and Monitoring of Agricultural Influences on Harmful Algal Blooms in Western Lake Erie</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaofang%20Wei">Xiaofang Wei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Harmful Algal Blooms (HABs) are a recurrent, disturbing occurrence in Lake Erie that has caused significant negative impacts on water quality and aquatic ecosystems around the Great Lakes in the United States. Targeting recent HAB events in western Lake Erie, this paper utilizes satellite imagery and hydrological modeling to monitor HAB cyanobacteria blooms and analyze the impacts of agricultural activities in the Maumee watershed, the largest and most agriculture-dominated watershed of Lake Erie. A SWAT (Soil & Water Assessment Tool) model of the Maumee watershed was established with DEM, land use, crop data layer, soil, and weather data, and calibrated against Maumee River gauge-station data for streamflow and nutrients. Fast Line-of-sight Atmospheric Analysis of Hypercubes (FLAASH) was applied to remove atmospheric attenuation, and cyanobacteria indices were calculated from Landsat OLI imagery to study the intensity of HAB events in 2015, 2017, and 2019. Agricultural practices and nutrient management within the Maumee watershed were studied and correlated with the HAB cyanobacteria indices to examine the relationship between HAB intensity and nutrient loadings. This study demonstrates that hydrological models and satellite imagery are effective tools for HAB monitoring and modeling in rivers and lakes. 
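The band arithmetic behind such bloom indices can be sketched as a per-pixel normalized difference of two reflectance bands. The NIR/red band choice and the toy reflectance values below are assumptions for illustration, not the exact index the paper uses:

```python
# Illustrative sketch of the kind of band-arithmetic index used to estimate
# bloom intensity from Landsat OLI surface reflectance. Band choice is an
# assumption for illustration only.
def normalized_difference(band_a, band_b):
    """Per-pixel (a - b) / (a + b), guarding against division by zero."""
    out = []
    for a, b in zip(band_a, band_b):
        out.append((a - b) / (a + b) if (a + b) != 0 else 0.0)
    return out

# Toy reflectance for two pixels: a dense, scum-like bloom vs clear water.
nir = [0.18, 0.03]   # OLI band 5 (NIR)
red = [0.05, 0.04]   # OLI band 4 (red)
print(normalized_difference(nir, red))   # higher value suggests denser bloom
```

In practice the same arithmetic is applied to whole raster arrays after atmospheric correction (e.g. FLAASH, as in the abstract).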
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=harmful%20algal%20bloom" title="harmful algal bloom">harmful algal bloom</a>, <a href="https://publications.waset.org/abstracts/search?q=landsat%20OLI%20imagery" title=" landsat OLI imagery"> landsat OLI imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=SWAT" title=" SWAT"> SWAT</a>, <a href="https://publications.waset.org/abstracts/search?q=HAB%20cyanobacteria" title=" HAB cyanobacteria"> HAB cyanobacteria</a> </p> <a href="https://publications.waset.org/abstracts/140628/modeling-and-monitoring-of-agricultural-influences-on-harmful-algal-blooms-in-western-lake-erie" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">939</span> Reinforcement Learning for Classification of Low-Resolution Satellite Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadija%20Bouzaachane">Khadija Bouzaachane</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Mahdi%20El%20Guarmah"> El Mahdi El Guarmah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The classification of low-resolution satellite images has been a worthwhile and fertile field that attracts plenty of researchers due to its importance in monitoring geographical areas. It could be used for several purposes such as disaster management, military surveillance, agricultural monitoring. 
The main objective of this work is to classify low-resolution satellite images efficiently and accurately using novel deep learning and reinforcement learning techniques. The images include roads, residential areas, industrial areas, rivers, sea lakes, and vegetation. To achieve this goal, we carried out experiments on Sentinel-2 images, considering both classification accuracy and efficiency. Our proposed model achieved 91% accuracy on the testing dataset alongside good land cover classification. In terms of per-class precision, we obtained 93% for river, 92% for residential, 97% for industrial, 96% for forest, 87% for annual crop, 84% for herbaceous vegetation, 85% for pasture, 78% for highway, and 100% for sea lake. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/141097/reinforcement-learning-for-classification-of-low-resolution-satellite-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">213</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">938</span> Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial 
Intelligence Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ola%20Hall">Ola Hall</a>, <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20Wahab"> Ibrahim Wahab</a>, <a href="https://publications.waset.org/abstracts/search?q=Thorsteinn%20Rognvaldsson"> Thorsteinn Rognvaldsson</a>, <a href="https://publications.waset.org/abstracts/search?q=Mattias%20Ohlsson"> Mattias Ohlsson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The subfield of poverty and welfare estimation that applies machine learning tools and methods on satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goal, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor are their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. 
While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower when compared to those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data while the human readers estimated welfare levels from the higher 0.6m spatial resolution data from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence through a collaborative rather than a comparative framework. 
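The rank-correlation metric used above to score both the human readers (0.31-0.32) and the model (0.69-0.79) against the survey wealth quintiles can be sketched as Spearman's rho. This minimal version assumes no tied values for brevity; library implementations (e.g. SciPy's `spearmanr`) handle ties with average ranks.

```python
# Spearman rank correlation: Pearson correlation of ranks, computed here via
# the classic 1 - 6*sum(d^2)/(n*(n^2-1)) shortcut (valid when there are no ties).
def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, idx in enumerate(order):
        r[idx] = pos + 1                # 1-based rank of each observation
    return r

def spearman_rho(x, y):
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

predicted = [3, 1, 4, 2, 5]     # e.g. a reader's welfare ratings (toy values)
truth     = [2, 1, 5, 3, 4]     # survey wealth quintiles (toy values)
print(spearman_rho(predicted, truth))
```

A rho near 1 means the rater orders the clusters almost exactly as the survey does, which is why it suits ordinal quintile data better than plain accuracy.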
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=poverty%20prediction" title="poverty prediction">poverty prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20readers" title=" human readers"> human readers</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Tanzania" title=" Tanzania"> Tanzania</a> </p> <a href="https://publications.waset.org/abstracts/163428/estimating-poverty-levels-from-satellite-imagery-a-comparison-of-human-readers-and-an-artificial-intelligence-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163428.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">937</span> A Review on the Future Canadian RADARSAT Constellation Mission and Its Capabilities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Dabboor">Mohammed Dabboor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spaceborne Synthetic Aperture Radar (SAR) systems are active remote sensing systems independent of weather and sun illumination, two factors which usually inhibit the use of optical satellite imagery. A SAR system could acquire single, dual, compact or fully polarized SAR imagery. Each SAR imagery type has its advantages and disadvantages. 
The sensitivity of SAR images is a function of: 1) the band, polarization, and incidence angle of the transmitted electromagnetic signal, and 2) the geometric and dielectric properties of the radar target. RADARSAT-1 (launched on November 4, 1995), RADARSAT-2 (launched on December 14, 2007) and the RADARSAT Constellation Mission (to be launched in July 2018) are three past, current, and future Canadian SAR space missions. Canada is developing the RADARSAT Constellation Mission (RCM) using small satellites to further maximize the capability to carry out round-the-clock surveillance from space. The Canadian Space Agency, in collaboration with other Government of Canada departments, is leading the design, development, and operation of the RADARSAT Constellation Mission to help address key priorities. The purpose of our presentation is to give an overview of the future Canadian RCM SAR mission and its satellites. The RCM SAR imaging modes along with the expected SAR products will also be described, with an emphasis on the mission's unique capabilities and characteristics, such as the new compact polarimetry SAR configuration. In this presentation, we will summarize the RCM advancement from previous RADARSAT satellite missions. Furthermore, the potential of the RCM mission for different Earth observation applications will be outlined. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compact%20polarimetry" title="compact polarimetry">compact polarimetry</a>, <a href="https://publications.waset.org/abstracts/search?q=RADARSAT" title=" RADARSAT"> RADARSAT</a>, <a href="https://publications.waset.org/abstracts/search?q=SAR%20mission" title=" SAR mission"> SAR mission</a>, <a href="https://publications.waset.org/abstracts/search?q=SAR%20applications" title=" SAR applications"> SAR applications</a> </p> <a href="https://publications.waset.org/abstracts/74263/a-review-on-the-future-canadian-radarsat-constellation-mission-and-its-capabilities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">185</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">936</span> Monitoring of Cannabis Cultivation with High-Resolution Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Levent%20Basayigit">Levent Basayigit</a>, <a href="https://publications.waset.org/abstracts/search?q=Sinan%20Demir"> Sinan Demir</a>, <a href="https://publications.waset.org/abstracts/search?q=Burhan%20Kara"> Burhan Kara</a>, <a href="https://publications.waset.org/abstracts/search?q=Yusuf%20Ucar">Yusuf Ucar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cannabis is mostly used for drug production. In some countries, an excessive amount of illegal cannabis is cultivated and sold. Most of the illegal cannabis cultivation occurs on the lands far from settlements. In farmlands, it is cultivated with other crops. 
With this method, cannabis is surrounded by tall plants such as corn and sunflower, or grown alongside tall crops in mixed culture. The common method for determining illegal cultivation areas is to investigate information obtained from people, which is not sufficient for detecting cultivation in remote areas; more effective methods are therefore needed. Remote sensing is one of the most important technologies for monitoring plant growth on land. The aim of this study was to develop an applicable method for monitoring cannabis cultivation areas using satellite imagery. For this purpose, cannabis was grown in plots either alone or surrounded by corn and sunflower. The morphological characteristics of cannabis were recorded twice per month during the vegetation period, and a spectral signature library was created with a spectroradiometer. The parcels were monitored with high-resolution satellite imagery, and the cultivation areas of cannabis were classified by processing the imagery. To separate the cannabis plots from the other plants, the multiresolution segmentation algorithm was found to be the most successful for classification, and the WorldView Improved Vegetative Index (WV-VI) was the most accurate for monitoring plant density. As a result, an object-based classification method and vegetation indices were sufficient for monitoring cannabis cultivation in multi-temporal WorldView images. 
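One simple way to use a field-collected spectral signature library like the one above is spectral-angle matching: a pixel is assigned to the library entry whose spectrum points in the most similar direction. SAM is a standard technique, but treating it as this paper's exact classifier (and the four-band toy signatures below) is an assumption for illustration.

```python
import math

# Spectral Angle Mapper (SAM) sketch: classify a pixel spectrum by the
# smallest angle to reference signatures from a spectroradiometer library.
def spectral_angle(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    c = max(-1.0, min(1.0, dot / (na * nb)))   # clamp for float safety
    return math.acos(c)                        # radians; smaller = more similar

library = {                                    # toy 4-band reflectance signatures
    "cannabis":  [0.04, 0.08, 0.05, 0.52],
    "corn":      [0.05, 0.09, 0.07, 0.43],
    "sunflower": [0.06, 0.10, 0.09, 0.38],
}
pixel = [0.04, 0.08, 0.06, 0.50]
best = min(library, key=lambda k: spectral_angle(pixel, library[k]))
print(best)
```

Because the angle ignores overall brightness, SAM is relatively robust to illumination differences between field spectra and satellite pixels.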
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cannabis" title="Cannabis">Cannabis</a>, <a href="https://publications.waset.org/abstracts/search?q=drug" title=" drug"> drug</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based%20classification" title=" object-based classification"> object-based classification</a> </p> <a href="https://publications.waset.org/abstracts/74202/monitoring-of-cannabis-cultivation-with-high-resolution-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74202.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">935</span> The Effect of PETTLEP Imagery on Equestrian Jumping Tasks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nurwina%20Anuar">Nurwina Anuar</a>, <a href="https://publications.waset.org/abstracts/search?q=Aswad%20Anuar"> Aswad Anuar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Imagery is a popular mental technique used by athletes and coaches to improve learning and performance. It has been widely investigated and beneficial in the sports context. However, the imagery application in equestrian sport has been understudied. Thus, the effectiveness of imagery should encompass the application in the equestrian sport to ensure its application covert all sports. 
Unlike most sports (e.g., football, badminton, tennis, skiing), which are both mental and physical but depend solely on human decisions and responses, equestrian sport involves human-horse collaboration to succeed in its tasks. This study aims to investigate the effect of PETTLEP imagery on equestrian jumping tasks, motivation, and imagery ability. It was hypothesized that the PETTLEP imagery intervention would significantly improve performance on the equestrian jumping tasks, and that riders' imagery ability and motivation would increase across phases. The participants were skilled riders with little to no imagery experience. A single-subject ABA design was employed, and the study took place over a five-week period at the Universiti Teknologi Malaysia Equestrian Park. Imagery ability was measured using the Sport Imagery Ability Questionnaire (SIAQ), and motivation was measured with the Motivational Imagery Ability Measure for Sport (MIAMS). The effectiveness of the PETTLEP imagery intervention on show-jumping tasks was evaluated by a professional equine rider on an observational scale. Results demonstrated improvement on all equestrian jumping tasks for most participants from baseline to intervention, as well as improvement in imagery ability and participants' motivation after the PETTLEP imagery intervention. Implications of the present study include underlining the impact of PETTLEP imagery on equestrian jumping tasks. The results extend previous research on the effectiveness of PETTLEP imagery to a sporting context that involves interaction and collaboration between human and horse. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PETTLEP%20imagery" title="PETTLEP imagery">PETTLEP imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=imagery%20ability" title=" imagery ability"> imagery ability</a>, <a href="https://publications.waset.org/abstracts/search?q=equestrian" title=" equestrian"> equestrian</a>, <a href="https://publications.waset.org/abstracts/search?q=equestrian%20jumping%20tasks" title=" equestrian jumping tasks"> equestrian jumping tasks</a> </p> <a href="https://publications.waset.org/abstracts/82648/the-effect-of-pettlep-imagery-on-equestrian-jumping-tasks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">202</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">934</span> Satellite Solutions for Koshi Floods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sujan%20Tyata">Sujan Tyata</a>, <a href="https://publications.waset.org/abstracts/search?q=Alison%20Shilpakar"> Alison Shilpakar</a>, <a href="https://publications.waset.org/abstracts/search?q=Nayan%20Bakhadyo"> Nayan Bakhadyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Kushal%20K.%20C."> Kushal K. C.</a>, <a href="https://publications.waset.org/abstracts/search?q=Abhas%20Maskey"> Abhas Maskey</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Koshi River, acknowledged as the "Sorrow of Bihar," poses intricate challenges characterized by recurrent flooding. Within the Koshi Basin, floods have historically inflicted damage on infrastructure, agriculture, and settlements. 
The Koshi River exhibits a highly braided pattern across a 48 km stretch south of Chatara. The devastating flood from the Koshi River, which began in Nepal's Sunsari District in 2008, led to significant casualties and the destruction of agricultural areas. The catastrophe was exacerbated by a levee breach, underscoring the vulnerability of the region's flood defenses. Satellite imagery analysis provides a comprehensive understanding of environmental changes in the area and facilitates the identification of high-risk zones and their contributing factors; employing remote sensing, the analysis specifically pinpoints locations vulnerable to levee breaches. Topographical features of the area, along with longitudinal and cross-sectional profiles of the river and levee obtained from a digital elevation model, are used in the hydrological analysis for flood assessment. To mitigate the impact of floods, the strategy involves establishing reservoirs upstream; leveraging satellite data, optimal locations for water storage are identified. This approach presents a dual opportunity not only to alleviate flood risks but also to catalyze pumped-storage hydropower initiatives, addressing environmental challenges while championing sustainable energy solutions. 
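The levee-risk analysis described above boils down to comparing water extents between dates: pixels that are dry in one scene and wet in the next indicate the braided channel migrating toward an embankment. A minimal sketch, assuming NDWI-thresholded masks and toy transect values:

```python
# Two-date water-mask differencing along one toy transect. The NDWI values
# and the 0.0 threshold are illustrative assumptions, not the paper's data.
def water_mask(ndwi_row, threshold=0.0):
    """True where the water index exceeds the threshold (open water)."""
    return [v > threshold for v in ndwi_row]

before = water_mask([-0.3, -0.1, 0.2, 0.4])   # e.g. pre-flood scene
after  = water_mask([-0.3, 0.1, 0.3, 0.4])    # same transect, later scene
newly_wet = [b and not a for a, b in zip(before, after)]
print(newly_wet)   # True where the river has advanced between dates
```

Aggregated over many transects and dates, such masks show where the channel is pressing against the levee and where breaches are most likely.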
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=flood%20mitigation" title="flood mitigation">flood mitigation</a>, <a href="https://publications.waset.org/abstracts/search?q=levee" title=" levee"> levee</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery%20analysis" title=" satellite imagery analysis"> satellite imagery analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=sustainable%20energy%20solutions" title=" sustainable energy solutions"> sustainable energy solutions</a> </p> <a href="https://publications.waset.org/abstracts/177859/satellite-solutions-for-koshi-floods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177859.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">933</span> A Study of ZY3 Satellite Digital Elevation Model Verification and Refinement with Shuttle Radar Topography Mission</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo%20Wang">Bo Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the first high-resolution civil optical satellite, ZY-3 satellite is able to obtain high-resolution multi-view images with three linear array sensors. The images can be used to generate Digital Elevation Models (DEM) through dense matching of stereo images. 
However, due to clouds, forest, water, and buildings in the images, the dense-matching results contain problems such as outliers and areas that fail to match (matching holes). This paper introduces an algorithm to verify the accuracy of the DEM generated from ZY-3 imagery against the Shuttle Radar Topography Mission (SRTM). Since the accuracy of SRTM (internal accuracy: 5 m; external accuracy: 15 m) is relatively uniform worldwide, it can be used to improve the accuracy of the ZY-3 DEM. Based on an analysis of large volumes of DEM and SRTM data, the processing is divided into two steps. First, the ZY-3 DEM and SRTM are registered using conjugate line and area features matched between the two datasets. Then the ZY-3 DEM is refined by eliminating the matching outliers and filling the matching holes: the outliers are eliminated based on statistics from Local Vector Binning (LVB), and the holes are filled with elevations interpolated from SRTM. Accuracy statistics for the refined ZY-3 DEM are also reported. 
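The two refinement steps above (reject outliers, fill holes from SRTM) can be sketched cell-by-cell. Note the fixed 15 m tolerance below is a simplification standing in for the paper's Local Vector Binning statistics, and the sample elevations are toy values:

```python
# Sketch of DEM refinement against SRTM: replace matching blunders that
# disagree with SRTM by more than a tolerance, and fill holes (None cells)
# directly from SRTM. Real pipelines interpolate and use per-bin statistics.
NODATA = None

def refine_dem(zy3, srtm, tol=15.0):      # tol ~ SRTM external accuracy (m)
    out = []
    for z, s in zip(zy3, srtm):
        if z is NODATA or abs(z - s) > tol:
            out.append(s)                  # fill hole / replace outlier
        else:
            out.append(z)                  # keep the higher-resolution value
    return out

zy3  = [102.0, None, 250.0, 98.5]          # 250 m is a dense-matching blunder
srtm = [100.0, 105.0, 101.0, 97.0]
print(refine_dem(zy3, srtm))
```

This assumes the two grids are already co-registered, which is exactly why the paper performs feature-based registration first.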
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ZY-3%20satellite%20imagery" title="ZY-3 satellite imagery">ZY-3 satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=DEM" title=" DEM"> DEM</a>, <a href="https://publications.waset.org/abstracts/search?q=SRTM" title=" SRTM"> SRTM</a>, <a href="https://publications.waset.org/abstracts/search?q=refinement" title=" refinement"> refinement</a> </p> <a href="https://publications.waset.org/abstracts/76112/a-study-of-zy3-satellite-digital-elevation-model-verification-and-refinement-with-shuttle-radar-topography-mission" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76112.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">932</span> Analysis of Spatial and Temporal Data Using Remote Sensing Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kapil%20Pandey">Kapil Pandey</a>, <a href="https://publications.waset.org/abstracts/search?q=Vishnu%20Goyal"> Vishnu Goyal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spatial and temporal data analysis is well known in the field of satellite image processing. When spatial data are combined with time-series analysis, significant results can be obtained in change detection studies. In this paper, GIS and remote sensing techniques have been used to detect changes using time-series satellite imagery of Uttarakhand state during the years 1990-2010. Natural vegetation, urban area, forest cover, etc. were chosen as the main land use classes to study. 
Land use/land cover classes for several years were prepared using satellite images. The maximum likelihood supervised classification technique was adopted in this work; finally, a land use change index was generated, and graphical models were used to present the changes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GIS" title="GIS">GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=landuse%2Flandcover" title=" landuse/landcover"> landuse/landcover</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20and%20temporal%20data" title=" spatial and temporal data"> spatial and temporal data</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a> </p> <a href="https://publications.waset.org/abstracts/40918/analysis-of-spatial-and-temporal-data-using-remote-sensing-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40918.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">931</span> Research on the Strategy of Orbital Avoidance for Optical Remote Sensing Satellite</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zheng%20DianXun">Zheng DianXun</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Bo"> Cheng Bo</a>, <a href="https://publications.waset.org/abstracts/search?q=Lin%20Hetong"> Lin Hetong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper focuses on the orbit avoidance strategies of an optical remote sensing satellite. 
The optical remote sensing satellite, moving along a Sun-synchronous orbit, is equipped with laser warning equipment to warn the CCD camera of laser attacks. There are three ways to protect the CCD camera: closing the camera cover, satellite attitude maneuver, and satellite orbit avoidance. In order to enhance the safety of an optical remote sensing satellite in orbit, this paper explores the strategy of satellite orbit avoidance. The avoidance strategy is expressed as the evasion of pre-determined target points in the orbital coordinates of a virtual satellite. The so-called virtual satellite is a passive vehicle that coincides with the satellite at the initial stage of avoidance. The target points share the same orbital period and the same semi-major axis with the virtual satellite, which ensures the properties of the satellite’s Sun-synchronous orbit remain unchanged. Moreover, to further strengthen the avoidance capability of the satellite, it can perform multi-target-point avoidance maneuvers. Once the satellite’s orbital tasks are fulfilled, the orbit can be restored back to that of the virtual satellite through orbit maneuvers. The avoidance maneuvers adopt pulse guidance, and fuel consumption is also optimized. The avoidance strategy discussed in this article is applicable to an optical remote sensing satellite that encounters a hostile attack from a space-based anti-satellite laser.
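The constraint that the target points share the virtual satellite's semi-major axis implies they share its orbital period, by Kepler's third law. A minimal sketch of that relation (the altitude value is illustrative, not taken from the paper):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS = 6_378_137.0  # equatorial radius, m

def orbital_period(semi_major_axis_m: float) -> float:
    """Orbital period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    return 2.0 * math.pi * math.sqrt(semi_major_axis_m ** 3 / MU_EARTH)

# A typical Sun-synchronous LEO altitude of ~700 km (illustrative only):
a = EARTH_RADIUS + 700e3
T = orbital_period(a)
print(f"Period: {T / 60:.1f} min")  # roughly 98-99 minutes

# Two orbits with the same semi-major axis have identical periods,
# which is why the avoidance target points preserve the repeat-cycle
# properties of the Sun-synchronous orbit.
```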
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20remote%20sensing%20satellite" title="optical remote sensing satellite">optical remote sensing satellite</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20avoidance" title=" satellite avoidance"> satellite avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20satellite" title=" virtual satellite"> virtual satellite</a>, <a href="https://publications.waset.org/abstracts/search?q=avoid%20target-point" title=" avoid target-point"> avoid target-point</a>, <a href="https://publications.waset.org/abstracts/search?q=avoid%20maneuver" title=" avoid maneuver"> avoid maneuver</a> </p> <a href="https://publications.waset.org/abstracts/34217/research-on-the-strategy-of-orbital-avoidance-for-optical-remote-sensing-satellite" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34217.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">404</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">930</span> Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Varin">M. Varin</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20M.%20Dubois"> A. M. Dubois</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Gadbois-Langevin"> R. Gadbois-Langevin</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Chalghaf"> B. 
Chalghaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs gives the capacity to generate high-precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous acquisition cost. The objective of the study was to assess the quality of stereo-derived canopy height models (CHM) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHM. Acquisition of multispectral images from an Unmanned Aerial Vehicle (UAV) was also carried out on a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique that offered the best performance when compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to satellite, LiDAR, and UAV CHM, RMSEs were 2.8, 2.0, and 2.0 m, respectively.
Advanced analysis was done for all of these cases, and it was noted that RMSE is reduced when canopy cover is higher, when shadow and slopes are lower, and when clouds are distant from the analyzed site. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=very%20high%20spatial%20resolution" title="very high spatial resolution">very high spatial resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=WorlView-3" title=" WorldView-3"> WorldView-3</a>, <a href="https://publications.waset.org/abstracts/search?q=canopy%20height%20models" title=" canopy height models"> canopy height models</a>, <a href="https://publications.waset.org/abstracts/search?q=CHM" title=" CHM"> CHM</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV" title=" UAV"> UAV</a> </p> <a href="https://publications.waset.org/abstracts/121479/using-3d-satellite-imagery-to-generate-a-high-precision-canopy-height-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/121479.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">929</span> Count of Trees in East Africa with Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Nubwimana%20Rachel">Nubwimana Rachel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mugabowindekwe%20Maurice"> Mugabowindekwe Maurice</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning, however, makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models, such as YOLOv7, SSD, and U-Net, along with Generative Adversarial Networks to generate synthetic training samples, and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, U-Net demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the U-Net model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
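A common post-processing step for turning a segmentation model's binary mask into a tree count is connected-component labeling. A minimal pure-Python sketch of that idea (the mask below is a toy example, not data from the study):

```python
from collections import deque

def count_trees(mask):
    """Count connected components of 1s (4-connectivity) in a binary
    segmentation mask, as a stand-in for counting individual tree crowns
    predicted by a segmentation model such as U-Net."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1  # new crown found; flood-fill its pixels
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Toy 3x4 mask with three separate "crowns":
mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
print(count_trees(mask))  # 3
```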
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20counting" title=" tree counting"> tree counting</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/177935/count-of-trees-in-east-africa-with-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177935.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">928</span> The Strategy of Orbit Avoidance for Optical Remote Sensing Satellite</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dianxun%20Zheng">Dianxun Zheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Wuxing%20Jing"> Wuxing Jing</a>, <a href="https://publications.waset.org/abstracts/search?q=Lin%20Hetong"> Lin Hetong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An optical remote sensing satellite, typically running on a Sun-synchronous orbit, is equipped with laser warning equipment to warn the CCD camera of laser attacks.
There are three ways to protect the CCD camera: closing the camera cover, satellite attitude maneuver, and satellite orbit avoidance. In order to enhance the safety of an optical remote sensing satellite in orbit, this paper explores the strategy of satellite orbit avoidance. The avoidance strategy is expressed as the evasion of pre-determined target points in the orbital coordinates of a virtual satellite. The so-called virtual satellite is a passive vehicle that coincides with the satellite at the initial stage of avoidance. The target points share the same orbital period and the same semi-major axis with the virtual satellite, which ensures the properties of the Sun-synchronous orbit remain unchanged. Moreover, to further strengthen the avoidance capability of the satellite, it can perform multi-target avoidance maneuvers. Once the orbital tasks of the satellite are fulfilled, the orbit can be restored back to that of the virtual satellite through orbit maneuvers. The avoidance maneuvers adopt pulse guidance, and fuel consumption is also optimized. The avoidance strategy discussed in this article is applicable to an optical remote sensing satellite that encounters hostile laser attacks.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20remote%20sensing%20satellite" title="optical remote sensing satellite">optical remote sensing satellite</a>, <a href="https://publications.waset.org/abstracts/search?q=always%20running%20on%20the%20sun-synchronous" title=" always running on the sun-synchronous"> always running on the sun-synchronous</a> </p> <a href="https://publications.waset.org/abstracts/31188/the-strategy-of-orbit-avoidance-for-optical-remote-sensing-satellite" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31188.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">400</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">927</span> Geospatial Techniques and VHR Imagery Use for Identification and Classification of Slums in Gujrat City, Pakistan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Ameer%20Nawaz%20Akram">Muhammad Ameer Nawaz Akram</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 21st century has revealed that more individuals around the world now live in urban settlements than in rural zones. The evolution of numerous cities in emerging and newly developed countries is accompanied by the rise of slums. The precise definition of a slum varies from country to country, but the universal consensus is that slums are dilapidated settlements facing severe poverty that lack access to sanitation, water, electricity, decent living conditions, and land tenure. Slum settlements vary in unique patterns within and among countries and cities.
The core objective of this study is the spatial identification and classification of slums in Gujrat city, Pakistan, from very high-resolution GeoEye-1 (0.41 m) satellite imagery. Slums were first identified using GPS for sample site identification and ground-truthing; through this process, 425 slums were identified. Then, Object-Oriented Analysis (OOA) was applied to classify slums on the digital image. Spatial analysis software packages, e.g., ArcGIS 10.3, Erdas Imagine 9.3, and Envi 5.1, were used for processing data and performing the analysis. Results show that OOA provides up to 90% accuracy for the identification of slums. Jalal Cheema and Allah Ho colonies are severely affected by slum settlements. The ratio of criminal activities is also higher here than in other areas. Slums are increasing with the passage of time in urban areas and will become a serious problem in the near future. Executive bodies therefore need to formulate effective policies and move toward ameliorating the city.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=slums" title="slums">slums</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20oriented%20analysis" title=" object oriented analysis"> object oriented analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=zonal%20change%20detection" title=" zonal change detection"> zonal change detection</a> </p> <a href="https://publications.waset.org/abstracts/119513/geospatial-techniques-and-vhr-imagery-use-for-identification-and-classification-of-slums-in-gujrat-city-pakistan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119513.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">926</span> Study of Land Use Changes around an Archaeological Site Using Satellite Imagery Analysis: A Case Study of Hathnora, Madhya Pradesh, India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pranita%20Shivankar">Pranita Shivankar</a>, <a href="https://publications.waset.org/abstracts/search?q=Arun%20Suryawanshi"> Arun Suryawanshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Prabodhachandra%20Deshmukh"> Prabodhachandra Deshmukh</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20V.%20C.%20Kameswara%20Rao"> S. V. C. 
Kameswara Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Many undesirable and significant changes occur over time in landscapes and regions in the vicinity of historically important structures as a result of anthropogenic activities. A better understanding of such influences using recently developed satellite remote sensing techniques helps in planning strategies for minimizing the negative impacts on the existing environment. In 1982, a fossilized hominid skull cap was discovered at a site located along the northern bank of the east-west flowing river Narmada in the village of Hathnora. Close to the same site, Late Acheulian and Middle Palaeolithic tools have been discovered in the immediately overlying pebbly gravel, suggesting that the ‘Narmada skull’ may be from the Middle Pleistocene age. Reviews of recent research on hominid remains from Late Acheulian and Middle Palaeolithic sites all over the world suggest succession and contemporaneity of cultures there, enhancing the importance of Hathnora as a rare and precious site. In this context, maximum likelihood classification using digital interpretation techniques was carried out for this study area using satellite imagery from Landsat ETM+ for the year 2006 and Landsat 8 (OLI and TIRS) for the year 2016. The overall accuracy of the Land Use Land Cover (LULC) classification of the 2016 imagery was around 77.27% based on ground truth data. The significant reduction in the main river course and agricultural activities and the increase in the built-up area observed in the remote sensing data analysis are undoubtedly the outcome of human encroachment in the vicinity of this eminent heritage site.
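Overall accuracy of an LULC classification, as reported above, is conventionally computed from a confusion matrix: correctly classified samples (the diagonal) divided by all ground-truth samples. A minimal sketch with a hypothetical confusion matrix (the class names and counts are illustrative, not from the study):

```python
def overall_accuracy(confusion):
    """Overall accuracy of an LULC classification: sum of the diagonal
    (correctly classified samples) over the total number of samples."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical 3-class confusion matrix (rows: ground truth, columns:
# predicted), e.g. water / agriculture / built-up; counts are illustrative.
cm = [
    [40, 3, 2],
    [5, 30, 5],
    [2, 5, 18],
]
print(f"Overall accuracy: {overall_accuracy(cm):.2%}")  # 80.00%
```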
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cultural%20succession" title="cultural succession">cultural succession</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20interpretation" title=" digital interpretation"> digital interpretation</a>, <a href="https://publications.waset.org/abstracts/search?q=Hathnora" title=" Hathnora"> Hathnora</a>, <a href="https://publications.waset.org/abstracts/search?q=Homo%20Sapiens" title=" Homo Sapiens"> Homo Sapiens</a>, <a href="https://publications.waset.org/abstracts/search?q=Late%20Acheulian" title=" Late Acheulian"> Late Acheulian</a>, <a href="https://publications.waset.org/abstracts/search?q=Middle%20Palaeolithic" title=" Middle Palaeolithic"> Middle Palaeolithic</a> </p> <a href="https://publications.waset.org/abstracts/132456/study-of-land-use-changes-around-an-archaeological-site-using-satellite-imagery-analysis-a-case-study-of-hathnora-madhya-pradesh-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132456.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">172</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">925</span> Capacity Building on Small Automatic Tracking Antenna Development for Thailand Space Sustainability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Warinthorn%20Kiadtikornthaweeyot%20Evans">Warinthorn Kiadtikornthaweeyot Evans</a>, <a href="https://publications.waset.org/abstracts/search?q=Nawattakorn%20Kaikaew"> Nawattakorn Kaikaew</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The communication system between the ground station and the satellite is very 
important to guarantee contact between both sides. Thailand, led by the Geo-Informatics and Space Technology Development Agency (GISTDA), received satellite images from other nations' satellites for a number of years. In 2008, the Thailand Earth Observation Satellite (THEOS) became the first Earth observation satellite owned by Thailand. The mission was to monitor the country with affordable access to space-based Earth imagery. The control ground station was initially used by Thai engineers to control the THEOS satellite. Telecommands were sent to the satellite according to requests from government and private sectors. Since then, GISTDA's engineers have gained the skill and experience to operate the satellite. Recently, the demand for satellite data has been increasing rapidly as space technology moves fast and delivers more benefits. It is essential to ensure that Thailand remains competitive in space technology. Thai engineers have started to improve the performance of the control ground station in many different sections, while also developing skills and knowledge in areas of satellite communication. Human resource skills are being reinforced through capacity-building development projects. This paper focuses on the hands-on capacity building of GISTDA's engineers to develop a small automatic tracking antenna. The final achievement of the project is the first-phase prototype of a small automatic tracking antenna to support the new technology of the satellites. Two main subsystems have been developed and tested: the tracking system and the monitoring and control software. Functional testing of the first-phase prototype has been performed with Two-Line Element (TLE) data and the mission planning plan (MPP) file calculated from the THEOS satellite by GISTDA.
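The core computation in an automatic tracking system is converting a predicted satellite position into antenna pointing angles (azimuth and elevation). A minimal sketch, assuming the satellite's position relative to the ground station has already been expressed in local East-North-Up coordinates (e.g., from TLE propagation, which is not shown here):

```python
import math

def look_angles(east, north, up):
    """Azimuth (degrees, clockwise from north) and elevation (degrees
    above the horizon) of a satellite, given its position relative to the
    ground station in local East-North-Up coordinates (any consistent unit)."""
    horizontal = math.hypot(east, north)
    elevation = math.degrees(math.atan2(up, horizontal))
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    return azimuth, elevation

# Satellite due east of the station, at 45 degrees elevation (illustrative):
az, el = look_angles(east=500e3, north=0.0, up=500e3)
print(f"az={az:.1f} deg, el={el:.1f} deg")  # az=90.0 deg, el=45.0 deg
```

In a real tracking loop, these angles would be recomputed at each time step from the propagated TLE and fed to the antenna's pointing motors.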
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=capacity%20building" title="capacity building">capacity building</a>, <a href="https://publications.waset.org/abstracts/search?q=small%20tracking%20antenna" title=" small tracking antenna"> small tracking antenna</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20tracking%20system" title=" automatic tracking system"> automatic tracking system</a>, <a href="https://publications.waset.org/abstracts/search?q=project%20development%20procedure" title=" project development procedure"> project development procedure</a> </p> <a href="https://publications.waset.org/abstracts/168616/capacity-building-on-small-automatic-tracking-antenna-development-for-thailand-space-sustainability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168616.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">924</span> Routing in IP/LEO Satellite Communication Systems: Past, Present and Future</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Hussein">Mohammed Hussein</a>, <a href="https://publications.waset.org/abstracts/search?q=Abualseoud%20Hanani"> Abualseoud Hanani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In a Low Earth Orbit (LEO) satellite constellation system, routing data from the source all the way to the destination constitutes a daunting challenge because LEO satellite constellation resources are scarce and the high-speed movement of LEO satellites results in a highly dynamic network topology.
This situation limits the applicability of traditional routing approaches that rely on exchanging topology information upon change or setup of a connection. Consequently, in recent years, many routing algorithms and implementation strategies for satellite constellation networks with Inter Satellite Links (ISLs) have been proposed. In this article, we summarize and classify some of the most representative solutions according to their objectives, and discuss their advantages and disadvantages. Finally, with a look into the future, we present some of the new challenges and opportunities for LEO satellite constellations in general and routing protocols in particular. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LEO%20satellite%20constellations" title="LEO satellite constellations">LEO satellite constellations</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20topology" title=" dynamic topology"> dynamic topology</a>, <a href="https://publications.waset.org/abstracts/search?q=IP%20routing" title=" IP routing"> IP routing</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-satellite-links" title=" inter-satellite-links"> inter-satellite-links</a> </p> <a href="https://publications.waset.org/abstracts/54344/routing-in-ipleo-satellite-communication-systems-past-present-and-future" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54344.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">381</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">923</span> Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques</h5> <div class="card-body"> <p 
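Many of the snapshot-based routing schemes surveyed for such constellations reduce, at each topology snapshot, to a shortest-path computation over the current inter-satellite links. A minimal sketch using Dijkstra's algorithm (the four-satellite topology and delay values are hypothetical, purely for illustration):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over one topology snapshot of a LEO
    constellation: nodes are satellites, weighted edges are inter-satellite
    links (ISLs). Snapshot-based routing recomputes paths as links change."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk predecessors back from dst to recover the route.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

# Hypothetical 4-satellite snapshot; weights could be ISL delays in ms.
isl = {
    "S1": {"S2": 10.0, "S3": 25.0},
    "S2": {"S1": 10.0, "S4": 12.0},
    "S3": {"S1": 25.0, "S4": 8.0},
    "S4": {"S2": 12.0, "S3": 8.0},
}
path, delay = shortest_path(isl, "S1", "S4")
print(path, delay)  # ['S1', 'S2', 'S4'] 22.0
```

The dynamic-topology problem the abstract describes is precisely that this computation must be repeated (or predicted in advance) every time the ISL graph changes.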
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subham%20Kharel">Subham Kharel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudha%20Ravindranath"> Sudha Ravindranath</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Vidya"> A. Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Chandrasekaran"> B. Chandrasekaran</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Ganesha%20Raj"> K. Ganesha Raj</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Shesadri"> T. Shesadri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digitization of urban features is a tedious and time-consuming process when done manually. In addition, Indian cities have complex habitat patterns and convoluted clustering patterns, which make it even more difficult to map features. This paper attempts to classify urban objects in the satellite image using object-oriented classification techniques, in which various classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. The building layer obtained from object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology, along with logical reasoning and mathematical considerations.
The results clearly depict the ability of Remote Sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features in an urban area, like high-rise buildings and multi-dwelling units. The object-oriented technique has proven to be effective, yielding an overall efficiency of 80 percent in the classification of high-rise buildings. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20oriented%20classification" title="object oriented classification">object oriented classification</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20extraction" title=" shadow extraction"> shadow extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=high-rise%20buildings" title=" high-rise buildings"> high-rise buildings</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20technology" title=" spatial technology"> spatial technology</a> </p> <a href="https://publications.waset.org/abstracts/130749/identification-of-high-rise-buildings-using-object-based-classification-and-shadow-extraction-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130749.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">922</span> Estimating PM2.5 Concentrations Based on Landsat 8 Imagery and Historical Field Data over the Metropolitan Area of Mexico City</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Rodrigo%20T.%20Sepulveda-Hirose">Rodrigo T. Sepulveda-Hirose</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20B.%20Carrera-Aguilar"> Ana B. Carrera-Aguilar</a>, <a href="https://publications.waset.org/abstracts/search?q=Francisco%20Andree%20Ramirez-Casas"> Francisco Andree Ramirez-Casas</a>, <a href="https://publications.waset.org/abstracts/search?q=Alondra%20Orozco-Gomez"> Alondra Orozco-Gomez</a>, <a href="https://publications.waset.org/abstracts/search?q=Miguel%20Angel%20Sanchez-Caro"> Miguel Angel Sanchez-Caro</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Herrera-Ventosa"> Carlos Herrera-Ventosa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High concentrations of particulate matter in the atmosphere pose a threat to human health, especially over areas with high concentrations of population; however, field air pollution monitoring is expensive and time-consuming. In order to achieve reduced costs and global coverage of the whole urban area, remote sensing can be used. In this study, PM2.5 concentrations over Mexico City's metropolitan area are estimated using atmospheric reflectance from Landsat 8 satellite imagery and historical PM2.5 measurements from the Automatic Environmental Monitoring Network of Mexico City (RAMA). Through the processing of the available satellite images, a preliminary model was generated to evaluate the optimal bands for the generation of the final model for Mexico City. Work on the final model continues based on the results of the preliminary model. Infrared bands have been found helpful for modeling in other cities, but the effectiveness these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated.
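The simplest form of such a reflectance-to-PM2.5 model is an ordinary least-squares regression of station measurements on band reflectance. A minimal single-band sketch (the reflectance and PM2.5 values below are made up for illustration; the study's actual model and band selection are not reproduced here):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x, e.g. regressing field PM2.5
    measurements on atmospheric reflectance in one satellite band."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx          # slope
    a = mean_y - b * mean_x  # intercept
    return a, b

# Hypothetical reflectance / PM2.5 (ug/m^3) pairs, illustrative only:
reflectance = [0.10, 0.15, 0.20, 0.25, 0.30]
pm25 = [12.0, 18.0, 25.0, 31.0, 38.0]
a, b = fit_line(reflectance, pm25)
print(f"PM2.5 = {a:.1f} + {b:.1f} * reflectance")
```

A multi-band version would replace this with multiple regression over the candidate bands, which is essentially what the band-selection step in the preliminary model evaluates.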
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=air%20pollution%20modeling" title="air pollution modeling">air pollution modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Landsat%208" title=" Landsat 8"> Landsat 8</a>, <a href="https://publications.waset.org/abstracts/search?q=PM2.5" title=" PM2.5"> PM2.5</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a> </p> <a href="https://publications.waset.org/abstracts/108009/estimating-pm25-concentrations-based-on-landsat-8-imagery-and-historical-field-data-over-the-metropolitan-area-of-mexico-city" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">921</span> Effects of Different Kinds of Combined Action Observation and Motor Imagery on Improving Golf Putting Performance and Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chi%20H.%20Lin">Chi H. Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Chi%20C.%20Lin"> Chi C. Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih%20L.%20Hsieh"> Chih L. Hsieh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motor Imagery (MI) alone or combined with action observation (AO) has been shown to enhance motor performance and skill learning. The most effective way to combine these techniques has received limited scientific scrutiny. 
In the present study, we examined the effects of simultaneous (i.e., observing an action while concurrently imagining carrying it out), alternate (i.e., observing an action and then imagining that action consecutively), and synthesis (i.e., alternating between action observation and motor imagery, then performing both simultaneously) AOMI combinations on improving golf putting performance and learning. Participants, 45 university students with no formal experience of using imagery, were randomly allocated to one of four training groups: simultaneous action observation and motor imagery (S-AOMI), alternate action observation and motor imagery (A-AOMI), synthesis action observation and motor imagery (A-S-AOMI), and a control group. A pretest-posttest design with separate experimental groups was applied. Participants underwent eighteen intervention sessions, held three times a week over six weeks. We analyzed the data with a two-factor (group × time) mixed between-within analysis of variance to assess how the different combinations of action observation and motor imagery affected participants' golf putting performance and learning. After the intervention, an imagery questionnaire and journals were used to gather participants' experiences of, and suggestions about, the different motor imagery and action observation interventions. The results revealed that all three experimental groups, but not the control group, improved in putting performance and learning, and that the A-S-AOMI group showed a significantly greater effect on golf putting performance and learning than the S-AOMI group. These results confirm the effect of motor imagery combined with action observation on the performance and learning of golf putting. 
In particular, the synthesis group, which first alternated motor imagery and action observation and then performed them simultaneously, showed the greatest effectiveness. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motor%20skill%20learning" title="motor skill learning">motor skill learning</a>, <a href="https://publications.waset.org/abstracts/search?q=motor%20imagery" title=" motor imagery"> motor imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20observation" title=" action observation"> action observation</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/128207/effects-of-different-kinds-of-combined-action-observation-and-motor-imagery-on-improving-golf-putting-performance-and-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=5">5</a></li> <li class="page-item"><a 
class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=31">31</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=32">32</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=satellite%20imagery&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a 
href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 
World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>