<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <title>Search results for: high resolution synthetic imagery</title> <meta name="description" content="Search results for: high resolution synthetic imagery"> <meta name="keywords" content="high resolution synthetic imagery"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="high resolution synthetic imagery" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu"
aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="high resolution synthetic imagery"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 21600</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: high resolution synthetic imagery</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21600</span> A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yongquan%20Zhao">Yongquan Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Bo%20Huang"> Bo Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spatial, Temporal, and Spectral Resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single sensor can provide Earth observations with high STSR simultaneously because of hardware limitations. At the same time, the demand for high STSR keeps growing as remote sensing applications develop. Although image fusion provides a feasible means of overcoming the limitations of current Earth observation data, existing fusion techniques cannot enhance all three resolutions simultaneously, nor do they offer a sufficient level of improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR, blending the high spatial resolution of the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution of the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution of the hyper-spectral image of Hyperion. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. 
A test dataset with both phenological and land cover type changes, acquired over a suburban area of Beijing, China, is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM produces fused images with good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support studies that require high STSR satellite imagery. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hybrid%20spatial-temporal-spectral%20fusion" title="hybrid spatial-temporal-spectral fusion">hybrid spatial-temporal-spectral fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery" title=" high resolution synthetic imagery"> high resolution synthetic imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20square%20regression" title=" least square regression"> least square regression</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20transformation" title=" spectral transformation"> spectral transformation</a> </p> <a href="https://publications.waset.org/abstracts/74667/a-hybrid-image-fusion-model-for-generating-high-spatial-temporal-spectral-resolution-data-using-oli-modis-hyperion-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74667.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">235</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21599</span> Plot Scale Estimation of Crop Biophysical Parameters from High Resolution Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreedevi%20Moharana">Shreedevi Moharana</a>, <a href="https://publications.waset.org/abstracts/search?q=Subashisa%20Dutta"> Subashisa Dutta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study focuses on the estimation of crop biophysical parameters, such as chlorophyll, nitrogen, and water stress, at plot scale in crop fields. To this end, high-resolution LISS IV satellite imagery was used. A new methodology is proposed in which the spectral shape function of the paddy crop is employed to identify the wavelengths most sensitive to its parameters. From the shape functions, regression index models were established relating the critical wavelength to the minimum and maximum wavelengths of the multi-spectral, high-resolution LISS IV data, and these functional relationships were used to develop the index models. From the index models, crop biophysical parameters were estimated and mapped from LISS IV imagery at plot scale across the crop fields. The results showed that the nitrogen content of the paddy crop varied from 2-8%, chlorophyll from 1.5-9%, and water content from 40-90%. It was observed that the variability in the rice agriculture system in India was purely a function of field topography. 
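<p class="card-text">A minimal NumPy sketch of the index-model idea, for concreteness: a band-ratio index is regressed against plot-level field measurements and then applied pixel-wise. The band pair, index form, and calibration numbers are illustrative assumptions, not the authors' exact formulation.</p> <pre><code class="language-python">
import numpy as np

# Illustrative reflectance bands from a multi-spectral image (rows x cols),
# e.g. a red band near the sensitive wavelength and a near-infrared band.
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.3, size=(100, 100))
nir = rng.uniform(0.3, 0.6, size=(100, 100))

# Hypothetical spectral index built from a critical wavelength pair.
index = (nir - red) / (nir + red)

# Plot-level calibration data: index values vs. measured nitrogen (%).
x = np.array([0.35, 0.42, 0.50, 0.58, 0.66])
y = np.array([2.1, 3.4, 4.8, 6.2, 7.7])

# Least-squares fit of the linear index model y = a * x + b.
a, b = np.polyfit(x, y, deg=1)

# Apply the fitted model pixel-wise to map the parameter at plot scale.
nitrogen_map = a * index + b
print("model: N =", round(a, 2), "* index +", round(b, 2))
print("mean mapped nitrogen (%):", round(float(nitrogen_map.mean()), 2))
</code></pre>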
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=crop%20parameters" title="crop parameters">crop parameters</a>, <a href="https://publications.waset.org/abstracts/search?q=index%20model" title=" index model"> index model</a>, <a href="https://publications.waset.org/abstracts/search?q=LISS%20IV%20imagery" title=" LISS IV imagery"> LISS IV imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=plot%20scale" title=" plot scale"> plot scale</a>, <a href="https://publications.waset.org/abstracts/search?q=shape%20function" title=" shape function"> shape function</a> </p> <a href="https://publications.waset.org/abstracts/89499/plot-scale-estimation-of-crop-biophysical-parameters-from-high-resolution-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21598</span> Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evans%20Belly">Evans Belly</a>, <a href="https://publications.waset.org/abstracts/search?q=Imdad%20Rizvi"> Imdad Rizvi</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20M.%20Kadam"> M. M. Kadam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Satellite imagery is an emerging technology extensively utilized in applications such as the detection and extraction of man-made structures, the monitoring of sensitive areas, and the creation of graphic maps. The approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, building, and non-building regions (roads, vegetation, etc.) are investigated, with the focus on building extraction. Once the landscape regions are collected, a trimming process eliminates those arising from non-building objects. Finally, a labeling method is used to extract the building regions; it can be tuned for efficient extraction. The images used for the analysis come from sensors with a resolution finer than 1 meter (VHR). The method produces good results efficiently: the overhead of intermediate processing is eliminated without compromising output quality, reducing both the processing steps required and the time consumed. 
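<p class="card-text">A minimal sketch of the labeling step described above, assuming a binary building-candidate mask has already been produced by the shadow/vegetation trimming; the mask and the size threshold below are illustrative.</p> <pre><code class="language-python">
import numpy as np
from scipy import ndimage

# Synthetic binary mask: 1 where a pixel is a building candidate
# after shadow/vegetation trimming.
mask = np.zeros((60, 60), dtype=np.uint8)
mask[10:25, 10:30] = 1   # a large rectangular rooftop
mask[40:43, 40:43] = 1   # a small blob, likely a non-building object

# Label connected regions, then keep only components large enough
# to plausibly be buildings.
labels, num = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=np.arange(1, num + 1))
min_building_pixels = 50
building_ids = np.arange(1, num + 1)[sizes >= min_building_pixels]
buildings = np.isin(labels, building_ids)
print("components:", num, "kept as buildings:", building_ids.tolist())
</code></pre>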
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building%20detection" title="building detection">building detection</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20detection" title=" shadow detection"> shadow detection</a>, <a href="https://publications.waset.org/abstracts/search?q=landscape%20generation" title=" landscape generation"> landscape generation</a>, <a href="https://publications.waset.org/abstracts/search?q=label" title=" label"> label</a>, <a href="https://publications.waset.org/abstracts/search?q=partitioning" title=" partitioning"> partitioning</a>, <a href="https://publications.waset.org/abstracts/search?q=very%20high%20resolution%20%28VHR%29%20satellite%20imagery" title=" very high resolution (VHR) satellite imagery"> very high resolution (VHR) satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/76690/automatic-extraction-of-arbitrarily-shaped-buildings-from-vhr-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76690.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">314</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21597</span> Comparative Study of Accuracy of Land Cover/Land Use Mapping Using Medium Resolution Satellite Imagery: A Case Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20C.%20Paliwal">M. C. Paliwal</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20K.%20Jain"> A. K. Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20K.%20Katiyar"> S. K. Katiyar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accuracy assessment is very important for the classification of satellite imagery. In order to determine the accuracy of a classified image, the assumed-true data are usually derived from ground truth data collected with the Global Positioning System. The classified satellite data are then compared with the ground truth data, error matrices are prepared, and overall and individual accuracies are calculated using different methods. The study illustrates advanced classification and accuracy assessment of land use/land cover mapping using satellite imagery. IRS-1C LISS IV data were used for the classification. The satellite image was classified into fourteen classes, including water bodies, agricultural fields, forest land, urban settlement, barren land, and unclassified area. Classification and accuracy calculation were carried out in ERDAS Imagine software to find out the best method. The study is based on data collected for the city boundaries of Bhopal, Madhya Pradesh State, India. 
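<p class="card-text">For concreteness, a short sketch of how an error matrix and the overall, producer's, and user's accuracies are computed from classified and assumed-true labels; the classes and counts are invented for illustration.</p> <pre><code class="language-python">
import numpy as np

# Illustrative ground-truth and classified labels for sample points
# (0 = water, 1 = agriculture, 2 = forest).
truth = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
classified = np.array([0, 1, 1, 1, 2, 2, 2, 2, 1, 0])

n_classes = 3
# Error (confusion) matrix: rows = ground truth, columns = classified.
error_matrix = np.zeros((n_classes, n_classes), dtype=int)
for t, c in zip(truth, classified):
    error_matrix[t, c] += 1

overall = np.trace(error_matrix) / error_matrix.sum()
# Producer's accuracy: correct / ground-truth total per class (omission).
producers = np.diag(error_matrix) / error_matrix.sum(axis=1)
# User's accuracy: correct / classified total per class (commission).
users = np.diag(error_matrix) / error_matrix.sum(axis=0)

print(error_matrix)
print("overall accuracy:", round(overall, 2))
print("producer's:", producers.round(2), "user's:", users.round(2))
</code></pre>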
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=resolution" title="resolution">resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy%20assessment" title=" accuracy assessment"> accuracy assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20use%20mapping" title=" land use mapping"> land use mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20truth%20data" title=" ground truth data"> ground truth data</a>, <a href="https://publications.waset.org/abstracts/search?q=error%20matrices" title=" error matrices"> error matrices</a> </p> <a href="https://publications.waset.org/abstracts/13294/comparative-study-of-accuracy-of-land-coverland-use-mapping-using-medium-resolution-satellite-imagery-a-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">507</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21596</span> View Synthesis of Kinetic Depth Imagery for 3D Security X-Ray Imaging</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=O.%20Abusaeeda">O. Abusaeeda</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20P.%20O.%20Evans"> J. P. O. Evans</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Downes"> D. Downes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We demonstrate the synthesis of intermediate views within a sequence of X-ray images that exhibit depth from motion, or the kinetic depth effect, in a visual display. Each synthetic image replaces the need for a linear X-ray detector array during the image acquisition process. The scale-invariant feature transform (SIFT), in combination with epipolar morphing, is employed to produce the synthetic imagery. A comparison between synthetic and ground truth images is reported to quantify the performance of the approach. This work is a key aspect in the development of a 3D imaging modality for the screening of luggage at airport checkpoints. The programme of research is in collaboration with the UK Home Office and the US Dept. of Homeland Security. 
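<p class="card-text">A rough sketch of the SIFT correspondence step on which such view synthesis rests, with simple linear interpolation of matched keypoints standing in for full epipolar morphing; the synthetic frames and the 0.75 ratio threshold are assumptions.</p> <pre><code class="language-python">
import numpy as np
import cv2

# Two neighbouring frames from an X-ray sequence (synthetic stand-ins:
# blurred noise gives SIFT enough blob-like structure to latch onto).
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (256, 256)).astype(np.uint8)
img_a = cv2.GaussianBlur(base, (0, 0), 3)
img_b = np.roll(img_a, 5, axis=1)   # crude simulated viewpoint shift

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Match descriptors and keep reliable matches via Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
good = [m for m, n in matches if 0.75 * n.distance > m.distance]

# Linearly interpolate matched keypoint positions to place features for
# an intermediate view halfway between the frames (epipolar morphing
# would warp the full image along these correspondences).
pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
alpha = 0.5
pts_mid = (1 - alpha) * pts_a + alpha * pts_b
print(len(good), "matches retained for the synthetic view")
</code></pre>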
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=X-ray" title="X-ray">X-ray</a>, <a href="https://publications.waset.org/abstracts/search?q=kinetic%20depth" title=" kinetic depth"> kinetic depth</a>, <a href="https://publications.waset.org/abstracts/search?q=KDE" title=" KDE"> KDE</a>, <a href="https://publications.waset.org/abstracts/search?q=view%20synthesis" title=" view synthesis"> view synthesis</a> </p> <a href="https://publications.waset.org/abstracts/7411/view-synthesis-of-kinetic-depth-imagery-for-3d-security-x-ray-imaging" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7411.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">265</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21595</span> Design and Implementation of Image Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images while avoiding artifacts and blurring; it deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution obtains a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled-down images in the image domain, its effect on Fourier-based techniques remains unknown. In our experiments, super-resolution substantially improved the spatial resolution of patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. In the proposed method, a dictionary is first obtained in a training phase from pairs of low-resolution and high-resolution images. In the test phase, patches are extracted and the difference between the high-resolution image and an image interpolated from the low-resolution input is computed. A simulated image is then obtained by convolving the dictionary with the extracted patches, and the final super-resolution image is obtained by combining this fused image with the high-frequency difference. Super-resolution reduces image errors and improves image quality. 
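<p class="card-text">A minimal sketch of the dictionary idea under simplifying assumptions: a ridge-regression mapping is learned from interpolated low-resolution patches to the missing high-frequency detail at each patch centre. Patch size, training data, and regularization are illustrative.</p> <pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(0)
patch = 5  # patch edge length

# Training pair: a high-resolution image and a low-pass version of it,
# standing in for bicubic interpolation of a low-resolution input.
hi = rng.random((64, 64))
lo = 0.25 * (np.roll(hi, 1, 0) + np.roll(hi, -1, 0) + np.roll(hi, 1, 1) + np.roll(hi, -1, 1))
detail = hi - lo  # high-frequency residual the dictionary must predict

def patches(img):
    # Collect vectorized overlapping patches (one row per patch).
    h, w = img.shape
    out = []
    for r in range(h - patch + 1):
        for c in range(w - patch + 1):
            out.append(img[r:r + patch, c:c + patch].ravel())
    return np.array(out)

X = patches(lo)                              # interpolated LR patches
y = patches(detail)[:, patch * patch // 2]   # centre pixel of the residual

# Ridge-regression "dictionary": maps an LR patch to its missing detail.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Test phase: predict the detail for interpolated patches and add it back.
pred_detail = patches(lo) @ W
print("residual RMSE:", float(np.sqrt(np.mean((pred_detail - y) ** 2))))
</code></pre>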
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21594</span> Improved Super-Resolution Using Deep Denoising Convolutional Neural Network </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pawan%20Kumar%20Mishra">Pawan Kumar Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Singh%20Bisht"> Ganesh Singh Bisht</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is a computer vision technique for constructing a high-resolution image from a low-resolution one. It is used to restore high-frequency components, recover lost details, and remove the downsampling artifacts and noise introduced by the camera during image acquisition. High-resolution images and videos are desirable in most digital imaging applications and image analysis tasks. The goal of super-resolution is to combine the non-redundant information within one or more low-resolution frames to generate a high-resolution image. Many methods use multiple low-resolution images of the same scene captured under slightly different transformations; this is called multi-image super-resolution. Another family, single-image super-resolution, tries to learn the redundancy present in an image and reconstruct the lost information from a single low-resolution input. Deep learning is currently among the state-of-the-art approaches for this reconstruction problem. In this research, we propose Deep Denoising Super-Resolution (DDSR), a deep neural network that effectively reconstructs a high-resolution image from a low-resolution one. 
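<p class="card-text">The abstract does not specify the DDSR architecture, so the following is only a generic SRCNN-style residual network in PyTorch to make the single-image setting concrete; the layer sizes and the 2x scale factor are assumptions.</p> <pre><code class="language-python">
import torch
import torch.nn as nn

class DenoisingSR(nn.Module):
    # A small SRCNN-style network: upsample, then learn to restore
    # detail and suppress noise with a few convolutional layers.
    def __init__(self, scale=2):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode="bicubic", align_corners=False)
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        up = self.upsample(x)
        return up + self.body(up)  # residual learning: predict the detail

model = DenoisingSR(scale=2)
low_res = torch.randn(1, 1, 32, 32)   # a noisy low-resolution patch
high_res = model(low_res)
print(high_res.shape)  # torch.Size([1, 1, 64, 64])

# Training would minimize e.g. nn.MSELoss() between high_res and a
# ground-truth high-resolution patch.
</code></pre>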
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=resolution" title="resolution">resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=de-blurring" title=" de-blurring"> de-blurring</a> </p> <a href="https://publications.waset.org/abstracts/78802/improved-super-resolution-using-deep-denoising-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78802.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">517</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21593</span> Integrated Intensity and Spatial Enhancement Technique for Color Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evan%20W.%20Krieger">Evan W. Krieger</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayan%20K.%20Asari"> Vijayan K. Asari</a>, <a href="https://publications.waset.org/abstracts/search?q=Saibabu%20Arigela"> Saibabu Arigela</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video imagery for real-time security and surveillance applications is typically captured in complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions, and the video is often too low in resolution for certain applications. The purpose of security and surveillance video is to support accurate conclusions from the observed images, so poor lighting and low resolution reduce the value of the received information. We propose a solution to this problem: image preprocessing that improves the images before use in a particular application, integrating an intensity enhancement algorithm with a super-resolution technique. The intensity enhancement portion consists of a nonlinear inverse sine transformation and an adaptive contrast enhancement. The super-resolution portion is a single-image technique: a Fourier phase feature based method that uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high-quality output while being more efficient than their sequential use. This integration is accomplished by performing the algorithm on the intensity image derived from the original color image; after enhancement and super-resolution, a color restoration technique is employed to obtain an improved-visibility color image. 
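<p class="card-text">A minimal sketch of the intensity-channel strategy, assuming an HSV decomposition, a sine-based stretch standing in for the nonlinear enhancement, bicubic upscaling standing in for the learned super-resolution stage, and hue/saturation recombination as the colour restoration; each stage is a placeholder for the authors' actual component.</p> <pre><code class="language-python">
import numpy as np
import cv2

# Illustrative low-light colour frame (H x W x 3, BGR as OpenCV expects).
rng = np.random.default_rng(0)
frame = (rng.random((120, 160, 3)) * 80).astype(np.uint8)  # dark frame

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.float32)
h, s, v = cv2.split(hsv)

# Stage 1: nonlinear intensity enhancement on the V channel only
# (a sine-based stretch as a stand-in for the adaptive enhancement).
v_norm = v / 255.0
v_enh = np.sin(0.5 * np.pi * v_norm) ** 0.8 * 255.0

# Stage 2: super-resolution stand-in; bicubic 2x upscale of the channels.
v_up = cv2.resize(v_enh, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
h_up = cv2.resize(h, None, fx=2, fy=2, interpolation=cv2.INTER_NEAREST)
s_up = cv2.resize(s, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# Stage 3: colour restoration by recombining the enhanced intensity with
# the original hue/saturation, then converting back to BGR.
hsv_up = cv2.merge([h_up, s_up, np.clip(v_up, 0, 255)]).astype(np.uint8)
result = cv2.cvtColor(hsv_up, cv2.COLOR_HSV2BGR)
print(frame.shape, "enhanced to", result.shape)
</code></pre>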
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20range%20compression" title="dynamic range compression">dynamic range compression</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-level%20Fourier%20features" title=" multi-level Fourier features"> multi-level Fourier features</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20enhancement" title=" nonlinear enhancement"> nonlinear enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=super%20resolution" title=" super resolution"> super resolution</a> </p> <a href="https://publications.waset.org/abstracts/22706/integrated-intensity-and-spatial-enhancement-technique-for-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22706.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">554</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21592</span> A Novel Spectral Index for Automatic Shadow Detection in Urban Mapping Based on WorldView-2 Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaveh%20Shahi">Kaveh Shahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Helmi%20Z.%20M.%20Shafri"> Helmi Z. M. Shafri</a>, <a href="https://publications.waset.org/abstracts/search?q=Ebrahim%20Taherzadeh"> Ebrahim Taherzadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In remote sensing, shadow causes problems in many applications, such as change detection and classification. Shadows are cast by elevated objects and can directly affect the accuracy of extracted information. For these reasons, it is very important to detect shadows, particularly in urban high spatial resolution imagery, where they create a significant problem. This paper focuses on automatic shadow detection based on a new spectral index for multispectral imagery, known as the Shadow Detection Index (SDI). The new index was tested on different areas of WorldView-2 images, and the results demonstrated that it has considerable potential to extract shadows effectively and automatically. 
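<p class="card-text">The abstract does not give the SDI formula, so the sketch below uses a generic normalized blue-versus-NIR contrast purely as a placeholder, to show how a per-pixel spectral index and an automatic threshold yield a shadow mask.</p> <pre><code class="language-python">
import numpy as np

# Illustrative WorldView-2 style bands scaled to [0, 1] reflectance.
rng = np.random.default_rng(1)
blue = rng.random((200, 200))
nir = rng.random((200, 200))

# Placeholder shadow index: shadows tend to be dark in the NIR yet keep
# a relatively strong blue response from scattered skylight, so a
# normalized blue-vs-NIR contrast highlights them. (Illustrative only;
# the published SDI formula is not reproduced here.)
eps = 1e-6
index = (blue - nir) / (blue + nir + eps)

# Automatic thresholding: flag the brightest tail of the index as shadow.
threshold = np.percentile(index, 90)
shadow_mask = index > threshold
print("shadow fraction:", round(float(shadow_mask.mean()), 3))
</code></pre>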
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spectral%20index" title="spectral index">spectral index</a>, <a href="https://publications.waset.org/abstracts/search?q=shadow%20detection" title=" shadow detection"> shadow detection</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing%20images" title=" remote sensing images"> remote sensing images</a>, <a href="https://publications.waset.org/abstracts/search?q=World-View%202" title=" World-View 2"> World-View 2</a> </p> <a href="https://publications.waset.org/abstracts/13500/a-novel-spectral-index-for-automatic-shadow-detection-in-urban-mapping-based-on-worldview-2-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13500.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21591</span> Quantitative Assessment of Road Infrastructure Health Using High-Resolution Remote Sensing Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wang%20Zhaoming">Wang Zhaoming</a>, <a href="https://publications.waset.org/abstracts/search?q=Shao%20Shegang"> Shao Shegang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen%20Xiaorong"> Chen Xiaorong</a>, <a href="https://publications.waset.org/abstracts/search?q=Qi%20Yanan"> Qi Yanan</a>, <a href="https://publications.waset.org/abstracts/search?q=Tian%20Lei"> Tian Lei</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Jian"> Wang Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study conducts a comparative analysis of the spectral curves of asphalt pavements at various aging stages to improve road information extraction from high-resolution remote sensing imagery. By examining the distinguishing capabilities and spectral characteristics, the research aims to establish a pavement information extraction methodology based on China's high-resolution satellite images. The process begins by analyzing the spectral features of asphalt pavements to construct a spectral assessment model suitable for evaluating pavement health. This model is then tested at a national highway traffic testing site in China, validating its effectiveness in distinguishing different pavement aging levels. The study's findings demonstrate that the proposed model can accurately assess road health, offering a valuable tool for road maintenance planning and infrastructure management. 
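<p class="card-text">The abstract does not spell the model out; one common way to compare an observed pavement spectrum against reference aging-stage curves is the spectral angle, sketched here with made-up spectra.</p> <pre><code class="language-python">
import numpy as np

# Made-up mean reflectance spectra (e.g. over visible/NIR bands) for
# asphalt at three aging stages: new pavement is dark, aged is brighter.
wavebands = 8
references = {
    "new": np.linspace(0.05, 0.10, wavebands),
    "intermediate": np.linspace(0.10, 0.20, wavebands),
    "aged": np.linspace(0.18, 0.35, wavebands),
}

def spectral_angle(a, b):
    # Angle between two spectra; smaller means more similar shape.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Assess an observed pixel spectrum by the nearest reference curve.
pixel = np.linspace(0.11, 0.22, wavebands) + 0.01
scores = {stage: spectral_angle(pixel, ref) for stage, ref in references.items()}
best = min(scores, key=scores.get)
print("per-stage angles:", {k: round(v, 3) for k, v in scores.items()})
print("assessed aging stage:", best)
</code></pre>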
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spectral%20analysis" title="spectral analysis">spectral analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=asphalt%20pavement%20aging" title=" asphalt pavement aging"> asphalt pavement aging</a>, <a href="https://publications.waset.org/abstracts/search?q=high-resolution%20remote%20sensing" title=" high-resolution remote sensing"> high-resolution remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=pavement%20health%20assessment" title=" pavement health assessment"> pavement health assessment</a> </p> <a href="https://publications.waset.org/abstracts/189326/quantitative-assessment-of-road-infrastructure-health-using-high-resolution-remote-sensing-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189326.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">21</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21590</span> An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Akrem%20Sellami">Akrem Sellami</a>, <a href="https://publications.waset.org/abstracts/search?q=Imed%20Riadh%20Farah"> Imed Riadh Farah</a>, <a href="https://publications.waset.org/abstracts/search?q=Basel%20Solaiman"> Basel Solaiman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis, due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. To preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In a second step, knowledge is extracted from the adjacency graph to describe the different pixels. From the TLPP transformation matrix, a weighted matrix is constructed to rank the spectral bands by their contribution scores, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the approach has been validated in several experiments, and the results demonstrate its efficiency compared to various existing dimensionality reduction techniques. The experiments also confirm that the approach adaptively selects the relevant spectral bands, improving the semantic interpretation of HSI. 
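<p class="card-text">A minimal sketch of ranking bands by contribution score, assuming the score of a band is the weight energy of its row in a learned projection matrix; the random matrix below stands in for a true TLPP solution.</p> <pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_components = 100, 10

# Stand-in for the TLPP transformation matrix (bands x components);
# a real implementation would solve the TLPP eigenproblem instead.
projection = rng.normal(size=(n_bands, n_components))

# Contribution score of each band: weight energy across components.
scores = np.sum(projection ** 2, axis=1)

# Adaptive selection: keep bands whose score exceeds the mean score.
selected = np.flatnonzero(scores > scores.mean())
ranking = np.argsort(scores)[::-1]
print("top 5 ranked bands:", ranking[:5].tolist())
print("adaptively selected", selected.size, "of", n_bands, "bands")
</code></pre>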
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=band%20selection" title="band selection">band selection</a>, <a href="https://publications.waset.org/abstracts/search?q=dimensionality%20reduction" title=" dimensionality reduction"> dimensionality reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20imagery" title=" hyperspectral imagery"> hyperspectral imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20interpretation" title=" semantic interpretation"> semantic interpretation</a> </p> <a href="https://publications.waset.org/abstracts/55370/an-adaptive-dimensionality-reduction-approach-for-hyperspectral-imagery-semantic-interpretation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55370.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21589</span> Rainfall Estimation Using Himawari-8 Meteorological Satellite Imagery in Central Taiwan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chiang%20Wei">Chiang Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Hui-Chung%20Yeh"> Hui-Chung Yeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Yen-Chang%20Chen"> Yen-Chang Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this study is to estimate rainfall using imagery from the new-generation Himawari-8 meteorological satellite, with its multi-band, high-bit format and high spatiotemporal resolution, together with ground rainfall data at the Chen-Yu-Lan watershed of the Joushuei River Basin (443.6 square kilometers) in Central Taiwan. Accurate, fine-scale rainfall information over rugged terrain with high local variation is essential for early warning of flood, landslide, and debris flow disasters. Pixel-based rainfall estimates at 10-minute and 2 km resolution for Typhoon Megi (2016) and the meiyu event of June 1-4, 2017 were tested to demonstrate that the new-generation Himawari-8 satellite can capture rainfall variation in rugged mountainous areas at both fine scale and watershed scale. The results provide valuable rainfall information for early warning of future disasters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=estimation" title="estimation">estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=Himawari-8" title=" Himawari-8"> Himawari-8</a>, <a href="https://publications.waset.org/abstracts/search?q=rainfall" title=" rainfall"> rainfall</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/93847/rainfall-estimation-using-himawari-8-meteorological-satellite-imagery-in-central-taiwan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21588</span> Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nicholas%20V.%20Scott">Nicholas V. Scott</a>, <a href="https://publications.waset.org/abstracts/search?q=Jack%20McCarthy"> Jack McCarthy </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Military, law enforcement, and counterterrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in environments where light-scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult, partially due to the lack of robust image processing methods for photonic noise removal, a gap that strongly affects high-resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction: by allowing only certain modes of the electromagnetic field to be captured, they provide strong scene contrast. An experiment was carried out using a polarization lens attached to a hyperspectral camera to explore the degree to which a polarized scene of a potassium, phosphorus, and nitrogen mixture allows improved target detection and image segmentation. Preliminary results from applying machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization. 
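<p class="card-text">A minimal sketch of leaky competitive learning combined with a distance metric for segmenting hyperspectral pixels; the cube, unit count, and learning/leak rates are illustrative.</p> <pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(0)
# Illustrative hyperspectral cube: 50 x 50 pixels, 30 bands, flattened.
pixels = rng.random((2500, 30))

n_units, lr, leak, epochs = 4, 0.1, 0.01, 5
prototypes = rng.random((n_units, 30))

for _ in range(epochs):
    for x in pixels:
        # Distance-metric step: find the winning prototype.
        d = np.linalg.norm(prototypes - x, axis=1)
        winner = int(np.argmin(d))
        # Leaky update: every unit moves toward x, the winner fastest,
        # so no unit is starved ("leaky" competitive learning).
        rates = np.full(n_units, leak)
        rates[winner] = lr
        prototypes += rates[:, None] * (x - prototypes)

# Segment: assign each pixel to its nearest learned prototype.
labels = np.argmin(
    np.linalg.norm(pixels[:, None, :] - prototypes[None, :, :], axis=2), axis=1
)
segmentation = labels.reshape(50, 50)
print("segment sizes:", np.bincount(labels, minlength=n_units).tolist())
</code></pre>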
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=explosive%20material" title="explosive material">explosive material</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20imagery" title=" hyperspectral imagery"> hyperspectral imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=polarization" title=" polarization"> polarization</a> </p> <a href="https://publications.waset.org/abstracts/127733/potassium-phosphorus-nitrogen-detection-and-spectral-segmentation-analysis-using-polarized-hyperspectral-imagery-and-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127733.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21587</span> Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sundara%20Subramanian%20Karuppasamy">Sundara Subramanian Karuppasamy</a>, <a href="https://publications.waset.org/abstracts/search?q=Che%20Hua%20Yang"> Che Hua Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, the laser ultrasound technique is used for analyzing and imaging inner defects in metal blocks. Traditionally, piezoelectric transducers have been used for the generation and reception of ultrasonic signals, configured as sparse or phased arrays. Both configurations have drawbacks, including the need for many transducers, time-consuming calculations, limited bandwidth, and confined image resolution. Here, we focus on a non-contact method for generating and receiving ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for generation, and reception is performed with a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid, high spatial resolution means of sensing ultrasonic waves. A series of LDV scanning points is selected to serve as the phased-array elements. A side-drilled hole of 10 mm diameter at a depth of 25 mm is introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, based on the time-shifting principle, images are generated from the A-scan data acquired at the 1-D linear phased-array elements. Thus the defect can be precisely detected with good resolution. 
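<p class="card-text">A minimal delay-and-sum SAFT sketch on synthetic A-scans, assuming pulse-echo operation at each scan point and a constant wave speed; the geometry, sampling, and echo wavelet are illustrative.</p> <pre><code class="language-python">
import numpy as np

# Delay-and-sum SAFT: time-shift each element signal to every pixel
# and sum, so echoes focus at the true defect location.
c = 6300.0             # longitudinal wave speed in aluminium, m/s
fs = 50e6              # sampling rate, Hz
n_elems, n_samples = 32, 2000
pitch = 0.5e-3         # spacing between LDV scan points, m
xe = (np.arange(n_elems) - n_elems / 2) * pitch

# Synthesize A-scans for one point defect at x = 0, z = 25 mm.
defect = (0.0, 25e-3)
t = np.arange(n_samples) / fs
ascans = np.zeros((n_elems, n_samples))
for i in range(n_elems):
    tof = 2.0 * np.hypot(xe[i] - defect[0], defect[1]) / c  # round trip
    ascans[i] = np.exp(-((t - tof) * fs / 8.0) ** 2)        # echo wavelet

# SAFT image over a grid of candidate pixels.
xs = np.linspace(-8e-3, 8e-3, 80)
zs = np.linspace(15e-3, 35e-3, 80)
image = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        delays = 2.0 * np.hypot(xe - x, z) / c
        idx = np.clip((delays * fs).astype(int), 0, n_samples - 1)
        image[iz, ix] = abs(ascans[np.arange(n_elems), idx].sum())

iz, ix = np.unravel_index(np.argmax(image), image.shape)
print("defect imaged at z =", round(zs[iz] * 1e3, 1), "mm, x =", round(xs[ix] * 1e3, 1), "mm")
</code></pre>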
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=laser%20ultrasonics" title="laser ultrasonics">laser ultrasonics</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20phased%20array" title=" linear phased array"> linear phased array</a>, <a href="https://publications.waset.org/abstracts/search?q=nondestructive%20testing" title=" nondestructive testing"> nondestructive testing</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20focusing%20technique" title=" synthetic aperture focusing technique"> synthetic aperture focusing technique</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasonic%20imaging" title=" ultrasonic imaging"> ultrasonic imaging</a> </p> <a href="https://publications.waset.org/abstracts/130962/laser-ultrasonic-imaging-based-on-synthetic-aperture-focusing-technique-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130962.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">133</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21586</span> Monitoring of Cannabis Cultivation with High-Resolution Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Levent%20Basayigit">Levent Basayigit</a>, <a href="https://publications.waset.org/abstracts/search?q=Sinan%20Demir"> Sinan Demir</a>, <a href="https://publications.waset.org/abstracts/search?q=Burhan%20Kara"> Burhan Kara</a>, <a href="https://publications.waset.org/abstracts/search?q=Yusuf%20Ucar">Yusuf Ucar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cannabis is mostly used for drug production, and in some countries large quantities are cultivated and sold illegally. Most illegal cultivation occurs on land far from settlements; on farmland, cannabis is grown among other crops, surrounded by tall plants such as corn and sunflower, or in mixed culture with tall crops. The usual way of locating illegal cultivation areas is to follow up information obtained from people, which is insufficient in remote areas, so more effective methods are needed. Remote sensing is one of the most important technologies for monitoring plant growth. The aim of this study is to develop an applicable method for monitoring cannabis cultivation areas using satellite imagery. For this purpose, cannabis was grown in plots either alone or surrounded by corn and sunflower. Morphological characteristics were recorded twice a month during the vegetation period, a spectral signature library was created with a spectroradiometer, and the parcels were monitored with high-resolution satellite imagery. The cultivation areas were then classified from the processed imagery; for separating the cannabis plots from other plants, the multiresolution segmentation algorithm proved the most successful. 
The WorldView Improved Vegetative Index (WV-VI) was the most accurate method for monitoring plant density. As a result, an object-based classification method combined with vegetation indices was sufficient for monitoring cannabis cultivation in multi-temporal WorldView images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cannabis" title="Cannabis">Cannabis</a>, <a href="https://publications.waset.org/abstracts/search?q=drug" title=" drug"> drug</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based%20classification" title=" object-based classification"> object-based classification</a> </p> <a href="https://publications.waset.org/abstracts/74202/monitoring-of-cannabis-cultivation-with-high-resolution-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74202.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21585</span> High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bilel%20Chalghaf">Bilel Chalghaf</a>, <a href="https://publications.waset.org/abstracts/search?q=Mathieu%20Varin"> Mathieu Varin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forest characterization in Quebec, Canada, is usually based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe products, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (taller than 17 m) at the individual-tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. 
Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tree%20species" title="tree species">tree species</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based" title=" object-based"> object-based</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral" title=" multispectral"> multispectral</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=WorldView-3" title=" WorldView-3"> WorldView-3</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a> </p> <a href="https://publications.waset.org/abstracts/119023/high-resolution-satellite-imagery-and-lidar-data-for-object-based-tree-species-classification-in-quebec-canada" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119023.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21584</span> The Effect of PETTLEP Imagery on Equestrian Jumping Tasks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nurwina%20Anuar">Nurwina Anuar</a>, <a href="https://publications.waset.org/abstracts/search?q=Aswad%20Anuar"> Aswad Anuar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Imagery is a popular mental technique used by athletes and coaches to improve learning and performance, and it has been widely investigated and shown to be beneficial in the sports context. However, its application in equestrian sport has been understudied, so research on imagery effectiveness should also cover this domain. Unlike most sports (e.g., football, badminton, tennis, skiing), which depend solely upon human decision and response, equestrian sport relies on human-horse collaboration to succeed in its tasks. 
This study investigates the effect of PETTLEP imagery on equestrian jumping tasks, motivation, and imagery ability. It was hypothesized that the PETTLEP imagery intervention would significantly improve skill on equestrian jumping tasks, and that riders' imagery ability and motivation would increase across phases. The participants were skilled riders with little to no imagery experience. A single-subject ABA design was employed over a five-week period at Universiti Teknologi Malaysia Equestrian Park. Imagery ability was measured using the Sport Imagery Ability Questionnaire (SIAQ), and motivation with the Motivational Imagery Ability Measure for Sport (MIAMS). The effectiveness of the PETTLEP imagery intervention on show jumping tasks was evaluated by a professional equine rider using an observational scale. Results demonstrated improvement on all equestrian jumping tasks for most participants from baseline to intervention, along with gains in imagery ability and motivation after the PETTLEP intervention. Implications of the present study include underlining the impact of PETTLEP imagery on equestrian jumping tasks and extending previous research on the effectiveness of PETTLEP imagery to a sporting context that involves interaction and collaboration between human and horse. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PETTLEP%20imagery" title="PETTLEP imagery">PETTLEP imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=imagery%20ability" title=" imagery ability"> imagery ability</a>, <a href="https://publications.waset.org/abstracts/search?q=equestrian" title=" equestrian"> equestrian</a>, <a href="https://publications.waset.org/abstracts/search?q=equestrian%20jumping%20tasks" title=" equestrian jumping tasks"> equestrian jumping tasks</a> </p> <a href="https://publications.waset.org/abstracts/82648/the-effect-of-pettlep-imagery-on-equestrian-jumping-tasks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">202</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21583</span> Reinforcement Learning for Classification of Low-Resolution Satellite Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadija%20Bouzaachane">Khadija Bouzaachane</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Mahdi%20El%20Guarmah"> El Mahdi El Guarmah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The classification of low-resolution satellite images is a worthwhile and fertile research field, attracting many researchers due to its importance in monitoring geographical areas. It can serve several purposes, such as disaster management, military surveillance, and agricultural monitoring. The main objective of this work is to classify low-resolution satellite images efficiently and accurately using novel deep learning and reinforcement learning techniques. The images include roads, residential areas, industrial areas, rivers, sea lakes, and vegetation. 
To achieve that goal, we carried out experiments on Sentinel-2 images, considering both classification accuracy and efficiency. Our proposed model achieved 91% accuracy on the testing dataset, along with good land-cover classification. Focusing on per-class precision, we obtained 93% for river, 92% for residential, 97% for residential, 96% for forest, 87% for annual crop, 84% for herbaceous vegetation, 85% for pasture, 78% for highway, and 100% for sea lake. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a> </p> <a href="https://publications.waset.org/abstracts/141097/reinforcement-learning-for-classification-of-low-resolution-satellite-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">213</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21582</span> Improvement of Cross Range Resolution in Through Wall Radar Imaging Using Bilateral Backprojection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashmi%20Yadawad">Rashmi Yadawad</a>, <a href="https://publications.waset.org/abstracts/search?q=Disha%20Narayanan"> Disha Narayanan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ravi%20Gautam"> Ravi Gautam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Through-wall radar imaging is gaining importance nowadays in the field of defense, and one of the most important criteria determining the quality of the obtained image is its cross-range resolution. In this research paper, the bilateral backprojection algorithm has been implemented for through-wall radar imaging, with the sole purpose of enhancing the resolution of the backprojection image in the cross-range direction. Synthetic data are generated for two targets placed at various locations in a room of dimensions 8 m by 6 m. Two algorithms, simple backprojection and bilateral backprojection, have been implemented, and the resulting images are compared. Numerical simulations have been coded in MATLAB, and results for the two algorithms are shown. Based on the comparison between the two images, it can be clearly seen that the ringing and chessboard effects are heavily reduced in the bilaterally backprojected image; promising results are obtained, giving a relatively sharper image with well-defined edges. 
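<p class="card-text">For orientation, a minimal delay-and-sum backprojection sketch in Python follows. It is not the paper's bilateral variant (which further suppresses the ringing and chessboard artifacts), and the propagation speed, sampling rate, aperture, and array names are illustrative assumptions.</p> <pre><code>import numpy as np

c = 3e8                          # propagation speed in m/s (wall effects ignored)
fs = 2e9                         # assumed sampling rate of the received echoes
xs = np.linspace(0.0, 8.0, 41)   # assumed antenna positions along an 8 m wall

def backproject(echoes, xgrid, ygrid):
    """Delay-and-sum image; echoes is an (n_antennas, n_samples) array."""
    image = np.zeros((ygrid.size, xgrid.size))
    for a, xa in enumerate(xs):
        # Round-trip delay from antenna (xa, 0) to every pixel in the grid.
        dist = np.sqrt((xgrid[None, :] - xa) ** 2 + ygrid[:, None] ** 2)
        idx = np.clip(np.round(2.0 * dist / c * fs).astype(int), 0, echoes.shape[1] - 1)
        image += echoes[a, idx]  # accumulate the sample at the matching delay
    return image
</code></pre>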
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=through%20wall%20radar%20imaging" title="through wall radar imaging">through wall radar imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral%20back%20projection" title=" bilateral back projection"> bilateral back projection</a>, <a href="https://publications.waset.org/abstracts/search?q=cross%20range%20resolution" title=" cross range resolution"> cross range resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=synthetic%20data" title=" synthetic data "> synthetic data </a> </p> <a href="https://publications.waset.org/abstracts/14369/improvement-of-cross-range-resolution-in-through-wall-radar-imaging-using-bilateral-backprojection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14369.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21581</span> Lab Bench for Synthetic Aperture Radar Imaging System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Karthiyayini%20Nagarajan">Karthiyayini Nagarajan</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20V.%20Ramakrishna"> P. V. Ramakrishna </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Radar Imaging techniques provides extensive applications in the field of remote sensing, majorly Synthetic Aperture Radar (SAR) that provide high resolution target images. This paper work puts forward the effective and realizable signal generation and processing for SAR images. The major units in the system include camera, signal generation unit, signal processing unit and display screen. The real radio channel is replaced by its mathematical model based on optical image to calculate a reflected signal model in real time. Signal generation realizes the algorithm and forms the radar reflection model. Signal processing unit provides range and azimuth resolution through matched filtering and spectrum analysis procedure to form radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on Stratix III device as a System (Lab Bench) that works in real time to study/investigate on radar imaging rudiments and signal processing scheme for educational and research purposes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20radar" title="synthetic aperture radar">synthetic aperture radar</a>, <a href="https://publications.waset.org/abstracts/search?q=radio%20reflection%20model" title=" radio reflection model"> radio reflection model</a>, <a href="https://publications.waset.org/abstracts/search?q=lab%20bench" title=" lab bench"> lab bench</a>, <a href="https://publications.waset.org/abstracts/search?q=imaging%20engineering" title=" imaging engineering"> imaging engineering</a> </p> <a href="https://publications.waset.org/abstracts/29485/lab-bench-for-synthetic-aperture-radar-imaging-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21580</span> Design and Implementation of a Lab Bench for Synthetic Aperture Radar Imaging System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Karthiyayini%20Nagarajan">Karthiyayini Nagarajan</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20V.%20RamaKrishna"> P. V. RamaKrishna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Radar Imaging techniques provides extensive applications in the field of remote sensing, majorly Synthetic Aperture Radar(SAR) that provide high resolution target images. This paper work puts forward the effective and realizable signal generation and processing for SAR images. The major units in the system include camera, signal generation unit, signal processing unit and display screen. The real radio channel is replaced by its mathematical model based on optical image to calculate a reflected signal model in real time. Signal generation realizes the algorithm and forms the radar reflection model. Signal processing unit provides range and azimuth resolution through matched filtering and spectrum analysis procedure to form radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on Stratix III device as a System(lab bench) that works in real time to study/investigate on radar imaging rudiments and signal processing scheme for educational and research purposes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20radar" title="synthetic aperture radar">synthetic aperture radar</a>, <a href="https://publications.waset.org/abstracts/search?q=radio%20reflection%20model" title=" radio reflection model"> radio reflection model</a>, <a href="https://publications.waset.org/abstracts/search?q=lab%20bench" title=" lab bench"> lab bench</a> </p> <a href="https://publications.waset.org/abstracts/29475/design-and-implementation-of-a-lab-bench-for-synthetic-aperture-radar-imaging-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">468</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21579</span> Application on Metastable Measurement with Wide Range High Resolution VDL Circuit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Po-Hui%20Yang">Po-Hui Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jing-Min%20Chen"> Jing-Min Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Po-Yu%20Kuo"> Po-Yu Kuo</a>, <a href="https://publications.waset.org/abstracts/search?q=Chia-Chun%20Wu"> Chia-Chun Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposed a high resolution Vernier Delay Line (VDL) measurement circuit with coarse and fine detection mechanism, which improved the trade-off problem between high resolution and less delay cells in traditional VDL circuits. And the measuring time of proposed measurement circuit is also under the high resolution requests. At first, the testing range of input signal which proposed high resolution delay line is detected by coarse detection VDL. Moreover, the delayed input signal is transmitted to fine detection VDL for measuring value with better accuracy. This paper is implemented at 0.18μm process, operating frequency is 100 MHz, and the resolution achieved 2.0 ps with only 16-stage delay cells. The test range is 170ps wide, and 17% stages saved compare with traditional single delay line circuit. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vernier%20delay%20line" title="vernier delay line">vernier delay line</a>, <a href="https://publications.waset.org/abstracts/search?q=D-type%20flip-flop" title=" D-type flip-flop"> D-type flip-flop</a>, <a href="https://publications.waset.org/abstracts/search?q=DFF" title=" DFF"> DFF</a>, <a href="https://publications.waset.org/abstracts/search?q=metastable%20phenomenon" title=" metastable phenomenon"> metastable phenomenon</a> </p> <a href="https://publications.waset.org/abstracts/25622/application-on-metastable-measurement-with-wide-range-high-resolution-vdl-circuit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">597</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21578</span> Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ola%20Hall">Ola Hall</a>, <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20Wahab"> Ibrahim Wahab</a>, <a href="https://publications.waset.org/abstracts/search?q=Thorsteinn%20Rognvaldsson"> Thorsteinn Rognvaldsson</a>, <a href="https://publications.waset.org/abstracts/search?q=Mattias%20Ohlsson"> Mattias Ohlsson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The subfield of poverty and welfare estimation that applies machine learning tools and methods on satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goal, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor are their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. 
In the present study, we directly compare the performance of human readers and a deep learning (DL) model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth. The cluster-level imagery covers all 608 cluster locations, of which 428 are classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel (zoom level 18), while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 data at 10 m per pixel for the same cluster locations. Rank correlation coefficients of 0.31 to 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69 to 0.79). This superhuman performance by the model is even more significant given that it was trained on the lower 10 m resolution data, while the human readers estimated welfare levels from the higher 0.6 m resolution data, in which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship: eXplainable Artificial Intelligence through a collaborative rather than a comparative framework. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=poverty%20prediction" title="poverty prediction">poverty prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20imagery" title=" satellite imagery"> satellite imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20readers" title=" human readers"> human readers</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Tanzania" title=" Tanzania"> Tanzania</a> </p> <a href="https://publications.waset.org/abstracts/163428/estimating-poverty-levels-from-satellite-imagery-a-comparison-of-human-readers-and-an-artificial-intelligence-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163428.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21577</span> Students’ Perception of Guided Imagery Improving Anxiety before Examination: A Qualitative Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wong%20Ka%20Fai">Wong Ka Fai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Many students are worried before an examination; that is a common picture worldwide. 
Health problems arising from stress before examinations include insomnia, tiredness, isolation, stomach upset, and anxiety. Nursing students experience high stress around examinations. Guided imagery is a healing process that applies the imagination to help the body heal, survive, or live well; it can bring about significant physiological and biochemical changes, which can trigger the recovery process. A study of nursing students using guided imagery to improve their anxiety before examinations was therefore proposed. Aim: The aim of this study was to explore the outcome of guided imagery on nursing students’ anxiety before examinations in Hong Kong. Method: A qualitative study method was used. Sixteen first-year students in a nursing programme were invited to practice guided imagery to improve their anxiety during the pre-examination period. One week before the examination, the researcher carried out semi-structured interviews with these students. Result: Content analysis of the interview data showed that these nursing students had considerably similar perceptions of anxiety. Their perceived improvement in anxiety was evidenced by a reduction in stressful feelings, improved physical health, satisfaction with daily activities, and enhanced skills for solving problems and handling upcoming situations. Conclusion: This study indicated that guided imagery can be used as an alternative measure to improve students’ anxiety and psychological problems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nursing%20students" title="nursing students">nursing students</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=anxiety" title=" anxiety"> anxiety</a>, <a href="https://publications.waset.org/abstracts/search?q=guided%20imagery" title=" guided imagery"> guided imagery</a> </p> <a href="https://publications.waset.org/abstracts/172769/students-perception-of-guided-imagery-improving-anxiety-before-examination-a-qualitative-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172769.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21576</span> Separation of Some Pyrethroid Insecticides by High-Performance Liquid Chromatography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fairouz%20Tazerouti">Fairouz Tazerouti</a>, <a href="https://publications.waset.org/abstracts/search?q=Samira%20Ihadadene"> Samira Ihadadene</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pyrethroids are synthetic pesticides that originated from the modification of natural pyrethrins to improve their biological activity and stability. They are a family of chiral pesticides with a large number of stereoisomers. Enantiomers of synthetic pyrethroids present different insecticidal activity, toxicity against aquatic invertebrates, and persistence in the environment, so the development of rapid and sensitive chiral methods for the determination of the different enantiomers is necessary. 
In this study, the separation of the enantiomers of pyrethroid insecticides was systematically studied using three commercial chiral high-performance liquid chromatography columns. Useful resolution was obtained for compounds with a variety of acid and alcohol moieties, containing one to four chiral centres. The chromatographic behaviour of the diastereomers of some of these insecticides under normal, polar-organic, and reversed mobile-phase modes was also examined. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pesticides" title="pesticides">pesticides</a>, <a href="https://publications.waset.org/abstracts/search?q=analysis" title=" analysis"> analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=liquid%20chromatography" title=" liquid chromatography"> liquid chromatography</a>, <a href="https://publications.waset.org/abstracts/search?q=pyrethroids" title=" pyrethroids"> pyrethroids</a> </p> <a href="https://publications.waset.org/abstracts/16635/separation-of-some-pyrethroid-insecticides-by-high-performance-liquid-chromatography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">377</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21575</span> Autonomous Vehicle Detection and Classification in High Resolution Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20J.%20Ghandour">Ali J. Ghandour</a>, <a href="https://publications.waset.org/abstracts/search?q=Houssam%20A.%20Krayem"> Houssam A. Krayem</a>, <a href="https://publications.waset.org/abstracts/search?q=Abedelkarim%20A.%20Jezzini"> Abedelkarim A. Jezzini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High-resolution satellite images and remote sensing can provide global information quickly compared to traditional methods of data collection. At such high resolution, a road is no longer a thin line; objects such as cars and trees are easily identifiable. Automatic vehicle enumeration can be considered one of the most important applications in traffic management. In this paper, an autonomous vehicle detection and classification approach for highway environments is proposed. This approach consists of three main stages: (i) first, a set of preprocessing operations is applied, including soil, vegetation, and water suppression; (ii) then, road network detection and delineation is implemented using a built-up area index, followed by several morphological operations. This step plays an important role in increasing the overall detection accuracy, since vehicle candidates are restricted to objects contained within the road network; (iii) multi-level Otsu segmentation is implemented in the last stage, resulting in vehicle detection and classification, where detected vehicles are classified into cars and trucks. Accuracy assessment is conducted over different study areas to show the efficiency of the proposed method, especially in highway environments. 
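<p class="card-text">To make stage (iii) concrete, here is a minimal multi-level Otsu sketch using scikit-image; the array names are hypothetical, and the paper's exact preprocessing and band math are not reproduced.</p> <pre><code>import numpy as np
from skimage.filters import threshold_multiotsu

def classify_vehicles(road_pixels, classes=3):
    """road_pixels: grayscale image already masked to the road network."""
    thresholds = threshold_multiotsu(road_pixels, classes=classes)
    # Label each pixel 0..classes-1 according to the Otsu thresholds,
    # e.g. background / car-like / truck-like intensity classes.
    return np.digitize(road_pixels, bins=thresholds)
</code></pre>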
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title="remote sensing">remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20identification" title=" object identification"> object identification</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20and%20road%20extraction" title=" vehicle and road extraction"> vehicle and road extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20and%20road%20features-based%20classification" title=" vehicle and road features-based classification"> vehicle and road features-based classification</a> </p> <a href="https://publications.waset.org/abstracts/86230/autonomous-vehicle-detection-and-classification-in-high-resolution-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21574</span> Blind Super-Resolution Reconstruction Based on PSF Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Osama%20A.%20Omer">Osama A. Omer</a>, <a href="https://publications.waset.org/abstracts/search?q=Amal%20Hamed"> Amal Hamed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Successful blind image Super-Resolution algorithms require the exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imagery system and the true image; this estimation is normally done by trial and error experimentation until an acceptable restored image quality is obtained. Multi-frame blind Super-Resolution algorithms often have disadvantages of slow convergence and sensitiveness to complex noises. This paper presents a Super-Resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The estimation of PSF is performed by the knife-edge method and it is implemented by measuring spreading of the edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach is using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experiment results show that the proposed method can outperform other previous work robustly and efficiently. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind" title="blind">blind</a>, <a href="https://publications.waset.org/abstracts/search?q=PSF" title=" PSF"> PSF</a>, <a href="https://publications.waset.org/abstracts/search?q=super-resolution" title=" super-resolution"> super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=knife-edge" title=" knife-edge"> knife-edge</a>, <a href="https://publications.waset.org/abstracts/search?q=blurring" title=" blurring"> blurring</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral" title=" bilateral"> bilateral</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20norm" title=" L1 norm"> L1 norm</a> </p> <a href="https://publications.waset.org/abstracts/1385/blind-super-resolution-reconstruction-based-on-psf-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1385.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21573</span> Effects of Different Kinds of Combined Action Observation and Motor Imagery on Improving Golf Putting Performance and Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chi%20H.%20Lin">Chi H. Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Chi%20C.%20Lin"> Chi C. Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih%20L.%20Hsieh"> Chih L. Hsieh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motor Imagery (MI) alone or combined with action observation (AO) has been shown to enhance motor performance and skill learning. The most effective way to combine these techniques has received limited scientific scrutiny. In the present study, we examined the effects of simultaneous (i.e., observing an action whilst imagining carrying out the action concurrently), alternate (i.e., observing an action and then doing imagery related to that action consecutively) and synthesis (alternately perform action observation and imagery action and then perform observation and imagery action simultaneously) AOMI combinations on improving golf putting performance and learning. Participants, 45 university students who had no formal experience of using imagery for the study, were randomly allocated to one of four training groups: simultaneous action observation and motor imagery (S-AOMI), alternate action observation and motor imagery (A-AOMI), synthesis action observation and motor imagery (A-S-AOMI), and a control group. And it was applied 'Different Experimental Groups with Pre and Post Measured' designs. Participants underwent eighteen times of different interventions, which were happened three times a week and lasting for six weeks. We analyzed the information we received based on two-factor (group × times) mixed between and within analysis of variance to discuss the real effects on participants' golf putting performance and learning about different intervention methods of different types of combined action observation and motor imagery. 
After the intervention, an imagery questionnaire and participant journals were used to gather the participants’ experiences of, and suggestions about, the different motor imagery and action observation interventions. The results revealed that all three experimental groups, but not the control group, improved putting performance and learning, and that the A-S-AOMI group showed a significantly better effect than the S-AOMI group on golf putting performance and learning. The results confirm the effect of motor imagery combined with action observation on the performance and learning of golf putting. In particular, the synthesis condition, in which motor imagery and action observation were first performed alternately and then performed simultaneously, was the most effective. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motor%20skill%20learning" title="motor skill learning">motor skill learning</a>, <a href="https://publications.waset.org/abstracts/search?q=motor%20imagery" title=" motor imagery"> motor imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20observation" title=" action observation"> action observation</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/128207/effects-of-different-kinds-of-combined-action-observation-and-motor-imagery-on-improving-golf-putting-performance-and-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21572</span> Bridging Urban Planning and Environmental Conservation: A Regional Analysis of Northern and Central Kolkata</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tanmay%20Bisen">Tanmay Bisen</a>, <a href="https://publications.waset.org/abstracts/search?q=Aastha%20Shayla"> Aastha Shayla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study introduces an advanced approach to tree canopy detection in urban environments and a regional analysis of Northern and Central Kolkata that delves into the intricate relationship between urban development and environmental conservation. Leveraging high-resolution drone imagery from diverse urban green spaces in Kolkata, we fine-tuned the deep forest model to enhance its precision and accuracy. Our results, characterized by an impressive Intersection over Union (IoU) score of 0.90 and a mean average precision (mAP) of 0.87, underscore the model's robustness in detecting and classifying tree crowns amidst the complexities of aerial imagery. This research not only emphasizes the importance of model customization for specific datasets but also highlights the potential of drone-based remote sensing in urban forestry studies. The study investigates the spatial distribution, density, and environmental impact of trees in Northern and Central Kolkata. 
The findings underscore the significance of urban green spaces in metropolitan cities, emphasizing the need for sustainable urban planning that integrates green infrastructure for ecological balance and human well-being. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urban%20greenery" title="urban greenery">urban greenery</a>, <a href="https://publications.waset.org/abstracts/search?q=advanced%20spatial%20distribution%20analysis" title=" advanced spatial distribution analysis"> advanced spatial distribution analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20imagery" title=" drone imagery"> drone imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=tree%20detection" title=" tree detection"> tree detection</a> </p> <a href="https://publications.waset.org/abstracts/182080/bridging-urban-planning-and-environmental-conservation-a-regional-analysis-of-northern-and-central-kolkata" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182080.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">55</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21571</span> Transfer Learning for Protein Structure Classification at Low Resolution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alexander%20Hudson">Alexander Hudson</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaogang%20Gong"> Shaogang Gong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Structure determination is key to understanding protein function at a molecular level. Whilst significant advances have been made in predicting structure and function from amino acid sequence, researchers must still rely on expensive, time-consuming analytical methods to visualise detailed protein conformation. In this study, we demonstrate that it is possible to make accurate (≥80%) predictions of protein class and architecture from structures determined at low (>3A) resolution, using a deep convolutional neural network trained on high-resolution (≤3A) structures represented as 2D matrices. Thus, we provide proof of concept for high-speed, low-cost protein structure classification at low resolution, and a basis for extension to prediction of function. We investigate the impact of the input representation on classification performance, showing that side-chain information may not be necessary for fine-grained structure predictions. Finally, we confirm that high resolution, low-resolution and NMR-determined structures inhabit a common feature space, and thus provide a theoretical foundation for boosting with single-image super-resolution. 
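<p class="card-text">As a sketch of the 2D matrix representation mentioned above, one common choice is a pairwise C-alpha distance map computed from atomic coordinates; the study's exact representation (including its side-chain handling) may differ, and the names below are hypothetical.</p> <pre><code>import numpy as np

def distance_map(ca_coords):
    """ca_coords: (n_residues, 3) array of C-alpha coordinates in angstroms."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))  # (n, n) matrix input for the CNN
</code></pre>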
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title="transfer learning">transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=protein%20distance%20maps" title=" protein distance maps"> protein distance maps</a>, <a href="https://publications.waset.org/abstracts/search?q=protein%20structure%20classification" title=" protein structure classification"> protein structure classification</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/129704/transfer-learning-for-protein-structure-classification-at-low-resolution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129704.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=719">719</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=720">720</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=high%20resolution%20synthetic%20imagery&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a 
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false 
}).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>