<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: image restoration</title> <meta name="description" content="Search results for: image restoration"> <meta name="keywords" content="image restoration"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="image restoration" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div 
class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image restoration"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3129</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image restoration</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3129</span> Foggy Image Restoration Using Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khader%20S.%20Al-Aidmat">Khader S. Al-Aidmat</a>, <a href="https://publications.waset.org/abstracts/search?q=Venus%20W.%20Samawi"> Venus W. Samawi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Blurred vision in a misty atmosphere is an essential problem that needs to be resolved. To solve this problem, we developed a technique to restore the original scene from its foggy degraded version using a back-propagation neural network (BP-NN). The suggested technique is based on a mapping between a foggy scene and its corresponding original scene.
Seven different approaches are suggested based on the type of features used in image restoration. Features are extracted from the spatial and spatial-frequency domains (using the DCT). Each approach comes with its own BP-NN architecture depending on the type and number of features used. The weight matrix resulting from training each BP-NN represents a fog filter. The performance of these filters is evaluated empirically (using PSNR) and perceptually. By comparing the performance of these filters, the effective features that suit the BP-NN technique for restoring foggy images are identified. The system proved its effectiveness in restoring moderately foggy images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20cosine%20transform" title=" discrete cosine transform"> discrete cosine transform</a>, <a href="https://publications.waset.org/abstracts/search?q=feed%20forward%20neural%20network" title=" feed forward neural network"> feed forward neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=foggy%20image%20restoration" title=" foggy image restoration"> foggy image restoration</a> </p> <a href="https://publications.waset.org/abstracts/17476/foggy-image-restoration-using-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17476.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3128</span> Deep Neural Networks for Restoration of Sky Images Affected by Static and Anisotropic Aberrations</h5> <div
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Constanza%20A.%20Barriga">Constanza A. Barriga</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Bernardi"> Rafael Bernardi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amokrane%20Berdja"> Amokrane Berdja</a>, <a href="https://publications.waset.org/abstracts/search?q=Christian%20D.%20Guzman"> Christian D. Guzman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most image restoration methods in astronomy rely upon probabilistic tools that infer the best solution for a deconvolution problem. They achieve good performance when the point spread function (PSF) is spatially invariant in the image plane. However, this condition is not always satisfied by real optical systems. PSF angular variations cannot be evaluated directly from the observations, nor corrected at pixel resolution. We have developed a method for the restoration of images affected by static and anisotropic aberrations using deep neural networks that can be applied directly to sky images. The network is trained using simulated sky images corresponding to the optical system of the T-80 telescope, an 80 cm survey imager at Cerro Tololo (Chile), which are synthesized using a Zernike polynomial representation of the optical system. Once trained, the network can be used directly on sky images, outputting a corrected version of the image with a constant and known PSF across its field of view. The method was tested on the T-80 telescope, achieving better results than PSF deconvolution techniques. We present the method and results on this telescope.
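For context, the classical PSF deconvolution baseline against which the network is compared can be sketched as a frequency-domain Wiener filter. This is a minimal NumPy illustration, not the paper's code; it assumes a known, spatially invariant PSF, and the regularisation constant `k` stands in for an estimated noise-to-signal power ratio:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution with a known, invariant PSF.

    F_hat = conj(H) / (|H|^2 + k) * G, where G is the spectrum of the
    blurred image, H the spectrum of the PSF, and k approximates the
    noise-to-signal power ratio (an illustrative constant here).
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # PSF spectrum, zero-padded
    G = np.fft.fft2(blurred)                # blurred-image spectrum
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

A spatially varying PSF breaks the convolution assumption behind this filter, which is precisely the limitation the trained network is meant to overcome.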
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aberrations" title="aberrations">aberrations</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20restoration" title=" image restoration"> image restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=variable%20point%20spread%20function" title=" variable point spread function"> variable point spread function</a>, <a href="https://publications.waset.org/abstracts/search?q=wide%20field%20images" title=" wide field images"> wide field images</a> </p> <a href="https://publications.waset.org/abstracts/112938/deep-neural-networks-for-restoration-of-sky-images-affected-by-static-and-anisotropic-aberrations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112938.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3127</span> Detection of Image Blur and Its Restoration for Image Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image restoration in the process of communication is one of the emerging fields in image processing.
Motion analysis is the simplest approach to detecting motion in an image. Applications of motion analysis are widespread in areas such as surveillance, remote sensing, the film industry, and navigation of autonomous vehicles. The scene may contain multiple moving objects; motion analysis techniques can reduce the blur caused by their movement, fill in occluded regions, and reconstruct transparent objects. This paper presents the design and comparison of various motion detection and enhancement filters. Median filtering, linear image deconvolution, inverse filtering, pseudo-inverse filtering, Wiener filtering, Lucy-Richardson filtering, and blind deconvolution are used to remove the blur. In this work, we have considered different types and different amounts of blur for the analysis. Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) are used to evaluate the performance of the filters. The designed system has been implemented in MATLAB and tested on synthetic and real-time images.
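The MSE and PSNR criteria used to rank the filters above can be computed as follows. This is a minimal NumPy sketch, not taken from the paper; it assumes 8-bit images with a peak value of 255:

```python
import numpy as np

def mse(reference, restored):
    """Mean squared error between a reference image and its restoration."""
    err = np.asarray(reference, dtype=float) - np.asarray(restored, dtype=float)
    return np.mean(err ** 2)

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer restoration."""
    m = mse(reference, restored)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / m)
```

Both measures require the ground-truth reference, which is why they apply to the synthetic test images; for real images a no-reference quality measure would be needed.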
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20analysis" title=" motion analysis"> motion analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title=" motion detection"> motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20estimation" title=" motion estimation"> motion estimation</a> </p> <a href="https://publications.waset.org/abstracts/59485/detection-of-image-blur-and-its-restoration-for-image-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">287</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3126</span> Estimation and Restoration of Ill-Posed Parameters for Underwater Motion Blurred Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Vimal%20Raj">M. Vimal Raj</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sakthivel%20Murugan"> S. Sakthivel Murugan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Underwater images suffer degraded quality due to the conditions of the underwater environment. One of the major problems in an underwater image is motion blur caused by the imaging device or the movement of the object. To rectify this after imaging, the parameters of the blurred image have to be estimated; the point spread function is therefore estimated from the properties of the image spectrum.
To improve the estimation accuracy of the parameters, Optimized Polynomial Lagrange Interpolation (OPLI) method is implemented after the angle and length measurement of motion-blurred images. Initially, the data were collected from real-time environments in Chennai and processed. The proposed OPLI method shows better accuracy than the existing classical Cepstral, Hough, and Radon transform estimation methods for underwater images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20restoration" title="image restoration">image restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20blur" title=" motion blur"> motion blur</a>, <a href="https://publications.waset.org/abstracts/search?q=parameter%20estimation" title=" parameter estimation"> parameter estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=radon%20transform" title=" radon transform"> radon transform</a>, <a href="https://publications.waset.org/abstracts/search?q=underwater" title=" underwater"> underwater</a> </p> <a href="https://publications.waset.org/abstracts/142445/estimation-and-restoration-of-ill-posed-parameters-for-underwater-motion-blurred-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142445.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3125</span> Enhancer: An Effective Transformer Architecture for Single Image Super Resolution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pitigalage%20Chamath%20Chandira%20Peiris">Pitigalage Chamath Chandira Peiris</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> A widely researched domain in image processing in recent times has been single image super-resolution, which aims to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies. Deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have been made using Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive. However, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. A method for single image super-resolution was developed in this study, which utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20image%20super%20resolution" title="single image super resolution">single image super resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformers" title=" vision transformers"> vision transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20%20restoration" title=" image restoration"> image restoration</a> </p> <a href="https://publications.waset.org/abstracts/154323/enhancer-an-effective-transformer-architecture-for-single-image-super-resolution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3124</span> A Nonlinear Parabolic Partial Differential Equation Model for Image Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tudor%20Barbu">Tudor Barbu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a robust nonlinear parabolic partial differential equation (PDE)-based denoising scheme in this article. Our approach is based on a second-order anisotropic diffusion model that is described first. Then, a consistent and explicit numerical approximation algorithm is constructed for this continuous model by using the finite-difference method. Finally, our restoration experiments and method comparison, which prove the effectiveness of this proposed technique, are discussed in this paper. 
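An explicit finite-difference scheme for anisotropic diffusion of the kind described above can be sketched with the classical Perona-Malik model. This is a minimal NumPy illustration with a periodic border for brevity; the paper's second-order model and its parameters are not reproduced here:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, dt=0.2):
    """Explicit finite-difference Perona-Malik anisotropic diffusion.

    The edge-stopping function g(d) = 1 / (1 + (d / kappa)^2) slows
    diffusion across strong gradients, so noise is smoothed while
    edges are preserved. dt <= 0.25 keeps the 2-D explicit scheme stable.
    """
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic border for brevity).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Each iteration advances the PDE by one explicit time step; more iterations give stronger smoothing at the cost of gradually eroding fine detail.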
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anisotropic%20diffusion" title="anisotropic diffusion">anisotropic diffusion</a>, <a href="https://publications.waset.org/abstracts/search?q=finite%20differences" title=" finite differences"> finite differences</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20denoising%20and%20restoration" title=" image denoising and restoration"> image denoising and restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20PDE%20model" title=" nonlinear PDE model"> nonlinear PDE model</a>, <a href="https://publications.waset.org/abstracts/search?q=anisotropic%20diffusion" title=" anisotropic diffusion"> anisotropic diffusion</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20approximation%20schemes" title=" numerical approximation schemes"> numerical approximation schemes</a> </p> <a href="https://publications.waset.org/abstracts/48289/a-nonlinear-parabolic-partial-differential-equation-model-for-image-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">312</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3123</span> The Restoration of the Old District in the Urbanization: The Case Study of Samsen Riverside Community, Dusit District, Bangkok</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tikhanporn%20Punluekdej">Tikhanporn Punluekdej</a>, <a href="https://publications.waset.org/abstracts/search?q=Saowapa%20Phaithayawat"> Saowapa Phaithayawat </a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> The objectives of this research are: 1) to discover the mechanism of the restoration process of the old district, and 2) to study the participation of the people in the community together with related units. This research utilizes a qualitative research method together with tools used in the historical and anthropological disciplines. The research revealed that the restoration process of the old district started with the needs of the local people in the community. These people are considered the young generation in the community. The leading group of the community played a vital role in the restoration process by initiating the whole idea, followed by help from those who have lived in the area for more than fifty years. The restoration process reflects the genuine desire of the local people, without the intervention of local politics. The core group coordinated with related units, for instance academic institutions, in order to find out the most dominant historical features of the community, including its settlement. The Crown Property Bureau, as the sole owner of the land, joined the restoration in the physical development dimension. The restoration was possible due to the cooperation between local people and related units, under the designated plans, budget, and social activities.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=restoration" title="restoration">restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20area" title=" urban area"> urban area</a>, <a href="https://publications.waset.org/abstracts/search?q=old%20district" title=" old district"> old district</a>, <a href="https://publications.waset.org/abstracts/search?q=people%20participation" title=" people participation"> people participation</a> </p> <a href="https://publications.waset.org/abstracts/24725/the-restoration-of-the-old-district-in-the-urbanization-the-case-study-of-samsen-riverside-community-dusit-district-bangkok" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3122</span> Challenges in Adopting 3R Concept in the Heritage Building Restoration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20H.%20Goh">H. H. Goh</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20C.%20Goh"> K. C. Goh</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20W.%20Seow"> T. W. Seow</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20S.%20Said"> N. S. Said</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20E.%20P.%20Ang"> S. E. P. Ang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malaysia is rich with historic buildings, particularly in Penang and Malacca states. 
Restoration activities are increasingly important as these states are recognized as UNESCO World Heritage Sites. Restoration activities help to maintain the uniqueness and value of a heritage building. However, the increase in restoration activities has resulted in large quantities of waste. To cope with this problem, the 3R concept (reduce, reuse and recycle) is introduced. The 3R concept is one of the waste management hierarchies. This concept has yet to be applied in the building restoration industry to the extent seen in the construction industry. Therefore, this study aims to promote the 3R concept in the heritage building restoration industry by examining the importance of the concept and identifying the challenges in applying it. This study focused on contractors and consultants who are involved in heritage restoration projects in Penang. A literature review and interviews helped to reach the research objective. The data obtained were analyzed using content analysis. The research found that applying the 3R concept is important for conserving natural resources and reducing pollution problems. However, limited space to organise waste is the main obstruction to implementing this concept.
In conclusion, the 3R concept plays an important role in promoting environmental conservation and helping to reduce construction waste. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3R%20Concept" title="3R Concept">3R Concept</a>, <a href="https://publications.waset.org/abstracts/search?q=heritage%20building" title=" heritage building"> heritage building</a>, <a href="https://publications.waset.org/abstracts/search?q=restoration%20activities" title=" restoration activities"> restoration activities</a>, <a href="https://publications.waset.org/abstracts/search?q=building%20science" title=" building science"> building science</a> </p> <a href="https://publications.waset.org/abstracts/16832/challenges-in-adopting-3r-concept-in-the-heritage-building-restoration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16832.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3121</span> Liberation as a Method for Monument Valorisation: The Case of the Defence Heritage Restoration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Donatella%20R.%20Fiorino">Donatella R. Fiorino</a>, <a href="https://publications.waset.org/abstracts/search?q=Marzia%20Loddo"> Marzia Loddo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The practice of freeing monuments from subsequent additions crosses the entire history of conservation and it is traditionally connected to the aim of valorisation, both for cultural and educational purposes and recently even for touristic exploitation.
Defence heritage has been widely affected by these cultural and technical trends, from philological restoration to critical innovation. A renewed critical analysis of Italian episodes, and in particular the Sardinian case of the San Pancrazio area in Cagliari, constitutes an important lesson about the limits of this practice and the uncertainty of its results, towards the definition of a sustainable good practice in the restoration of military architecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=defensive%20architecture" title="defensive architecture">defensive architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=liberation" title=" liberation"> liberation</a>, <a href="https://publications.waset.org/abstracts/search?q=Valorisation%20for%20tourism" title=" Valorisation for tourism"> Valorisation for tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=historical%20restoration" title=" historical restoration"> historical restoration</a> </p> <a href="https://publications.waset.org/abstracts/19474/liberation-as-a-method-for-monument-valorisation-the-case-of-the-defence-heritage-restoration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3120</span> Arbitrarily Shaped Blur Kernel Estimation for Single Image Blind Deblurring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aftab%20Khan">Aftab Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashfaq%20Khan"> Ashfaq Khan</a> </p> <p
class="card-text"><strong>Abstract:</strong></p> This research paper focuses on an interesting challenge faced in blind image deblurring (BID): the estimation of arbitrarily shaped, non-parametric point spread functions (PSFs) of motion blur caused by camera shake. These PSFs exhibit much more complex shapes than their parametric counterparts, and deblurring in this case requires intricate ways to estimate the blur and effectively remove it. This research work introduces a novel blind deblurring scheme designed for deblurring images corrupted by arbitrarily shaped PSFs. It is based on a genetic algorithm (GA) and utilises the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) measure as the fitness function for arbitrarily shaped PSF estimation. The proposed BID scheme has been compared with other single image motion deblurring schemes as benchmarks. Validation has been carried out on various blurred images. Results on both benchmark and real images are presented. No-reference image quality measures were used to quantify the deblurring results. For benchmark images, the proposed BID scheme using BRISQUE converges in close vicinity of the original blurring functions.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title="blind deconvolution">blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20image%20deblurring" title=" blind image deblurring"> blind image deblurring</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20restoration" title=" image restoration"> image restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measures" title=" image quality measures"> image quality measures</a> </p> <a href="https://publications.waset.org/abstracts/37142/arbitrarily-shaped-blur-kernel-estimation-for-single-image-blind-deblurring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">443</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3119</span> Geochemistry of Nutrients in the South Lagoon of Tunis, Northeast of Tunisia, Using Multivariable Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abidi%20Myriam">Abidi Myriam</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Amor%20Rim"> Ben Amor Rim</a>, <a href="https://publications.waset.org/abstracts/search?q=Gueddari%20Moncef"> Gueddari Moncef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Understanding ecosystem response to the restoration project is essential to assess its rehabilitation. 
Indeed, the time elapsed since restoration is a critical indicator of the restoration's real success. The south lagoon of Tunis, a shallow Mediterranean coastal area, has experienced several episodes of pollution. To resolve this environmental problem, a large restoration project of the lagoon was undertaken. The main changes brought by these restoration works are a decrease in the residence time of the lagoon water and in nutrient concentrations. In this paper, we attempt to evaluate the trophic state of the lagoon water, and thereby the risk of eutrophication, almost 16 years after its restoration. To this end, water quality monitoring was undertaken. In order to identify and analyze the natural and anthropogenic factors governing the nutrient concentrations of the lagoon water, geochemical methods and multivariate statistical tools were used. Results show that nutrients have dual sources, including the discharge of municipal wastewater from Megrine City on the south side of the lagoon. The Carlson index shows that the south lagoon of Tunis is eutrophic and may show limited summer anoxia. 
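For reference, Carlson's Trophic State Index (TSI), used above to classify the lagoon as eutrophic, is computed from standard limnological measurements. A small sketch follows; the formulas are Carlson's (1977) originals and the thresholds are the commonly used class boundaries, but the sample values in the usage note are illustrative, not measurements from the lagoon.

```python
import math

# Carlson (1977) Trophic State Index from three standard measurements.
def tsi_secchi(depth_m):
    """TSI from Secchi disk transparency (metres)."""
    return 60.0 - 14.41 * math.log(depth_m)

def tsi_chlorophyll(chl_ug_per_l):
    """TSI from chlorophyll-a concentration (micrograms per litre)."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def tsi_total_p(tp_ug_per_l):
    """TSI from total phosphorus (micrograms per litre)."""
    return 14.42 * math.log(tp_ug_per_l) + 4.15

def trophic_class(tsi):
    """Commonly used class boundaries for Carlson's TSI."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"
```

For example, a chlorophyll-a reading of 30 µg/L gives a TSI near 64, squarely in the eutrophic band.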
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=geochemistry" title="geochemistry">geochemistry</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrients" title=" nutrients"> nutrients</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20south%20lagoon%20of%20Tunis" title=" the south lagoon of Tunis"> the south lagoon of Tunis</a>, <a href="https://publications.waset.org/abstracts/search?q=trophic%20state" title=" trophic state"> trophic state</a> </p> <a href="https://publications.waset.org/abstracts/73188/geochemistry-of-nutrients-in-the-south-lagoon-of-tunis-northeast-of-tunisia-using-multivariable-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73188.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3118</span> Recreating Home: Restoration and Reflections on the Traditional Houses of Kucapungane</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sasala%20Taiban">Sasala Taiban</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper explores the process and reflections on the restoration of traditional slate houses in the Rukai tribe's old settlement of Kucapungane. Designated as a "Class II Historical Site" by the Ministry of the Interior in 1991 and listed by UNESCO's World Monuments Fund in 2016, Kucapungane holds significant historical and cultural value. 
However, due to government neglect, tribal migration, and the passing of elders, the traditional knowledge and techniques for constructing slate houses face severe discontinuity. Over the past decades, residents have strived to preserve and transmit these traditional skills through the restoration and reconstruction of their homes. This study employs a qualitative methodology, combining ethnographic fieldwork, historical analysis, and participatory observation. The research includes in-depth interviews, focus group discussions, and hands-on participation in restoration activities to gather comprehensive data. The paper reviews the historical evolution of Kucapungane, the restoration process, and the challenges encountered, such as insufficient resources, technical preservation issues, material acquisition problems, and lack of community recognition. Furthermore, it highlights the importance of house restoration in indigenous consciousness and cultural revival, proposing strategies to address current issues and promote preservation. Through these efforts, the cultural heritage of the Rukai tribe can be sustained and carried forward into the future. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rukai" title="rukai">rukai</a>, <a href="https://publications.waset.org/abstracts/search?q=kucapungane" title=" kucapungane"> kucapungane</a>, <a href="https://publications.waset.org/abstracts/search?q=slate%20house%20restoration" title=" slate house restoration"> slate house restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=cultural%20heritage" title=" cultural heritage"> cultural heritage</a> </p> <a href="https://publications.waset.org/abstracts/188216/recreating-home-restoration-and-reflections-on-the-traditional-houses-of-kucapungane" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188216.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">37</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3117</span> New Restoration Reagent for Development of Erased Serial Number on Copper Metal Surface</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lav%20Kesharwani">Lav Kesharwani</a>, <a href="https://publications.waset.org/abstracts/search?q=Nalini%20Shankar"> Nalini Shankar</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20K.%20Gupta"> A. K. Gupta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A serial number is a unique code assigned for identification of a single unit. Serial numbers are present on many objects. In an attempt to hide the identity of the numbered item, the numbers are often obliterated or removed by mechanical methods. 
The present work was carried out with the objective of developing a less toxic, less time-consuming, and more effective chemical etching reagent for the restoration of serial numbers on copper metal plates. Around nine different reagents were prepared using different combinations of chemicals and, along with the standard reagent, were applied to 50 erased copper metal samples and compared with the standard reagent for restoration of the erased marks. After the experiment, it was found that the prepared etching reagent no. 3 (10 g FeCl3 + 20 ml glacial acetic acid + 100 ml distilled H2O) showed the best results for restoration of erased serial numbers on copper metal plates. The reagent was also less toxic and less time-consuming compared to the standard reagent (19 g FeCl3 + 6 ml conc. HCl + 100 ml distilled H2O). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=serial%20number%20restoration" title="serial number restoration">serial number restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=copper%20plate" title=" copper plate"> copper plate</a>, <a href="https://publications.waset.org/abstracts/search?q=obliteration" title=" obliteration"> obliteration</a>, <a href="https://publications.waset.org/abstracts/search?q=chemical%20method" title=" chemical method"> chemical method</a> </p> <a href="https://publications.waset.org/abstracts/29117/new-restoration-reagent-for-development-of-erased-serial-number-on-copper-metal-surface" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29117.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">556</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3116</span> Design and Implementation of Image 
Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images, avoiding artifacts or blurring, and deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is a process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled-down images in the image domain, its effects on Fourier-based techniques remain unknown. Super-resolution substantially improved the spatial resolution of the patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. First, a training phase pairs low-resolution and high-resolution images to obtain a dictionary. In the test phase, patches are extracted, together with the difference between the high-resolution image and an image interpolated from the low-resolution input. Next, a simulation of the image is obtained by applying a convolution method to the dictionary images and the extracted patches. Finally, the super-resolution image is obtained by combining the fused image with the difference between the high-resolution and interpolated images. Super-resolution reduces image errors and improves the image quality. 
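The training/test pipeline outlined in this abstract can be illustrated with a toy example-based super-resolution scheme. This is a hedged sketch, not the authors' method: the nearest-neighbour patch dictionary, the patch size, the scale factor, and the average-pooled downsampling are all assumptions made to keep the example self-contained.

```python
import numpy as np

def downsample(img, f=2):
    """Average-pool by factor f to simulate the low-resolution observation."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def build_dictionary(hr, f=2, p=3):
    """Training phase: pair every LR patch with its HR counterpart."""
    lr = downsample(hr, f)
    keys, values = [], []
    for i in range(lr.shape[0] - p + 1):
        for j in range(lr.shape[1] - p + 1):
            keys.append(lr[i:i + p, j:j + p].ravel())
            values.append(hr[i * f:(i + p) * f, j * f:(j + p) * f].copy())
    return np.array(keys), values

def super_resolve(lr, keys, values, f=2, p=3):
    """Test phase: replace each LR patch with the HR patch whose LR key is
    nearest, averaging overlapping contributions."""
    out = np.zeros((lr.shape[0] * f, lr.shape[1] * f))
    weight = np.zeros_like(out)
    for i in range(lr.shape[0] - p + 1):
        for j in range(lr.shape[1] - p + 1):
            q = lr[i:i + p, j:j + p].ravel()
            idx = np.argmin(((keys - q) ** 2).sum(axis=1))  # nearest dictionary atom
            out[i * f:(i + p) * f, j * f:(j + p) * f] += values[idx]
            weight[i * f:(i + p) * f, j * f:(j + p) * f] += 1
    return out / np.maximum(weight, 1)
```

When the dictionary is trained on the same image being upscaled, the reconstruction is exact; on unseen images the quality depends entirely on how well the dictionary covers the patch space.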
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3115</span> A Method of the Semantic on Image Auto-Annotation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Huo">Lin Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianwei%20Liu"> Xianwei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingxiong%20Zhou"> Jingxiong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, due to the semantic gap between image visual features and human concepts, semantic image auto-annotation has become an important topic. 
Image auto-annotation by search is a popular method. First, low-level visual features are extracted from the image and, by a corresponding hash method, mapped into hash codes, which are eventually transformed into binary strings and stored. We use this approach to design and implement a method of semantic image auto-annotation. Finally, tests based on the Corel image set show that this method is effective. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20auto-annotation" title="image auto-annotation">image auto-annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20correlograms" title=" color correlograms"> color correlograms</a>, <a href="https://publications.waset.org/abstracts/search?q=Hash%20code" title=" Hash code"> Hash code</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/15628/a-method-of-the-semantic-on-image-auto-annotation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3114</span> Novel Algorithm for Restoration of Retina Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Subbuthai">P. Subbuthai</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Muruganand"> S. 
Muruganand</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic retinopathy is a complicated disease caused by changes in the blood vessels of the retina. Retinal images extracted through a fundus camera sometimes suffer from poor contrast and noise. Because of this noise, detection of blood vessels in the retina is very complicated, so preprocessing is needed. In this paper, a novel algorithm is implemented to remove noisy pixels in the retina image. The proposed algorithm is an Extended Median Filter, and it is applied to the green channel of the retina image because green-channel vessels are brighter than the background. The proposed Extended Median Filter is compared with the existing standard median filter using performance metrics such as PSNR, MSE and RMSE. Experimental results show that the proposed Extended Median Filter algorithm gives better results than the existing standard median filter in terms of noise suppression and detail preservation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fundus%20retina%20image" title="fundus retina image">fundus retina image</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20filter" title=" median filter"> median filter</a>, <a href="https://publications.waset.org/abstracts/search?q=microaneurysms" title=" microaneurysms"> microaneurysms</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a> </p> <a href="https://publications.waset.org/abstracts/20819/novel-algorithm-for-restoration-of-retina-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20819.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> 
Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3113</span> Restoration of Digital Design Using Row and Column Major Parsing Technique from the Old/Used Jacquard Punched Cards</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Kumaravelu">R. Kumaravelu</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Poornima"> S. Poornima</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunil%20Kumar%20Kashyap"> Sunil Kumar Kashyap</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The optimized and digitalized restoration of information from old and used manual jacquard punched cards in the textile industry is performed by a device referred to as a Jacquard Punch Card (JPC) reader. In this paper, we present a novel design and development of a photoelectronics-based system for reading old and used punched cards and storing their binary information for transformation into an effective image file format. In our textile industry, the jacquard punched cards have holes with diameters of 3 mm and 5 mm at a 5.5 mm pitch. Before the adoption of computing systems in the textile industry, these punched cards were prepared manually without a digital design source, yet they carry rich woven designs. The idea, then, is to retrieve the binary information from the jacquard punched cards and store it in a digital (non-graphics) format before processing. After processing, the digital (non-graphics) format is converted into an effective image file format by either a row-major or a column-major parsing technique. To accomplish these activities, an embedded-system-based device and software integration is developed. 
As part of the test and trial activity, the device was installed for industrial service at the Weavers Service Centre, Kanchipuram, Tamil Nadu, India. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=file%20system" title="file system">file system</a>, <a href="https://publications.waset.org/abstracts/search?q=SPI.%20UART" title=" SPI. UART"> SPI. UART</a>, <a href="https://publications.waset.org/abstracts/search?q=ARM%20controller" title=" ARM controller"> ARM controller</a>, <a href="https://publications.waset.org/abstracts/search?q=jacquard" title=" jacquard"> jacquard</a>, <a href="https://publications.waset.org/abstracts/search?q=punched%20card" title=" punched card"> punched card</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20LED" title=" photo LED"> photo LED</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20diode" title=" photo diode"> photo diode</a> </p> <a href="https://publications.waset.org/abstracts/96597/restoration-of-digital-design-using-row-and-column-major-parsing-technique-from-the-oldused-jacquard-punched-cards" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96597.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3112</span> Branding Tourism Destinations; The Trending Initiatives for Edifice Image Choices of Foreign Policy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mehtab%20Alam">Mehtab Alam</a>, <a href="https://publications.waset.org/abstracts/search?q=Mudiarasan%20Kuppusamy"> Mudiarasan Kuppusamy</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Puvaneswaran%20Kunaserkaran"> Puvaneswaran Kunaserkaran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to bridge the gap and establish the relationship between tourism destinations and image branding as a choice for edifying foreign policy. Such choices have become a crucial component for individuals interested in leisure and travel activities. The destination management factors were evaluated and analyzed using primary and secondary data in a mixed-methods approach (a quantitative sample of 384, and eight qualitative semi-structured interviews taken to the saturation point). The study chose the Environmental Management Accounting (EMA) and Image Restoration (IR) theories, along with a schematic diagram and an analytical framework supported by NVivo 12 software, for two locations in Abbottabad, KPK, Pakistan: Shimla Hill and Thandiani. This incorporates the use of the PLS-SEM model for assessing the validity of the data and SPSS for descriptive screening of the data. The results show that destination management's promotion of tourism has significantly improved Pakistan's state image. The use of institutional setup, environmental drivers, immigration, security, and hospitality, as well as recreational initiatives, in destination management is encouraged. The practical ramifications direct the heads of tourism projects, diplomats, directors, and policymakers to complete destination projects before inviting people to Pakistan. The paper extends the body of knowledge available to academic tourism circles for using tourism destinations as brand ambassadors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tourism" title="tourism">tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=management" title=" management"> management</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20image" title=" state image"> state image</a>, <a href="https://publications.waset.org/abstracts/search?q=foreign%20policy" title=" foreign policy"> foreign policy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20branding" title=" image branding"> image branding</a> </p> <a href="https://publications.waset.org/abstracts/170282/branding-tourism-destinations-the-trending-initiatives-for-edifice-image-choices-of-foreign-policy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170282.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">69</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3111</span> Deployment of Matrix Transpose in Digital Image Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Okike%20Benjamin">Okike Benjamin</a>, <a href="https://publications.waset.org/abstracts/search?q=Garba%20E%20J.%20D."> Garba E J. D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Encryption is used to conceal information from prying eyes. Presently, information and data encryption are common due to the volume of data and information in transit across the globe on a daily basis. Image encryption has yet to receive from researchers the attention it deserves. As a result, video and multimedia documents are exposed to unauthorized access. 
The authors propose image encryption using the matrix transpose, and an algorithm implementing it is developed. In this proposed image encryption technique, the image to be encrypted is split into parts based on the image size. Each part is encrypted separately using a matrix transpose. The actual encryption operates on the picture elements (pixels) that make up the image. After encrypting each part of the image, the positions of the encrypted parts are swapped before transmission of the image takes place. Swapping the positions of the parts is carried out to make the encrypted image harder for any cryptanalyst to decrypt. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=matrices" title=" matrices"> matrices</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel" title=" pixel"> pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20transpose" title=" matrix transpose "> matrix transpose </a> </p> <a href="https://publications.waset.org/abstracts/48717/deployment-of-matrix-transpose-in-digital-image-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3110</span> Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hangsik%20Shin">Hangsik Shin</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> The purpose of this research is to restore the feature locations of an under-sampled photoplethysmogram using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and then compared the feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmogram waveform. Results showed that the time differences were dramatically decreased by interpolation. The location error was less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error also decreased considerably; however, it remained over 10 ms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=peak%20detection" title="peak detection">peak detection</a>, <a href="https://publications.waset.org/abstracts/search?q=photoplethysmography" title=" photoplethysmography"> photoplethysmography</a>, <a href="https://publications.waset.org/abstracts/search?q=sampling" title=" sampling"> sampling</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20reconstruction" title=" signal reconstruction"> signal reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/53409/feature-location-restoration-for-under-sampled-photoplethysmogram-using-spline-interpolation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53409.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">3109</span> QCARNet: Networks for Quality-Adaptive Compression Artifact</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seung%20Ho%20Park">Seung Ho Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Young%20Su%20Moon"> Young Su Moon</a>, <a href="https://publications.waset.org/abstracts/search?q=Nam%20Ik%20Cho"> Nam Ik Cho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a convolutional neural network (CNN) for quality-adaptive compression artifact reduction named QCARNet. The proposed method differs from existing discriminative models that learn a specific model at a certain quality level. The method is composed of a quality estimation CNN (QECNN) and a compression artifact reduction CNN (CARCNN), which are two functionally separate CNNs. By connecting the QECNN and CARCNN, each CARCNN layer is able to adaptively reduce compression artifacts and preserve details depending on the estimated quality level map generated by the QECNN. We experimentally demonstrate that the proposed method achieves better performance compared to other state-of-the-art blind compression artifact reduction methods. 
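The two-stage idea behind this abstract (estimate a per-region quality map, then condition the restoration on it) can be illustrated without deep-learning machinery. The following is a hedged sketch, not QCARNet itself: a simple blockiness heuristic stands in for the QECNN, a quality-weighted box blur stands in for the CARCNN, and the block size and blending rule are assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Separable-equivalent box filter used as the artifact-reduction stage."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def quality_map(img, block=8):
    """Stand-in for the QECNN: rate each 8x8 block column by its boundary
    discontinuity (strong JPEG-style blocking -> low quality score)."""
    gx = np.abs(np.diff(img, axis=1))
    q = np.ones_like(img, dtype=float)
    for j in range(block - 1, img.shape[1] - 1, block):
        edge = gx[:, j].mean()                     # jump across the block boundary
        q[:, j - block + 1:j + 1] = 1.0 / (1.0 + edge)
    return q

def adaptive_reduce(img, block=8):
    """Stand-in for the CARCNN: blend the smoothed image with the input,
    smoothing more where the estimated quality is low."""
    q = quality_map(img, block)
    return q * img + (1 - q) * box_blur(img)
```

Because the output is a convex combination of the input and its blur, it stays within the input's value range; the CNN version learns both the map and the restoration jointly instead of using fixed heuristics.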
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compression%20artifact%20reduction" title="compression artifact reduction">compression artifact reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=deblocking" title=" deblocking"> deblocking</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20denoising" title=" image denoising"> image denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20restoration" title=" image restoration"> image restoration</a> </p> <a href="https://publications.waset.org/abstracts/108816/qcarnet-networks-for-quality-adaptive-compression-artifact" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3108</span> Performance of Hybrid Image Fusion: Implementation of Dual-Tree Complex Wavelet Transform Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Gupta">Manoj Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nirmendra%20Singh%20Bhadauria"> Nirmendra Singh Bhadauria</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most applications in image processing require high spatial and high spectral resolution in a single image. For example, satellite imaging systems, traffic monitoring systems, and long-range sensor fusion systems all use image processing. However, most of the available equipment is not capable of providing this type of data. 
A sensor in a surveillance system can only cover the view of a small area at a particular focus, yet the demanding applications of such systems require a view with high coverage of the field. Image fusion provides the possibility of combining different sources of information. In this paper, we have decomposed the images using DTCWT, fused them using average and hybrid (maxima and average) pixel-level techniques, and then compared the quality of both fused images using PSNR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/abstracts/search?q=DT-CWT" title=" DT-CWT"> DT-CWT</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20image%20fusion" title=" average image fusion"> average image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20image%20fusion" title=" hybrid image fusion"> hybrid image fusion</a> </p> <a href="https://publications.waset.org/abstracts/19207/performance-of-hybrid-image-fusion-implementation-of-dual-tree-complex-wavelet-transform-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3107</span> Assessment of Image Databases Used for Human Skin Detection Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Saleh%20Alshehri">Saleh Alshehri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection is a vital step in many applications. Some of these applications are critical, especially those related to security. This underscores the importance of a high-performance detection algorithm. To validate the accuracy of an algorithm, image databases are usually used. However, the suitability of these image databases is still questionable. It is suggested that suitability can be measured mainly by the span of the color space that the database covers. This research investigates the validity of three well-known image databases. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20databases" title="image databases">image databases</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/87836/assessment-of-image-databases-used-for-human-skin-detection-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">271</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3106</span> A Novel Combination Method for Computing the Importance Map of Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Ahmad%20Absetan">Ahmad Absetan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Nooshyar"> Mahdi Nooshyar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The importance map is an image-based measure and is a core part of the resizing algorithm. Importance measures include image gradients, saliency, and entropy, as well as high-level cues such as face detectors, motion detectors, and more. In this work we propose a new method to calculate the importance map: it is generated automatically using a novel combination of image edge density and Harel saliency measurement. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image resizing applications to preserve important areas while maintaining image quality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-aware%20image%20resizing" title="content-aware image resizing">content-aware image resizing</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20saliency" title=" visual saliency"> visual saliency</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20density" title=" edge density"> edge density</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20warping" title=" image warping"> image warping</a> </p> <a href="https://publications.waset.org/abstracts/35692/a-novel-combination-method-for-computing-the-importance-map-of-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">582</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge 
badge-info">3105</span> Blind Data Hiding Technique Using Interpolation of Subsampled Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Singara%20Singh%20Kasana">Singara Singh Kasana</a>, <a href="https://publications.waset.org/abstracts/search?q=Pankaj%20Garg"> Pankaj Garg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a blind data hiding technique based on interpolation of subsampled versions of a cover image is proposed. A subsampled image is taken as the reference image, and an interpolated image is generated from this reference image. The difference between the original cover image and the interpolated image is then used to embed secret data. Comparisons with existing interpolation-based techniques show that the proposed technique provides higher embedding capacity and better visual quality of the marked images. Moreover, the performance of the proposed technique is more stable across different images. 
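The embed/extract cycle described above can be sketched as a toy one-dimensional version (a schematic illustration under simplified assumptions — the `embed`/`extract` functions, the one-bit-per-pixel payload, and the neighbor-average predictor are illustrative, not the authors' exact scheme): reference samples stay untouched, interpolated positions carry the payload as a deviation from their predicted value, and extraction recomputes the predictor from the references, so no side information is needed (blind).

```python
import numpy as np

def embed(cover, bits):
    """Hide one bit per interpolated sample; reference samples stay untouched."""
    stego = cover.astype(np.int32).copy()
    k = 0
    for i in range(1, len(stego) - 1, 2):          # odd indices = interpolated positions
        pred = (stego[i - 1] + stego[i + 1]) // 2  # prediction from reference neighbours
        if k < len(bits):
            stego[i] = pred + bits[k]              # payload = deviation from prediction
            k += 1
    return stego

def extract(stego):
    """Blind extraction: the predictor is recomputed from the untouched references."""
    return [int(stego[i] - (stego[i - 1] + stego[i + 1]) // 2)
            for i in range(1, len(stego) - 1, 2)]
```

Practical interpolation-based schemes derive a variable per-pixel capacity from the magnitude of the interpolation error rather than a fixed single bit, which is where the higher embedding capacity comes from.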
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interpolation" title="interpolation">interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subsampling" title=" image subsampling"> image subsampling</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=SIM" title=" SIM"> SIM</a> </p> <a href="https://publications.waset.org/abstracts/18926/blind-data-hiding-technique-using-interpolation-of-subsampled-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">578</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3104</span> Self-Image of Police Officers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leo%20Carlo%20B.%20Rondina">Leo Carlo B. Rondina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Self-image is an important factor in improving the self-esteem of personnel. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen using random sampling. Using Exploratory Factor Analysis (EFA), latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicates that, for respondents aged 21-40, age statistically improves self-image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=police%20image" title="police image">police image</a>, <a href="https://publications.waset.org/abstracts/search?q=exploratory%20factor%20analysis" title=" exploratory factor analysis"> exploratory factor analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=ordinal%20regression" title=" ordinal regression"> ordinal regression</a>, <a href="https://publications.waset.org/abstracts/search?q=Galatea%20effect" title=" Galatea effect"> Galatea effect</a> </p> <a href="https://publications.waset.org/abstracts/75550/self-image-of-police-officers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75550.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">287</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3103</span> Evaluating Classification with Efficacy Metrics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guofan%20Shao">Guofan Shao</a>, <a href="https://publications.waset.org/abstracts/search?q=Lina%20Tang"> Lina Tang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Zhang"> Hao Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The values of image classification accuracy are affected by class size distributions and classification schemes, making it difficult to compare the performance of classification algorithms across different remote sensing data sources and classification systems. Based on the term efficacy from medicine and pharmacology, we have developed the metrics of image classification efficacy at the map and class levels. 
The novelty of this approach is that a baseline classification is involved in computing image classification efficacies so that the effects of class statistics are reduced. Furthermore, the image classification efficacies are interpretable and comparable, and thus, strengthen the assessment of image data classification methods. We use real-world and hypothetical examples to explain the use of image classification efficacies. The metrics of image classification efficacy meet the critical need to rectify the strategy for the assessment of image classification performance as image classification methods are becoming more diversified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy%20assessment" title="accuracy assessment">accuracy assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=efficacy" title=" efficacy"> efficacy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty" title=" uncertainty"> uncertainty</a> </p> <a href="https://publications.waset.org/abstracts/142555/evaluating-classification-with-efficacy-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142555.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3102</span> A Thematic Analysis on the Drivers of Community Participation for River Restoration Projects, the Case of Kerala, India</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alvin%20Manuel%20Vazhayil">Alvin Manuel Vazhayil</a>, <a href="https://publications.waset.org/abstracts/search?q=Chaozhong%20Tan"> Chaozhong Tan</a>, <a href="https://publications.waset.org/abstracts/search?q=Karl%20M.%20Wantzen"> Karl M. Wantzen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As local community participation in river restoration projects is increasingly recognized to be crucial for sustainable outcomes, researchers are exploring factors that motivate community participation globally. In India, while there is consensus in literature on the importance of community engagement in river restoration projects, research on what drives local communities to participate is limited, especially given the societal and economic challenges common in the Global South. This study addresses this gap by exploring the drivers of community participation in the local river restoration initiatives of the "Now Let Me Flow" campaign in Kerala, India. The project aimed to restore 87,000 kilometers of streams through the middle-ground governance approach that integrated bottom-up community efforts with top-down governmental support. The fieldwork involved interviews with 26 key agents, including local leaders, policy practitioners, politicians, and environmental activists associated with the project, and the collection of secondary data from 12 documents including project reports and news articles. The data was analyzed in NVivo (NVivo 11 Plus for Windows, version 11.3.0.773) using thematic analysis which included two cycles of systematic coding. The findings reveal two main drivers influencing community participation: top-down actions from local governments, and bottom-up drivers within the community. 
The study highlights the importance of local stakeholder collaboration, support of local governments, and local community engagement in successful river restoration projects. These findings are consistent with other empirical studies on participatory environmental problem-solving globally. The results offer crucial insights for policymakers and governments to better design and implement effective and sustainable participatory river restoration projects. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=community%20initiatives" title="community initiatives">community initiatives</a>, <a href="https://publications.waset.org/abstracts/search?q=drivers%20of%20participation" title=" drivers of participation"> drivers of participation</a>, <a href="https://publications.waset.org/abstracts/search?q=environmental%20governance" title=" environmental governance"> environmental governance</a>, <a href="https://publications.waset.org/abstracts/search?q=river%20restoration" title=" river restoration"> river restoration</a> </p> <a href="https://publications.waset.org/abstracts/189315/a-thematic-analysis-on-the-drivers-of-community-participation-for-river-restoration-projects-the-case-of-kerala-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189315.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3101</span> Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyohun%20Kim">Hyohun Kim</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Dongwha%20Shin"> Dongwha Shin</a>, <a href="https://publications.waset.org/abstracts/search?q=Yeonseok%20Kim"> Yeonseok Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ji-Su%20Ahn"> Ji-Su Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Kensuke%20Nakamura"> Kensuke Nakamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Dongeun%20Choi"> Dongeun Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Byung-Woo%20Hong"> Byung-Woo Hong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT) where the artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to its image formation process where a large number of independent detectors are involved, and they are assumed to yield consistent measurements. There are a number of different artefact types including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desired to remove nuisance factors from the degraded image leaving the fundamental intrinsic information that can provide better interpretation of the anatomical and pathological characteristics. However, it is considered as a difficult task due to the high dimensionality and variability of data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework where the denoising auto-encoders are stacked building multiple layers. 
The denoising auto-encoder is a variant of a classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite the degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. 
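The denoising auto-encoder at the core of the method can be sketched as a single-layer toy version (a minimal NumPy illustration under simplifying assumptions — the low-rank toy data, layer sizes, noise level, and learning rate are hypothetical, and the stacking, TV decomposition, and residual-driven dropout steps are omitted): corrupt the input, encode it through a deterministic non-linear mapping, reconstruct at the input size, and train on the squared error against the clean data via back-propagation and gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy low-rank "clean" data standing in for intrinsic image patches
Z = rng.normal(size=(200, 4))
A = rng.normal(size=(4, 16))
X = sigmoid(Z @ A)

n_in, n_hid, lr = 16, 8, 0.5
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def forward(x):
    h = sigmoid(x @ W1 + b1)           # deterministic non-linear encoding
    return h, sigmoid(h @ W2 + b2)     # reconstruction, same size as the input

def loss(r, x):                        # squared error (Gaussian residual assumption)
    return float(((r - x) ** 2).mean())

loss_init = loss(forward(X)[1], X)
for epoch in range(500):
    noisy = X + rng.normal(0, 0.2, X.shape)  # corrupt the input; the target stays clean
    h, r = forward(noisy)
    d2 = (r - X) * r * (1 - r)               # backprop through the output sigmoid
    d1 = (d2 @ W2.T) * h * (1 - h)           # ...and through the hidden layer
    W2 -= lr * (h.T @ d2) / len(X); b2 -= lr * d2.mean(axis=0)   # gradient descent step
    W1 -= lr * (noisy.T @ d1) / len(X); b1 -= lr * d1.mean(axis=0)
loss_final = loss(forward(X)[1], X)
```

Stacking several such layers and feeding the TV-decomposed intrinsic images alongside the originals, as the abstract describes, extends this sketch toward the proposed network.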
This work was supported by the MISP(Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP(Institute for Information and Communications Technology Promotion). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auto-encoder%20neural%20network" title="auto-encoder neural network">auto-encoder neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=CT%20image%20artefact" title=" CT image artefact"> CT image artefact</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intrinsic%20image%20representation" title=" intrinsic image representation"> intrinsic image representation</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20reduction" title=" noise reduction"> noise reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20variation" title=" total variation"> total variation</a> </p> <a href="https://publications.waset.org/abstracts/75915/deep-learning-based-on-image-decomposition-for-restoration-of-intrinsic-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3100</span> Texture Analysis of Grayscale Co-Occurrence Matrix on Mammographic Indexed Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sushma">S. 
Sushma</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Balasubramanian"> S. Balasubramanian</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20C.%20Latha"> K. C. Latha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The mammographic image of breast cancer is compressed and synthesized to obtain coefficient values, which are converted into a (5x5) matrix to obtain an ROI image in which the highest value indicates the affected region. With the same approach, the technique has been extended to differentiate between calcification and normal cell images using the mean value derived from the 5x5 matrix values. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20analysis" title="texture analysis">texture analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=mammographic%20image" title=" mammographic image"> mammographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=partitioned%20gray%20scale%20co-oocurance%20matrix" title=" partitioned gray scale co-oocurance matrix"> partitioned gray scale co-oocurance matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=co-efficient" title=" co-efficient "> co-efficient </a> </p> <a href="https://publications.waset.org/abstracts/17516/texture-analysis-of-grayscale-co-occurrence-matrix-on-mammographic-indexed-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">533</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=104">104</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=105">105</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20restoration&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a 
href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> 
</div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
