CINXE.COM

Search results for: destination image

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: destination image</title> <meta name="description" content="Search results for: destination image"> <meta name="keywords" content="destination image"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="destination image" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="destination image"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3189</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: destination image</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3039</span> Investigation of the Speckle Pattern Effect for Displacement Assessments by Digital Image Correlation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Salim%20%C3%87al%C4%B1%C5%9Fkan">Salim Çalışkan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hakan%20Aky%C3%BCz"> Hakan Akyüz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital image correlation has been accustomed as a versatile and efficient method for measuring displacements on the article surfaces by comparing reference subsets in undeformed images with the define target subset in the distorted image. The theoretical model points out that the accuracy of the digital image correlation displacement data can be exactly anticipated based on the divergence of the image noise and the sum of the squares of the subset intensity gradients. The digital image correlation procedure locates each subset of the original image in the distorted image. The software then determines the displacement values of the centers of the subassemblies, providing the complete displacement measures. In this paper, the effect of the speckle distribution and its effect on displacements measured out plane displacement data as a function of the size of the subset was investigated. Nine groups of speckle patterns were used in this study: samples are sprayed randomly by pre-manufactured patterns of three different hole diameters, each with three coverage ratios, on a computer numerical control punch press. The resulting displacement values, referenced at the center of the subset, are evaluated based on the average of the displacements of the pixel’s interior the subset. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20image%20correlation" title="digital image correlation">digital image correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=speckle%20pattern" title=" speckle pattern"> speckle pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=experimental%20mechanics" title=" experimental mechanics"> experimental mechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=tensile%20test" title=" tensile test"> tensile test</a>, <a href="https://publications.waset.org/abstracts/search?q=aluminum%20alloy" title=" aluminum alloy"> aluminum alloy</a> </p> <a href="https://publications.waset.org/abstracts/171900/investigation-of-the-speckle-pattern-effect-for-displacement-assessments-by-digital-image-correlation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171900.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3038</span> A User Interface for Easiest Way Image Encryption with Chaos</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20L%C3%B3pez-Mancilla">D. López-Mancilla</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20M.%20Roblero-Villa"> J. M. Roblero-Villa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since 1990, the research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 90's as a new approach for signal encoding that differs from the conventional methods that use numerical algorithms as the encryption key. The algorithms for image encryption have received a lot of attention because of the need to find security on image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the standard of data encryption (DES), have the drawback of low level of efficiency when the image is large. The encrypting based on chaos proposes a new and efficient way to get a fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and easiest way to encrypt images using chaos are presented. The main idea is to reshape any image into a n-dimensional vector and combine it with vector extracted from a chaotic system, in such a way that the vector image can be hidden within the chaotic vector. Once this is done, an array is formed with the original dimensions of the image and turns again. An analysis of the security of encryption from the images using statistical analysis is made and is used a stage of optimization for image encryption security and, at the same time, the image can be accurately recovered. The user interface uses the algorithms designed for the encryption of images, allowing you to read an image from the hard drive or another external device. The user interface, encrypt the image allowing three modes of encryption. These modes are given by three different chaotic systems that the user can choose. Once encrypted image, is possible to observe the safety analysis and save it on the hard disk. The main results of this study show that this simple method of encryption, using the optimization stage, allows an encryption security, competitive with complicated encryption methods used in other works. In addition, the user interface allows encrypting image with chaos, and to submit it through any public communication channel, including internet. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=chaos" title=" chaos"> chaos</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20communications" title=" secure communications"> secure communications</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20interface" title=" user interface"> user interface</a> </p> <a href="https://publications.waset.org/abstracts/28022/a-user-interface-for-easiest-way-image-encryption-with-chaos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28022.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">489</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3037</span> Active Contours for Image Segmentation Based on Complex Domain Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sajid%20Hussain">Sajid Hussain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The complex domain approach for image segmentation based on active contour has been designed, which deforms step by step to partition an image into numerous expedient regions. A novel region-based trigonometric complex pressure force function is proposed, which propagates around the region of interest using image forces. The signed trigonometric force function controls the propagation of the active contour and the active contour stops on the exact edges of the object accurately. The proposed model makes the level set function binary and uses Gaussian smoothing kernel to adjust and escape the re-initialization procedure. The working principle of the proposed model is as follows: The real image data is transformed into complex data by iota (i) times of image data and the average iota (i) times of horizontal and vertical components of the gradient of image data is inserted in the proposed model to catch complex gradient of the image data. A simple finite difference mathematical technique has been used to implement the proposed model. The efficiency and robustness of the proposed model have been verified and compared with other state-of-the-art models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=active%20contour" title=" active contour"> active contour</a>, <a href="https://publications.waset.org/abstracts/search?q=level%20set" title=" level set"> level set</a>, <a href="https://publications.waset.org/abstracts/search?q=Mumford%20and%20Shah%20model" title=" Mumford and Shah model"> Mumford and Shah model</a> </p> <a href="https://publications.waset.org/abstracts/161606/active-contours-for-image-segmentation-based-on-complex-domain-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3036</span> Structural Analysis of Kamaluddin Behzad&#039;s Works Based on Roland Barthes&#039; Theory of Communication, &#039;Text and Image&#039;</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20Khani%20Oushani">Mahsa Khani Oushani</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Kazem%20Hasanvand"> Mohammad Kazem Hasanvand</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Text and image have always been two important components in Iranian layout. The interactive connection between text and image has shaped the art of book design with multiple patterns. In this research, first the structure and visual elements in the research data were analyzed and then the position of the text element and the image element in relation to each other based on Roland Barthes theory on the three theories of text and image, were studied and analyzed and the results were compared, and interpreted. The purpose of this study is to investigate the pattern of text and image in the works of Kamaluddin Behzad based on three Roland Barthes communication theories, 1. Descriptive communication, 2. Reference communication, 3. Matched communication. The questions of this research are what is the relationship between text and image in Behzad's works? And how is it defined according to Roland Barthes theory? The method of this research has been done with a structuralist approach with a descriptive-analytical method in a library collection method. The information has been collected in the form of documents (library) and is a tool for collecting online databases. Findings show that the dominant element in Behzad's drawings is with the image and has created a reference relationship in the layout of the drawings, but in some cases it achieves a different relationship that despite the preference of the image on the page, the text is dispersed proportionally on the page and plays a more active role, played within the image. The text and the image support each other equally on the page; Roland Barthes equates this connection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text" title="text">text</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamaluddin%20Behzad" title=" Kamaluddin Behzad"> Kamaluddin Behzad</a>, <a href="https://publications.waset.org/abstracts/search?q=Roland%20Barthes" title=" Roland Barthes"> Roland Barthes</a>, <a href="https://publications.waset.org/abstracts/search?q=communication%20theory" title=" communication theory"> communication theory</a> </p> <a href="https://publications.waset.org/abstracts/138346/structural-analysis-of-kamaluddin-behzads-works-based-on-roland-barthes-theory-of-communication-text-and-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">192</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3035</span> Lossless Secret Image Sharing Based on Integer Discrete Cosine Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li%20Li">Li Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20A.%20Abd%20El-Latif"> Ahmed A. Abd El-Latif</a>, <a href="https://publications.waset.org/abstracts/search?q=Aya%20El-Fatyany"> Aya El-Fatyany</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Amin"> Mohamed Amin </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a new secret image sharing method based on integer discrete cosine transform (IntDCT). It first transforms the original image into the frequency domain (DCT coefficients) using IntDCT, which are operated on each block with size 8*8. Then, it generates shares among each DCT coefficients in the same place of each block, that is, all the DC components are used to generate DC shares, the ith AC component in each block are utilized to generate ith AC shares, and so on. The DC and AC shares components with the same number are combined together to generate DCT shadows. Experimental results and analyses show that the proposed method can recover the original image lossless than those methods based on traditional DCT and is more sensitive to tiny change in both the coefficients and the content of the image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=secret%20image%20sharing" title="secret image sharing">secret image sharing</a>, <a href="https://publications.waset.org/abstracts/search?q=integer%20DCT" title=" integer DCT"> integer DCT</a>, <a href="https://publications.waset.org/abstracts/search?q=lossless%20recovery" title=" lossless recovery"> lossless recovery</a>, <a href="https://publications.waset.org/abstracts/search?q=sensitivity" title=" sensitivity"> sensitivity</a> </p> <a href="https://publications.waset.org/abstracts/36824/lossless-secret-image-sharing-based-on-integer-discrete-cosine-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36824.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3034</span> New Approaches for the Handwritten Digit Image Features Extraction for Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20Ravi%20Babu">U. Ravi Babu</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Mastan"> Mohd Mastan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present paper proposes a novel approach for handwritten digit recognition system. The present paper extract digit image features based on distance measure and derives an algorithm to classify the digit images. The distance measure can be performing on the thinned image. Thinning is the one of the preprocessing technique in image processing. The present paper mainly concentrated on an extraction of features from digit image for effective recognition of the numeral. To find the effectiveness of the proposed method tested on MNIST database, CENPARMI, CEDAR, and newly collected data. The proposed method is implemented on more than one lakh digit images and it gets good comparative recognition results. The percentage of the recognition is achieved about 97.32%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=handwritten%20digit%20recognition" title="handwritten digit recognition">handwritten digit recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20measure" title=" distance measure"> distance measure</a>, <a href="https://publications.waset.org/abstracts/search?q=MNIST%20database" title=" MNIST database"> MNIST database</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20features" title=" image features"> image features</a> </p> <a href="https://publications.waset.org/abstracts/40518/new-approaches-for-the-handwritten-digit-image-features-extraction-for-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">461</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3033</span> Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Z.%20Mortezaie">Z. Mortezaie</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Hassanpour"> H. Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Asadi%20Amiri"> S. Asadi Amiri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost the image contrast and to improve digital images suffering from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed, considering the gradient variations, for individual pixels of the image. Subjective and objective image quality assessments are used to compare the performance of the proposed method both with the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method has a better performance in comparison to the other existing methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unsharp%20masking" title="unsharp masking">unsharp masking</a>, <a href="https://publications.waset.org/abstracts/search?q=blur%20image" title=" blur image"> blur image</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-region%20gradient" title=" sub-region gradient"> sub-region gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a> </p> <a href="https://publications.waset.org/abstracts/73795/contrast-enhancement-in-digital-images-using-an-adaptive-unsharp-masking-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">214</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3032</span> Cloud Shield: Model to Secure User Data While Using Content Delivery Network Services</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rachna%20Jain">Rachna Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=Sushila%20Madan"> Sushila Madan</a>, <a href="https://publications.waset.org/abstracts/search?q=Bindu%20Garg"> Bindu Garg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cloud computing is the key powerhouse in numerous organizations due to shifting of their data to the cloud environment. In recent years it has been observed that cloud-based-services are being used on large scale for content storage, distribution and processing. Various issues have been observed in cloud computing environment that need to be addressed. Security and privacy are found topmost concern area. In this paper, a novel security model is proposed to secure data by utilizing CDN services like image to icon conversion. CDN Service is a content delivery service which converts an image to icon, word to pdf & Latex to pdf etc. Presented model is used to convert an image into icon by keeping image secret. Here security of image is imparted so that image should be encrypted and decrypted by data owners only. It is also discussed in the paper that how server performs multiplication and selection on encrypted data without decryption. The data can be image file, word file, audio or video file. Moreover, the proposed model is capable enough to multiply images, encrypt them and send to a server application for conversion. Eventually, the prime objective is to encrypt an image and convert the encrypted image to image Icon by utilizing homomorphic encryption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cloud%20computing" title="cloud computing">cloud computing</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20data%20security" title=" user data security"> user data security</a>, <a href="https://publications.waset.org/abstracts/search?q=homomorphic%20encryption" title=" homomorphic encryption"> homomorphic encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20multiplication" title=" image multiplication"> image multiplication</a>, <a href="https://publications.waset.org/abstracts/search?q=CDN%20service" title=" CDN service"> CDN service</a> </p> <a href="https://publications.waset.org/abstracts/37699/cloud-shield-model-to-secure-user-data-while-using-content-delivery-network-services" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37699.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">334</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3031</span> How Cultural Tourists Perceive Authenticity in World Heritage Historic Centers: An Empirical Research</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Odete%20Paiva">Odete Paiva</a>, <a href="https://publications.waset.org/abstracts/search?q=Cl%C3%A1udia%20Seabra"> Cláudia Seabra</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Lu%C3%ADs%20Abrantes"> José Luís Abrantes</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernanda%20Cravid%C3%A3o"> Fernanda Cravidão</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is a clear ‘cult of authenticity’, at least in modern Western society. So, there is a need to analyze the tourist perception of authenticity, bearing in mind the destination, its attractions, motivations, cultural distance, and contact with other tourists. Our study seeks to investigate the relationship among cultural values, image, sense of place, perception of authenticity and behavior intentions at World Heritage Historic Centers. From a theoretical perspective, few researches focus on the impact of cultural values, image and sense of place on authenticity and intentions behavior in tourists. The intention of this study is to help close this gap. A survey was applied to collect data from tourists visiting two World Heritage Historic Centers – Guimarães in Portugal and Cordoba in Spain. Data was analyzed in order to establish a structural equation model (SEM). Discussion centers on the implications of model to theory and managerial development of tourism strategies. Recommendations for destinations managers and promoters and tourist organizations administrators are addressed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=authenticity%20perception" title="authenticity perception">authenticity perception</a>, <a href="https://publications.waset.org/abstracts/search?q=behavior%20intentions" title=" behavior intentions"> behavior intentions</a>, <a href="https://publications.waset.org/abstracts/search?q=cultural%20tourism" title=" cultural tourism"> cultural tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=cultural%20values" title=" cultural values"> cultural values</a>, <a href="https://publications.waset.org/abstracts/search?q=world%20heritage%20historic%20centers" title=" world heritage historic centers"> world heritage historic centers</a> </p> <a href="https://publications.waset.org/abstracts/49133/how-cultural-tourists-perceive-authenticity-in-world-heritage-historic-centers-an-empirical-research" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3030</span> Optimizing Machine Learning Through Python Based Image Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Srinidhi.%20A">Srinidhi. A</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Twinkle%20Hareendran"> Twinkle Hareendran</a>, <a href="https://publications.waset.org/abstracts/search?q=Vriksha%20Prakash"> Vriksha Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper looks in into great detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure a robust performance of models. Further, we will discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, which essentially includes the preprocessing techniques interrelated with model performance. Conclusions from this study pinpoint the best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20applications" title=" machine learning applications"> machine learning applications</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a> </p> <a href="https://publications.waset.org/abstracts/193107/optimizing-machine-learning-through-python-based-image-processing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">13</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3029</span> Post-Processing Method for Performance Improvement of Aerial Image Parcel Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Donghee%20Noh">Donghee Noh</a>, <a href="https://publications.waset.org/abstracts/search?q=Seonhyeong%20Kim"> Seonhyeong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Junhwan%20Choi"> Junhwan Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Heegon%20Kim"> Heegon Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sooho%20Jung"> Sooho Jung</a>, <a href="https://publications.waset.org/abstracts/search?q=Keunho%20Park"> Keunho Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we describe an image post-processing method to enhance the performance of the parcel segmentation method using deep learning-based aerial images conducted in previous studies. The study results were evaluated using a confusion matrix, IoU, Precision, Recall, and F1-Score. In the case of the confusion matrix, it was observed that the false positive value, which is the result of misclassification, was greatly reduced as a result of image post-processing. The average IoU was 0.9688 in the image post-processing, which is higher than the deep learning result of 0.8362, and the F1-Score was also 0.9822 in the image post-processing, which was higher than the deep learning result of 0.8850. As a result of the experiment, it was found that the proposed technique positively complements the deep learning results in segmenting the parcel of interest. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20image" title="aerial image">aerial image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20process" title=" image process"> image process</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=open%20field%20smart%20farm" title=" open field smart farm"> open field smart farm</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/170203/post-processing-method-for-performance-improvement-of-aerial-image-parcel-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170203.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3028</span> Analyzing Strategic Alliances of Museums: The Case of Girona (Spain)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raquel%20Camprub%C3%AD">Raquel Camprubí</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cultural tourism has been postulated as relevant motivation for tourist over the world during the last decades. In this context, museums are the main attraction for cultural tourists who are seeking to connect with the history and culture of the visited place. From the point of view of an urban destination, museums and other cultural resources are essential to have a strong tourist supply at the destination, in order to be capable of catching attention and interest of cultural tourists. In particular, museums’ challenge is to be prepared to offer the best experience to their visitors without to forget their mission-based mainly on protection of its collection and other social goals. Thus, museums individually want to be competitive and have good positioning to achieve their strategic goals. The life cycle of the destination and the level of maturity of its tourism product influence the need of tourism agents to cooperate and collaborate among them, in order to rejuvenate their product and become more competitive as a destination. Additionally, prior studies have considered an approach of different models of a public and private partnership, and collaborative and cooperative relations developed among the agents of a tourism destination. However, there are no studies that pay special attention to museums and the strategic alliances developed to obtain mutual benefits. Considering this background, the purpose of this study is to analyze in what extent museums of a given urban destination have established strategic links and relations among them, in order to improve their competitive position at both individual and destination level. In order to achieve the aim of this study, the city of Girona (Spain) and the museums located in this city are taken as a case study. Data collection was conducted using in-depth interviews, in order to collect all the qualitative data related to nature, strengthen and purpose of the relational ties established among the museums of the city or other relevant tourism agents of the city. To conduct data analysis, a Social Network Analysis (SNA) approach was taken using UCINET software. Position of the agents in the network and structure of the network was analyzed, and qualitative data from interviews were used to interpret SNA results. Finding reveals the existence of strong ties among some of the museums of the city, particularly to create and promote joint products. Nevertheless, there were detected outsiders who have an individual strategy, without collaboration and cooperation with other museums or agents of the city. Results also show that some relational ties have an institutional origin, while others are the result of a long process of cooperation with common projects. Conclusions put in evidence that collaboration and cooperation of museums had been positive to increase the attractiveness of the museum and the city as a cultural destination. Future research and managerial implications are also mentioned. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cultural%20tourism" title="cultural tourism">cultural tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=competitiveness" title=" competitiveness"> competitiveness</a>, <a href="https://publications.waset.org/abstracts/search?q=museums" title=" museums"> museums</a>, <a href="https://publications.waset.org/abstracts/search?q=Social%20Network%20analysis" title=" Social Network analysis"> Social Network analysis</a> </p> <a href="https://publications.waset.org/abstracts/106310/analyzing-strategic-alliances-of-museums-the-case-of-girona-spain" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/106310.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3027</span> GPU Accelerated Fractal Image Compression for Medical Imaging in Parallel Computing Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Md.%20Enamul%20Haque">Md. Enamul Haque</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdullah%20Al%20Kaisan"> Abdullah Al Kaisan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmudur%20R.%20Saniat"> Mahmudur R. Saniat</a>, <a href="https://publications.waset.org/abstracts/search?q=Aminur%20Rahman"> Aminur Rahman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we have implemented both sequential and parallel version of fractal image compression algorithms using CUDA (Compute Unified Device Architecture) programming model for parallelizing the program in Graphics Processing Unit for medical images, as they are highly similar within the image itself. There is several improvements in the implementation of the algorithm as well. Fractal image compression is based on the self similarity of an image, meaning an image having similarity in majority of the regions. We take this opportunity to implement the compression algorithm and monitor the effect of it using both parallel and sequential implementation. Fractal compression has the property of high compression rate and the dimensionless scheme. Compression scheme for fractal image is of two kinds, one is encoding and another is decoding. Encoding is very much computational expensive. On the other hand decoding is less computational. The application of fractal compression to medical images would allow obtaining much higher compression ratios. While the fractal magnification an inseparable feature of the fractal compression would be very useful in presenting the reconstructed image in a highly readable form. However, like all irreversible methods, the fractal compression is connected with the problem of information loss, which is especially troublesome in the medical imaging. A very time consuming encoding process, which can last even several hours, is another bothersome drawback of the fractal compression. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accelerated%20GPU" title="accelerated GPU">accelerated GPU</a>, <a href="https://publications.waset.org/abstracts/search?q=CUDA" title=" CUDA"> CUDA</a>, <a href="https://publications.waset.org/abstracts/search?q=parallel%20computing" title=" parallel computing"> parallel computing</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20image%20compression" title=" fractal image compression"> fractal image compression</a> </p> <a href="https://publications.waset.org/abstracts/5645/gpu-accelerated-fractal-image-compression-for-medical-imaging-in-parallel-computing-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">335</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3026</span> A Technique for Image Segmentation Using K-Means Clustering Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadia%20Basar">Sadia Basar</a>, <a href="https://publications.waset.org/abstracts/search?q=Naila%20Habib"> Naila Habib</a>, <a href="https://publications.waset.org/abstracts/search?q=Awais%20Adnan"> Awais Adnan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents the Technique for Image Segmentation Using K-Means Clustering Classification. The presented algorithms were specific, however, missed the neighboring information and required high-speed computerized machines to run the segmentation algorithms. Clustering is the process of partitioning a group of data points into a small number of clusters. The proposed method is content-aware and feature extraction method which is able to run on low-end computerized machines, simple algorithm, required low-quality streaming, efficient and used for security purpose. It has the capability to highlight the boundary and the object. At first, the user enters the data in the representation of the input. Then in the next step, the digital image is converted into groups clusters. Clusters are divided into many regions. The same categories with same features of clusters are assembled within a group and different clusters are placed in other groups. Finally, the clusters are combined with respect to similar features and then represented in the form of segments. The clustered image depicts the clear representation of the digital image in order to highlight the regions and boundaries of the image. At last, the final image is presented in the form of segments. All colors of the image are separated in clusters. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=K-means%20function" title=" K-means function"> K-means function</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20and%20global%20minimum" title=" local and global minimum"> local and global minimum</a>, <a href="https://publications.waset.org/abstracts/search?q=region" title=" region"> region</a> </p> <a href="https://publications.waset.org/abstracts/25635/a-technique-for-image-segmentation-using-k-means-clustering-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">376</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3025</span> Performance Evaluation of Content Based Image Retrieval Using Indexed Views </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tahir%20Iqbal">Tahir Iqbal</a>, <a href="https://publications.waset.org/abstracts/search?q=Mumtaz%20Ali"> Mumtaz Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Syed%20Wajahat%20Kareem"> Syed Wajahat Kareem</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Harris"> Muhammad Harris </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital information is expanding in exponential order in our life. Information that is residing online and offline are stored in huge repositories relating to every aspect of our lives. Getting the required information is a task of retrieval systems. Content based image retrieval (CBIR) is a retrieval system that retrieves the required information from repositories on the basis of the contents of the image. Time is a critical factor in retrieval system and using indexed views with CBIR system improves the time efficiency of retrieved results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content%20based%20image%20retrieval%20%28CBIR%29" title="content based image retrieval (CBIR)">content based image retrieval (CBIR)</a>, <a href="https://publications.waset.org/abstracts/search?q=indexed%20view" title=" indexed view"> indexed view</a>, <a href="https://publications.waset.org/abstracts/search?q=color" title=" color"> color</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=cross%20correlation" title=" cross correlation"> cross correlation</a> </p> <a href="https://publications.waset.org/abstracts/11165/performance-evaluation-of-content-based-image-retrieval-using-indexed-views" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11165.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3024</span> Image Distortion Correction Method of 2-MHz Side Scan Sonar for Underwater Structure Inspection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Youngseok%20Kim">Youngseok Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chul%20Park"> Chul Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonghwa%20Yi"> Jonghwa Yi</a>, <a href="https://publications.waset.org/abstracts/search?q=Sangsik%20Choi"> Sangsik Choi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 2-MHz Side Scan SONAR (SSS) attached to the boat for inspection of underwater structures is affected by shaking. It is difficult to determine the exact scale of damage of structure. In this study, a motion sensor is attached to the inside of the 2-MHz SSS to get roll, pitch, and yaw direction data, and developed the image stabilization tool to correct the sonar image. We checked that reliable data can be obtained with an average error rate of 1.99% between the measured value and the actual distance through experiment. It is possible to get the accurate sonar data to inspect damage in underwater structure. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20stabilization" title="image stabilization">image stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20sensor" title=" motion sensor"> motion sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=safety%20inspection" title=" safety inspection"> safety inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=sonar%20image" title=" sonar image"> sonar image</a>, <a href="https://publications.waset.org/abstracts/search?q=underwater%20structure" title=" underwater structure"> underwater structure</a> </p> <a href="https://publications.waset.org/abstracts/84612/image-distortion-correction-method-of-2-mhz-side-scan-sonar-for-underwater-structure-inspection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84612.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3023</span> Change Detection Method Based on Scale-Invariant Feature Transformation Keypoints and Segmentation for Synthetic Aperture Radar Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lan%20Du">Lan Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang"> Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hui%20Dai"> Hui Dai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Synthetic aperture radar (SAR) image change detection has recently become a challenging problem owing to the existence of speckle noises. In this paper, an unsupervised distribution-free change detection for SAR image based on scale-invariant feature transform (SIFT) keypoints and segmentation is proposed. Firstly, the noise-robust SIFT keypoints which reveal the blob-like structures in an image are extracted in the log-ratio image to reduce the detection range. Then, different from the traditional change detection which directly obtains the change-detection map from the difference image, segmentation is made around the extracted keypoints in the two original multitemporal SAR images to obtain accurate changed region. At last, the change-detection map is generated by comparing the two segmentations. Experimental results on the real SAR image dataset demonstrate the effectiveness of the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title="change detection">change detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Synthetic%20Aperture%20Radar%20%28SAR%29" title=" Synthetic Aperture Radar (SAR)"> Synthetic Aperture Radar (SAR)</a>, <a href="https://publications.waset.org/abstracts/search?q=Scale-Invariant%20Feature%20Transformation%20%28SIFT%29" title=" Scale-Invariant Feature Transformation (SIFT)"> Scale-Invariant Feature Transformation (SIFT)</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/66992/change-detection-method-based-on-scale-invariant-feature-transformation-keypoints-and-segmentation-for-synthetic-aperture-radar-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66992.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3022</span> Pre-Processing of Ultrasonography Image Quality Improvement in Cases of Cervical Cancer Using Image Enhancement </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Retno%20Supriyanti">Retno Supriyanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Teguh%20Budiono"> Teguh Budiono</a>, <a href="https://publications.waset.org/abstracts/search?q=Yogi%20Ramadhani"> Yogi Ramadhani</a>, <a href="https://publications.waset.org/abstracts/search?q=Haris%20B.%20Widodo"> Haris B. Widodo</a>, <a href="https://publications.waset.org/abstracts/search?q=Arwita%20Mulyawati"> Arwita Mulyawati</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cervical cancer is the leading cause of mortality in cancer-related diseases. In this diagnosis doctors usually perform several tests to determine the presence of cervical cancer in a patient. However, these checks require support equipment to get the results in more detail. One is by using ultrasonography. However, for the developing countries most of the existing ultrasonography has a low resolution. The goal of this research is to obtain abnormalities on low-resolution ultrasound images especially for cervical cancer case. In this paper, we emphasize our work to use Image Enhancement for pre-processing image quality improvement. The result shows that pre-processing stage is promising to support further analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cervical%20cancer" title="cervical cancer">cervical cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=mortality" title=" mortality"> mortality</a>, <a href="https://publications.waset.org/abstracts/search?q=low-resolution" title=" low-resolution"> low-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement." title=" image enhancement. "> image enhancement. </a> </p> <a href="https://publications.waset.org/abstracts/26523/pre-processing-of-ultrasonography-image-quality-improvement-in-cases-of-cervical-cancer-using-image-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26523.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">636</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3021</span> Traffic Light Detection Using Image Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaishnavi%20Shivde">Vaishnavi Shivde</a>, <a href="https://publications.waset.org/abstracts/search?q=Shrishti%20Sinha"> Shrishti Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Trapti%20Mishra"> Trapti Mishra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions as well as for autonomous driving in the city. This paper proposed a deep-learning-based traffic light recognition method that consists of a pixel-wise image segmentation technique and a fully convolutional network i.e., UNET architecture. This paper has used a method for detecting the position and recognizing the state of the traffic lights in video sequences is presented and evaluated using Traffic Light Dataset which contains masked traffic light image data. The first stage is the detection, which is accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, segmentation of possible traffic lights. The second stage is the recognition, which means identifying the color of the traffic light or knowing the state of traffic light which is achieved by using a Convolutional Neural Network (UNET architecture). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20light%20detection" title="traffic light detection">traffic light detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/137254/traffic-light-detection-using-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">173</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3020</span> Image Captioning with Vision-Language Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Promise%20Ekpo%20Osaine">Promise Ekpo Osaine</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Melesse"> Daniel Melesse</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community as it connects vision and language understanding, especially in settings where it is required that a model understands the content shown in an image and generates semantically and grammatically correct descriptions. In this project, we followed a standard approach to a deep learning-based image captioning model, injecting architecture for the encoder-decoder setup, where the encoder extracts image features, and the decoder generates a sequence of words that represents the image content. As such, we investigated image encoders, which are ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. As a caption generation structure, we explored long short-term memory (LSTM). The CLIP-LSTM model demonstrated superior performance compared to the encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-modal%20AI%20systems" title="multi-modal AI systems">multi-modal AI systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20captioning" title=" image captioning"> image captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=decoder" title=" decoder"> decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=BLUE%20score" title=" BLUE score"> BLUE score</a> </p> <a href="https://publications.waset.org/abstracts/181849/image-captioning-with-vision-language-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3019</span> Embedded Digital Image System </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dawei%20Li">Dawei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Liu"> Cheng Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yiteng%20Liu"> Yiteng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an embedded digital image system for Chinese space environment vertical exploration sounding rocket. In order to record the flight status of the sounding rocket as well as the payloads, an onboard embedded image processing system based on ADV212, a JPEG2000 compression chip, is designed in this paper. Since the sounding rocket is not designed to be recovered, all image data should be transmitted to the ground station before the re-entry while the downlink band used for the image transmission is only about 600 kbps. Under the same condition of compression ratio compared with other algorithm, JPEG2000 standard algorithm can achieve better image quality. So JPEG2000 image compression is applied under this condition with a limited downlink data band. This embedded image system supports lossless to 200:1 real time compression, with two cameras to monitor nose ejection and motor separation, and two cameras to monitor boom deployment. The encoder, ADV7182, receives PAL signal from the camera, then output the ITU-R BT.656 signal to ADV212. ADV7182 switches between four input video channels as the program sequence. Two SRAMs are used for Ping-pong operation and one 512 Mb SDRAM for buffering high frame-rate images. The whole image system has the characteristics of low power dissipation, low cost, small size and high reliability, which is rather suitable for this sounding rocket application. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ADV212" title="ADV212">ADV212</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20system" title=" image system"> image system</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG2000" title=" JPEG2000"> JPEG2000</a>, <a href="https://publications.waset.org/abstracts/search?q=sounding%20rocket" title=" sounding rocket"> sounding rocket</a> </p> <a href="https://publications.waset.org/abstracts/37615/embedded-digital-image-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37615.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3018</span> A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Takanori%20Tanaka">Takanori Tanaka</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Kitao"> Daisuke Kitao</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Ikeda"> Daisuke Ikeda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aurora is an attractive phenomenon but it is difficult to understand the whole mechanism of it. An approach of data-intensive science might be an effective approach to elucidate such a difficult phenomenon. To do that we need labeled data, which shows when and what types of auroras, have appeared. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse aurora, and the other do not any aurora. The proposed system retrieves images which are similar to the query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromso Norway, we evaluate two methods of image recognition methods with or without our original color filtering method. The best performance is achieved when SIFT with the color filtering is used and its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data-intensive%20science" title="data-intensive science">data-intensive science</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=aurora" title=" aurora"> aurora</a> </p> <a href="https://publications.waset.org/abstracts/19532/a-similar-image-retrieval-system-for-auroral-all-sky-images-based-on-local-features-and-color-filtering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">449</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3017</span> Image Inpainting Model with Small-Sample Size Based on Generative Adversary Network and Genetic Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiawen%20Wang">Jiawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Qijun%20Chen"> Qijun Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The performance of most machine-learning methods for image inpainting depends on the quantity and quality of the training samples. However, it is very expensive or even impossible to obtain a great number of training samples in many scenarios. In this paper, an image inpainting model based on a generative adversary network (GAN) is constructed for the cases when the number of training samples is small. Firstly, a feature extraction network (F-net) is incorporated into the GAN network to utilize the available information of the inpainting image. The weighted sum of the extracted feature and the random noise acts as the input to the generative network (G-net). The proposed network can be trained well even when the sample size is very small. Secondly, in the phase of the completion for each damaged image, a genetic algorithm is designed to search an optimized noise input for G-net; based on this optimized input, the parameters of the G-net and F-net are further learned (Once the completion for a certain damaged image ends, the parameters restore to its original values obtained in the training phase) to generate an image patch that not only can fill the missing part of the damaged image smoothly but also has visual semantics. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20inpainting" title="image inpainting">image inpainting</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversary%20nets" title=" generative adversary nets"> generative adversary nets</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=small-sample%20size" title=" small-sample size"> small-sample size</a> </p> <a href="https://publications.waset.org/abstracts/126552/image-inpainting-model-with-small-sample-size-based-on-generative-adversary-network-and-genetic-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3016</span> Brainbow Image Segmentation Using Bayesian Sequential Partitioning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yayun%20Hsu">Yayun Hsu</a>, <a href="https://publications.waset.org/abstracts/search?q=Henry%20Horng-Shing%20Lu"> Henry Horng-Shing Lu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a data-driven, biology-inspired neural segmentation method of 3D drosophila Brainbow images. We use Bayesian Sequential Partitioning algorithm for probabilistic modeling, which can be used to detect somas and to eliminate cross talk effects. This work attempts to develop an automatic methodology for neuron image segmentation, which nowadays still lacks a complete solution due to the complexity of the image. The proposed method does not need any predetermined, risk-prone thresholds since biological information is inherently included in the image processing procedure. Therefore, it is less sensitive to variations in neuron morphology; meanwhile, its flexibility would be beneficial for tracing the intertwining structure of neurons. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brainbow" title="brainbow">brainbow</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20imaging" title=" 3D imaging"> 3D imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=neuron%20morphology" title=" neuron morphology"> neuron morphology</a>, <a href="https://publications.waset.org/abstracts/search?q=biological%20data%20mining" title=" biological data mining"> biological data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=non-parametric%20learning" title=" non-parametric learning"> non-parametric learning</a> </p> <a href="https://publications.waset.org/abstracts/2189/brainbow-image-segmentation-using-bayesian-sequential-partitioning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2189.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">487</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3015</span> Image Compression on Region of Interest Based on SPIHT Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sudeepti%20Dayal">Sudeepti Dayal</a>, <a href="https://publications.waset.org/abstracts/search?q=Neelesh%20Gupta"> Neelesh Gupta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image abbreviation is utilized for reducing the size of a file without demeaning the quality of the image to an objectionable level. The depletion in file size permits more images to be deposited in a given number of spaces. It also minimizes the time necessary for images to be transferred. Storage of medical images is a most researched area in the current scenario. To store a medical image, there are two parameters on which the image is divided, regions of interest and non-regions of interest. The best way to store an image is to compress it in such a way that no important information is lost. Compression can be done in two ways, namely lossy, and lossless compression. Under that, several compression algorithms are applied. In the paper, two algorithms are used which are, discrete cosine transform, applied to non-region of interest (lossy), and discrete wavelet transform, applied to regions of interest (lossless). The paper introduces SPIHT (set partitioning hierarchical tree) algorithm which is applied onto the wavelet transform to obtain good compression ratio from which an image can be stored efficiently. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Compression%20ratio" title="Compression ratio">Compression ratio</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/abstracts/search?q=SPIHT" title=" SPIHT"> SPIHT</a>, <a href="https://publications.waset.org/abstracts/search?q=DCT" title=" DCT"> DCT</a> </p> <a href="https://publications.waset.org/abstracts/43377/image-compression-on-region-of-interest-based-on-spiht-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43377.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3014</span> CERD: Cost Effective Route Discovery in Mobile Ad Hoc Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anuradha%20Banerjee">Anuradha Banerjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A mobile ad hoc network is an infrastructure less network, where nodes are free to move independently in any direction. The nodes have limited battery power; hence, we require energy efficient route discovery technique to enhance their lifetime and network performance. In this paper, we propose an energy-efficient route discovery technique CERD that greatly reduces the number of route requests flooded into the network and also gives priority to the route request packets sent from the routers that has communicated with the destination very recently, in single or multi-hop paths. This does not only enhance the lifetime of nodes but also decreases the delay in tracking the destination. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ad%20hoc%20network" title="ad hoc network">ad hoc network</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20efficiency" title=" energy efficiency"> energy efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=flooding" title=" flooding"> flooding</a>, <a href="https://publications.waset.org/abstracts/search?q=node%20lifetime" title=" node lifetime"> node lifetime</a>, <a href="https://publications.waset.org/abstracts/search?q=route%20discovery" title=" route discovery"> route discovery</a> </p> <a href="https://publications.waset.org/abstracts/20336/cerd-cost-effective-route-discovery-in-mobile-ad-hoc-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3013</span> Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siraa%20Ben%20Ftima">Siraa Ben Ftima</a>, <a href="https://publications.waset.org/abstracts/search?q=Mourad%20Talbi"> Mourad Talbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tahar%20Ezzedine"> Tahar Ezzedine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a technique of secure watermarking of grayscale and color images. This technique consists in applying the Singular Value Decomposition (SVD) in LWT (Lifting Wavelet Transform) domain in order to insert the watermark image (grayscale) in the host image (grayscale or color image). It also uses signature in the embedding and extraction steps. The technique is applied on a number of grayscale and color images. The performance of this technique is proved by the PSNR (Pick Signal to Noise Ratio), the MSE (Mean Square Error) and the SSIM (structural similarity) computations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lifting%20wavelet%20transform%20%28LWT%29" title="lifting wavelet transform (LWT)">lifting wavelet transform (LWT)</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-space%20vectorial%20decomposition" title=" sub-space vectorial decomposition"> sub-space vectorial decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=secure" title=" secure"> secure</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20watermarking" title=" image watermarking"> image watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=watermark" title=" watermark"> watermark</a> </p> <a href="https://publications.waset.org/abstracts/70998/lifting-wavelet-transform-and-singular-values-decomposition-for-secure-image-watermarking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">276</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3012</span> Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Alavianmehr">M. A. Alavianmehr</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tashk"> A. Tashk</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sodagaran"> A. Sodagaran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modeling background and moving objects are significant techniques for video surveillance and other video processing applications. This paper presents a foreground detection algorithm that is robust against illumination changes and noise based on adaptive mixture Gaussian model (GMM), and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In the previous methods, the image of still objects (background image) is not significant. On the contrary, this method is based on forming a meticulous background image and exploiting it for separating moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or is detected in real-time by forming a mathematical or exponential average of successive images. The proposed scheme can offer low image degradation. The simulation results demonstrate high degree of performance for the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20models" title=" background models"> background models</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=foreground%20detection" title=" foreground detection"> foreground detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a> </p> <a href="https://publications.waset.org/abstracts/16364/video-foreground-detection-based-on-adaptive-mixture-gaussian-model-for-video-surveillance-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16364.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3011</span> Research Approaches for Identifying Images of the Past in the Built Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Al-Zoabi">Ahmad Al-Zoabi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Development of research approaches for identifying images of the past in the built environment is at a beginning stage, and a review of the current literature reveals a limited body of research in this area. This study seeks to make a contribution to fill this void. It investigates the theoretical and empirical studies that examine the built environment as a medium for communicating the past in order to understand how images of the past are operationalized in these studies. Findings revealed that image could be operationalized in several ways depending on the focus of the study. Three concerns were addressed in this study when defining the image of the past: (a) to investigate an 'everyday' popular image of the past; (b) to look at the building's image as an integrated part of a larger image for the city; and (c) to find patterns within residents' images of the past. This study concludes that a future study is needed to address the effects of different scales (size and depth of history) of cities and of different cultural backgrounds of images of the past. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=architecture" title="architecture">architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=built%20environment" title=" built environment"> built environment</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past" title=" image of the past"> image of the past</a>, <a href="https://publications.waset.org/abstracts/search?q=research%20approaches" title=" research approaches"> research approaches</a> </p> <a href="https://publications.waset.org/abstracts/66594/research-approaches-for-identifying-images-of-the-past-in-the-built-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66594.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3010</span> Improvement of Bone Scintography Image Using Image Texture Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Eltayeb%20Wagallah"> Eltayeb Wagallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image enhancement allows the observer to see details in images that may not be immediately observable in the original image. Image enhancement is the transformation or mapping of one image to another. The enhancement of certain features in images is accompanied by undesirable effects. To achieve maximum image quality after denoising, a new, low order, local adaptive Gaussian scale mixture model and median filter were presented, which accomplishes nonlinearities from scattering a new nonlinear approach for contrast enhancement of bones in bone scan images using both gamma correction and negative transform methods. The usual assumption of a distribution of gamma and Poisson statistics only lead to overestimation of the noise variance in regions of low intensity but to underestimation in regions of high intensity and therefore to non-optional results. The contrast enhancement results were obtained and evaluated using MatLab program in nuclear medicine images of the bones. The optimal number of bins, in particular the number of gray-levels, is chosen automatically using entropy and average distance between the histogram of the original gray-level distribution and the contrast enhancement function’s curve. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bone%20scan" title="bone scan">bone scan</a>, <a href="https://publications.waset.org/abstracts/search?q=nuclear%20medicine" title=" nuclear medicine"> nuclear medicine</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing%20technique" title=" image processing technique"> image processing technique</a> </p> <a href="https://publications.waset.org/abstracts/13956/improvement-of-bone-scintography-image-using-image-texture-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13956.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=5" rel="prev">&lsaquo;</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=5">5</a></li> <li class="page-item active"><span class="page-link">6</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=106">106</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=107">107</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=destination%20image&amp;page=7" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10