Search results for: image storage and retrieval
Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 5027

5027. Performance Evaluation of Content Based Image Retrieval Using Indexed Views
Authors: Tahir Iqbal, Mumtaz Ali, Syed Wajahat Kareem, Muhammad Harris
Abstract: Digital information is expanding exponentially in our lives. Information residing online and offline is stored in huge repositories relating to every aspect of our lives, and getting the required information out of them is the task of retrieval systems. Content-based image retrieval (CBIR) is a retrieval approach that fetches the required information from a repository on the basis of the visual contents of the image. Time is a critical factor in retrieval systems, and using indexed views with a CBIR system improves the time efficiency of the retrieved results.
Keywords: content based image retrieval (CBIR), indexed view, color, image retrieval, cross correlation
PDF: https://publications.waset.org/abstracts/11165.pdf (Downloads: 470)
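What an indexed view buys here is that feature rows are materialized once, so a query never recomputes them. As a rough analogue outside SQL, the sketch below is a minimal, hypothetical illustration (the class and function names are invented; a plain color histogram and normalized cross-correlation stand in for the paper's unspecified features): features are extracted at insert time and queries only score the stored matrix.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Flattened per-channel histogram used as a simple CBIR feature."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
             for c in range(image.shape[-1])]
    return np.concatenate(feats).astype(float)

class MaterializedFeatureIndex:
    """Stands in for an indexed view: features are computed once at
    insert time, so a query never touches raw pixels again."""
    def __init__(self):
        self.ids, self.rows = [], []

    def add(self, image_id: str, image: np.ndarray) -> None:
        self.ids.append(image_id)
        self.rows.append(color_histogram(image))

    def query(self, image: np.ndarray, top_k: int = 5):
        q = color_histogram(image)
        index = np.vstack(self.rows)
        # Normalized cross-correlation between the query and each stored row.
        qz = (q - q.mean()) / (q.std() + 1e-9)
        rz = (index - index.mean(axis=1, keepdims=True)) / \
             (index.std(axis=1, keepdims=True) + 1e-9)
        scores = rz @ qz / len(q)
        order = np.argsort(scores)[::-1][:top_k]
        return [(self.ids[i], float(scores[i])) for i in order]

# Usage with random stand-in images:
rng = np.random.default_rng(0)
idx = MaterializedFeatureIndex()
for n in range(10):
    idx.add(f"img_{n}", rng.integers(0, 256, (64, 64, 3)))
print(idx.query(rng.integers(0, 256, (64, 64, 3)), top_k=3))
```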
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content%20based%20image%20retrieval%20%28CBIR%29" title="content based image retrieval (CBIR)">content based image retrieval (CBIR)</a>, <a href="https://publications.waset.org/abstracts/search?q=indexed%20view" title=" indexed view"> indexed view</a>, <a href="https://publications.waset.org/abstracts/search?q=color" title=" color"> color</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=cross%20correlation" title=" cross correlation"> cross correlation</a> </p> <a href="https://publications.waset.org/abstracts/11165/performance-evaluation-of-content-based-image-retrieval-using-indexed-views" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11165.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5026</span> Quantum Entangled States and Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sanjay%20%20Singh">Sanjay Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sushil%20Kumar"> Sushil Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Rashmi%20Jain"> Rashmi Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Quantum registering is another pattern in computational hypothesis and a quantum mechanical framework has a few helpful properties like Entanglement. We plan to store data concerning the structure and substance of a basic picture in a quantum framework. Consider a variety of n qubits which we propose to use as our memory stockpiling. In recent years classical processing is switched to quantum image processing. Quantum image processing is an elegant approach to overcome the problems of its classical counter parts. Image storage, retrieval and its processing on quantum machines is an emerging area. Although quantum machines do not exist in physical reality but theoretical algorithms developed based on quantum entangled states gives new insights to process the classical images in quantum domain. Here in the present work, we give the brief overview, such that how entangled states can be useful for quantum image storage and retrieval. We discuss the properties of tripartite Greenberger-Horne-Zeilinger and W states and their usefulness to store the shapes which may consist three vertices. We also propose the techniques to store shapes having more than three vertices. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Greenberger-Horne-Zeilinger" title="Greenberger-Horne-Zeilinger">Greenberger-Horne-Zeilinger</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval" title=" image storage and retrieval"> image storage and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20entanglement" title=" quantum entanglement"> quantum entanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=W%20states" title=" W states"> W states</a> </p> <a href="https://publications.waset.org/abstracts/67732/quantum-entangled-states-and-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67732.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5025</span> Secure Image Retrieval Based on Orthogonal Decomposition under Cloud Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Xu">Y. Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Xiong"> L. Xiong</a>, <a href="https://publications.waset.org/abstracts/search?q=Z.%20Xu"> Z. Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to protect data privacy, image with sensitive or private information needs to be encrypted before being outsourced to the cloud. However, this causes difficulties in image retrieval and data management. A secure image retrieval method based on orthogonal decomposition is proposed in the paper. The image is divided into two different components, for which encryption and feature extraction are executed separately. As a result, cloud server can extract features from an encrypted image directly and compare them with the features of the queried images, so that the user can thus obtain the image. Different from other methods, the proposed method has no special requirements to encryption algorithms. Experimental results prove that the proposed method can achieve better security and better retrieval precision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=secure%20image%20retrieval" title="secure image retrieval">secure image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20search" title=" secure search"> secure search</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20decomposition" title=" orthogonal decomposition"> orthogonal decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20cloud%20computing" title=" secure cloud computing"> secure cloud computing</a> </p> <a href="https://publications.waset.org/abstracts/29115/secure-image-retrieval-based-on-orthogonal-decomposition-under-cloud-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29115.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5024</span> Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=W.%20S.%20Besbas">W. S. Besbas</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Artemi"> M. A. Artemi</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20M.%20Salman"> R. M. Salman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Content based face sketch retrieval can be used to find images of criminals from their sketches for 'Crime Prevention'. This paper investigates the problem of CBIR of face sketch images in transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectrum domain of a selected transforms. These transforms are Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Walsh Hadamard Transform (WHT). For the performance analyses of features selection methods three face images databases are used. These are 'Sheffield face database', 'Olivetti Research Laboratory (ORL) face database', and 'Indian face database'. The City block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that, the retrieval rate is database dependent. But in general, the DCT is the best. On the other hand, the WHT is the best with respect to the speed of retrieving images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Content%20Based%20Image%20Retrieval%20%28CBIR%29" title="Content Based Image Retrieval (CBIR)">Content Based Image Retrieval (CBIR)</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20sketch%20image%20retrieval" title=" face sketch image retrieval"> face sketch image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20selection%20for%20CBIR" title=" features selection for CBIR"> features selection for CBIR</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval%20in%20transform%20domain" title=" image retrieval in transform domain"> image retrieval in transform domain</a> </p> <a href="https://publications.waset.org/abstracts/8251/content-based-face-sketch-images-retrieval-in-wht-dct-and-dwt-transform-domain" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8251.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5023</span> Local Texture and Global Color Descriptors for Content Based Image Retrieval</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tajinder%20Kaur">Tajinder Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Anu%20Bala"> Anu Bala</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images a new algorithm meant for content-based image retrieval (CBIR) is presented in this paper. The proposed method combines the color and texture features which are extracted the global and local information of the image. The local texture feature is extracted by using local binary patterns (LBP), which are evaluated by taking into consideration of local difference between the center pixel and its neighbors. For the global color feature, the color histogram (CH) is used which is calculated by RGB (red, green, and blue) spaces separately. In this paper, the combination of color and texture features are proposed for content-based image retrieval. The performance of the proposed method is tested on Corel 1000 database which is the natural database. The results after being investigated show a significant improvement in terms of their evaluation measures as compared to LBP and CH. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color" title="color">color</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20patterns" title=" local binary patterns"> local binary patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/25503/local-texture-and-global-color-descriptors-for-content-based-image-retrieval" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25503.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">366</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5022</span> Content-Based Image Retrieval Using HSV Color Space Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Qazanfari">Hamed Qazanfari</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Hassanpour"> Hamid Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazem%20Qazanfari"> Kazem Qazanfari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method is provided for content-based image retrieval. Content-based image retrieval system searches query an image based on its visual content in an image database to retrieve similar images. In this paper, with the aim of simulating the human visual system sensitivity to image's edges and color features, the concept of color difference histogram (CDH) is used. CDH includes the perceptually color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features extract the content of images most efficiently. The proposed method has been evaluated on three standard databases Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to the recently developed methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title="content-based image retrieval">content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20difference%20histogram" title=" color difference histogram"> color difference histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=efficient%20features%20selection" title=" efficient features selection"> efficient features selection</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a> </p> <a href="https://publications.waset.org/abstracts/75068/content-based-image-retrieval-using-hsv-color-space-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75068.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5021</span> Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20W.%20U.%20D.%20Chathurani">N. W. U. D. Chathurani</a>, <a href="https://publications.waset.org/abstracts/search?q=Shlomo%20Geva"> Shlomo Geva</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinod%20Chandran"> Vinod Chandran</a>, <a href="https://publications.waset.org/abstracts/search?q=Proboda%20Rajapaksha"> Proboda Rajapaksha </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effect of features' dimensions, ranges, internal feature normalization and the distance measure. This approach can easily be adopted in any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and Oliva and Torralba) and compared with existing approaches. The performance of the proposed approach is confirmed with the significantly improved performance in comparison with the independently evaluated baseline of the previously proposed feature fusion approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title="feature fusion">feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=membership%20function" title=" membership function"> membership function</a>, <a href="https://publications.waset.org/abstracts/search?q=normalization" title=" normalization"> normalization</a> </p> <a href="https://publications.waset.org/abstracts/52968/image-retrieval-based-on-multi-feature-fusion-for-heterogeneous-image-databases" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">345</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5020</span> Augmented Reality Technology for a User Interface in an Automated Storage and Retrieval System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wen-Jye%20Shyr">Wen-Jye Shyr</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Yuan%20Chang"> Chun-Yuan Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Bo-Lin%20Wei"> Bo-Lin Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Chia-Ming%20Lin"> Chia-Ming Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The task of creating an augmented reality technology was described in this study to give operators a user interface that might be a part of an automated storage and retrieval system. Its objective was to give graduate engineering and technology students a system of tools with which to experiment with the creation of augmented reality technologies. To collect and analyze data for maintenance applications, the students used augmented reality technology. Our findings support the evolution of artificial intelligence towards Industry 4.0 practices and the planned Industry 4.0 research stream. Important first insights into the study's effects on student learning were presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality" title="augmented reality">augmented reality</a>, <a href="https://publications.waset.org/abstracts/search?q=storage%20and%20retrieval%20system" title=" storage and retrieval system"> storage and retrieval system</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20interface" title=" user interface"> user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=programmable%20logic%20controller" title=" programmable logic controller"> programmable logic controller</a> </p> <a href="https://publications.waset.org/abstracts/173840/augmented-reality-technology-for-a-user-interface-in-an-automated-storage-and-retrieval-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5019</span> Improved Performance in Content-Based Image Retrieval Using Machine Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Ramesh%20Naik">B. Ramesh Naik</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Venugopal"> T. Venugopal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a novel approach which improves the high-level semantics of images based on machine learning approach. The contemporary approaches for image retrieval and object recognition includes Fourier transforms, Wavelets, SIFT and HoG. Though these descriptors helpful in a wide range of applications, they exploit zero order statistics, and this lacks high descriptiveness of image features. These descriptors usually take benefit of primitive visual features such as shape, color, texture and spatial locations to describe images. These features do not adequate to describe high-level semantics of the images. This leads to a gap in semantic content caused to unacceptable performance in image retrieval system. A novel method has been proposed referred as discriminative learning which is derived from machine learning approach that efficiently discriminates image features. The analysis and results of proposed approach were validated thoroughly on WANG and Caltech-101 Databases. The results proved that this approach is very competitive in content-based image retrieval. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CBIR" title="CBIR">CBIR</a>, <a href="https://publications.waset.org/abstracts/search?q=discriminative%20learning" title=" discriminative learning"> discriminative learning</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20weight%20learning" title=" region weight learning"> region weight learning</a>, <a href="https://publications.waset.org/abstracts/search?q=scale%20invariant%20feature%20transforms" title=" scale invariant feature transforms"> scale invariant feature transforms</a> </p> <a href="https://publications.waset.org/abstracts/88331/improved-performance-in-content-based-image-retrieval-using-machine-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">181</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5018</span> A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Takanori%20Tanaka">Takanori Tanaka</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Kitao"> Daisuke Kitao</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Ikeda"> Daisuke Ikeda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aurora is an attractive phenomenon but it is difficult to understand the whole mechanism of it. An approach of data-intensive science might be an effective approach to elucidate such a difficult phenomenon. To do that we need labeled data, which shows when and what types of auroras, have appeared. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse aurora, and the other do not any aurora. The proposed system retrieves images which are similar to the query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromso Norway, we evaluate two methods of image recognition methods with or without our original color filtering method. The best performance is achieved when SIFT with the color filtering is used and its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data-intensive%20science" title="data-intensive science">data-intensive science</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=aurora" title=" aurora"> aurora</a> </p> <a href="https://publications.waset.org/abstracts/19532/a-similar-image-retrieval-system-for-auroral-all-sky-images-based-on-local-features-and-color-filtering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">449</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5017</span> Utilization of CD-ROM Database as a Storage and Retrieval System by Students of Nasarawa State University Keffi</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Suleiman%20Musa">Suleiman Musa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The utilization of CD-ROM as a storage and retrieval system by Nasarawa State University Keffi (NSUK) Library is crucial in preserving and dissemination of information to students and staff. This study investigated the utilization of CD-ROM Database storage and retrieval system by students of NUSK. Data was generated using structure questionnaire. One thousand and fifty two (1052) respondents were randomly selected among post-graduate and under-graduate students. Eight hundred and ten (810) questionnaires were returned, but only five hundred and ninety three (593) questionnaires were well completed and useful. The study found that post-graduate students use CD-ROM Databases more often than the under-graduate students in NSUK. The result of the study revealed that knowledge about CD-ROM Database 33.22% got it through library staff. 29.69% use CD-ROM once a month. Large number of users 45.70% purposely uses CD-ROM Databases for study and research. In fact, lack of users’ orientation amount to 58.35% of problems faced, while 31.20% lack of trained staff make it more difficult for utilization of CD-ROM Database. Major numbers of users 38.28% are neither satisfied nor dissatisfied, while a good number of them 27.99% are satisfied. Then 1.52% is highly dissatisfied but could not give reasons why. However, to ensure effective utilization of CD-ROM Database storage and retrieval system by students of NSUK, the following recommendations are made: effort should be made to encourage under-graduate in using CD-ROM Database. The institution should conduct orientation/induction course for students on CD-ROM Databases in the library. There is need for NSUK to produce in house databases on their CD-ROM for easy access by users. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=utilization" title="utilization">utilization</a>, <a href="https://publications.waset.org/abstracts/search?q=CD-ROM%20databases" title=" CD-ROM databases"> CD-ROM databases</a>, <a href="https://publications.waset.org/abstracts/search?q=storage" title=" storage"> storage</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval" title=" retrieval"> retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=students" title=" students"> students</a> </p> <a href="https://publications.waset.org/abstracts/14420/utilization-of-cd-rom-database-as-a-storage-and-retrieval-system-by-students-of-nasarawa-state-university-keffi" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14420.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">445</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5016</span> Design and Implementation of Flexible Metadata Editing System for Digital Contents</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20W.%20Nam">K. W. Nam</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20J.%20Kim"> B. J. Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Lee"> S. J. Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Along with the development of network infrastructures, such as high-speed Internet and mobile environment, the explosion of multimedia data is expanding the range of multimedia services beyond voice and data services. Amid this flow, research is actively being done on the creation, management, and transmission of metadata on digital content to provide different services to users. This paper proposes a system for the insertion, storage, and retrieval of metadata about digital content. The metadata server with Binary XML was implemented for efficient storage space and retrieval speeds, and the transport data size required for metadata retrieval was simplified. With the proposed system, the metadata could be inserted into the moving objects in the video, and the unnecessary overlap could be minimized by improving the storage structure of the metadata. The proposed system can assemble metadata into one relevant topic, even if it is expressed in different media or in different forms. It is expected that the proposed system will handle complex network types of data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=multimedia" title=" multimedia"> multimedia</a>, <a href="https://publications.waset.org/abstracts/search?q=metadata" title=" metadata"> metadata</a>, <a href="https://publications.waset.org/abstracts/search?q=editing%20tool" title=" editing tool"> editing tool</a>, <a href="https://publications.waset.org/abstracts/search?q=XML" title=" XML"> XML</a> </p> <a href="https://publications.waset.org/abstracts/94443/design-and-implementation-of-flexible-metadata-editing-system-for-digital-contents" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">171</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5015</span> Similarity Based Retrieval in Case Based Reasoning for Analysis of Medical Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Dasgupta">M. Dasgupta</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Banerjee"> S. Banerjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Content Based Image Retrieval (CBIR) coupled with Case Based Reasoning (CBR) is a paradigm that is becoming increasingly popular in the diagnosis and therapy planning of medical ailments utilizing the digital content of medical images. This paper presents a survey of some of the promising approaches used in the detection of abnormalities in retina images as well in mammographic screening and detection of regions of interest in MRI scans of the brain. We also describe our proposed algorithm to detect hard exudates in fundus images of the retina of Diabetic Retinopathy patients. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=case%20based%20reasoning" title="case based reasoning">case based reasoning</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=retina%20image" title=" retina image"> retina image</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20based%20retrieval" title=" similarity based retrieval"> similarity based retrieval</a> </p> <a href="https://publications.waset.org/abstracts/2992/similarity-based-retrieval-in-case-based-reasoning-for-analysis-of-medical-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2992.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5014</span> A Framework of Product Information Service System Using Mobile Image Retrieval and Text Mining Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mei-Yi%20Wu">Mei-Yi Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Shang-Ming%20Huang"> Shang-Ming Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The online shoppers nowadays often search the product information on the Internet using some keywords of products. To use this kind of information searching model, shoppers should have a preliminary understanding about their interesting products and choose the correct keywords. However, if the products are first contact (for example, the worn clothes or backpack of passengers which you do not have any idea about the brands), these products cannot be retrieved due to insufficient information. In this paper, we discuss and study the applications in E-commerce using image retrieval and text mining techniques. We design a reasonable E-commerce application system containing three layers in the architecture to provide users product information. The system can automatically search and retrieval similar images and corresponding web pages on Internet according to the target pictures which taken by users. Then text mining techniques are applied to extract important keywords from these retrieval web pages and search the prices on different online shopping stores with these keywords using a web crawler. Finally, the users can obtain the product information including photos and prices of their favorite products. The experiments shows the efficiency of proposed system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobile%20image%20retrieval" title="mobile image retrieval">mobile image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20mining" title=" text mining"> text mining</a>, <a href="https://publications.waset.org/abstracts/search?q=product%20information%20service%20system" title=" product information service system"> product information service system</a>, <a href="https://publications.waset.org/abstracts/search?q=online%20marketing" title=" online marketing"> online marketing</a> </p> <a href="https://publications.waset.org/abstracts/33483/a-framework-of-product-information-service-system-using-mobile-image-retrieval-and-text-mining-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33483.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5013</span> A Method of the Semantic on Image Auto-Annotation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Huo">Lin Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianwei%20Liu"> Xianwei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingxiong%20Zhou"> Jingxiong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, due to the existence of semantic gap between image visual features and human concepts, the semantic of image auto-annotation has become an important topic. Firstly, by extract low-level visual features of the image, and the corresponding Hash method, mapping the feature into the corresponding Hash coding, eventually, transformed that into a group of binary string and store it, image auto-annotation by search is a popular method, we can use it to design and implement a method of image semantic auto-annotation. Finally, Through the test based on the Corel image set, and the results show that, this method is effective. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20auto-annotation" title="image auto-annotation">image auto-annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20correlograms" title=" color correlograms"> color correlograms</a>, <a href="https://publications.waset.org/abstracts/search?q=Hash%20code" title=" Hash code"> Hash code</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/15628/a-method-of-the-semantic-on-image-auto-annotation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5012</span> Interactive Image Search for Mobile Devices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Komal%20V.%20Aher">Komal V. Aher</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanjay%20B.%20Waykar"> Sanjay B. Waykar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays every individual having mobile device with them. In both computer vision and information retrieval Image search is currently hot topic with many applications. The proposed intelligent image search system is fully utilizing multimodal and multi-touch functionalities of smart phones which allows search with Image, Voice, and Text on mobile phones. The system will be more useful for users who already have pictures in their minds but have no proper descriptions or names to address them. The paper gives system with ability to form composite visual query to express user’s intention more clearly which helps to give more precise or appropriate results to user. The proposed algorithm will considerably get better in different aspects. System also uses Context based Image retrieval scheme to give significant outcomes. So system is able to achieve gain in terms of search performance, accuracy and user satisfaction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20device" title=" mobile device"> mobile device</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20visual%20search" title=" mobile visual search"> mobile visual search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20search" title=" multimodal search "> multimodal search </a> </p> <a href="https://publications.waset.org/abstracts/33265/interactive-image-search-for-mobile-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">369</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5011</span> Plagiarism Detection for Flowchart and Figures in Texts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmadu%20Maidorawa">Ahmadu Maidorawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Idrissa%20Djibo"> Idrissa Djibo</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Tella"> Muhammad Tella </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a method for detecting flow chart and figure plagiarism based on shape of image processing and multimedia retrieval. The method managed to retrieve flowcharts with ranked similarity according to different matching sets. Plagiarism detection is well known phenomenon in the academic arena. Copying other people is considered as serious offense that needs to be checked. There are many plagiarism detection systems such as turn-it-in that has been developed to provide these checks. Most, if not all, discard the figures and charts before checking for plagiarism. Discarding the figures and charts result in look holes that people can take advantage. That means people can plagiarize figures and charts easily without the current plagiarism systems detecting it. There are very few papers which talks about flowcharts plagiarism detection. Therefore, there is a need to develop a system that will detect plagiarism in figures and charts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=flowchart" title="flowchart">flowchart</a>, <a href="https://publications.waset.org/abstracts/search?q=multimedia%20retrieval" title=" multimedia retrieval"> multimedia retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=figures%20similarity" title=" figures similarity"> figures similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20comparison" title=" image comparison"> image comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=figure%20retrieval" title=" figure retrieval"> figure retrieval</a> </p> <a href="https://publications.waset.org/abstracts/32548/plagiarism-detection-for-flowchart-and-figures-in-texts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5010</span> A General Framework for Knowledge Discovery from Echocardiographic and Natural Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Nandagopalan">S. Nandagopalan</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Pradeep"> N. Pradeep</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers namely physical storage, object identification, knowledge discovery, user level. Techniques such as active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, universal model for image retrieval, Bayesian method for classification, parallel algorithms for image segmentation, etc., were employed. Using the feature vector database that have been efficiently constructed, one can perform various data mining tasks like clustering, classification, etc. with efficient algorithms along with image mining given a query image. All these facilities are included in the framework that is supported by state-of-the-art user interface (UI). The algorithms were tested with actual patient data and Coral image database and the results show that their performance is better than the results reported already. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20contour" title="active contour">active contour</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian" title=" Bayesian"> Bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=echocardiographic%20image" title=" echocardiographic image"> echocardiographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20vector" title=" feature vector"> feature vector</a> </p> <a href="https://publications.waset.org/abstracts/42868/a-general-framework-for-knowledge-discovery-from-echocardiographic-and-natural-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42868.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">445</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5009</span> Retrieving Similar Segmented Objects Using Motion Descriptors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Konstantinos%20C.%20Kartsakalis">Konstantinos C. Kartsakalis</a>, <a href="https://publications.waset.org/abstracts/search?q=Angeliki%20Skoura"> Angeliki Skoura</a>, <a href="https://publications.waset.org/abstracts/search?q=Vasileios%20Megalooikonomou"> Vasileios Megalooikonomou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The fuzzy composition of objects depicted in images acquired through MR imaging or the use of bio-scanners has often been a point of controversy for field experts attempting to effectively delineate between the visualized objects. Modern approaches in medical image segmentation tend to consider fuzziness as a characteristic and inherent feature of the depicted object, instead of an undesirable trait. In this paper, a novel technique for efficient image retrieval in the context of images in which segmented objects are either crisp or fuzzily bounded is presented. Moreover, the proposed method is applied in the case of multiple, even conflicting, segmentations from field experts. Experimental results demonstrate the efficiency of the suggested method in retrieving similar objects from the aforementioned categories while taking into account the fuzzy nature of the depicted data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20object" title="fuzzy object">fuzzy object</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20image%20segmentation" title=" fuzzy image segmentation"> fuzzy image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20descriptors" title=" motion descriptors"> motion descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20imaging" title=" MRI imaging"> MRI imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval" title=" object-based image retrieval"> object-based image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/22736/retrieving-similar-segmented-objects-using-motion-descriptors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22736.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5008</span> A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Nandagopalan">S. Nandagopalan</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Pradeep"> N. Pradeep</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers namely physical storage, object identification, knowledge discovery, user level. Techniques such as active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, universal model for image retrieval, Bayesian method for classification, parallel algorithms for image segmentation, etc., were employed. Using the feature vector database that have been efficiently constructed, one can perform various data mining tasks like clustering, classification, etc. with efficient algorithms along with image mining given a query image. All these facilities are included in the framework that is supported by state-of-the-art user interface (UI). The algorithms were tested with actual patient data and Coral image database and the results show that their performance is better than the results reported already. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20contour" title="active contour">active contour</a>, <a href="https://publications.waset.org/abstracts/search?q=bayesian" title=" bayesian"> bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=echocardiographic%20image" title=" echocardiographic image"> echocardiographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20vector" title=" feature vector"> feature vector</a> </p> <a href="https://publications.waset.org/abstracts/42632/a-general-framework-for-knowledge-discovery-using-high-performance-machine-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42632.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">420</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5007</span> Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Peter%20U.%20Eze">Peter U. Eze</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Udaya"> P. Udaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Robin%20J.%20Evans"> Robin J. Evans</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data hiding can be achieved by Steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in Teleradiology is one of the applications where the embedded patient record needs to be extracted with accuracy as well as the medical image integrity verified. In this research paper, the Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of all watermarked sub-blocks with a spreading code, W, would have a constant value, <em>p.</em> The constant correlation <em>p</em>, spreading code, W and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates by more than a small value, ℇ, from <em>p</em>. The major features of our new scheme include: (1) Improving watermark detection accuracy for high-pixel depth medical images by reducing the Bit Error Rate (BER) to Zero and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility with the same computational cost. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Constant%20Correlation" title="Constant Correlation">Constant Correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=Medical%20Image" title=" Medical Image"> Medical Image</a>, <a href="https://publications.waset.org/abstracts/search?q=Spread%20Spectrum" title=" Spread Spectrum"> Spread Spectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=Tamper%20Detection" title=" Tamper Detection"> Tamper Detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Watermarking" title=" Watermarking"> Watermarking</a> </p> <a href="https://publications.waset.org/abstracts/84629/medical-image-watermark-and-tamper-detection-using-constant-correlation-spread-spectrum-watermarking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84629.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5006</span> Content Based Video Retrieval System Using Principal Object Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Thinh%20Bui">Van Thinh Bui</a>, <a href="https://publications.waset.org/abstracts/search?q=Anh%20Tuan%20Tran"> Anh Tuan Tran</a>, <a href="https://publications.waset.org/abstracts/search?q=Quoc%20Viet%20Ngo"> Quoc Viet Ngo</a>, <a href="https://publications.waset.org/abstracts/search?q=The%20Bao%20Pham"> The Bao Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video retrieval is a searching problem on videos or clips based on content in which they are relatively close to an input image or video. The application of this retrieval consists of selecting video in a folder or recognizing a human in security camera. However, some recent approaches have been in challenging problem due to the diversity of video types, frame transitions and camera positions. Besides, that an appropriate measures is selected for the problem is a question. In order to overcome all obstacles, we propose a content-based video retrieval system in some main steps resulting in a good performance. From a main video, we process extracting keyframes and principal objects using Segmentation of Aggregating Superpixels (SAS) algorithm. After that, Speeded Up Robust Features (SURF) are selected from those principal objects. Then, the model “Bag-of-words” in accompanied by SVM classification are applied to obtain the retrieval result. Our system is performed on over 300 videos in diversity from music, history, movie, sports, and natural scene to TV program show. The performance is evaluated in promising comparison to the other approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title="video retrieval">video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20objects" title=" principal objects"> principal objects</a>, <a href="https://publications.waset.org/abstracts/search?q=keyframe" title=" keyframe"> keyframe</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20of%20aggregating%20superpixels" title=" segmentation of aggregating superpixels"> segmentation of aggregating superpixels</a>, <a href="https://publications.waset.org/abstracts/search?q=speeded%20up%20robust%20features" title=" speeded up robust features"> speeded up robust features</a>, <a href="https://publications.waset.org/abstracts/search?q=bag-of-words" title=" bag-of-words"> bag-of-words</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/59753/content-based-video-retrieval-system-using-principal-object-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59753.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5005</span> Retrieval-Induced Forgetting Effects in Retrospective and Prospective Memory in Normal Aging: An Experimental Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Merve%20Akca">Merve Akca</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retrieval-induced forgetting (RIF) refers to the phenomenon that selective retrieval of some information impairs memory for related, but not previously retrieved information. Despite age differences in retrieval-induced forgetting regarding retrospective memory being documented, this research aimed to highlight age differences in RIF of the prospective memory tasks for the first time. By using retrieval-practice paradigm, this study comparatively examined RIF effects in retrospective memory and event-based prospective memory in young and old adults. In this experimental study, a mixed factorial design with age group (Young, Old) as a between-subject variable, and memory type (Prospective, Retrospective) and item type (Practiced, Non-practiced) as within-subject variables was employed. Retrieval-induced forgetting was observed in the retrospective but not in the prospective memory task. Therefore, the results indicated that selective retrieval of past events led to suppression of other related past events in both age groups but not the suppression of memory for future intentions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prospective%20memory" title="prospective memory">prospective memory</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval-induced%20forgetting" title=" retrieval-induced forgetting"> retrieval-induced forgetting</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval%20inhibition" title=" retrieval inhibition"> retrieval inhibition</a>, <a href="https://publications.waset.org/abstracts/search?q=retrospective%20memory" title=" retrospective memory"> retrospective memory</a> </p> <a href="https://publications.waset.org/abstracts/57915/retrieval-induced-forgetting-effects-in-retrospective-and-prospective-memory-in-normal-aging-an-experimental-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5004</span> Processing Big Data: An Approach Using Feature Selection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nikat%20Parveen">Nikat Parveen</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Ananthi"> M. Ananthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Big data is one of the emerging technology, which collects the data from various sensors and those data will be used in many fields. Data retrieval is one of the major issue where there is a need to extract the exact data as per the need. In this paper, large amount of data set is processed by using the feature selection. Feature selection helps to choose the data which are actually needed to process and execute the task. The key value is the one which helps to point out exact data available in the storage space. Here the available data is streamed and R-Center is proposed to achieve this task. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20value" title=" key value"> key value</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval" title=" retrieval"> retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=performance" title=" performance"> performance</a> </p> <a href="https://publications.waset.org/abstracts/74596/processing-big-data-an-approach-using-feature-selection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74596.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">341</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5003</span> Urdu Text Extraction Method from Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samabia%20Tehsin">Samabia Tehsin</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumaira%20Kausar"> Sumaira Kausar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the vast increase in the multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images/ videos. Text embedded in the images can serve as the strong retrieval tool for images. This is the reason that text extraction is an area of research with increasing attention. English text extraction is the focus of many researchers but very less work has been done on other languages like Urdu. This paper is focusing on Urdu text extraction from video frames. This paper presents a text detection feature set, which has the ability to deal up with most of the problems connected with the text extraction process. To test the validity of the method, it is tested on Urdu news dataset, which gives promising results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=caption%20text" title="caption text">caption text</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=document%20analysis" title=" document analysis"> document analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20extraction" title=" text extraction"> text extraction</a> </p> <a href="https://publications.waset.org/abstracts/9566/urdu-text-extraction-method-from-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5002</span> Information Retrieval for Kafficho Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mareye%20Zeleke%20Mekonen">Mareye Zeleke Mekonen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Kafficho language has distinct issues in information retrieval because of its restricted resources and dearth of standardized methods. In this endeavor, with the cooperation and support of linguists and native speakers, we investigate the creation of information retrieval systems specifically designed for the Kafficho language. The Kafficho information retrieval system allows Kafficho speakers to access information easily in an efficient and effective way. Our objective is to conduct an information retrieval experiment using 220 Kafficho text files, including fifteen sample questions. Tokenization, normalization, stop word removal, stemming, and other data pre-processing chores, together with additional tasks like term weighting, were prerequisites for the vector space model to represent each page and a particular query. The three well-known measurement metrics we used for our word were Precision, Recall, and and F-measure, with values of 87%, 28%, and 35%, respectively. This demonstrates how well the Kaffiho information retrieval system performed well while utilizing the vector space paradigm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kafficho" title="Kafficho">Kafficho</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20retrieval" title=" information retrieval"> information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=stemming" title=" stemming"> stemming</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20space" title=" vector space"> vector space</a> </p> <a href="https://publications.waset.org/abstracts/184199/information-retrieval-for-kafficho-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184199.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">57</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5001</span> Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saad%20M.%20Darwish">Saad M. Darwish</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20A.%20El-Iskandarani"> Mohamed A. El-Iskandarani</a>, <a href="https://publications.waset.org/abstracts/search?q=Guitar%20M.%20Shawkat"> Guitar M. Shawkat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, the amount of available multimedia data is continuously on the rise. The need to find a required image for an ordinary user is a challenging task. Content based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, textures, etc. However, there is a gap between low-level visual features and semantic meanings required by applications. The typical method of bridging the semantic gap is through the automatic image annotation (AIA) that extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by Firefly and Bayesian method is proposed. Firstly, images are segmented using the maximum variance intra cluster and Firefly algorithm, which is a swarm-based approach with high convergence speed, less computation rate and search for the optimal multiple threshold. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and less complexity. Experiments are performed using Corel Database. The results show that the proposed system is better than traditional ones for automatic image annotation and retrieval. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20annotation" title=" image annotation"> image annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/18552/automatic-multi-label-image-annotation-system-guided-by-firefly-algorithm-and-bayesian-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5000</span> A Comparative Study of Approaches in User-Centred Health Information Retrieval</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Harsh%20Thakkar">Harsh Thakkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Iyer"> Ganesh Iyer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we survey various user-centered or context-based biomedical health information retrieval systems. We present and discuss the performance of systems submitted in CLEF eHealth 2014 Task 3 for this purpose. We classify and focus on comparing the two most prevalent retrieval models in biomedical information retrieval namely: Language Model (LM) and Vector Space Model (VSM). We also report on the effectiveness of using external medical resources and ontologies like MeSH, Metamap, UMLS, etc. We observed that the LM based retrieval systems outperform VSM based systems on various fronts. From the results we conclude that the state-of-art system scores for MAP was 0.4146, P@10 was 0.7560 and NDCG@10 was 0.7445, respectively. All of these score were reported by systems built on language modeling approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clinical%20document%20retrieval" title="clinical document retrieval">clinical document retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=concept-based%20information%20retrieval" title=" concept-based information retrieval"> concept-based information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=query%20expansion" title=" query expansion"> query expansion</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20models" title=" language models"> language models</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20space%20models" title=" vector space models"> vector space models</a> </p> <a href="https://publications.waset.org/abstracts/57392/a-comparative-study-of-approaches-in-user-centred-health-information-retrieval" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57392.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4999</span> Content-Based Mammograms Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sourour%20Khouaja">Sourour Khouaja</a>, <a href="https://publications.waset.org/abstracts/search?q=Hejer%20Jlassi"> Hejer Jlassi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Feddaoui"> Nadia Feddaoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamel%20Hamrouni"> Kamel Hamrouni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most medical images, and especially mammographies, are now stored in large databases. Retrieving a desired image is considered of great importance in order to find previous similar cases diagnosis. Our method is implemented to assist radiologists in retrieving mammographic images containing breast with similar density aspect as seen on the mammogram. This is becoming a challenge seeing the importance of density criteria in cancer provision and its effect on segmentation issues. We used the BEMD (Bidimensional Empirical Mode Decomposition) to characterize the content of images and Euclidean distance measure similarity between images. Through the experiments on the MIAS mammography image database, we confirm that the results are promising. The performance was evaluated using precision and recall curves comparing query and retrieved images. Computing recall-precision proved the effectiveness of applying the CBIR in the large mammographic image databases. We found a precision of 91.2% for mammography with a recall of 86.8%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BEMD" title="BEMD">BEMD</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20density" title=" breast density"> breast density</a>, <a href="https://publications.waset.org/abstracts/search?q=contend-based" title=" contend-based"> contend-based</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a> </p> <a href="https://publications.waset.org/abstracts/59187/content-based-mammograms-retrieval-based-on-breast-density-criteria-using-bidimensional-empirical-mode-decomposition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4998</span> Adaptive Dehazing Using Fusion Strategy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Ramesh%20Kanthan">M. Ramesh Kanthan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Naga%20Nandini%20Sujatha"> S. Naga Nandini Sujatha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of haze removal algorithms is to enhance and recover details of scene from foggy image. In enhancement the proposed method focus into two main categories: (i) image enhancement based on Adaptive contrast Histogram equalization, and (ii) image edge strengthened Gradient model. Many circumstances accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog destiny of the scene, then analyses the obscured image before applying contrast and sharpness adjustments to the video in real-time to produce image the fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. Then the output haze free image has reconstructed using fusion methodology. In order to increase the accuracy, interpolation method has used in the output reconstruction. A promising retrieval performance is achieved especially in particular examples. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20image" title="single image">single image</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=dehazing" title=" dehazing"> dehazing</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20fusion" title=" multi-scale fusion"> multi-scale fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=per-pixel" title=" per-pixel"> per-pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20map" title=" weight map"> weight map</a> </p> <a href="https://publications.waset.org/abstracts/32544/adaptive-dehazing-using-fusion-strategy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">465</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=167">167</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=168">168</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a 
target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> 
</html>