Search results for: object-based image retrieval

Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 3040

[3040] Performance Evaluation of Content Based Image Retrieval Using Indexed Views
Authors: Tahir Iqbal, Mumtaz Ali, Syed Wajahat Kareem, Muhammad Harris
Abstract: Digital information is expanding at an exponential rate in our lives. Information residing online and offline is stored in huge repositories covering every aspect of our lives, and obtaining the required information is the task of retrieval systems. Content based image retrieval (CBIR) is a retrieval system that retrieves the required information from repositories on the basis of the contents of the image. Time is a critical factor in a retrieval system, and using indexed views with a CBIR system improves the time efficiency of the retrieved results.
Keywords: content based image retrieval (CBIR), indexed view, color, image retrieval, cross correlation
Procedia: https://publications.waset.org/abstracts/11165/performance-evaluation-of-content-based-image-retrieval-using-indexed-views | PDF: https://publications.waset.org/abstracts/11165.pdf | Downloads: 470
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content%20based%20image%20retrieval%20%28CBIR%29" title="content based image retrieval (CBIR)">content based image retrieval (CBIR)</a>, <a href="https://publications.waset.org/abstracts/search?q=indexed%20view" title=" indexed view"> indexed view</a>, <a href="https://publications.waset.org/abstracts/search?q=color" title=" color"> color</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=cross%20correlation" title=" cross correlation"> cross correlation</a> </p> <a href="https://publications.waset.org/abstracts/11165/performance-evaluation-of-content-based-image-retrieval-using-indexed-views" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11165.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3039</span> Secure Image Retrieval Based on Orthogonal Decomposition under Cloud Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Xu">Y. Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Xiong"> L. Xiong</a>, <a href="https://publications.waset.org/abstracts/search?q=Z.%20Xu"> Z. Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to protect data privacy, image with sensitive or private information needs to be encrypted before being outsourced to the cloud. However, this causes difficulties in image retrieval and data management. A secure image retrieval method based on orthogonal decomposition is proposed in the paper. The image is divided into two different components, for which encryption and feature extraction are executed separately. As a result, cloud server can extract features from an encrypted image directly and compare them with the features of the queried images, so that the user can thus obtain the image. Different from other methods, the proposed method has no special requirements to encryption algorithms. Experimental results prove that the proposed method can achieve better security and better retrieval precision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=secure%20image%20retrieval" title="secure image retrieval">secure image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20search" title=" secure search"> secure search</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20decomposition" title=" orthogonal decomposition"> orthogonal decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20cloud%20computing" title=" secure cloud computing"> secure cloud computing</a> </p> <a href="https://publications.waset.org/abstracts/29115/secure-image-retrieval-based-on-orthogonal-decomposition-under-cloud-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29115.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3038</span> Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=W.%20S.%20Besbas">W. S. Besbas</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Artemi"> M. A. Artemi</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20M.%20Salman"> R. M. Salman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Content based face sketch retrieval can be used to find images of criminals from their sketches for 'Crime Prevention'. This paper investigates the problem of CBIR of face sketch images in transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectrum domain of a selected transforms. These transforms are Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Walsh Hadamard Transform (WHT). For the performance analyses of features selection methods three face images databases are used. These are 'Sheffield face database', 'Olivetti Research Laboratory (ORL) face database', and 'Indian face database'. The City block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that, the retrieval rate is database dependent. But in general, the DCT is the best. On the other hand, the WHT is the best with respect to the speed of retrieving images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Content%20Based%20Image%20Retrieval%20%28CBIR%29" title="Content Based Image Retrieval (CBIR)">Content Based Image Retrieval (CBIR)</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20sketch%20image%20retrieval" title=" face sketch image retrieval"> face sketch image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20selection%20for%20CBIR" title=" features selection for CBIR"> features selection for CBIR</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval%20in%20transform%20domain" title=" image retrieval in transform domain"> image retrieval in transform domain</a> </p> <a href="https://publications.waset.org/abstracts/8251/content-based-face-sketch-images-retrieval-in-wht-dct-and-dwt-transform-domain" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8251.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3037</span> Local Texture and Global Color Descriptors for Content Based Image Retrieval</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tajinder%20Kaur">Tajinder Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Anu%20Bala"> Anu Bala</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images a new algorithm meant for content-based image retrieval (CBIR) is presented in this paper. The proposed method combines the color and texture features which are extracted the global and local information of the image. The local texture feature is extracted by using local binary patterns (LBP), which are evaluated by taking into consideration of local difference between the center pixel and its neighbors. For the global color feature, the color histogram (CH) is used which is calculated by RGB (red, green, and blue) spaces separately. In this paper, the combination of color and texture features are proposed for content-based image retrieval. The performance of the proposed method is tested on Corel 1000 database which is the natural database. The results after being investigated show a significant improvement in terms of their evaluation measures as compared to LBP and CH. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color" title="color">color</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20patterns" title=" local binary patterns"> local binary patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/25503/local-texture-and-global-color-descriptors-for-content-based-image-retrieval" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25503.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">366</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3036</span> Content-Based Image Retrieval Using HSV Color Space Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Qazanfari">Hamed Qazanfari</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Hassanpour"> Hamid Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazem%20Qazanfari"> Kazem Qazanfari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method is provided for content-based image retrieval. Content-based image retrieval system searches query an image based on its visual content in an image database to retrieve similar images. In this paper, with the aim of simulating the human visual system sensitivity to image's edges and color features, the concept of color difference histogram (CDH) is used. CDH includes the perceptually color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features extract the content of images most efficiently. The proposed method has been evaluated on three standard databases Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to the recently developed methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title="content-based image retrieval">content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20difference%20histogram" title=" color difference histogram"> color difference histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=efficient%20features%20selection" title=" efficient features selection"> efficient features selection</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a> </p> <a href="https://publications.waset.org/abstracts/75068/content-based-image-retrieval-using-hsv-color-space-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75068.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3035</span> Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20W.%20U.%20D.%20Chathurani">N. W. U. D. Chathurani</a>, <a href="https://publications.waset.org/abstracts/search?q=Shlomo%20Geva"> Shlomo Geva</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinod%20Chandran"> Vinod Chandran</a>, <a href="https://publications.waset.org/abstracts/search?q=Proboda%20Rajapaksha"> Proboda Rajapaksha </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effect of features' dimensions, ranges, internal feature normalization and the distance measure. This approach can easily be adopted in any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and Oliva and Torralba) and compared with existing approaches. The performance of the proposed approach is confirmed with the significantly improved performance in comparison with the independently evaluated baseline of the previously proposed feature fusion approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title="feature fusion">feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=membership%20function" title=" membership function"> membership function</a>, <a href="https://publications.waset.org/abstracts/search?q=normalization" title=" normalization"> normalization</a> </p> <a href="https://publications.waset.org/abstracts/52968/image-retrieval-based-on-multi-feature-fusion-for-heterogeneous-image-databases" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">345</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3034</span> Improved Performance in Content-Based Image Retrieval Using Machine Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Ramesh%20Naik">B. Ramesh Naik</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Venugopal"> T. Venugopal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a novel approach which improves the high-level semantics of images based on machine learning approach. The contemporary approaches for image retrieval and object recognition includes Fourier transforms, Wavelets, SIFT and HoG. Though these descriptors helpful in a wide range of applications, they exploit zero order statistics, and this lacks high descriptiveness of image features. These descriptors usually take benefit of primitive visual features such as shape, color, texture and spatial locations to describe images. These features do not adequate to describe high-level semantics of the images. This leads to a gap in semantic content caused to unacceptable performance in image retrieval system. A novel method has been proposed referred as discriminative learning which is derived from machine learning approach that efficiently discriminates image features. The analysis and results of proposed approach were validated thoroughly on WANG and Caltech-101 Databases. The results proved that this approach is very competitive in content-based image retrieval. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CBIR" title="CBIR">CBIR</a>, <a href="https://publications.waset.org/abstracts/search?q=discriminative%20learning" title=" discriminative learning"> discriminative learning</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20weight%20learning" title=" region weight learning"> region weight learning</a>, <a href="https://publications.waset.org/abstracts/search?q=scale%20invariant%20feature%20transforms" title=" scale invariant feature transforms"> scale invariant feature transforms</a> </p> <a href="https://publications.waset.org/abstracts/88331/improved-performance-in-content-based-image-retrieval-using-machine-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">181</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3033</span> A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Takanori%20Tanaka">Takanori Tanaka</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Kitao"> Daisuke Kitao</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Ikeda"> Daisuke Ikeda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aurora is an attractive phenomenon but it is difficult to understand the whole mechanism of it. An approach of data-intensive science might be an effective approach to elucidate such a difficult phenomenon. To do that we need labeled data, which shows when and what types of auroras, have appeared. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse aurora, and the other do not any aurora. The proposed system retrieves images which are similar to the query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromso Norway, we evaluate two methods of image recognition methods with or without our original color filtering method. The best performance is achieved when SIFT with the color filtering is used and its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data-intensive%20science" title="data-intensive science">data-intensive science</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=aurora" title=" aurora"> aurora</a> </p> <a href="https://publications.waset.org/abstracts/19532/a-similar-image-retrieval-system-for-auroral-all-sky-images-based-on-local-features-and-color-filtering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">449</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3032</span> Quantum Entangled States and Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sanjay%20%20Singh">Sanjay Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sushil%20Kumar"> Sushil Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Rashmi%20Jain"> Rashmi Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Quantum registering is another pattern in computational hypothesis and a quantum mechanical framework has a few helpful properties like Entanglement. We plan to store data concerning the structure and substance of a basic picture in a quantum framework. Consider a variety of n qubits which we propose to use as our memory stockpiling. In recent years classical processing is switched to quantum image processing. Quantum image processing is an elegant approach to overcome the problems of its classical counter parts. Image storage, retrieval and its processing on quantum machines is an emerging area. Although quantum machines do not exist in physical reality but theoretical algorithms developed based on quantum entangled states gives new insights to process the classical images in quantum domain. Here in the present work, we give the brief overview, such that how entangled states can be useful for quantum image storage and retrieval. We discuss the properties of tripartite Greenberger-Horne-Zeilinger and W states and their usefulness to store the shapes which may consist three vertices. We also propose the techniques to store shapes having more than three vertices. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Greenberger-Horne-Zeilinger" title="Greenberger-Horne-Zeilinger">Greenberger-Horne-Zeilinger</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20storage%20and%20retrieval" title=" image storage and retrieval"> image storage and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20entanglement" title=" quantum entanglement"> quantum entanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=W%20states" title=" W states"> W states</a> </p> <a href="https://publications.waset.org/abstracts/67732/quantum-entangled-states-and-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67732.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3031</span> Similarity Based Retrieval in Case Based Reasoning for Analysis of Medical Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Dasgupta">M. Dasgupta</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Banerjee"> S. Banerjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Content Based Image Retrieval (CBIR) coupled with Case Based Reasoning (CBR) is a paradigm that is becoming increasingly popular in the diagnosis and therapy planning of medical ailments utilizing the digital content of medical images. This paper presents a survey of some of the promising approaches used in the detection of abnormalities in retina images as well in mammographic screening and detection of regions of interest in MRI scans of the brain. We also describe our proposed algorithm to detect hard exudates in fundus images of the retina of Diabetic Retinopathy patients. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=case%20based%20reasoning" title="case based reasoning">case based reasoning</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=retina%20image" title=" retina image"> retina image</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20based%20retrieval" title=" similarity based retrieval"> similarity based retrieval</a> </p> <a href="https://publications.waset.org/abstracts/2992/similarity-based-retrieval-in-case-based-reasoning-for-analysis-of-medical-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2992.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3030</span> A Framework of Product Information Service System Using Mobile Image Retrieval and Text Mining Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mei-Yi%20Wu">Mei-Yi Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Shang-Ming%20Huang"> Shang-Ming Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The online shoppers nowadays often search the product information on the Internet using some keywords of products. To use this kind of information searching model, shoppers should have a preliminary understanding about their interesting products and choose the correct keywords. However, if the products are first contact (for example, the worn clothes or backpack of passengers which you do not have any idea about the brands), these products cannot be retrieved due to insufficient information. In this paper, we discuss and study the applications in E-commerce using image retrieval and text mining techniques. We design a reasonable E-commerce application system containing three layers in the architecture to provide users product information. The system can automatically search and retrieval similar images and corresponding web pages on Internet according to the target pictures which taken by users. Then text mining techniques are applied to extract important keywords from these retrieval web pages and search the prices on different online shopping stores with these keywords using a web crawler. Finally, the users can obtain the product information including photos and prices of their favorite products. The experiments shows the efficiency of proposed system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobile%20image%20retrieval" title="mobile image retrieval">mobile image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20mining" title=" text mining"> text mining</a>, <a href="https://publications.waset.org/abstracts/search?q=product%20information%20service%20system" title=" product information service system"> product information service system</a>, <a href="https://publications.waset.org/abstracts/search?q=online%20marketing" title=" online marketing"> online marketing</a> </p> <a href="https://publications.waset.org/abstracts/33483/a-framework-of-product-information-service-system-using-mobile-image-retrieval-and-text-mining-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33483.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3029</span> A Method of the Semantic on Image Auto-Annotation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Huo">Lin Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianwei%20Liu"> Xianwei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingxiong%20Zhou"> Jingxiong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, due to the existence of semantic gap between image visual features and human concepts, the semantic of image auto-annotation has become an important topic. Firstly, by extract low-level visual features of the image, and the corresponding Hash method, mapping the feature into the corresponding Hash coding, eventually, transformed that into a group of binary string and store it, image auto-annotation by search is a popular method, we can use it to design and implement a method of image semantic auto-annotation. Finally, Through the test based on the Corel image set, and the results show that, this method is effective. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20auto-annotation" title="image auto-annotation">image auto-annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20correlograms" title=" color correlograms"> color correlograms</a>, <a href="https://publications.waset.org/abstracts/search?q=Hash%20code" title=" Hash code"> Hash code</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/15628/a-method-of-the-semantic-on-image-auto-annotation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3028</span> Interactive Image Search for Mobile Devices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Komal%20V.%20Aher">Komal V. Aher</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanjay%20B.%20Waykar"> Sanjay B. Waykar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays every individual having mobile device with them. In both computer vision and information retrieval Image search is currently hot topic with many applications. The proposed intelligent image search system is fully utilizing multimodal and multi-touch functionalities of smart phones which allows search with Image, Voice, and Text on mobile phones. The system will be more useful for users who already have pictures in their minds but have no proper descriptions or names to address them. The paper gives system with ability to form composite visual query to express user’s intention more clearly which helps to give more precise or appropriate results to user. The proposed algorithm will considerably get better in different aspects. System also uses Context based Image retrieval scheme to give significant outcomes. So system is able to achieve gain in terms of search performance, accuracy and user satisfaction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20device" title=" mobile device"> mobile device</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20visual%20search" title=" mobile visual search"> mobile visual search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20search" title=" multimodal search "> multimodal search </a> </p> <a href="https://publications.waset.org/abstracts/33265/interactive-image-search-for-mobile-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3027</span> Plagiarism Detection for Flowchart and Figures in Texts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmadu%20Maidorawa">Ahmadu Maidorawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Idrissa%20Djibo"> Idrissa Djibo</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Tella"> Muhammad Tella </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a method for detecting flow chart and figure plagiarism based on shape of image processing and multimedia retrieval. The method managed to retrieve flowcharts with ranked similarity according to different matching sets. Plagiarism detection is well known phenomenon in the academic arena. Copying other people is considered as serious offense that needs to be checked. There are many plagiarism detection systems such as turn-it-in that has been developed to provide these checks. Most, if not all, discard the figures and charts before checking for plagiarism. Discarding the figures and charts result in look holes that people can take advantage. That means people can plagiarize figures and charts easily without the current plagiarism systems detecting it. There are very few papers which talks about flowcharts plagiarism detection. Therefore, there is a need to develop a system that will detect plagiarism in figures and charts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=flowchart" title="flowchart">flowchart</a>, <a href="https://publications.waset.org/abstracts/search?q=multimedia%20retrieval" title=" multimedia retrieval"> multimedia retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=figures%20similarity" title=" figures similarity"> figures similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20comparison" title=" image comparison"> image comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=figure%20retrieval" title=" figure retrieval"> figure retrieval</a> </p> <a href="https://publications.waset.org/abstracts/32548/plagiarism-detection-for-flowchart-and-figures-in-texts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3026</span> Retrieving Similar Segmented Objects Using Motion Descriptors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Konstantinos%20C.%20Kartsakalis">Konstantinos C. Kartsakalis</a>, <a href="https://publications.waset.org/abstracts/search?q=Angeliki%20Skoura"> Angeliki Skoura</a>, <a href="https://publications.waset.org/abstracts/search?q=Vasileios%20Megalooikonomou"> Vasileios Megalooikonomou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The fuzzy composition of objects depicted in images acquired through MR imaging or the use of bio-scanners has often been a point of controversy for field experts attempting to effectively delineate between the visualized objects. Modern approaches in medical image segmentation tend to consider fuzziness as a characteristic and inherent feature of the depicted object, instead of an undesirable trait. In this paper, a novel technique for efficient image retrieval in the context of images in which segmented objects are either crisp or fuzzily bounded is presented. Moreover, the proposed method is applied in the case of multiple, even conflicting, segmentations from field experts. Experimental results demonstrate the efficiency of the suggested method in retrieving similar objects from the aforementioned categories while taking into account the fuzzy nature of the depicted data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20object" title="fuzzy object">fuzzy object</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20image%20segmentation" title=" fuzzy image segmentation"> fuzzy image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20descriptors" title=" motion descriptors"> motion descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20imaging" title=" MRI imaging"> MRI imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval" title=" object-based image retrieval"> object-based image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/22736/retrieving-similar-segmented-objects-using-motion-descriptors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22736.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3025</span> Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Peter%20U.%20Eze">Peter U. Eze</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Udaya"> P. Udaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Robin%20J.%20Evans"> Robin J. Evans</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data hiding can be achieved by Steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in Teleradiology is one of the applications where the embedded patient record needs to be extracted with accuracy as well as the medical image integrity verified. In this research paper, the Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of all watermarked sub-blocks with a spreading code, W, would have a constant value, <em>p.</em> The constant correlation <em>p</em>, spreading code, W and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates by more than a small value, ℇ, from <em>p</em>. The major features of our new scheme include: (1) Improving watermark detection accuracy for high-pixel depth medical images by reducing the Bit Error Rate (BER) to Zero and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility with the same computational cost. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Constant%20Correlation" title="Constant Correlation">Constant Correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=Medical%20Image" title=" Medical Image"> Medical Image</a>, <a href="https://publications.waset.org/abstracts/search?q=Spread%20Spectrum" title=" Spread Spectrum"> Spread Spectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=Tamper%20Detection" title=" Tamper Detection"> Tamper Detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Watermarking" title=" Watermarking"> Watermarking</a> </p> <a href="https://publications.waset.org/abstracts/84629/medical-image-watermark-and-tamper-detection-using-constant-correlation-spread-spectrum-watermarking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84629.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3024</span> Content Based Video Retrieval System Using Principal Object Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Thinh%20Bui">Van Thinh Bui</a>, <a href="https://publications.waset.org/abstracts/search?q=Anh%20Tuan%20Tran"> Anh Tuan Tran</a>, <a href="https://publications.waset.org/abstracts/search?q=Quoc%20Viet%20Ngo"> Quoc Viet Ngo</a>, <a href="https://publications.waset.org/abstracts/search?q=The%20Bao%20Pham"> The Bao Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video retrieval is a searching problem on videos or clips based on content in which they are relatively close to an input image or video. The application of this retrieval consists of selecting video in a folder or recognizing a human in security camera. However, some recent approaches have been in challenging problem due to the diversity of video types, frame transitions and camera positions. Besides, that an appropriate measures is selected for the problem is a question. In order to overcome all obstacles, we propose a content-based video retrieval system in some main steps resulting in a good performance. From a main video, we process extracting keyframes and principal objects using Segmentation of Aggregating Superpixels (SAS) algorithm. After that, Speeded Up Robust Features (SURF) are selected from those principal objects. Then, the model “Bag-of-words” in accompanied by SVM classification are applied to obtain the retrieval result. Our system is performed on over 300 videos in diversity from music, history, movie, sports, and natural scene to TV program show. The performance is evaluated in promising comparison to the other approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title="video retrieval">video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20objects" title=" principal objects"> principal objects</a>, <a href="https://publications.waset.org/abstracts/search?q=keyframe" title=" keyframe"> keyframe</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20of%20aggregating%20superpixels" title=" segmentation of aggregating superpixels"> segmentation of aggregating superpixels</a>, <a href="https://publications.waset.org/abstracts/search?q=speeded%20up%20robust%20features" title=" speeded up robust features"> speeded up robust features</a>, <a href="https://publications.waset.org/abstracts/search?q=bag-of-words" title=" bag-of-words"> bag-of-words</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/59753/content-based-video-retrieval-system-using-principal-object-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59753.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">301</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3023</span> Retrieval-Induced Forgetting Effects in Retrospective and Prospective Memory in Normal Aging: An Experimental Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Merve%20Akca">Merve Akca</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retrieval-induced forgetting (RIF) refers to the phenomenon that selective retrieval of some information impairs memory for related, but not previously retrieved information. Despite age differences in retrieval-induced forgetting regarding retrospective memory being documented, this research aimed to highlight age differences in RIF of the prospective memory tasks for the first time. By using retrieval-practice paradigm, this study comparatively examined RIF effects in retrospective memory and event-based prospective memory in young and old adults. In this experimental study, a mixed factorial design with age group (Young, Old) as a between-subject variable, and memory type (Prospective, Retrospective) and item type (Practiced, Non-practiced) as within-subject variables was employed. Retrieval-induced forgetting was observed in the retrospective but not in the prospective memory task. Therefore, the results indicated that selective retrieval of past events led to suppression of other related past events in both age groups but not the suppression of memory for future intentions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prospective%20memory" title="prospective memory">prospective memory</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval-induced%20forgetting" title=" retrieval-induced forgetting"> retrieval-induced forgetting</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval%20inhibition" title=" retrieval inhibition"> retrieval inhibition</a>, <a href="https://publications.waset.org/abstracts/search?q=retrospective%20memory" title=" retrospective memory"> retrospective memory</a> </p> <a href="https://publications.waset.org/abstracts/57915/retrieval-induced-forgetting-effects-in-retrospective-and-prospective-memory-in-normal-aging-an-experimental-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3022</span> Urdu Text Extraction Method from Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samabia%20Tehsin">Samabia Tehsin</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumaira%20Kausar"> Sumaira Kausar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the vast increase in the multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images/ videos. Text embedded in the images can serve as the strong retrieval tool for images. This is the reason that text extraction is an area of research with increasing attention. English text extraction is the focus of many researchers but very less work has been done on other languages like Urdu. This paper is focusing on Urdu text extraction from video frames. This paper presents a text detection feature set, which has the ability to deal up with most of the problems connected with the text extraction process. To test the validity of the method, it is tested on Urdu news dataset, which gives promising results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=caption%20text" title="caption text">caption text</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=document%20analysis" title=" document analysis"> document analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20extraction" title=" text extraction"> text extraction</a> </p> <a href="https://publications.waset.org/abstracts/9566/urdu-text-extraction-method-from-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3021</span> Information Retrieval for Kafficho Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mareye%20Zeleke%20Mekonen">Mareye Zeleke Mekonen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Kafficho language has distinct issues in information retrieval because of its restricted resources and dearth of standardized methods. In this endeavor, with the cooperation and support of linguists and native speakers, we investigate the creation of information retrieval systems specifically designed for the Kafficho language. The Kafficho information retrieval system allows Kafficho speakers to access information easily in an efficient and effective way. Our objective is to conduct an information retrieval experiment using 220 Kafficho text files, including fifteen sample questions. Tokenization, normalization, stop word removal, stemming, and other data pre-processing chores, together with additional tasks like term weighting, were prerequisites for the vector space model to represent each page and a particular query. The three well-known measurement metrics we used for our word were Precision, Recall, and and F-measure, with values of 87%, 28%, and 35%, respectively. This demonstrates how well the Kaffiho information retrieval system performed well while utilizing the vector space paradigm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kafficho" title="Kafficho">Kafficho</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20retrieval" title=" information retrieval"> information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=stemming" title=" stemming"> stemming</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20space" title=" vector space"> vector space</a> </p> <a href="https://publications.waset.org/abstracts/184199/information-retrieval-for-kafficho-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184199.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">57</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3020</span> Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saad%20M.%20Darwish">Saad M. Darwish</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20A.%20El-Iskandarani"> Mohamed A. El-Iskandarani</a>, <a href="https://publications.waset.org/abstracts/search?q=Guitar%20M.%20Shawkat"> Guitar M. Shawkat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, the amount of available multimedia data is continuously on the rise. The need to find a required image for an ordinary user is a challenging task. Content based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, textures, etc. However, there is a gap between low-level visual features and semantic meanings required by applications. The typical method of bridging the semantic gap is through the automatic image annotation (AIA) that extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by Firefly and Bayesian method is proposed. Firstly, images are segmented using the maximum variance intra cluster and Firefly algorithm, which is a swarm-based approach with high convergence speed, less computation rate and search for the optimal multiple threshold. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and less complexity. Experiments are performed using Corel Database. The results show that the proposed system is better than traditional ones for automatic image annotation and retrieval. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20annotation" title=" image annotation"> image annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/18552/automatic-multi-label-image-annotation-system-guided-by-firefly-algorithm-and-bayesian-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3019</span> A Comparative Study of Approaches in User-Centred Health Information Retrieval</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Harsh%20Thakkar">Harsh Thakkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Iyer"> Ganesh Iyer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we survey various user-centered or context-based biomedical health information retrieval systems. We present and discuss the performance of systems submitted in CLEF eHealth 2014 Task 3 for this purpose. We classify and focus on comparing the two most prevalent retrieval models in biomedical information retrieval namely: Language Model (LM) and Vector Space Model (VSM). We also report on the effectiveness of using external medical resources and ontologies like MeSH, Metamap, UMLS, etc. We observed that the LM based retrieval systems outperform VSM based systems on various fronts. From the results we conclude that the state-of-art system scores for MAP was 0.4146, P@10 was 0.7560 and NDCG@10 was 0.7445, respectively. All of these score were reported by systems built on language modeling approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clinical%20document%20retrieval" title="clinical document retrieval">clinical document retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=concept-based%20information%20retrieval" title=" concept-based information retrieval"> concept-based information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=query%20expansion" title=" query expansion"> query expansion</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20models" title=" language models"> language models</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20space%20models" title=" vector space models"> vector space models</a> </p> <a href="https://publications.waset.org/abstracts/57392/a-comparative-study-of-approaches-in-user-centred-health-information-retrieval" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57392.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3018</span> Content-Based Mammograms Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sourour%20Khouaja">Sourour Khouaja</a>, <a href="https://publications.waset.org/abstracts/search?q=Hejer%20Jlassi"> Hejer Jlassi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Feddaoui"> Nadia Feddaoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamel%20Hamrouni"> Kamel Hamrouni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most medical images, and especially mammographies, are now stored in large databases. Retrieving a desired image is considered of great importance in order to find previous similar cases diagnosis. Our method is implemented to assist radiologists in retrieving mammographic images containing breast with similar density aspect as seen on the mammogram. This is becoming a challenge seeing the importance of density criteria in cancer provision and its effect on segmentation issues. We used the BEMD (Bidimensional Empirical Mode Decomposition) to characterize the content of images and Euclidean distance measure similarity between images. Through the experiments on the MIAS mammography image database, we confirm that the results are promising. The performance was evaluated using precision and recall curves comparing query and retrieved images. Computing recall-precision proved the effectiveness of applying the CBIR in the large mammographic image databases. We found a precision of 91.2% for mammography with a recall of 86.8%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BEMD" title="BEMD">BEMD</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20density" title=" breast density"> breast density</a>, <a href="https://publications.waset.org/abstracts/search?q=contend-based" title=" contend-based"> contend-based</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a> </p> <a href="https://publications.waset.org/abstracts/59187/content-based-mammograms-retrieval-based-on-breast-density-criteria-using-bidimensional-empirical-mode-decomposition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3017</span> Adaptive Dehazing Using Fusion Strategy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Ramesh%20Kanthan">M. Ramesh Kanthan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Naga%20Nandini%20Sujatha"> S. Naga Nandini Sujatha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of haze removal algorithms is to enhance and recover details of scene from foggy image. In enhancement the proposed method focus into two main categories: (i) image enhancement based on Adaptive contrast Histogram equalization, and (ii) image edge strengthened Gradient model. Many circumstances accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog destiny of the scene, then analyses the obscured image before applying contrast and sharpness adjustments to the video in real-time to produce image the fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. Then the output haze free image has reconstructed using fusion methodology. In order to increase the accuracy, interpolation method has used in the output reconstruction. A promising retrieval performance is achieved especially in particular examples. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20image" title="single image">single image</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=dehazing" title=" dehazing"> dehazing</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20fusion" title=" multi-scale fusion"> multi-scale fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=per-pixel" title=" per-pixel"> per-pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20map" title=" weight map"> weight map</a> </p> <a href="https://publications.waset.org/abstracts/32544/adaptive-dehazing-using-fusion-strategy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3016</span> Tongue Image Retrieval Based Using Machine Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20FAROOQ">Ahmad FAROOQ</a>, <a href="https://publications.waset.org/abstracts/search?q=Xinfeng%20Zhang"> Xinfeng Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahad%20Sabah"> Fahad Sabah</a>, <a href="https://publications.waset.org/abstracts/search?q=Raheem%20Sarwar"> Raheem Sarwar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Traditional Chinese Medicine, tongue diagnosis is a vital inspection tool (TCM). In this study, we explore the potential of machine learning in tongue diagnosis. It begins with the cataloguing of the various classifications and characteristics of the human tongue. We infer 24 kinds of tongues from the material and coating of the tongue, and we identify 21 attributes of the tongue. The next step is to apply machine learning methods to the tongue dataset. We use the Weka machine learning platform to conduct the experiment for performance analysis. The 457 instances of the tongue dataset are used to test the performance of five different machine learning methods, including SVM, Random Forests, Decision Trees, and Naive Bayes. Based on accuracy and Area under the ROC Curve, the Support Vector Machine algorithm was shown to be the most effective for tongue diagnosis (AUC). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20imaging" title="medical imaging">medical imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=tongue" title=" tongue"> tongue</a> </p> <a href="https://publications.waset.org/abstracts/176849/tongue-image-retrieval-based-using-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3015</span> Improved Image Retrieval for Efficient Localization in Urban Areas Using Location Uncertainty Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Salarian">Mahdi Salarian</a>, <a href="https://publications.waset.org/abstracts/search?q=Xi%20Xu"> Xi Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Rashid%20Ansari"> Rashid Ansari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate localization of mobile devices based on camera-acquired visual media information usually requires a search over a very large GPS-referenced image database. This paper proposes an efficient method for limiting the search space for image retrieval engine by extracting and leveraging additional media information about Estimated Positional Error (EP E) to address complexity and accuracy issues in the search, especially to be used for compensating GPS location inaccuracy in dense urban areas. The improved performance is achieved by up to a hundred-fold reduction in the search area used in available reference methods while providing improved accuracy. To test our procedure we created a database by acquiring Google Street View (GSV) images for down town of Chicago. Other available databases are not suitable for our approach due to lack of EP E for the query images. We tested the procedure using more than 200 query images along with EP E acquired mostly in the densest areas of Chicago with different phones and in different conditions such as low illumination and from under rail tracks. The effectiveness of our approach and the effect of size and sector angle of the search area are discussed and experimental results demonstrate how our proposed method can improve performance just by utilizing a data that is available for mobile systems such as smart phones. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=localization" title="localization">localization</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval" title=" retrieval"> retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS%20uncertainty" title=" GPS uncertainty"> GPS uncertainty</a>, <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20word" title=" bag of word"> bag of word</a> </p> <a href="https://publications.waset.org/abstracts/51760/improved-image-retrieval-for-efficient-localization-in-urban-areas-using-location-uncertainty-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51760.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3014</span> Bag of Words Representation Based on Weighting Useful Visual Words</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Abdedayem">Fatma Abdedayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most effective and efficient methods in image categorization are almost based on bag-of-words (BOW) which presents image by a histogram of occurrence of visual words. In this paper, we propose a novel extension to this method. Firstly, we extract features in multi-scales by applying a color local descriptor named opponent-SIFT. Secondly, in order to represent image we use Spatial Pyramid Representation (SPR) and an extension to the BOW method which based on weighting visual words. Typically, the visual words are weighted during histogram assignment by computing the ratio of their occurrences in the image to the occurrences in the background. Finally, according to classical BOW retrieval framework, only a few words of the vocabulary is useful for image representation. Therefore, we select the useful weighted visual words that respect the threshold value. Experimentally, the algorithm is tested by using different image classes of PASCAL VOC 2007 and is compared against the classical bag-of-visual-words algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BOW" title="BOW">BOW</a>, <a href="https://publications.waset.org/abstracts/search?q=useful%20visual%20words" title=" useful visual words"> useful visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=weighted%20visual%20words" title=" weighted visual words"> weighted visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20visual%20words" title=" bag of visual words"> bag of visual words</a> </p> <a href="https://publications.waset.org/abstracts/14009/bag-of-words-representation-based-on-weighting-useful-visual-words" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3013</span> Automatic Extraction of Water Bodies Using Whole-R Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nikhat%20Nawaz">Nikhat Nawaz</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Srinivasulu"> S. Srinivasulu</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Kesava%20Rao"> P. Kesava Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Feature extraction plays an important role in many remote sensing applications. Automatic extraction of water bodies is of great significance in many remote sensing applications like change detection, image retrieval etc. This paper presents a procedure for automatic extraction of water information from remote sensing images. The algorithm uses the relative location of R-colour component of the chromaticity diagram. This method is then integrated with the effectiveness of the spatial scale transformation of whole method. The whole method is based on water index fitted from spectral library. Experimental results demonstrate the improved accuracy and effectiveness of the integrated method for automatic extraction of water bodies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=chromaticity" title=" chromaticity"> chromaticity</a>, <a href="https://publications.waset.org/abstracts/search?q=water%20index" title=" water index"> water index</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20library" title=" spectral library"> spectral library</a>, <a href="https://publications.waset.org/abstracts/search?q=integrated%20method" title=" integrated method "> integrated method </a> </p> <a href="https://publications.waset.org/abstracts/2097/automatic-extraction-of-water-bodies-using-whole-r-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">385</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3012</span> Comparison of Crossover Types to Obtain Optimal Queries Using Adaptive Genetic Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wafa%E2%80%99%20Alma%27Aitah">Wafa’ Alma'Aitah</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaled%20Almakadmeh"> Khaled Almakadmeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> this study presents an information retrieval system of using genetic algorithm to increase information retrieval efficiency. Using vector space model, information retrieval is based on the similarity measurement between query and documents. Documents with high similarity to query are judge more relevant to the query and should be retrieved first. Using genetic algorithms, each query is represented by a chromosome; these chromosomes are fed into genetic operator process: selection, crossover, and mutation until an optimized query chromosome is obtained for document retrieval. Results show that information retrieval with adaptive crossover probability and single point type crossover and roulette wheel as selection type give the highest recall. The proposed approach is verified using (242) proceedings abstracts collected from the Saudi Arabian national conference. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title="genetic algorithm">genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20retrieval" title=" information retrieval"> information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20queries" title=" optimal queries"> optimal queries</a>, <a href="https://publications.waset.org/abstracts/search?q=crossover" title=" crossover"> crossover</a> </p> <a href="https://publications.waset.org/abstracts/59109/comparison-of-crossover-types-to-obtain-optimal-queries-using-adaptive-genetic-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59109.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3011</span> An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ziad%20Abdallah">Ziad Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamad%20Oueidat"> Mohamad Oueidat</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20El-Zaart"> Ali El-Zaart</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image Multi-label Classification (IMC) assigns a label or a set of labels to an image. The big demand for image annotation and archiving in the web attracts the researchers to develop many algorithms for this application domain. The existing techniques for IMC have two drawbacks: The description of the elementary characteristics from the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP), which simultaneously handles these limitations. The algorithm uses the histogram of gradients as feature descriptor. It applies the Label Priority Power-set as multi-label transformation to solve the problem of label correlation. The experiment shows that the results of MIML-HOGLPP are better in terms of some of the evaluation metrics comparing with the two existing techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20retrieval%20system" title=" information retrieval system"> information retrieval system</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-label" title=" multi-label"> multi-label</a>, <a href="https://publications.waset.org/abstracts/search?q=problem%20transformation" title=" problem transformation"> problem transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20of%20gradients" title=" histogram of gradients"> histogram of gradients</a> </p> <a href="https://publications.waset.org/abstracts/66645/an-improvement-of-multi-label-image-classification-method-based-on-histogram-of-oriented-gradient" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66645.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">374</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=101">101</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=102">102</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=object-based%20image%20retrieval&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a 
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false 
</body> </html>