Search results for: big images
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="big images"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2391</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: big images</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2391</span> Rhetoric and Renarrative Structure of Digital Images in Trans-Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Geng">Yang Geng</a>, <a href="https://publications.waset.org/abstracts/search?q=Anqi%20Zhao"> Anqi Zhao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The misreading theory of Harold Bloom provides a new diachronic perspective as an approach to the consistency between rhetoric of digital technology, dynamic movement of digital images and uncertain meaning of text. Reinterpreting the diachroneity of 'intertextuality' in the context of misreading theory extended the range of the 'intermediality' of transmedia to the intense tension between digital images and symbolic images throughout history of images. With the analogy between six categories of revisionary ratios and six steps of digital transformation, digital rhetoric might be illustrated as a linear process reflecting dynamic, intensive relations between digital moving images and original static images. Finally, it was concluded that two-way framework of the rhetoric of transformation of digital images and reversed served as a renarrative structure to revive static images by reconnecting them with digital moving images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rhetoric" title="rhetoric">rhetoric</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20art" title=" digital art"> digital art</a>, <a href="https://publications.waset.org/abstracts/search?q=intermediality" title=" intermediality"> intermediality</a>, <a href="https://publications.waset.org/abstracts/search?q=misreading%20theory" title=" misreading theory"> misreading theory</a> </p> <a href="https://publications.waset.org/abstracts/100230/rhetoric-and-renarrative-structure-of-digital-images-in-trans-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2390</span> Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20images" title="big images">big images</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20images" title=" binary images"> binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20similarity" title=" image similarity"> image similarity</a> </p> <a href="https://publications.waset.org/abstracts/89963/quick-similarity-measurement-of-binary-images-via-probabilistic-pixel-mapping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2389</span> 3D Guided Image Filtering to Improve Quality of Short-Time Binned Dynamic PET Images Using MRI Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tabassum%20Husain">Tabassum Husain</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Peng%20Li"> Shen Peng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhaolin%20Chen"> Zhaolin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper evaluates the usability of 3D Guided Image Filtering to enhance the quality of short-time binned dynamic PET images by using MRI images. Guided image filtering is an edge-preserving filter proposed to enhance 2D images. The 3D filter is applied on 1 and 5-minute binned images. The results are compared with 15-minute binned images and the Gaussian filtering. The guided image filter enhances the quality of dynamic PET images while also preserving important information of the voxels. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images" title="dynamic PET images">dynamic PET images</a>, <a href="https://publications.waset.org/abstracts/search?q=guided%20image%20filter" title=" guided image filter"> guided image filter</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20preservation%20filtering" title=" information preservation filtering"> information preservation filtering</a> </p> <a href="https://publications.waset.org/abstracts/152864/3d-guided-image-filtering-to-improve-quality-of-short-time-binned-dynamic-pet-images-using-mri-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2388</span> Reduction of Speckle Noise in Echocardiographic Images: A Survey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fathi%20Kallel">Fathi Kallel</a>, <a href="https://publications.waset.org/abstracts/search?q=Saida%20Khachira"> Saida Khachira</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Ben%20Slima"> Mohamed Ben Slima</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Ben%20Hamida"> Ahmed Ben Hamida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speckle noise is a main characteristic of cardiac ultrasound images, it corresponding to grainy appearance that degrades the image quality. For this reason, the ultrasound images are difficult to use automatically in clinical use, then treatments are required for this type of images. Then a filtering procedure of these images is necessary to eliminate the speckle noise and to improve the quality of ultrasound images which will be then segmented to extract the necessary forms that exist. In this paper, we present the importance of the pre-treatment step for segmentation. This work is applied to cardiac ultrasound images. In a first step, a comparative study of speckle filtering method will be presented and then we use a segmentation algorithm to locate and extract cardiac structures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20image%20processing" title="medical image processing">medical image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound%20images" title=" ultrasound images"> ultrasound images</a>, <a href="https://publications.waset.org/abstracts/search?q=Speckle%20noise" title=" Speckle noise"> Speckle noise</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=speckle%20filtering" title=" speckle filtering"> speckle filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=snakes" title=" snakes"> snakes</a> </p> <a href="https://publications.waset.org/abstracts/19064/reduction-of-speckle-noise-in-echocardiographic-images-a-survey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">530</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2387</span> Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emhimed%20Saffor">Emhimed Saffor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of edge detection in digital images is considered. Three methods of edge detection based on mathematical morphology algorithm were applied on two sets (Brain and Chest) CT images. 3x3 filter for first method, 5x5 filter for second method and 7x7 filter for third method under MATLAB programming environment. The results of the above-mentioned methods are subjectively evaluated. The results show these methods are more efficient and satiable for medical images, and they can be used for different other applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection "> edge detection </a> </p> <a href="https://publications.waset.org/abstracts/44926/subjective-evaluation-of-mathematical-morphology-edge-detection-on-computed-tomography-ct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2386</span> Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nidhal%20K.%20Azawi">Nidhal K. Azawi</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20M.%20Gauch"> John M. Gauch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural network to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92-98%. We also show how the removal of noninformative images together with image alignment can aid in the creation of image panoramas and other visualizations of colonoscopy images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colonoscopy%20classification" title="colonoscopy classification">colonoscopy classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20alignment" title=" image alignment"> image alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/92461/automatic-method-for-classification-of-informative-and-noninformative-images-in-colonoscopy-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2385</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noises and reducing the number of colors contained in a JPEG image. Main purpose of this project is to convert color images to monochrome images for the color-blind. We treat the crispy color images like the Tokyo subway map. Each color in the image has an important information. But for the color blinds, similar colors cannot be distinguished. If we can convert those colors to different gray values, they can distinguish them. Therefore we try to convert color images to monochrome images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2384</span> Effective Texture Features for Segmented Mammogram Images Based on Multi-Region of Interest Segmentation Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramayanam%20Suresh">Ramayanam Suresh</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Nagaraja%20Rao"> A. Nagaraja Rao</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Eswara%20Reddy"> B. Eswara Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture features of mammogram images are useful for finding masses or cancer cases in mammography, which have been used by radiologists. Textures are greatly succeeded for segmented images rather than normal images. It is necessary to perform segmentation for exclusive specification of cancer and non-cancer regions separately. Region of interest (ROI) is most commonly used technique for mammogram segmentation. Limitation of this method is that it is unable to explore segmentation for large collection of mammogram images. Therefore, this paper is proposed multi-ROI segmentation for addressing the above limitation. It supports greatly in finding the best texture features of mammogram images. Experimental study demonstrates the effectiveness of proposed work using benchmarked images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20features" title="texture features">texture features</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-ROI%20segmentation" title=" multi-ROI segmentation"> multi-ROI segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmarked%20images" title=" benchmarked images "> benchmarked images </a> </p> <a href="https://publications.waset.org/abstracts/88666/effective-texture-features-for-segmented-mammogram-images-based-on-multi-region-of-interest-segmentation-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88666.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">311</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2383</span> Optimization Query Image Using Search Relevance Re-Ranking Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20G.%20Asmitha%20Chandini">T. G. Asmitha Chandini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Web-based image search re-ranking, as an successful method to get better the results. In a query keyword, the first stair is store the images is first retrieve based on the text-based information. The user to select a query keywordimage, by using this query keyword other images are re-ranked based on their visual properties with images.Now a day to day, people projected to match images in a semantic space which is used attributes or reference classes closely related to the basis of semantic image. though, understanding a worldwide visual semantic space to demonstrate highly different images from the web is difficult and inefficient. The re-ranking images, which automatically offline part learns dissimilar semantic spaces for different query keywords. The features of images are projected into their related semantic spaces to get particular images. At the online stage, images are re-ranked by compare their semantic signatures obtained the semantic précised by the query keyword image. The query-specific semantic signatures extensively improve both the proper and efficiency of image re-ranking. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Query" title="Query">Query</a>, <a href="https://publications.waset.org/abstracts/search?q=keyword" title=" keyword"> keyword</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=re-ranking" title=" re-ranking"> re-ranking</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic" title=" semantic"> semantic</a>, <a href="https://publications.waset.org/abstracts/search?q=signature" title=" signature"> signature</a> </p> <a href="https://publications.waset.org/abstracts/28398/optimization-query-image-using-search-relevance-re-ranking-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28398.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">552</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2382</span> Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edgardo%20V.%20Gubatanga%20Jr.">Edgardo V. Gubatanga Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Joshua%20Salvacion"> Mark Joshua Salvacion</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, the presence of atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may cause absence of aerial photographs. These leave areas having terrain information but lacking aerial photographs. Intensity images can be derived from LiDAR data but they are only grayscale images. A deep learning model is developed to create a complex function in a form of a deep neural network relating the pixel values of LiDAR-derived intensity images and true-color images. This complex function can then be used to predict the true-color images of a certain area using intensity images from LiDAR data. The predicted true-color images do not necessarily need to be accurate compared to the real world. They are only intended to look realistic so that they can be used as base maps. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20LiDAR" title="aerial LiDAR">aerial LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=colorization" title=" colorization"> colorization</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20images" title=" intensity images"> intensity images</a> </p> <a href="https://publications.waset.org/abstracts/94116/application-of-deep-learning-in-colorization-of-lidar-derived-intensity-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94116.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2381</span> Comparison of Vessel Detection in Standard vs Ultra-WideField Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maher%20un%20Nisa">Maher un Nisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahsan%20Khawaja"> Ahsan Khawaja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal imaging with Ultra-WideField (UWF) view technology has opened up new avenues in the field of retinal pathology detection. Recent developments in retinal imaging such as Optos California Imaging Device helps in acquiring high resolution images of the retina to help the Ophthalmologists in diagnosing and analyzing eye related pathologies more accurately. This paper investigates the acquired retinal details by comparing vessel detection in standard 450 color fundus images with the state of the art 2000 UWF retinal images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20fundus" title="color fundus">color fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title=" retinal images"> retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=ultra-widefield" title=" ultra-widefield"> ultra-widefield</a>, <a href="https://publications.waset.org/abstracts/search?q=vessel%20detection" title=" vessel detection"> vessel detection</a> </p> <a href="https://publications.waset.org/abstracts/33520/comparison-of-vessel-detection-in-standard-vs-ultra-widefield-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">448</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2380</span> Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. 
2380. Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique
Authors: Yousif Mohamed Y. Abdallah, Razan Manofely, Rajab M. Ben Yousef
Abstract: X-ray images are very popular as a first tool for diagnosis, and automating their analysis is important to support physicians' procedures. In this practice, teeth segmentation from radiographic images and feature extraction are essential steps. The main objective of this study was to investigate correction preprocessing of X-ray images using local adaptive filters, to evaluate contrast enhancement patterns in different X-ray images such as grayscale images, and to evaluate a new nonlinear approach for contrast enhancement of soft tissues. The data were analyzed using MATLAB to enhance the contrast within the soft tissues and to study the gray levels in both enhanced and unenhanced images as well as the noise variance. The main enhancement techniques used in this study were contrast-enhancement filtering and deblurring using the blind deconvolution algorithm. The prominent constraints are, first, preservation of the image's overall look; second, preservation of the diagnostic content of the image; and third, detection of small, low-contrast details in the diagnostic content of the image.
Keywords: enhancement, x-rays, pixel intensity values, MATLAB
Procedia: https://publications.waset.org/abstracts/31031/enhancement-of-x-rays-images-intensity-using-pixel-values-adjustments-technique | PDF: https://publications.waset.org/abstracts/31031.pdf | Downloads: 485
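Percentile-based contrast stretching and locally adaptive equalization are two standard pixel-value adjustments of the kind described. The snippet below shows both on a hypothetical radiograph, without claiming to reproduce the authors' filters.

```python
import numpy as np
from skimage import exposure, io

xray = io.imread("chest_xray.png")               # hypothetical input
# Stretch so the 2nd-98th intensity percentiles span the full range.
p2, p98 = np.percentile(xray, (2, 98))
stretched = exposure.rescale_intensity(xray, in_range=(p2, p98))
# Locally adaptive alternative (CLAHE) for low-contrast soft tissue.
clahe = exposure.equalize_adapthist(xray, clip_limit=0.03)
```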
2379. Filtering and Reconstruction System for Grey-Level Forensic Images
Authors: Ahd Aljarf, Saad Amin
Abstract: Images are an important source of information used as evidence during any investigation process; their clarity and accuracy are of the utmost importance. Images are vulnerable to losing blocks and to having noise added to them, either after alteration or when the image was first captured. Therefore, implementing a high-performance image processing system is very important from a forensic point of view. This paper focuses on improving the quality of forensic images. For various reasons, packets that store image data can be affected, harmed, or even lost because of noise; for example, sending an image through a wireless channel can cause loss of bits. Such errors generally degrade the visual display quality of forensic images. Two image problems are covered: noise and lost blocks. Information transmitted through any means of communication may be altered from its original state or lose important data due to channel noise. A system is therefore introduced to improve the quality and clarity of forensic images.
Keywords: image filtering, image reconstruction, image processing, forensic images
Procedia: https://publications.waset.org/abstracts/15654/filtering-and-reconstruction-system-for-grey-level-forensic-images | PDF: https://publications.waset.org/abstracts/15654.pdf | Downloads: 366
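The two defects named in the abstract map onto two standard operations: median filtering for channel noise and inpainting for lost blocks. A sketch, assuming lost blocks arrive as zero-valued pixels (that marking convention is our assumption, not the paper's):

```python
import cv2
import numpy as np

img = cv2.imread("evidence.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
denoised = cv2.medianBlur(img, 3)               # suppress channel noise
mask = np.uint8(denoised == 0) * 255            # assumed marker of lost blocks
restored = cv2.inpaint(denoised, mask, 3, cv2.INPAINT_TELEA)
```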
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20tumor" title=" eye tumor"> eye tumor</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20operators" title=" canny operators"> canny operators</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposed" title=" superimposed"> superimposed</a> </p> <a href="https://publications.waset.org/abstracts/30750/mage-fusion-based-eye-tumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30750.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2377</span> Using Deep Learning in Lyme Disease Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teja%20Koduru">Teja Koduru</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease is therefore crucial to early diagnosis and treatment. In this study, we aim to utilize deep learning frameworks including Tensorflow and Keras to create deep convolutional neural networks (DCNN) to detect images of acute Lyme Disease from images of erythema migrans. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images of EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, resulting in a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non EM. Finally, this model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lyme" title="Lyme">Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=untreated%20Lyme" title=" untreated Lyme"> untreated Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=erythema%20migrans%20rash" title=" erythema migrans rash"> erythema migrans rash</a>, <a href="https://publications.waset.org/abstracts/search?q=EM%20rash" title=" EM rash"> EM rash</a> </p> <a href="https://publications.waset.org/abstracts/135383/using-deep-learning-in-lyme-disease-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135383.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">240</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2376</span> Clustering-Based Detection of Alzheimer's Disease Using Brain MR Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Matoug">Sofia Matoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Amr%20Abdel-Dayem"> Amr Abdel-Dayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a comprehensive survey of recent research studies to segment and classify brain MR (magnetic resonance) images in order to detect significant changes to brain ventricles. The paper also presents a general framework for detecting regions that atrophy, which can help neurologists in detecting and staging Alzheimer. Furthermore, a prototype was implemented to segment brain MR images in order to extract the region of interest (ROI) and then, a classifier was employed to differentiate between normal and abnormal brain tissues. Experimental results show that the proposed scheme can provide a reliable second opinion that neurologists can benefit from. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alzheimer" title="Alzheimer">Alzheimer</a>, <a href="https://publications.waset.org/abstracts/search?q=brain%20images" title=" brain images"> brain images</a>, <a href="https://publications.waset.org/abstracts/search?q=classification%20techniques" title=" classification techniques"> classification techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=Magnetic%20Resonance%20Images%20MRI" title=" Magnetic Resonance Images MRI"> Magnetic Resonance Images MRI</a> </p> <a href="https://publications.waset.org/abstracts/49930/clustering-based-detection-of-alzheimers-disease-using-brain-mr-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2375</span> Subjective versus Objective Assessment for Magnetic Resonance (MR) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heshalini%20Rajagopal">Heshalini Rajagopal</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Sze%20Chow"> Li Sze Chow</a>, <a href="https://publications.waset.org/abstracts/search?q=Raveendran%20Paramesran"> Raveendran Paramesran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Magnetic Resonance Imaging (MRI) is one of the most important medical imaging modality. Subjective assessment of the image quality is regarded as the gold standard to evaluate MR images. In this study, a database of 210 MR images which contains ten reference images and 200 distorted images is presented. The reference images were distorted with four types of distortions: Rician Noise, Gaussian White Noise, Gaussian Blur and DCT compression. The 210 images were assessed by ten subjects. The subjective scores were presented in Difference Mean Opinion Score (DMOS). The DMOS values were compared with four FR-IQA metrics. We have used Pearson Linear Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) to validate the DMOS values. The high correlation values of PLCC and SROCC shows that the DMOS values are close to the objective FR-IQA metrics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20resonance%20%28MR%29%20images" title="medical resonance (MR) images">medical resonance (MR) images</a>, <a href="https://publications.waset.org/abstracts/search?q=difference%20mean%20opinion%20score%20%28DMOS%29" title=" difference mean opinion score (DMOS)"> difference mean opinion score (DMOS)</a>, <a href="https://publications.waset.org/abstracts/search?q=full%20reference%20image%20quality%20assessment%20%28FR-IQA%29" title=" full reference image quality assessment (FR-IQA)"> full reference image quality assessment (FR-IQA)</a> </p> <a href="https://publications.waset.org/abstracts/39606/subjective-versus-objective-assessment-for-magnetic-resonance-mr-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">458</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2374</span> Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haoqi%20Gao">Haoqi Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Koichi%20Ogawara"> Koichi Ogawara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal vascular segmentation of color fundus is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems. Early screening of fundus diseases has great value for clinical medical diagnosis. The traditional methods depend on the experience of the doctor, which is time-consuming, labor-intensive, and inefficient. Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new Generative Adversarial Network based on CycleGAN for retinal fundus images. This method can generate not only synthetic fundus images but also generate corresponding segmentation masks, which has certain application value and challenge in computer vision and computer graphics. In the results, we evaluate our proposed method from both quantitative and qualitative. For generated segmented images, our method achieves dice coefficient of 0.81 and PR of 0.89 on DRIVE dataset. For generated synthetic fundus images, we use ”Toy Experiment” to verify the state-of-the-art performance of our method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20vascular%20segmentations" title="retinal vascular segmentations">retinal vascular segmentations</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20ad-versarial%20network" title=" generative ad-versarial network"> generative ad-versarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclegan" title=" cyclegan"> cyclegan</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a> </p> <a href="https://publications.waset.org/abstracts/110591/generative-adversarial-network-for-bidirectional-mappings-between-retinal-fundus-images-and-vessel-segmented-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2373</span> Timescape-Based Panoramic View for Historic Landmarks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Ali">H. Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Whitehead"> A. Whitehead</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents a unique challenge of dealing with images that span a long period, from the 1800’s up to the present. This work presents the concept of temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Utilization of this panorama requires a collection of hundreds of thousands of landmark images from the Internet comprised of historic images and modern images of the digital age. These images have to be classified for subset selection to keep the more suitable images that chronologically document a landmark’s history. Processing of historic images captured using older analog technology under various different capturing conditions represents a big challenge when they have to be used with modern digital images. Successful processing of historic images to prepare them for next steps of temporal panorama creation represents an active contribution in cultural heritage preservation through the fulfillment of one of UNESCO goals in preservation and displaying famous worldwide landmarks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cultural%20heritage" title="cultural heritage">cultural heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title=" image registration"> image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subset%20selection" title=" image subset selection"> image subset selection</a>, <a href="https://publications.waset.org/abstracts/search?q=registered%20image%20similarity" title=" registered image similarity"> registered image similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20panorama" title=" temporal panorama"> temporal panorama</a>, <a href="https://publications.waset.org/abstracts/search?q=timescapes" title=" timescapes"> timescapes</a> </p> <a href="https://publications.waset.org/abstracts/101930/timescape-based-panoramic-view-for-historic-landmarks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2372</span> Security Analysis and Implementation of Achterbahn-128 for Images Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aissa%20Belmeguenai">Aissa Belmeguenai</a>, <a href="https://publications.waset.org/abstracts/search?q=Oulaya%20Berrak"> Oulaya Berrak</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaled%20Mansouri"> Khaled Mansouri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, efficiency implementation and security evaluation of the keystream generator of Achterbahn-128 for images encryption and decryption was introduced. The implementation for this simulated project is written with MATLAB.7.5. First of all, two different original images are used to validate the proposed design. The developed program is used to transform the original images data into digital image file. Finally, the proposed program is implemented to encrypt and decrypt images data. Several tests are done to prove the design performance, including visual tests and security evaluation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Achterbahn-128" title="Achterbahn-128">Achterbahn-128</a>, <a href="https://publications.waset.org/abstracts/search?q=keystream%20generator" title=" keystream generator"> keystream generator</a>, <a href="https://publications.waset.org/abstracts/search?q=stream%20cipher" title=" stream cipher"> stream cipher</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title=" image encryption"> image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20analysis" title=" security analysis"> security analysis</a> </p> <a href="https://publications.waset.org/abstracts/38107/security-analysis-and-implementation-of-achterbahn-128-for-images-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2371</span> Integral Image-Based Differential Filters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Inoue">Kohei Inoue</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenji%20Hara"> Kenji Hara</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiichi%20Urahama"> Kiichi Urahama</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We describe a relationship between integral images and differential images. First, we derive a simple difference filter from conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are the same, and therefore, we can execute the integration and differentiation by solving the simultaneous equations. We applied the relationship to an image fusion problem, and experimentally verified the effectiveness of the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=integral%20images" title="integral images">integral images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20images" title=" differential images"> differential images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20filters" title=" differential filters"> differential filters</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title=" image fusion"> image fusion</a> </p> <a href="https://publications.waset.org/abstracts/8531/integral-image-based-differential-filters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2370</span> Multimodal Database of Retina Images for Africa: The First Open Access Digital Repository for Retina Images in Sub Saharan Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simon%20Arunga">Simon Arunga</a>, <a href="https://publications.waset.org/abstracts/search?q=Teddy%20Kwaga"> Teddy Kwaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Rita%20Kageni"> Rita Kageni</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Gichangi"> Michael Gichangi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nyawira%20Mwangi"> Nyawira Mwangi</a>, <a href="https://publications.waset.org/abstracts/search?q=Fred%20Kagwa"> Fred Kagwa</a>, <a href="https://publications.waset.org/abstracts/search?q=Rogers%20Mwavu"> Rogers Mwavu</a>, <a href="https://publications.waset.org/abstracts/search?q=Amos%20Baryashaba"> Amos Baryashaba</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20F.%20Nakayama"> Luis F. Nakayama</a>, <a href="https://publications.waset.org/abstracts/search?q=Katharine%20Morley"> Katharine Morley</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Morley"> Michael Morley</a>, <a href="https://publications.waset.org/abstracts/search?q=Leo%20A.%20Celi"> Leo A. Celi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jessica%20Haberer"> Jessica Haberer</a>, <a href="https://publications.waset.org/abstracts/search?q=Celestino%20Obua"> Celestino Obua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: The main aim for creating the Multimodal Database of Retinal Images for Africa (MoDRIA) was to provide a publicly available repository of retinal images for responsible researchers to conduct algorithm development in a bid to curb the challenges of ophthalmic artificial intelligence (AI) in Africa. Methods: Data and retina images were ethically sourced from sites in Uganda and Kenya. Data on medical history, visual acuity, ocular examination, blood pressure, and blood sugar were collected. Retina images were captured using fundus cameras (Foru3-nethra and Canon CR-Mark-1). Images were stored on a secure online database. Results: The database consists of 7,859 retinal images in portable network graphics format from 1,988 participants. 
Of these images, 18.9% were from patients with human immunodeficiency virus, 18.2% were from hypertensive patients, 12.8% from diabetic patients, and the rest from normal participants. Conclusion: Publicly available data repositories are a valuable asset in the development of AI technology. Therefore, there is a need to expand MoDRIA so as to provide larger datasets that are more representative of Sub-Saharan data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retina%20images" title="retina images">retina images</a>, <a href="https://publications.waset.org/abstracts/search?q=MoDRIA" title=" MoDRIA"> MoDRIA</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20repository" title=" image repository"> image repository</a>, <a href="https://publications.waset.org/abstracts/search?q=African%20database" title=" African database"> African database</a> </p> <a href="https://publications.waset.org/abstracts/169515/multimodal-database-of-retina-images-for-africa-the-first-open-access-digital-repository-for-retina-images-in-sub-saharan-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169515.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2369</span> Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdolvahab%20Ehsani%20Rad">Abdolvahab Ehsani Rad</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Shafry%20Mohd%20Rahim"> Mohd Shafry Mohd Rahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Alireza%20Norouzi"> Alireza Norouzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical image analysis is one of the great applications of computer image processing. Several processes are used to analyse medical images, among which segmentation is one of the most challenging and important steps. In this paper, a segmentation method is proposed for dental radiograph images. A thresholding method is applied to simplify the images, and a morphological binary-image opening technique is performed to eliminate unnecessary regions. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph images. Segmentation is then performed by applying the level set method to each extracted image. The experimental results, with 90% accuracy, demonstrate that the proposed method achieves high accuracy and promising results. 
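<p class="card-text">A hedged sketch of the pre-segmentation steps described above, assuming OpenCV and an 8-bit grayscale radiograph in a hypothetical file "radiograph.png"; the level set evolution itself would then run on each extracted sub-image:</p> <pre><code>import cv2
import numpy as np

gray = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)

# Simplify the image with global Otsu thresholding.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening eliminates small, unnecessary regions.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Vertical integral projection: low-valued columns mark the gaps
# between adjacent teeth (the 0.3 cut-off is an assumed heuristic).
v_proj = opened.sum(axis=0)
gap_cols = np.flatnonzero(0.3 * v_proj.mean() > v_proj)

# The horizontal projection (row sums) can similarly separate the
# upper and lower jaws before per-tooth extraction.
h_proj = opened.sum(axis=1)
</code></pre>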
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=integral%20production" title="integral production">integral production</a>, <a href="https://publications.waset.org/abstracts/search?q=level%20set%20method" title=" level set method"> level set method</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20operation" title=" morphological operation"> morphological operation</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/3681/level-set-and-morphological-operation-techniques-in-application-of-dental-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3681.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">317</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2368</span> An Algorithm for Removal of Noise from X-Ray Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sajidullah%20Khan">Sajidullah Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Najeeb%20Ullah"> Najeeb Ullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Yin%20Chai"> Wang Yin Chai</a>, <a href="https://publications.waset.org/abstracts/search?q=Chai%20Soo%20See"> Chai Soo See</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an approach to remove impulse and Poisson noise from X-ray images. Many filters have been used for impulse noise removal from color and gray scale images with their own strengths and weaknesses but X-ray images contain Poisson noise and unfortunately there is no intelligent filter which can detect impulse and Poisson noise from X-ray images. Our proposed filter uses the upgraded layer discrimination approach to detect both Impulse and Poisson noise corrupted pixels in X-ray images and then restores only those detected pixels with a simple efficient and reliable one line equation. Our Proposed algorithms are very effective and much more efficient than all existing filters used only for Impulse noise removal. The proposed method uses a new powerful and efficient noise detection method to determine whether the pixel under observation is corrupted or noise free. Results from computer simulations are used to demonstrate pleasing performance of our proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=X-ray%20image%20de-noising" title="X-ray image de-noising">X-ray image de-noising</a>, <a href="https://publications.waset.org/abstracts/search?q=impulse%20noise" title=" impulse noise"> impulse noise</a>, <a href="https://publications.waset.org/abstracts/search?q=poisson%20noise" title=" poisson noise"> poisson noise</a>, <a href="https://publications.waset.org/abstracts/search?q=PRWF" title=" PRWF"> PRWF</a> </p> <a href="https://publications.waset.org/abstracts/54256/an-algorithm-for-removal-of-noise-from-x-ray-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54256.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">383</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2367</span> Implementation of Achterbahn-128 for Images Encryption and Decryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aissa%20Belmeguenai">Aissa Belmeguenai</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaled%20Mansouri"> Khaled Mansouri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, an efficient implementation of Achterbahn-128 for images encryption and decryption was introduced. The implementation for this simulated project is written by MATLAB.7.5. At first two different original images are used for validate the proposed design. Then our developed program was used to transform the original images data into image digits file. Finally, we used our implemented program to encrypt and decrypt images data. Several tests are done for proving the design performance including visual tests and security analysis; we discuss the security analysis of the proposed image encryption scheme including some important ones like key sensitivity analysis, key space analysis, and statistical attacks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Achterbahn-128" title="Achterbahn-128">Achterbahn-128</a>, <a href="https://publications.waset.org/abstracts/search?q=stream%20cipher" title=" stream cipher"> stream cipher</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title=" image encryption"> image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20analysis" title=" security analysis"> security analysis</a> </p> <a href="https://publications.waset.org/abstracts/20401/implementation-of-achterbahn-128-for-images-encryption-and-decryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20401.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">532</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2366</span> Development of Web-Based Iceberg Detection Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Kavya%20Sri">A. 
Kavya Sri</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Sai%20Vineela"> K. Sai Vineela</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Vanitha"> R. Vanitha</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Rohith"> S. Rohith</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Large pieces of ice that break from glaciers are known as icebergs. The threat that icebergs pose to navigation, offshore oil and gas production services, and underwater pipelines makes their detection crucial. In this project, an automated iceberg tracking method using deep learning techniques and satellite images of icebergs is developed. With a temporal resolution of 12 days and a spatial resolution of 20 m, Sentinel-1 (SAR) images can be used to track iceberg drift over the Southern Ocean. In contrast to multispectral images, SAR images can be used for analysis in all meteorological conditions. This project develops a web-based graphical user interface to detect and track icebergs using Sentinel-1 images. The movement of the icebergs is tracked from temporal images based on their latitude and longitude values and by comparing the center and area of all detected icebergs. Accuracy is tested using precision and recall measures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=synthetic%20aperture%20radar%20%28SAR%29" title="synthetic aperture radar (SAR)">synthetic aperture radar (SAR)</a>, <a href="https://publications.waset.org/abstracts/search?q=icebergs" title=" icebergs"> icebergs</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20resolution" title=" spatial resolution"> spatial resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20resolution" title=" temporal resolution"> temporal resolution</a> </p> <a href="https://publications.waset.org/abstracts/162740/development-of-web-based-iceberg-detection-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2365</span> Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chaitanya%20Chawla">Chaitanya Chawla</a>, <a href="https://publications.waset.org/abstracts/search?q=Divya%20Panwar"> Divya Panwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Gurneesh%20Singh%20Anand"> Gurneesh Singh Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20S%20Bhatia"> M. P. S Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a deep-learning mechanism for classifying computer-generated images and photographic images. The proposed method incorporates a convolutional layer capable of automatically learning the correlation between neighbouring pixels. 
In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than on the structural features of the image. The proposed layer is particularly designed to subdue an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The method was assessed on recent natural and computer-generated images, and it was concluded that it performs better than current state-of-the-art methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title="image forensics">image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/95266/classification-of-computer-generated-images-from-photographic-images-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2364</span> Timing Equation for Capturing Satellite Thermal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Toufic%20Abd%20El-Latif%20Sadek">Toufic Abd El-Latif Sadek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The asphalt object represents asphalted areas, such as roads. The best original thermal-image data occur at specific times during the days of the year, avoiding the gaps in time in which different objects give close or identical brightness; seven sample objects were used: asphalt, concrete, metal, rock, dry soil, vegetation, and water. A general timing equation for capturing satellite thermal images at different locations has been found in this study; it depends on the fixed times of sunrise and sunset: Capture Time T<sub>cap</sub> = (T<sub>M</sub> × T<sub>SR</sub>) ± T<sub>S</sub>. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=asphalt" title="asphalt">asphalt</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite" title=" satellite"> satellite</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20images" title=" thermal images"> thermal images</a>, <a href="https://publications.waset.org/abstracts/search?q=timing%20equation" title=" timing equation"> timing equation</a> </p> <a href="https://publications.waset.org/abstracts/51769/timing-equation-for-capturing-satellite-thermal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51769.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">350</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2363</span> Clustering Based Level Set Evaluation for Low Contrast Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bikshalu%20Kalagadda">Bikshalu Kalagadda</a>, <a href="https://publications.waset.org/abstracts/search?q=Srikanth%20Rangu"> Srikanth Rangu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The important object of images segmentation is to extract objects with respect to some input features. One of the important methods for image segmentation is Level set method. Generally medical images and synthetic images with low contrast of pixel profile, for such images difficult to locate interested features in images. In conventional level set function, develops irregularity during its process of evaluation of contour of objects, this destroy the stability of evolution process. For this problem a remedy is proposed, a new hybrid algorithm is Clustering Level Set Evolution. Kernel fuzzy particles swarm optimization clustering with the Distance Regularized Level Set (DRLS) and Selective Binary, and Gaussian Filtering Regularized Level Set (SBGFRLS) methods are used. The ability of identifying different regions becomes easy with improved speed. Efficiency of the modified method can be evaluated by comparing with the previous method for similar specifications. Comparison can be carried out by considering medical and synthetic images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=level%20set%20function" title=" level set function"> level set function</a>, <a href="https://publications.waset.org/abstracts/search?q=re-initialization" title=" re-initialization"> re-initialization</a>, <a href="https://publications.waset.org/abstracts/search?q=Kernel%20fuzzy" title=" Kernel fuzzy"> Kernel fuzzy</a>, <a href="https://publications.waset.org/abstracts/search?q=swarm%20optimization" title=" swarm optimization"> swarm optimization</a> </p> <a href="https://publications.waset.org/abstracts/65723/clustering-based-level-set-evaluation-for-low-contrast-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65723.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2362</span> Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images to a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage, by simply removing dissimilar images quickly. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance that recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as being the same but rather dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI based on the gamma binary similarity distance and a modified PMMBI model based on a similarity distance that does distinguish between an image and its inverse as being dissimilar. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binary%20image" title="binary image">binary image</a>, <a href="https://publications.waset.org/abstracts/search?q=dissimilarity%20detection" title=" dissimilarity detection"> dissimilarity detection</a>, <a href="https://publications.waset.org/abstracts/search?q=probabilistic%20matching%20model%20for%20binary%20images" title=" probabilistic matching model for binary images"> probabilistic matching model for binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20mapping" title=" image mapping"> image mapping</a> </p> <a href="https://publications.waset.org/abstracts/113778/comparative-analysis-of-dissimilarity-detection-between-binary-images-based-on-equivalency-and-non-equivalency-of-image-inversion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/113778.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=79">79</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=80">80</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20images&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> 
<ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>