CINXE.COM

Search results for: fundus images

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: fundus images</title> <meta name="description" content="Search results for: fundus images"> <meta name="keywords" content="fundus images"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="fundus images" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="fundus images"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2411</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: fundus images</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2411</span> Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haoqi%20Gao">Haoqi Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Koichi%20Ogawara"> Koichi Ogawara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal vascular segmentation of color fundus is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems. Early screening of fundus diseases has great value for clinical medical diagnosis. The traditional methods depend on the experience of the doctor, which is time-consuming, labor-intensive, and inefficient. Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new Generative Adversarial Network based on CycleGAN for retinal fundus images. This method can generate not only synthetic fundus images but also generate corresponding segmentation masks, which has certain application value and challenge in computer vision and computer graphics. In the results, we evaluate our proposed method from both quantitative and qualitative. For generated segmented images, our method achieves dice coefficient of 0.81 and PR of 0.89 on DRIVE dataset. For generated synthetic fundus images, we use ”Toy Experiment” to verify the state-of-the-art performance of our method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20vascular%20segmentations" title="retinal vascular segmentations">retinal vascular segmentations</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20ad-versarial%20network" title=" generative ad-versarial network"> generative ad-versarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclegan" title=" cyclegan"> cyclegan</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a> </p> <a href="https://publications.waset.org/abstracts/110591/generative-adversarial-network-for-bidirectional-mappings-between-retinal-fundus-images-and-vessel-segmented-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2410</span> Comparison of Vessel Detection in Standard vs Ultra-WideField Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maher%20un%20Nisa">Maher un Nisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahsan%20Khawaja"> Ahsan Khawaja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal imaging with Ultra-WideField (UWF) view technology has opened up new avenues in the field of retinal pathology detection. Recent developments in retinal imaging such as Optos California Imaging Device helps in acquiring high resolution images of the retina to help the Ophthalmologists in diagnosing and analyzing eye related pathologies more accurately. This paper investigates the acquired retinal details by comparing vessel detection in standard 450 color fundus images with the state of the art 2000 UWF retinal images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20fundus" title="color fundus">color fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title=" retinal images"> retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=ultra-widefield" title=" ultra-widefield"> ultra-widefield</a>, <a href="https://publications.waset.org/abstracts/search?q=vessel%20detection" title=" vessel detection"> vessel detection</a> </p> <a href="https://publications.waset.org/abstracts/33520/comparison-of-vessel-detection-in-standard-vs-ultra-widefield-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">448</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2409</span> Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Biran">A. Biran</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sobhe%20Bidari"> P. Sobhe Bidari</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Raahemifar"> K. Raahemifar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina called hemorrhages and exudates. Early diagnosis of DR prevents from blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques including Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filter and thresholding. Since Optic Disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases by using MATLAB codes. The results show that this method is perfectly capable of detecting hard exudates and the highly probable soft exudates. It is also capable of detecting the hemorrhages and distinguishing them from blood vessels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus" title=" fundus"> fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=CHT" title=" CHT"> CHT</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=hemorrhages" title=" hemorrhages"> hemorrhages</a> </p> <a href="https://publications.waset.org/abstracts/52591/automatic-method-for-exudates-and-hemorrhages-detection-from-fundus-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2408</span> Attention Based Fully Convolutional Neural Network for Simultaneous Detection and Segmentation of Optic Disc in Retinal Fundus Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sandip%20Sadhukhan">Sandip Sadhukhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Arpita%20Sarkar"> Arpita Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Debprasad%20Sinha"> Debprasad Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Goutam%20Kumar%20Ghorai"> Goutam Kumar Ghorai</a>, <a href="https://publications.waset.org/abstracts/search?q=Gautam%20Sarkar"> Gautam Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashis%20K.%20Dhara"> Ashis K. Dhara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate segmentation of the optic disc is very important for computer-aided diagnosis of several ocular diseases such as glaucoma, diabetic retinopathy, and hypertensive retinopathy. The paper presents an accurate and fast optic disc detection and segmentation method using an attention based fully convolutional network. The network is trained from scratch using the fundus images of extended MESSIDOR database and the trained model is used for segmentation of optic disc. The false positives are removed based on morphological operation and shape features. The result is evaluated using three-fold cross-validation on six public fundus image databases such as DIARETDB0, DIARETDB1, DRIVE, AV-INSPIRE, CHASE DB1 and MESSIDOR. The attention based fully convolutional network is robust and effective for detection and segmentation of optic disc in the images affected by diabetic retinopathy and it outperforms existing techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention-based%20fully%20convolutional%20network" title="attention-based fully convolutional network">attention-based fully convolutional network</a>, <a href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation" title=" optic disc detection and segmentation"> optic disc detection and segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20fundus%20image" title=" retinal fundus image"> retinal fundus image</a>, <a href="https://publications.waset.org/abstracts/search?q=screening%20of%20ocular%20diseases" title=" screening of ocular diseases"> screening of ocular diseases</a> </p> <a href="https://publications.waset.org/abstracts/112293/attention-based-fully-convolutional-neural-network-for-simultaneous-detection-and-segmentation-of-optic-disc-in-retinal-fundus-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2407</span> Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rita%20Magdalena">Rita Magdalena</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20K.%20Caecar%20Pratiwi"> N. K. Caecar Pratiwi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunendah%20Nur%20Fuadah"> Yunendah Nur Fuadah</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Saidah"> Sofia Saidah</a>, <a href="https://publications.waset.org/abstracts/search?q=Bima%20Sakti"> Bima Sakti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vessel segmentation of retinal fundus is important for biomedical sciences in diagnosing ailments related to the eye. Segmentation can simplify medical experts in diagnosing retinal fundus image state. Therefore, in this study, we designed a software using MATLAB which enables the segmentation of the retinal blood vessels on retinal fundus images. There are two main steps in the process of segmentation. The first step is image preprocessing that aims to improve the quality of the image to be optimum segmented. The second step is the image segmentation in order to perform the extraction process to retrieve the retina’s blood vessel from the eye fundus image. The image segmentation methods that will be analyzed in this study are Morphology Operation, Discrete Wavelet Transform and combination of both. The amount of data that used in this project is 40 for the retinal image and 40 for manually segmentation image. After doing some testing scenarios, the average accuracy for Morphology Operation method is 88.46 % while for Discrete Wavelet Transform is 89.28 %. By combining the two methods mentioned in later, the average accuracy was increased to 89.53 %. The result of this study is an image processing system that can segment the blood vessels in retinal fundus with high accuracy and low computation time. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20retina" title=" fundus retina"> fundus retina</a>, <a href="https://publications.waset.org/abstracts/search?q=morphology%20operation" title=" morphology operation"> morphology operation</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=vessel" title=" vessel"> vessel</a> </p> <a href="https://publications.waset.org/abstracts/105620/morphology-operation-and-discrete-wavelet-transform-for-blood-vessels-segmentation-in-retina-fundus" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2406</span> Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Biran">A. Biran</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sobhe%20Bidari"> P. Sobhe Bidari</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Almazroe"> A. Almazroe</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Lakshminarayanan"> V. Lakshminarayanan</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Raahemifar"> K. Raahemifar </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy (DR) is a severe retinal disease which is caused by diabetes mellitus. It leads to blindness when it progress to proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages and hard exudates. In this paper, an automatic algorithm for detection of DR has been proposed. The algorithm is based on combination of several image processing techniques including Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filter and thresholding. Also, Support Vector Machine (SVM) Classifier is used to classify retinal images to normal or abnormal cases including non-proliferative or proliferative DR. The proposed method has been tested on images selected from Structured Analysis of the Retinal (STARE) database using MATLAB code. The method is perfectly able to detect DR. The sensitivity specificity and accuracy of this approach are 90%, 87.5%, and 91.4% respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=STARE" title=" STARE"> STARE</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/49824/automatic-detection-and-classification-of-diabetic-retinopathy-using-retinal-fundus-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49824.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2405</span> Glaucoma Detection in Retinal Tomography Using the Vision Transformer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sushish%20Baral">Sushish Baral</a>, <a href="https://publications.waset.org/abstracts/search?q=Pratibha%20Joshi"> Pratibha Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yaman%20Maharjan"> Yaman Maharjan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Glaucoma is a chronic eye condition that causes vision loss that is irreversible. Early detection and treatment are critical to prevent vision loss because it can be asymptomatic. For the identification of glaucoma, multiple deep learning algorithms are used. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire extremely expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. The aforementioned statements inspire this thesis to look at transformer-based solutions and investigate the viability of adopting transformer-based network designs for glaucoma detection. Using retinal fundus images of the optic nerve head to develop a viable algorithm to assess the severity of glaucoma necessitates a large number of well-curated images. Initially, data is generated by augmenting ocular pictures. After that, the ocular images are pre-processed to make them ready for further processing. The system is trained using pre-processed images, and it classifies the input images as normal or glaucoma based on the features retrieved during training. The Vision Transformer (ViT) architecture is well suited to this situation, as it allows the self-attention mechanism to utilise structural modeling. Extensive experiments are run on the common dataset, and the results are thoroughly validated and visualized. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=glaucoma" title="glaucoma">glaucoma</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformer" title=" vision transformer"> vision transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20architectures" title=" convolutional architectures"> convolutional architectures</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20fundus%20images" title=" retinal fundus images"> retinal fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/141224/glaucoma-detection-in-retinal-tomography-using-the-vision-transformer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141224.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">191</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2404</span> Similarity Based Retrieval in Case Based Reasoning for Analysis of Medical Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Dasgupta">M. Dasgupta</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Banerjee"> S. Banerjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Content Based Image Retrieval (CBIR) coupled with Case Based Reasoning (CBR) is a paradigm that is becoming increasingly popular in the diagnosis and therapy planning of medical ailments utilizing the digital content of medical images. This paper presents a survey of some of the promising approaches used in the detection of abnormalities in retina images as well in mammographic screening and detection of regions of interest in MRI scans of the brain. We also describe our proposed algorithm to detect hard exudates in fundus images of the retina of Diabetic Retinopathy patients. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=case%20based%20reasoning" title="case based reasoning">case based reasoning</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=retina%20image" title=" retina image"> retina image</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20based%20retrieval" title=" similarity based retrieval"> similarity based retrieval</a> </p> <a href="https://publications.waset.org/abstracts/2992/similarity-based-retrieval-in-case-based-reasoning-for-analysis-of-medical-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2992.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2403</span> Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Goutam%20Kumar%20Ghorai">Goutam Kumar Ghorai</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandip%20Sadhukhan"> Sandip Sadhukhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Arpita%20Sarkar"> Arpita Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Debprasad%20Sinha"> Debprasad Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Sarkar"> G. Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashis%20K.%20Dhara"> Ashis K. Dhara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetes mellitus is one of the most common chronic diseases in all countries and continues to increase in numbers significantly. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population. Therefore, its early diagnosis is of utmost importance towards preventing progression towards imminent irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, insufficient number of ophthalmologists, and lack of networking between physicians, diabetologists and ophthalmologists. A few diabetic patients often visit a healthcare facility for their general checkup, but their eye condition remains largely undetected until the patient is symptomatic. This work aims to focus on the design and development of a fully automated intelligent decision system for screening retinal fundus images towards detection of the pathophysiology caused by microaneurysm in the early stage of the diseases. Automated detection of microaneurysm is a challenging problem due to the variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed aconvolutional neural network for efficient detection of microaneurysm. A loss function is also developed to handle severe class imbalance due to very small size of microaneurysms compared to background. The network is able to locate the salient region containing microaneurysms in case of noisy images captured by non-mydriatic cameras. The ground truth of microaneurysms is created by expert ophthalmologists for MESSIDOR database as well as private database, collected from Indian patients. The network is trained from scratch using the fundus images of MESSIDOR database. The proposed method is evaluated on DIARETDB1 and the private database. The method is successful in detection of microaneurysms for dilated and non-dilated types of fundus images acquired from different medical centres. The proposed algorithm could be used for development of AI based affordable and accessible system, to provide service at grass root-level primary healthcare units spread across the country to cater to the need of the rural people unaware of the severe impact of DR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20fundus%20image" title="retinal fundus image">retinal fundus image</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20convolutional%20neural%20network" title=" deep convolutional neural network"> deep convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20detection%20of%20microaneurysms" title=" early detection of microaneurysms"> early detection of microaneurysms</a>, <a href="https://publications.waset.org/abstracts/search?q=screening%20of%20diabetic%20retinopathy" title=" screening of diabetic retinopathy"> screening of diabetic retinopathy</a> </p> <a href="https://publications.waset.org/abstracts/112349/deep-convolutional-neural-network-for-detection-of-microaneurysms-in-retinal-fundus-images-at-early-stage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2402</span> Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shu-Min%20Tsao">Shu-Min Tsao</a>, <a href="https://publications.waset.org/abstracts/search?q=Chung-Ming%20Lo"> Chung-Ming Lo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shao-Chun%20Chen"> Shao-Chun Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most diabetes patients tend to suffer from its complication of retina diseases. Therefore, early detection and early treatment are important. In clinical examinations, using color fundus image was the most convenient and available examination method. According to the exudates appeared in the retinal image, the status of retina can be confirmed. However, the routine screening of diabetic retinopathy by color fundus images would bring time-consuming tasks to physicians. This study thus proposed a computer-aided exudate diagnosis for the screening of diabetic retinopathy. After removing vessels and optic disc in the retinal image, six quantitative features including region number, region area, and gray-scale values etc… were extracted from the remaining regions for classification. As results, all six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying the retinal images into normal and diabetic retinopathy achieved 82%. Based on this system, the clinical workload could be reduced. The examination procedure may also be improved to be more efficient. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-aided%20diagnosis" title="computer-aided diagnosis">computer-aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=exudate" title=" exudate"> exudate</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/70086/computer-aided-exudate-diagnosis-for-the-screening-of-diabetic-retinopathy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70086.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2401</span> Multimodal Database of Retina Images for Africa: The First Open Access Digital Repository for Retina Images in Sub Saharan Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simon%20Arunga">Simon Arunga</a>, <a href="https://publications.waset.org/abstracts/search?q=Teddy%20Kwaga"> Teddy Kwaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Rita%20Kageni"> Rita Kageni</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Gichangi"> Michael Gichangi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nyawira%20Mwangi"> Nyawira Mwangi</a>, <a href="https://publications.waset.org/abstracts/search?q=Fred%20Kagwa"> Fred Kagwa</a>, <a href="https://publications.waset.org/abstracts/search?q=Rogers%20Mwavu"> Rogers Mwavu</a>, <a href="https://publications.waset.org/abstracts/search?q=Amos%20Baryashaba"> Amos Baryashaba</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20F.%20Nakayama"> Luis F. Nakayama</a>, <a href="https://publications.waset.org/abstracts/search?q=Katharine%20Morley"> Katharine Morley</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Morley"> Michael Morley</a>, <a href="https://publications.waset.org/abstracts/search?q=Leo%20A.%20Celi"> Leo A. Celi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jessica%20Haberer"> Jessica Haberer</a>, <a href="https://publications.waset.org/abstracts/search?q=Celestino%20Obua"> Celestino Obua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: The main aim for creating the Multimodal Database of Retinal Images for Africa (MoDRIA) was to provide a publicly available repository of retinal images for responsible researchers to conduct algorithm development in a bid to curb the challenges of ophthalmic artificial intelligence (AI) in Africa. Methods: Data and retina images were ethically sourced from sites in Uganda and Kenya. Data on medical history, visual acuity, ocular examination, blood pressure, and blood sugar were collected. Retina images were captured using fundus cameras (Foru3-nethra and Canon CR-Mark-1). Images were stored on a secure online database. Results: The database consists of 7,859 retinal images in portable network graphics format from 1,988 participants. Images from patients with human immunodeficiency virus were 18.9%, 18.2% of images were from hypertensive patients, 12.8% from diabetic patients, and the rest from normal’ participants. Conclusion: Publicly available data repositories are a valuable asset in the development of AI technology. Therefore, is a need for the expansion of MoDRIA so as to provide larger datasets that are more representative of Sub-Saharan data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retina%20images" title="retina images">retina images</a>, <a href="https://publications.waset.org/abstracts/search?q=MoDRIA" title=" MoDRIA"> MoDRIA</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20repository" title=" image repository"> image repository</a>, <a href="https://publications.waset.org/abstracts/search?q=African%20database" title=" African database"> African database</a> </p> <a href="https://publications.waset.org/abstracts/169515/multimodal-database-of-retina-images-for-africa-the-first-open-access-digital-repository-for-retina-images-in-sub-saharan-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169515.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2400</span> Digital Retinal Images: Background and Damaged Areas Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20A.%20Gani">Eman A. Gani</a>, <a href="https://publications.waset.org/abstracts/search?q=Loay%20E.%20George"> Loay E. George</a>, <a href="https://publications.waset.org/abstracts/search?q=Faisel%20G.%20Mohammed"> Faisel G. Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamal%20H.%20Sager"> Kamal H. Sager</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital retinal images are more appropriate for automatic screening of diabetic retinopathy systems. Unfortunately, a significant percentage of these images are poor quality that hinders further analysis due to many factors (such as patient movement, inadequate or non-uniform illumination, acquisition angle and retinal pigmentation). The retinal images of poor quality need to be enhanced before the extraction of features and abnormalities. So, the segmentation of retinal image is essential for this purpose, the segmentation is employed to smooth and strengthen image by separating the background and damaged areas from the overall image thus resulting in retinal image enhancement and less processing time. In this paper, methods for segmenting colored retinal image are proposed to improve the quality of retinal image diagnosis. The methods generate two segmentation masks; i.e., background segmentation mask for extracting the background area and poor quality mask for removing the noisy areas from the retinal image. The standard retinal image databases DIARETDB0, DIARETDB1, STARE, DRIVE and some images obtained from ophthalmologists have been used to test the validation of the proposed segmentation technique. Experimental results indicate the introduced methods are effective and can lead to high segmentation accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title="retinal images">retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20segmentation" title=" background segmentation"> background segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=damaged%20areas%20segmentation" title=" damaged areas segmentation"> damaged areas segmentation</a> </p> <a href="https://publications.waset.org/abstracts/12289/digital-retinal-images-background-and-damaged-areas-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2399</span> Novel Algorithm for Restoration of Retina Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Subbuthai">P. Subbuthai</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Muruganand"> S. Muruganand</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy is one of the complicated diseases and it is caused by the changes in the blood vessels of the retina. Extraction of retina image through Fundus camera sometimes produced poor contrast and noises. Because of this noise, detection of blood vessels in the retina is very complicated. So preprocessing is needed, in this paper, a novel algorithm is implemented to remove the noisy pixel in the retina image. The proposed algorithm is Extended Median Filter and it is applied to the green channel of the retina because green channel vessels are brighter than the background. Proposed extended median filter is compared with the existing standard median filter by performance metrics such as PSNR, MSE and RMSE. Experimental results show that the proposed Extended Median Filter algorithm gives a better result than the existing standard median filter in terms of noise suppression and detail preservation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fundus%20retina%20image" title="fundus retina image">fundus retina image</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20filter" title=" median filter"> median filter</a>, <a href="https://publications.waset.org/abstracts/search?q=microaneurysms" title=" microaneurysms"> microaneurysms</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a> </p> <a href="https://publications.waset.org/abstracts/20819/novel-algorithm-for-restoration-of-retina-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20819.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2398</span> Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Siriarchawatana">P. Siriarchawatana</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Leungchavaphongse"> K. Leungchavaphongse</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Covavisaruch"> N. Covavisaruch</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Rojananuangnit"> K. Rojananuangnit</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Boondaeng"> P. Boondaeng</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Panyayingyong"> N. Panyayingyong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included. Participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using edge detection technique. From our finding, mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without glaucoma (0.34 <em>vs</em>. 0.32, <em>p</em> &lt; 0.05 for inferotemporal vein, 0.33 <em>vs</em>. 0.30, <em>p</em> &lt; 0.01 for inferotemporal artery, 0.34 <em>vs</em>. 0.31, <em>p </em>&lt; 0.01 for superotemporal vein, and 0.33 <em>vs</em>. 0.30, <em>p</em> &lt; 0.05 for superotemporal artery). From these results, an increase in CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=glaucoma" title="glaucoma">glaucoma</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20vessel" title=" retinal vessel"> retinal vessel</a>, <a href="https://publications.waset.org/abstracts/search?q=central%20light%20reflex" title=" central light reflex"> central light reflex</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20photograph" title=" fundus photograph"> fundus photograph</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a> </p> <a href="https://publications.waset.org/abstracts/54545/comparison-of-central-light-reflex-width-to-retinal-vessel-diameter-ratio-between-glaucoma-and-normal-eyes-by-using-edge-detection-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54545.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">325</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2397</span> Autogenous Diabetic Retinopathy Censor for Ophthalmologists - AKSHI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Asiri%20Wijesinghe">Asiri Wijesinghe</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20D.%20Kodikara"> N. D. Kodikara</a>, <a href="https://publications.waset.org/abstracts/search?q=Damitha%20Sandaruwan"> Damitha Sandaruwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Diabetic Retinopathy (DR) is a rapidly growing interrogation around the world which can be annotated by abortive metabolism of glucose that causes long-term infection in human retina. This is one of the preliminary reason of visual impairment and blindness of adults. Information on retinal pathological mutation can be recognized using ocular fundus images. In this research, we are mainly focused on resurrecting an automated diagnosis system to detect DR anomalies such as severity level classification of DR patient (Non-proliferative Diabetic Retinopathy approach) and vessel tortuosity measurement of untwisted vessels to assessment of vessel anomalies (Proliferative Diabetic Retinopathy approach). Severity classification method is obtained better results according to the precision, recall, F-measure and accuracy (exceeds 94%) in all formats of cross validation. In ROC (Receiver Operating Characteristic) curves also visualized the higher AUC (Area Under Curve) percentage (exceeds 95%). User level evaluation of severity capturing is obtained higher accuracy (85%) result and fairly better values for each evaluation measurements. Untwisted vessel detection for tortuosity measurement also carried out the good results with respect to the sensitivity (85%), specificity (89%) and accuracy (87%). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fundus%20image" title="fundus image">fundus image</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=microaneurisms" title=" microaneurisms"> microaneurisms</a>, <a href="https://publications.waset.org/abstracts/search?q=hemorrhages" title=" hemorrhages"> hemorrhages</a>, <a href="https://publications.waset.org/abstracts/search?q=tortuosity" title=" tortuosity"> tortuosity</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=optic%20disc" title=" optic disc"> optic disc</a>, <a href="https://publications.waset.org/abstracts/search?q=fovea" title=" fovea"> fovea</a> </p> <a href="https://publications.waset.org/abstracts/45959/autogenous-diabetic-retinopathy-censor-for-ophthalmologists-akshi" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45959.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">341</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2396</span> Rhetoric and Renarrative Structure of Digital Images in Trans-Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Geng">Yang Geng</a>, <a href="https://publications.waset.org/abstracts/search?q=Anqi%20Zhao"> Anqi Zhao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The misreading theory of Harold Bloom provides a new diachronic perspective as an approach to the consistency between rhetoric of digital technology, dynamic movement of digital images and uncertain meaning of text. Reinterpreting the diachroneity of 'intertextuality' in the context of misreading theory extended the range of the 'intermediality' of transmedia to the intense tension between digital images and symbolic images throughout history of images. With the analogy between six categories of revisionary ratios and six steps of digital transformation, digital rhetoric might be illustrated as a linear process reflecting dynamic, intensive relations between digital moving images and original static images. Finally, it was concluded that two-way framework of the rhetoric of transformation of digital images and reversed served as a renarrative structure to revive static images by reconnecting them with digital moving images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rhetoric" title="rhetoric">rhetoric</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20art" title=" digital art"> digital art</a>, <a href="https://publications.waset.org/abstracts/search?q=intermediality" title=" intermediality"> intermediality</a>, <a href="https://publications.waset.org/abstracts/search?q=misreading%20theory" title=" misreading theory"> misreading theory</a> </p> <a href="https://publications.waset.org/abstracts/100230/rhetoric-and-renarrative-structure-of-digital-images-in-trans-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">255</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2395</span> Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20images" title="big images">big images</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20images" title=" binary images"> binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20similarity" title=" image similarity"> image similarity</a> </p> <a href="https://publications.waset.org/abstracts/89963/quick-similarity-measurement-of-binary-images-via-probabilistic-pixel-mapping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2394</span> 3D Guided Image Filtering to Improve Quality of Short-Time Binned Dynamic PET Images Using MRI Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tabassum%20Husain">Tabassum Husain</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Peng%20Li"> Shen Peng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhaolin%20Chen"> Zhaolin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper evaluates the usability of 3D Guided Image Filtering to enhance the quality of short-time binned dynamic PET images by using MRI images. Guided image filtering is an edge-preserving filter proposed to enhance 2D images. The 3D filter is applied on 1 and 5-minute binned images. The results are compared with 15-minute binned images and the Gaussian filtering. The guided image filter enhances the quality of dynamic PET images while also preserving important information of the voxels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images" title="dynamic PET images">dynamic PET images</a>, <a href="https://publications.waset.org/abstracts/search?q=guided%20image%20filter" title=" guided image filter"> guided image filter</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20preservation%20filtering" title=" information preservation filtering"> information preservation filtering</a> </p> <a href="https://publications.waset.org/abstracts/152864/3d-guided-image-filtering-to-improve-quality-of-short-time-binned-dynamic-pet-images-using-mri-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2393</span> Reduction of Speckle Noise in Echocardiographic Images: A Survey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fathi%20Kallel">Fathi Kallel</a>, <a href="https://publications.waset.org/abstracts/search?q=Saida%20Khachira"> Saida Khachira</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Ben%20Slima"> Mohamed Ben Slima</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Ben%20Hamida"> Ahmed Ben Hamida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speckle noise is a main characteristic of cardiac ultrasound images, it corresponding to grainy appearance that degrades the image quality. For this reason, the ultrasound images are difficult to use automatically in clinical use, then treatments are required for this type of images. Then a filtering procedure of these images is necessary to eliminate the speckle noise and to improve the quality of ultrasound images which will be then segmented to extract the necessary forms that exist. In this paper, we present the importance of the pre-treatment step for segmentation. This work is applied to cardiac ultrasound images. In a first step, a comparative study of speckle filtering method will be presented and then we use a segmentation algorithm to locate and extract cardiac structures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20image%20processing" title="medical image processing">medical image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound%20images" title=" ultrasound images"> ultrasound images</a>, <a href="https://publications.waset.org/abstracts/search?q=Speckle%20noise" title=" Speckle noise"> Speckle noise</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=speckle%20filtering" title=" speckle filtering"> speckle filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=snakes" title=" snakes"> snakes</a> </p> <a href="https://publications.waset.org/abstracts/19064/reduction-of-speckle-noise-in-echocardiographic-images-a-survey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2392</span> Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emhimed%20Saffor">Emhimed Saffor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of edge detection in digital images is considered. Three methods of edge detection based on mathematical morphology algorithm were applied on two sets (Brain and Chest) CT images. 3x3 filter for first method, 5x5 filter for second method and 7x7 filter for third method under MATLAB programming environment. The results of the above-mentioned methods are subjectively evaluated. The results show these methods are more efficient and satiable for medical images, and they can be used for different other applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection "> edge detection </a> </p> <a href="https://publications.waset.org/abstracts/44926/subjective-evaluation-of-mathematical-morphology-edge-detection-on-computed-tomography-ct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2391</span> Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nidhal%20K.%20Azawi">Nidhal K. Azawi</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20M.%20Gauch"> John M. Gauch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural network to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92-98%. We also show how the removal of noninformative images together with image alignment can aid in the creation of image panoramas and other visualizations of colonoscopy images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colonoscopy%20classification" title="colonoscopy classification">colonoscopy classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20alignment" title=" image alignment"> image alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/92461/automatic-method-for-classification-of-informative-and-noninformative-images-in-colonoscopy-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2390</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noises and reducing the number of colors contained in a JPEG image. Main purpose of this project is to convert color images to monochrome images for the color-blind. We treat the crispy color images like the Tokyo subway map. Each color in the image has an important information. But for the color blinds, similar colors cannot be distinguished. If we can convert those colors to different gray values, they can distinguish them. Therefore we try to convert color images to monochrome images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2389</span> Effective Texture Features for Segmented Mammogram Images Based on Multi-Region of Interest Segmentation Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramayanam%20Suresh">Ramayanam Suresh</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Nagaraja%20Rao"> A. Nagaraja Rao</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Eswara%20Reddy"> B. Eswara Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture features of mammogram images are useful for finding masses or cancer cases in mammography, which have been used by radiologists. Textures are greatly succeeded for segmented images rather than normal images. It is necessary to perform segmentation for exclusive specification of cancer and non-cancer regions separately. Region of interest (ROI) is most commonly used technique for mammogram segmentation. Limitation of this method is that it is unable to explore segmentation for large collection of mammogram images. Therefore, this paper is proposed multi-ROI segmentation for addressing the above limitation. It supports greatly in finding the best texture features of mammogram images. Experimental study demonstrates the effectiveness of proposed work using benchmarked images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20features" title="texture features">texture features</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-ROI%20segmentation" title=" multi-ROI segmentation"> multi-ROI segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmarked%20images" title=" benchmarked images "> benchmarked images </a> </p> <a href="https://publications.waset.org/abstracts/88666/effective-texture-features-for-segmented-mammogram-images-based-on-multi-region-of-interest-segmentation-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88666.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2388</span> Optimization Query Image Using Search Relevance Re-Ranking Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20G.%20Asmitha%20Chandini">T. G. Asmitha Chandini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Web-based image search re-ranking, as an successful method to get better the results. In a query keyword, the first stair is store the images is first retrieve based on the text-based information. The user to select a query keywordimage, by using this query keyword other images are re-ranked based on their visual properties with images.Now a day to day, people projected to match images in a semantic space which is used attributes or reference classes closely related to the basis of semantic image. though, understanding a worldwide visual semantic space to demonstrate highly different images from the web is difficult and inefficient. The re-ranking images, which automatically offline part learns dissimilar semantic spaces for different query keywords. The features of images are projected into their related semantic spaces to get particular images. At the online stage, images are re-ranked by compare their semantic signatures obtained the semantic précised by the query keyword image. The query-specific semantic signatures extensively improve both the proper and efficiency of image re-ranking. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Query" title="Query">Query</a>, <a href="https://publications.waset.org/abstracts/search?q=keyword" title=" keyword"> keyword</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=re-ranking" title=" re-ranking"> re-ranking</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic" title=" semantic"> semantic</a>, <a href="https://publications.waset.org/abstracts/search?q=signature" title=" signature"> signature</a> </p> <a href="https://publications.waset.org/abstracts/28398/optimization-query-image-using-search-relevance-re-ranking-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28398.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">549</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2387</span> Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edgardo%20V.%20Gubatanga%20Jr.">Edgardo V. Gubatanga Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Joshua%20Salvacion"> Mark Joshua Salvacion</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, the presence of atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may cause absence of aerial photographs. These leave areas having terrain information but lacking aerial photographs. Intensity images can be derived from LiDAR data but they are only grayscale images. A deep learning model is developed to create a complex function in a form of a deep neural network relating the pixel values of LiDAR-derived intensity images and true-color images. This complex function can then be used to predict the true-color images of a certain area using intensity images from LiDAR data. The predicted true-color images do not necessarily need to be accurate compared to the real world. They are only intended to look realistic so that they can be used as base maps. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20LiDAR" title="aerial LiDAR">aerial LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=colorization" title=" colorization"> colorization</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20images" title=" intensity images"> intensity images</a> </p> <a href="https://publications.waset.org/abstracts/94116/application-of-deep-learning-in-colorization-of-lidar-derived-intensity-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94116.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2386</span> Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Razan%20Manofely"> Razan Manofely</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajab%20M.%20Ben%20Yousef"> Rajab M. Ben Yousef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> X-Ray images are very popular as a first tool for diagnosis. Automating the process of analysis of such images is important in order to help physician procedures. In this practice, teeth segmentation from the radiographic images and feature extraction are essential steps. The main objective of this study was to study correction preprocessing of x-rays images using local adaptive filters in order to evaluate contrast enhancement pattern in different x-rays images such as grey color and to evaluate the usage of new nonlinear approach for contrast enhancement of soft tissues in x-rays images. The data analyzed by using MatLab program to enhance the contrast within the soft tissues, the gray levels in both enhanced and unenhanced images and noise variance. The main techniques of enhancement used in this study were contrast enhancement filtering and deblurring images using the blind deconvolution algorithm. In this paper, prominent constraints are firstly preservation of image's overall look; secondly, preservation of the diagnostic content in the image and thirdly detection of small low contrast details in diagnostic content of the image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=enhancement" title="enhancement">enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=x-rays" title=" x-rays"> x-rays</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20intensity%20values" title=" pixel intensity values"> pixel intensity values</a>, <a href="https://publications.waset.org/abstracts/search?q=MatLab" title=" MatLab"> MatLab</a> </p> <a href="https://publications.waset.org/abstracts/31031/enhancement-of-x-rays-images-intensity-using-pixel-values-adjustments-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31031.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2385</span> Filtering and Reconstruction System for Grey-Level Forensic Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahd%20Aljarf">Ahd Aljarf</a>, <a href="https://publications.waset.org/abstracts/search?q=Saad%20Amin"> Saad Amin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Images are important source of information used as evidence during any investigation process. Their clarity and accuracy is essential and of the utmost importance for any investigation. Images are vulnerable to losing blocks and having noise added to them either after alteration or when the image was taken initially, therefore, having a high performance image processing system and it is implementation is very important in a forensic point of view. This paper focuses on improving the quality of the forensic images. For different reasons packets that store data can be affected, harmed or even lost because of noise. For example, sending the image through a wireless channel can cause loss of bits. These types of errors might give difficulties generally for the visual display quality of the forensic images. Two of the images problems: noise and losing blocks are covered. However, information which gets transmitted through any way of communication may suffer alteration from its original state or even lose important data due to the channel noise. Therefore, a developed system is introduced to improve the quality and clarity of the forensic images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20filtering" title="image filtering">image filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=forensic%20images" title=" forensic images"> forensic images</a> </p> <a href="https://publications.waset.org/abstracts/15654/filtering-and-reconstruction-system-for-grey-level-forensic-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2384</span> Mage Fusion Based Eye Tumor Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Ashit">Ahmed Ashit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image fusion is a significant and efficient image processing method used for detecting different types of tumors. This method has been used as an effective combination technique for obtaining high quality images that combine anatomy and physiology of an organ. It is the main key in the huge biomedical machines for diagnosing cancer such as PET-CT machine. This thesis aims to develop an image analysis system for the detection of the eye tumor. Different image processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is subtracted, to be then added to the original, results in a brighter area of interest or tumor area. The images are adjusted in order to increase the intensity of their pixels which lead to clearer and brighter images. once the images are enhanced, the edges of the images are detected using canny operators results in a segmented image comprises only of the pupil and the tumor for the abnormal images, and the pupil only for the normal images that have no tumor. The images of normal and abnormal images are collected from two sources: “Miles Research” and “Eye Cancer”. The computerized experimental results show that the developed image fusion based eye tumor detection system is capable of detecting the eye tumor and segment it to be superimposed on the original image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20tumor" title=" eye tumor"> eye tumor</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20operators" title=" canny operators"> canny operators</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposed" title=" superimposed"> superimposed</a> </p> <a href="https://publications.waset.org/abstracts/30750/mage-fusion-based-eye-tumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30750.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2383</span> Using Deep Learning in Lyme Disease Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teja%20Koduru">Teja Koduru</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease is therefore crucial to early diagnosis and treatment. In this study, we aim to utilize deep learning frameworks including Tensorflow and Keras to create deep convolutional neural networks (DCNN) to detect images of acute Lyme Disease from images of erythema migrans. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images of EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, resulting in a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non EM. Finally, this model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lyme" title="Lyme">Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=untreated%20Lyme" title=" untreated Lyme"> untreated Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=erythema%20migrans%20rash" title=" erythema migrans rash"> erythema migrans rash</a>, <a href="https://publications.waset.org/abstracts/search?q=EM%20rash" title=" EM rash"> EM rash</a> </p> <a href="https://publications.waset.org/abstracts/135383/using-deep-learning-in-lyme-disease-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135383.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">239</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2382</span> Clustering-Based Detection of Alzheimer&#039;s Disease Using Brain MR Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Matoug">Sofia Matoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Amr%20Abdel-Dayem"> Amr Abdel-Dayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a comprehensive survey of recent research studies to segment and classify brain MR (magnetic resonance) images in order to detect significant changes to brain ventricles. The paper also presents a general framework for detecting regions that atrophy, which can help neurologists in detecting and staging Alzheimer. Furthermore, a prototype was implemented to segment brain MR images in order to extract the region of interest (ROI) and then, a classifier was employed to differentiate between normal and abnormal brain tissues. Experimental results show that the proposed scheme can provide a reliable second opinion that neurologists can benefit from. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alzheimer" title="Alzheimer">Alzheimer</a>, <a href="https://publications.waset.org/abstracts/search?q=brain%20images" title=" brain images"> brain images</a>, <a href="https://publications.waset.org/abstracts/search?q=classification%20techniques" title=" classification techniques"> classification techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=Magnetic%20Resonance%20Images%20MRI" title=" Magnetic Resonance Images MRI"> Magnetic Resonance Images MRI</a> </p> <a href="https://publications.waset.org/abstracts/49930/clustering-based-detection-of-alzheimers-disease-using-brain-mr-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=80">80</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=81">81</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fundus%20images&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10