<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: color fundus</title> <meta name="description" content="Search results for: color fundus"> <meta name="keywords" content="color fundus"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open 
Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="color fundus" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div 
class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="color fundus"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1089</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: color fundus</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1089</span> Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haoqi%20Gao">Haoqi Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Koichi%20Ogawara"> Koichi Ogawara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal vascular segmentation of color fundus is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems. Early screening of fundus diseases has great value for clinical medical diagnosis. The traditional methods depend on the experience of the doctor, which is time-consuming, labor-intensive, and inefficient. 
Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new Generative Adversarial Network based on CycleGAN for retinal fundus images. The method generates not only synthetic fundus images but also the corresponding segmentation masks, which is of practical value and remains a challenge in computer vision and computer graphics. We evaluate the proposed method both quantitatively and qualitatively. For the generated segmented images, our method achieves a Dice coefficient of 0.81 and a PR of 0.89 on the DRIVE dataset. For the generated synthetic fundus images, we use a "Toy Experiment" to verify the state-of-the-art performance of our method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20vascular%20segmentations" title="retinal vascular segmentations">retinal vascular segmentations</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20ad-versarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclegan" title=" cyclegan"> cyclegan</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a> </p> <a href="https://publications.waset.org/abstracts/110591/generative-adversarial-network-for-bidirectional-mappings-between-retinal-fundus-images-and-vessel-segmented-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1088</span> Comparison of Vessel Detection in Standard vs
Ultra-WideField Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maher%20un%20Nisa">Maher un Nisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahsan%20Khawaja"> Ahsan Khawaja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal imaging with Ultra-WideField (UWF) view technology has opened up new avenues in the field of retinal pathology detection. Recent developments in retinal imaging, such as the Optos California imaging device, help in acquiring high-resolution images of the retina, assisting ophthalmologists in diagnosing and analyzing eye-related pathologies more accurately. This paper investigates the acquired retinal details by comparing vessel detection in standard 45° color fundus images with state-of-the-art 200° UWF retinal images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20fundus" title="color fundus">color fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title=" retinal images"> retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=ultra-widefield" title=" ultra-widefield"> ultra-widefield</a>, <a href="https://publications.waset.org/abstracts/search?q=vessel%20detection" title=" vessel detection"> vessel detection</a> </p> <a href="https://publications.waset.org/abstracts/33520/comparison-of-vessel-detection-in-standard-vs-ultra-widefield-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">448</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1087</span>
Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Biran">A. Biran</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sobhe%20Bidari"> P. Sobhe Bidari</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Raahemifar"> K. Raahemifar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina, called hemorrhages and exudates. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. Since the optic disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB. The results show that this method is capable of detecting hard exudates as well as highly probable soft exudates. It is also capable of detecting hemorrhages and distinguishing them from blood vessels.
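The paper's pipeline combines CHT, CLAHE, Gabor filtering, and thresholding in MATLAB. As a rough illustration of just the final thresholding stage (a Python/NumPy sketch rather than the authors' code; the helper name and the simple mean-plus-k-sigma rule are our assumptions), bright exudate candidates can be flagged while the optic-disc region, which shares their color, is masked out:

```python
import numpy as np

def detect_bright_lesions(gray, disc_mask=None, k=2.0):
    """Flag pixels much brighter than the image mean (exudate candidates).

    gray: 2-D float array in [0, 1]. disc_mask: optional boolean array
    marking the optic disc, excluded because it shares the exudates' color.
    k: number of standard deviations above the mean used as the threshold.
    """
    mu, sigma = gray.mean(), gray.std()
    candidates = gray > mu + k * sigma
    if disc_mask is not None:
        candidates &= ~disc_mask          # remove the optic-disc region
    return candidates

# Synthetic example: dark background with one bright "exudate" patch.
img = np.full((64, 64), 0.2)
img[10:14, 10:14] = 0.9                   # 4x4 bright lesion
mask = detect_bright_lesions(img)         # flags exactly the 16 bright pixels
```

In a real pipeline this rule would run on the CLAHE-enhanced green channel, where exudates have the highest contrast.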
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus" title=" fundus"> fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=CHT" title=" CHT"> CHT</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=hemorrhages" title=" hemorrhages"> hemorrhages</a> </p> <a href="https://publications.waset.org/abstracts/52591/automatic-method-for-exudates-and-hemorrhages-detection-from-fundus-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1086</span> Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rita%20Magdalena">Rita Magdalena</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20K.%20Caecar%20Pratiwi"> N. K. 
Caecar Pratiwi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunendah%20Nur%20Fuadah"> Yunendah Nur Fuadah</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Saidah"> Sofia Saidah</a>, <a href="https://publications.waset.org/abstracts/search?q=Bima%20Sakti"> Bima Sakti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vessel segmentation of the retinal fundus is important in biomedical science for diagnosing ailments related to the eye, since segmentation helps medical experts assess the state of a retinal fundus image. Therefore, in this study, we designed MATLAB software that segments the retinal blood vessels in retinal fundus images. There are two main steps in the segmentation process. The first step is image preprocessing, which aims to improve the quality of the image so that it can be optimally segmented. The second step is image segmentation, which performs the extraction process to retrieve the retina's blood vessels from the eye fundus image. The segmentation methods analyzed in this study are the Morphology Operation, the Discrete Wavelet Transform, and a combination of both. The data used in this project consist of 40 retinal images and 40 manually segmented images. After several testing scenarios, the average accuracy of the Morphology Operation method is 88.46%, while that of the Discrete Wavelet Transform is 89.28%. By combining the two methods, the average accuracy increases to 89.53%. The result of this study is an image processing system that can segment the blood vessels in the retinal fundus with high accuracy and low computation time.
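The abstract does not spell out which morphological operators its Morphology Operation method uses; purely as a minimal sketch (Python/NumPy instead of the authors' MATLAB software, operator choice assumed), a grayscale bottom-hat transform, i.e. morphological closing minus the original image, is one standard way to highlight thin dark vessels on a brighter fundus background:

```python
import numpy as np

def gray_dilate(img, size=3):
    """Grayscale dilation: local maximum over a size x size square window."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    shifted = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(size) for j in range(size)]
    return np.max(shifted, axis=0)

def gray_erode(img, size=3):
    """Grayscale erosion: local minimum over a size x size square window."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    shifted = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(size) for j in range(size)]
    return np.min(shifted, axis=0)

def bottom_hat(img, size=3):
    """Closing minus the image: responds to dark structures thinner than
    the structuring element, such as blood vessels."""
    closing = gray_erode(gray_dilate(img, size), size)
    return closing - img

# Synthetic fundus-like image: bright background, one dark vertical "vessel".
img = np.full((32, 32), 0.8)
img[:, 16] = 0.3
vessel_mask = bottom_hat(img, size=3) > 0.2   # True only along the vessel
```

Thresholding the bottom-hat response then yields the binary vessel map that would be compared against the manual segmentation.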
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20retina" title=" fundus retina"> fundus retina</a>, <a href="https://publications.waset.org/abstracts/search?q=morphology%20operation" title=" morphology operation"> morphology operation</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=vessel" title=" vessel"> vessel</a> </p> <a href="https://publications.waset.org/abstracts/105620/morphology-operation-and-discrete-wavelet-transform-for-blood-vessels-segmentation-in-retina-fundus" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1085</span> Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shu-Min%20Tsao">Shu-Min Tsao</a>, <a href="https://publications.waset.org/abstracts/search?q=Chung-Ming%20Lo"> Chung-Ming Lo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shao-Chun%20Chen"> Shao-Chun Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most diabetes patients tend to suffer from its complication of retina diseases. Therefore, early detection and early treatment are important. 
In clinical examinations, the color fundus image is the most convenient and widely available examination method. From the exudates that appear in the retinal image, the status of the retina can be confirmed. However, routine screening of diabetic retinopathy from color fundus images imposes time-consuming tasks on physicians. This study therefore proposed a computer-aided exudate diagnosis system for the screening of diabetic retinopathy. After removing the vessels and optic disc from the retinal image, six quantitative features, including region number, region area, and gray-scale values, were extracted from the remaining regions for classification. All six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying the retinal images into normal and diabetic retinopathy reached 82%. Based on this system, the clinical workload could be reduced, and the examination procedure could be made more efficient. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-aided%20diagnosis" title="computer-aided diagnosis">computer-aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=exudate" title=" exudate"> exudate</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/70086/computer-aided-exudate-diagnosis-for-the-screening-of-diabetic-retinopathy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70086.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card
paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1084</span> Evaluating the Performance of Color Constancy Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Damanjit%20Kaur">Damanjit Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Avani%20Bhatia"> Avani Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color constancy is significant for human vision since color is a pictorial cue that helps in solving different vision tasks such as tracking, object recognition, or categorization. Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine the colors of objects independent of the color of the light source. This research work studies most of the well-known color constancy algorithms, such as white patch and gray world.
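As a minimal sketch of one of the algorithms the paper evaluates, the gray-world method can be written in a few lines of NumPy (the function name is ours): each channel is rescaled so that its mean matches the global mean, under the assumption that the average scene reflectance is achromatic. White patch works analogously but normalizes by each channel's maximum instead of its mean.

```python
import numpy as np

def gray_world(rgb):
    """Gray-world color constancy: scale each channel so its mean equals
    the global mean, assuming the average scene reflectance is gray."""
    rgb = rgb.astype(float)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means   # per-channel correction
    return np.clip(rgb * gain, 0, 255)

# A flat scene under a reddish illuminant: the cast is removed and all
# three channel means become equal after correction.
img = np.tile(np.array([200.0, 100.0, 100.0]), (8, 8, 1))
corrected = gray_world(img)
```

On real images the gain would be estimated on the full frame and then applied per pixel, exactly as above.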
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20constancy" title="color constancy">color constancy</a>, <a href="https://publications.waset.org/abstracts/search?q=gray%20world" title=" gray world"> gray world</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20patch" title=" white patch"> white patch</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20white%20patch" title=" modified white patch "> modified white patch </a> </p> <a href="https://publications.waset.org/abstracts/4799/evaluating-the-performance-of-color-constancy-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4799.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1083</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for the color-blind. We treat crisp color images, like the Tokyo subway map, in which each color carries important information. For color-blind viewers, however, similar colors cannot be distinguished.
If we can convert those colors to different gray values, they can distinguish them. Therefore we try to convert color images to monochrome images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1082</span> Attention Based Fully Convolutional Neural Network for Simultaneous Detection and Segmentation of Optic Disc in Retinal Fundus Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sandip%20Sadhukhan">Sandip Sadhukhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Arpita%20Sarkar"> Arpita Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Debprasad%20Sinha"> Debprasad Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Goutam%20Kumar%20Ghorai"> Goutam Kumar Ghorai</a>, <a href="https://publications.waset.org/abstracts/search?q=Gautam%20Sarkar"> Gautam Sarkar</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Ashis%20K.%20Dhara"> Ashis K. Dhara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate segmentation of the optic disc is very important for computer-aided diagnosis of several ocular diseases such as glaucoma, diabetic retinopathy, and hypertensive retinopathy. The paper presents an accurate and fast optic disc detection and segmentation method using an attention-based fully convolutional network. The network is trained from scratch using the fundus images of the extended MESSIDOR database, and the trained model is used for segmentation of the optic disc. False positives are removed using morphological operations and shape features. The result is evaluated using three-fold cross-validation on six public fundus image databases: DIARETDB0, DIARETDB1, DRIVE, AV-INSPIRE, CHASE DB1 and MESSIDOR. The attention-based fully convolutional network is robust and effective for detection and segmentation of the optic disc in images affected by diabetic retinopathy, and it outperforms existing techniques.
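The paper's network is trained end-to-end and its exact architecture is not given here; purely to illustrate the gating idea behind attention-based segmentation (a toy NumPy sketch, not the authors' model), a sigmoid score map can suppress feature responses outside the region of interest:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features, scores):
    """Soft attention: sigmoid(scores) in (0, 1) scales the feature map,
    emphasizing responses inside the predicted optic-disc region."""
    alpha = sigmoid(scores)            # attention coefficients
    return features * alpha, alpha

# Toy example: uniform features, high attention scores only in a central
# "disc" region; elsewhere the features are driven toward zero.
features = np.ones((16, 16))
scores = np.full((16, 16), -8.0)
scores[6:10, 6:10] = 8.0
gated, alpha = attention_gate(features, scores)
```

In a real attention U-Net-style model, `scores` would itself be produced by learned convolutions over encoder and decoder features; here it is fixed by hand only to show the gating effect.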
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention-based%20fully%20convolutional%20network" title="attention-based fully convolutional network">attention-based fully convolutional network</a>, <a href="https://publications.waset.org/abstracts/search?q=optic%20disc%20detection%20and%20segmentation" title=" optic disc detection and segmentation"> optic disc detection and segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20fundus%20image" title=" retinal fundus image"> retinal fundus image</a>, <a href="https://publications.waset.org/abstracts/search?q=screening%20of%20ocular%20diseases" title=" screening of ocular diseases"> screening of ocular diseases</a> </p> <a href="https://publications.waset.org/abstracts/112293/attention-based-fully-convolutional-neural-network-for-simultaneous-detection-and-segmentation-of-optic-disc-in-retinal-fundus-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1081</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection is recognized as the primary step in most applications
such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection application relies greatly on two components: feature extraction and the classification method. Skin color is the most vital information used for skin detection. However, color features alone sometimes cannot handle images whose color distribution coincides with that of skin. A pixel-based color feature does not eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color features such as the mean and standard deviation are exploited as additional features to increase the reliability of the skin detector. In this paper, we studied the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzed the integrated color and texture features using eight classifiers in three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical features with a Random Forest classifier achieved a significant performance, with an F1-score of 0.969.
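The statistical features described above can be sketched in a few lines of NumPy (function name and the 8-pixel block size are our assumptions; the paper's classifiers are not reproduced): for each block, the per-channel mean and standard deviation form a six-dimensional feature vector, and the standard-deviation part is exactly what separates a flat skin-colored region from a skin-like but textured one that has the same mean color.

```python
import numpy as np

def block_color_stats(rgb, block=8):
    """Per-block mean and standard deviation of each color channel:
    statistical features layered on top of the raw pixel color."""
    h, w, c = rgb.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = rgb[y:y + block, x:x + block].reshape(-1, c)
            feats.append(np.concatenate([patch.mean(axis=0),
                                         patch.std(axis=0)]))
    return np.array(feats)

# Uniform skin-tone patch vs. a noisy patch with the same mean color:
# the mean features match, but the std features tell them apart.
flat = np.tile(np.array([180.0, 120.0, 100.0]), (8, 8, 1))
rng = np.random.default_rng(0)
noisy = flat + rng.normal(0.0, 20.0, flat.shape)
f_flat = block_color_stats(flat)[0]    # [R,G,B means, R,G,B stds]
f_noisy = block_color_stats(noisy)[0]
```

These vectors would then be fed to any of the eight classifiers the paper compares, e.g. a random forest.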
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1080</span> Spectra Analysis in Sunset Color Demonstrations with a White-Color LED as a Light Source</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Makoto%20Hasegawa">Makoto Hasegawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Seika%20Tokumitsu"> Seika Tokumitsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spectra of light beams emitted from white-color LED torches are different from those of conventional electric torches. 
In order to confirm whether white-color LED torches can be used as light sources for popular sunset color demonstrations in spite of such differences, the spectra of transmitted and scattered light beams were measured and compared for a white-color LED torch (composed of a blue LED and a yellow fluorescent material) and a conventional electric torch as light sources, in a 50 cm-long water tank for sunset color demonstration experiments. A suspension was prepared from acrylic emulsion and tap water in the water tank, and light beams from the white-color LED torch or the conventional electric torch were allowed to travel through this suspension. A sunset-like color was actually observed when the white-color LED torch was used as the light source. However, the observed colors, when viewed with the naked eye, look slightly different from those obtained with the conventional electric torch. At the same time, with the white-color LED, changes in color in the short-to-middle wavelength region were recognized with careful observation. From these results, white-color LED torches are confirmed to be applicable as light sources in sunset color demonstrations, although certain care must be taken. Further advanced classes can then be successfully performed with white-color LED torches as light sources.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blue%20sky%20demonstration" title="blue sky demonstration">blue sky demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=sunset%20color%20demonstration" title=" sunset color demonstration"> sunset color demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20LED%20torch" title=" white LED torch"> white LED torch</a>, <a href="https://publications.waset.org/abstracts/search?q=physics%20education" title=" physics education"> physics education</a> </p> <a href="https://publications.waset.org/abstracts/47625/spectra-analysis-in-sunset-color-demonstrations-with-a-white-color-led-as-a-light-source" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1079</span> Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Goutam%20Kumar%20Ghorai">Goutam Kumar Ghorai</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandip%20Sadhukhan"> Sandip Sadhukhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Arpita%20Sarkar"> Arpita Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Debprasad%20Sinha"> Debprasad Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Sarkar"> G. Sarkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashis%20K.%20Dhara"> Ashis K. 
Dhara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetes mellitus is one of the most common chronic diseases in all countries and continues to increase in numbers significantly. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population. Therefore, its early diagnosis is of utmost importance in preventing progression towards imminent irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, an insufficient number of ophthalmologists, and lack of networking between physicians, diabetologists, and ophthalmologists. Some diabetic patients visit a healthcare facility for their general checkup, but their eye condition remains largely undetected until the patient is symptomatic. This work focuses on the design and development of a fully automated intelligent decision system for screening retinal fundus images to detect the pathophysiology caused by microaneurysms in the early stage of the disease. Automated detection of microaneurysms is a challenging problem due to the variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed a convolutional neural network for efficient detection of microaneurysms. A loss function is also developed to handle the severe class imbalance due to the very small size of microaneurysms compared to the background. The network is able to locate the salient region containing microaneurysms in noisy images captured by non-mydriatic cameras. The ground truth for microaneurysms is created by expert ophthalmologists for the MESSIDOR database as well as a private database collected from Indian patients. 
The network is trained from scratch using the fundus images of the MESSIDOR database. The proposed method is evaluated on DIARETDB1 and the private database. The method is successful in detecting microaneurysms in dilated and non-dilated fundus images acquired from different medical centres. The proposed algorithm could be used for the development of an affordable and accessible AI-based system to provide service at grassroots-level primary healthcare units spread across the country, catering to the needs of rural people unaware of the severe impact of DR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20fundus%20image" title="retinal fundus image">retinal fundus image</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20convolutional%20neural%20network" title=" deep convolutional neural network"> deep convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20detection%20of%20microaneurysms" title=" early detection of microaneurysms"> early detection of microaneurysms</a>, <a href="https://publications.waset.org/abstracts/search?q=screening%20of%20diabetic%20retinopathy" title=" screening of diabetic retinopathy"> screening of diabetic retinopathy</a> </p> <a href="https://publications.waset.org/abstracts/112349/deep-convolutional-neural-network-for-detection-of-microaneurysms-in-retinal-fundus-images-at-early-stage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1078</span> A Neural Approach for Color-Textured Images Segmentation</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalid%20Salhi">Khalid Salhi</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Miloud%20Jaara"> El Miloud Jaara</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Talibi%20Alaoui"> Mohammed Talibi Alaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a neural approach for unsupervised segmentation of natural color-texture images, based on both Kohonen maps and mathematical morphology, using a combination of the texture and color information of the image: fractal features based on the fractal dimension represent the texture information, and the color features are represented in RGB color space. These features are then used to train the Kohonen network, which is represented by the underlying probability density function; the segmentation of this map is performed by the morphological watershed transformation. The performance of our color-texture segmentation approach is compared first to color-only and texture-only methods, and then to the k-means method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=color-texture" title=" color-texture"> color-texture</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal" title=" fractal"> fractal</a>, <a href="https://publications.waset.org/abstracts/search?q=watershed" title=" watershed"> watershed</a> </p> <a href="https://publications.waset.org/abstracts/51740/a-neural-approach-for-color-textured-images-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1077</span> Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Siriarchawatana">P. Siriarchawatana</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Leungchavaphongse"> K. Leungchavaphongse</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Covavisaruch"> N. Covavisaruch</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Rojananuangnit"> K. Rojananuangnit</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Boondaeng"> P. Boondaeng</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Panyayingyong"> N. 
Panyayingyong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included. Participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. From our findings, mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without glaucoma (0.34 <em>vs</em>. 0.32, <em>p</em> &lt; 0.05 for the inferotemporal vein; 0.33 <em>vs</em>. 0.30, <em>p</em> &lt; 0.01 for the inferotemporal artery; 0.34 <em>vs</em>. 0.31, <em>p</em> &lt; 0.01 for the superotemporal vein; and 0.33 <em>vs</em>. 0.30, <em>p</em> &lt; 0.05 for the superotemporal artery). From these results, an increase in CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma. 
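As an illustration of the idea (not the authors' exact implementation), a CRR can be estimated from a 1-D intensity profile sampled perpendicular to a vessel: gradient extrema mark the vessel walls, and the bright band inside the dark vessel is the central light reflex. A hedged sketch on a synthetic profile:

```python
import numpy as np

def crr_from_profile(profile):
    """Estimate the central light reflex width-to-vessel diameter ratio
    (CRR) from a 1-D intensity profile across a retinal vessel.
    Edge detection here is a simple gradient-extremum rule on a clean
    synthetic profile -- an illustrative stand-in, not the paper's method."""
    p = profile.astype(float)
    g = np.gradient(p)
    left = int(np.argmin(g))                 # background -> dark vessel wall
    right = int(np.argmax(g))                # dark vessel -> background wall
    diameter = right - left
    inside = p[left + 1:right]               # pixels inside the vessel
    threshold = (inside.min() + p[0]) / 2.0  # midway back to background level
    reflex_width = int(np.sum(inside > threshold))
    return reflex_width / diameter

# Synthetic cross-section: background 200, vessel 80, central reflex 150.
profile = np.full(100, 200.0)
profile[30:70] = 80.0
profile[45:55] = 150.0
```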
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=glaucoma" title="glaucoma">glaucoma</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20vessel" title=" retinal vessel"> retinal vessel</a>, <a href="https://publications.waset.org/abstracts/search?q=central%20light%20reflex" title=" central light reflex"> central light reflex</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20photograph" title=" fundus photograph"> fundus photograph</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a> </p> <a href="https://publications.waset.org/abstracts/54545/comparison-of-central-light-reflex-width-to-retinal-vessel-diameter-ratio-between-glaucoma-and-normal-eyes-by-using-edge-detection-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54545.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">325</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1076</span> Experimental Characterization of the Color Quality and Error Rate for an Red, Green, and Blue-Based Light Emission Diode-Fixture Used in Visible Light Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20F.%20Gutierrez">Juan F. Gutierrez</a>, <a href="https://publications.waset.org/abstracts/search?q=Jesus%20M.%20Quintero"> Jesus M. 
Quintero</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Sandoval"> Diego Sandoval</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An important feature of LED technology is its fast on-off commutation, which allows data transmission. Visible Light Communication (VLC) is a wireless method of transmitting data with visible light. Modulation formats such as On-Off Keying (OOK) and Color Shift Keying (CSK) are used in VLC. CSK is based on three color bands and uses red, green, and blue monochromatic LEDs (RGB-LED) to define a pattern of chromaticities. This type of CSK provides poor color quality in the illuminated area. This work presents the design and implementation of a VLC system using RGB-based CSK with 16, 8, and 4 color points, mixed with a steady baseline from a phosphor white LED, to improve the color quality of the LED fixture. The experimental system was assessed in terms of the Color Rendering Index (CRI) and the Symbol Error Rate (SER). Good color quality performance of the LED fixture was obtained with an acceptable SER. The laboratory setup used to characterize and calibrate the LED fixture is described. 
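The core idea of CSK — encoding bit groups as chromaticity points produced by mixing the three LED channels — can be sketched as follows. The constellation points are hypothetical, not the calibrated chromaticities of the paper's fixture; each symbol keeps the same total optical power so the average light output stays steady:

```python
# Map 2-bit symbols to red/green/blue intensity triples (4-point CSK).
# Points are illustrative; every triple sums to 1 (constant total power).
CONSTELLATION = {
    (0, 0): (1.0, 0.0, 0.0),        # red vertex of the color triangle
    (0, 1): (0.0, 1.0, 0.0),        # green vertex
    (1, 1): (0.0, 0.0, 1.0),        # blue vertex
    (1, 0): (1 / 3, 1 / 3, 1 / 3),  # center of the color triangle
}

def modulate(bits):
    """Group bits in pairs and map each pair to an RGB intensity triple."""
    return [CONSTELLATION[(a, b)] for a, b in zip(bits[::2], bits[1::2])]

symbols = modulate([0, 0, 1, 1, 1, 0])  # three 4-CSK symbols
```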
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=VLC" title="VLC">VLC</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20lighting" title=" indoor lighting"> indoor lighting</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quality" title=" color quality"> color quality</a>, <a href="https://publications.waset.org/abstracts/search?q=symbol%20error%20rate" title=" symbol error rate"> symbol error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20shift%20keying" title=" color shift keying"> color shift keying</a> </p> <a href="https://publications.waset.org/abstracts/158336/experimental-characterization-of-the-color-quality-and-error-rate-for-an-red-green-and-blue-based-light-emission-diode-fixture-used-in-visible-light-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1075</span> The Impact of the “Cold Ambient Color = Healthy” Intuition on Consumer Food Choice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yining%20Yu">Yining Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bingjie%20Li"> Bingjie Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Miaolei%20Jia"> Miaolei Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Wang"> Lei Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ambient color temperature is one of the most ubiquitous factors in retailing. 
However, there is limited research regarding the effect of cold versus warm ambient color on consumers’ food consumption. This research investigates an unexplored lay belief named the “cold ambient color = healthy” intuition and its impact on food choice. We demonstrate that consumers have built the “cold ambient color = healthy” intuition, such that they infer that a restaurant with a cold-colored ambiance is more likely to sell healthy food than a warm-colored restaurant. This deep-seated intuition also guides consumers’ food choices. We find that using a cold (vs. warm) ambient color increases the choice of healthy food, which offers insights into healthy diet promotion for retailers and policymakers. Theoretically, our work contributes to the literature on color psychology, sensory marketing, and food consumption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ambient%20color%20temperature" title="ambient color temperature">ambient color temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=cold%20ambient%20color" title=" cold ambient color"> cold ambient color</a>, <a href="https://publications.waset.org/abstracts/search?q=food%20choice" title=" food choice"> food choice</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20wellbeing" title=" consumer wellbeing"> consumer wellbeing</a> </p> <a href="https://publications.waset.org/abstracts/148864/the-impact-of-the-cold-ambient-color-healthy-intuition-on-consumer-food-choice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1074</span> Costume 
Design Influenced by Seventeenth Century Color Palettes on a Contemporary Stage</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michele%20L.%20Dormaier">Michele L. Dormaier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the research was to design costumes based on the historic colors used by artists during the seventeenth century. The researcher investigated European art, primarily paintings and portraiture, as well as the color palettes used by the artists. The methodology examined the artists, their work, the color palettes used in their work, and their practices of color usage. Examining portraits of historic figures, as well as paintings of ordinary scenes, subjects, and people, revealed further information about the color palettes. Related to these palettes was the use of ‘broken colors’, a relatively new practice dating from the sixteenth century. The color palettes used by the artists of the seventeenth century had their limitations due to the available pigments. By examining not only their artwork but also their palettes more closely, the researcher discovered the exciting choices they made despite those restrictions. The research also considered the historical elements of the era’s clothing, as well as the available materials and dyes. These dyes were limited in much the same manner as the pigments the artists had at their disposal. The color palettes of the paintings have much to tell us about the lives, status, conditions, and relationships of the past. From this research, informed decisions could then be made regarding color choices for a period piece staged on a contemporary stage. The designer’s choices were a historic gesture to the colors which might have been worn by the characters’ real-life counterparts of the era. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=broken%20color%20palette" title="broken color palette">broken color palette</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20color%20research" title=" costume color research"> costume color research</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20design" title=" costume design"> costume design</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20history" title=" costume history"> costume history</a>, <a href="https://publications.waset.org/abstracts/search?q=seventeenth%20century%20color%20palette" title=" seventeenth century color palette"> seventeenth century color palette</a>, <a href="https://publications.waset.org/abstracts/search?q=sixteenth%20century%20color%20palette" title=" sixteenth century color palette"> sixteenth century color palette</a> </p> <a href="https://publications.waset.org/abstracts/87451/costume-design-influenced-by-seventeenth-century-color-palettes-on-a-contemporary-stage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1073</span> Effect of Blanching and Drying Methods on the Degradation Kinetics and Color Stability of Radish (Raphanus sativus) Leaves</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Radha%20Krishnan">K. 
Radha Krishnan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mirajul%20Alom"> Mirajul Alom</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dehydrated powder prepared from fresh radish (Raphanus sativus) leaves was investigated for color stability under different drying methods (tray, sun, and solar). The effects of blanching conditions, drying methods, and drying temperatures (50–90°C) were considered in studying the color degradation kinetics of chlorophyll in the dehydrated powder. The Hunter color parameters (L*, a*, b*) and total color difference (TCD) were determined in order to investigate the color degradation kinetics of chlorophyll. Blanching conditions, drying method, and drying temperature influenced the changes in L*, a*, b*, and TCD values. The changes in color values during processing were described by a first-order kinetic model. The temperature dependence of chlorophyll degradation was adequately modeled by the Arrhenius equation. To predict the losses in green color, a mathematical model was developed from the steady-state kinetic parameters. The results from this study indicate the protective effect of blanching conditions on the color stability of dehydrated radish powder. 
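The first-order degradation model and its Arrhenius temperature dependence can be written out directly. The rate parameters below are hypothetical placeholders for illustration, not values fitted in the study:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def rate_constant(T_kelvin, k0, Ea):
    """Arrhenius equation: k(T) = k0 * exp(-Ea / (R * T))."""
    return k0 * math.exp(-Ea / (R * T_kelvin))

def color_value(C0, k, t_min):
    """First-order degradation kinetics: C(t) = C0 * exp(-k * t)."""
    return C0 * math.exp(-k * t_min)

# Illustrative parameters only: Ea = 30 kJ/mol, k0 = 2000 1/min.
k50 = rate_constant(273.15 + 50, 2e3, 30e3)   # rate constant at 50 C
k90 = rate_constant(273.15 + 90, 2e3, 30e3)   # rate constant at 90 C
# Green color decays faster at the higher drying temperature.
```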
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chlorophyll" title="chlorophyll">chlorophyll</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20stability" title=" color stability"> color stability</a>, <a href="https://publications.waset.org/abstracts/search?q=degradation%20kinetics" title=" degradation kinetics"> degradation kinetics</a>, <a href="https://publications.waset.org/abstracts/search?q=drying" title=" drying"> drying</a> </p> <a href="https://publications.waset.org/abstracts/44880/effect-of-blanching-and-drying-methods-on-the-degradation-kinetics-and-color-stability-of-radish-raphanus-sativus-leaves" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44880.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1072</span> Image Segmentation Using 2-D Histogram in RGB Color Space in Digital Libraries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Asnaoui%20Khalid">El Asnaoui Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Aksasse%20Brahim"> Aksasse Brahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ouanan%20Mohammed"> Ouanan Mohammed </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an unsupervised color image segmentation method. It is based on a hierarchical analysis of 2-D histogram in RGB color space. This histogram minimizes storage space of images and thus facilitates the operations between them. 
The improved segmentation approach shows a better identification of objects in a color image and, at the same time, the system is fast. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=hierarchical%20analysis" title=" hierarchical analysis"> hierarchical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=2-D%20histogram" title=" 2-D histogram"> 2-D histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/42096/image-segmentation-using-2-d-histogram-in-rgb-color-space-in-digital-libraries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1071</span> Parallel Version of Reinhard’s Color Transfer Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abhishek%20Bhardwaj">Abhishek Bhardwaj</a>, <a href="https://publications.waset.org/abstracts/search?q=Manish%20Kumar%20Bajpai"> Manish Kumar Bajpai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An image, with its content and color scheme, presents an effective mode of information sharing and processing. By changing its color scheme, different visions and prospects are discovered by users. This phenomenon of color transfer is used by social media and other entertainment channels. 
Reinhard et al.’s algorithm was the first to solve this problem of color transfer. In this paper, we make this algorithm efficient by introducing domain parallelism among different processors. We also comment on the factors that affect the speedup of this problem. Finally, by analyzing the experimental data, we propose a novel and efficient parallel version of Reinhard’s algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reinhard%20et%20al%E2%80%99s%20algorithm" title="Reinhard et al’s algorithm">Reinhard et al’s algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20transferring" title=" color transferring"> color transferring</a>, <a href="https://publications.waset.org/abstracts/search?q=parallelism" title=" parallelism"> parallelism</a>, <a href="https://publications.waset.org/abstracts/search?q=speedup" title=" speedup"> speedup</a> </p> <a href="https://publications.waset.org/abstracts/21874/parallel-version-of-reinhards-color-transfer-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">614</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1070</span> Content-Based Image Retrieval Using HSV Color Space Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Qazanfari">Hamed Qazanfari</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Hassanpour"> Hamid Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazem%20Qazanfari"> Kazem Qazanfari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this 
paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database based on the visual content of a query image to retrieve similar images. In this paper, with the aim of simulating the human visual system&#39;s sensitivity to an image&#39;s edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features extract the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k, and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods. 
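A toy version of the color difference histogram can illustrate the idea: quantize each pixel's hue and accumulate the color differences to its right and lower neighbors into that hue bin. This simplified sketch omits the edge-orientation component of the full CDH:

```python
import numpy as np

def color_difference_histogram(hsv, bins=8):
    """Toy color difference histogram (CDH) over an HSV image (H in
    degrees). For each pixel, the Euclidean color difference to its right
    and lower neighbors is accumulated into the pixel's quantized hue bin.
    A simplified, illustrative stand-in for the CDH described above."""
    img = hsv.astype(float)
    idx = np.minimum((img[..., 0] / 360.0 * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    diff_right = np.linalg.norm(np.diff(img, axis=1), axis=-1)
    diff_down = np.linalg.norm(np.diff(img, axis=0), axis=-1)
    np.add.at(hist, idx[:, :-1], diff_right)  # horizontal neighbor diffs
    np.add.at(hist, idx[:-1, :], diff_down)   # vertical neighbor diffs
    total = hist.sum()
    return hist / total if total > 0 else hist

# Two flat color regions: all differences sit on the boundary column.
img = np.zeros((2, 4, 3))
img[:, 2:, 0] = 180.0   # right half: hue 180 degrees
img[:, 2:, 2] = 100.0   # right half: different value channel
h = color_difference_histogram(img)
```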
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title="content-based image retrieval">content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20difference%20histogram" title=" color difference histogram"> color difference histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=efficient%20features%20selection" title=" efficient features selection"> efficient features selection</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a> </p> <a href="https://publications.waset.org/abstracts/75068/content-based-image-retrieval-using-hsv-color-space-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75068.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1069</span> The Role of Metallic Mordant in Natural Dyeing Process: Experimental and Quantum Study on Color Fastness</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo-Gaun%20Chen">Bo-Gaun Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Chiung-Hui%20Huang"> Chiung-Hui Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Mei-Ching%20Chiang"> Mei-Ching Chiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuo-Hsing%20Lee"> Kuo-Hsing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Chia-Chen%20Ho"> Chia-Chen Ho</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Chin-Ping%20Huang"> Chin-Ping Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chin-Heng%20Tien"> Chin-Heng Tien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is known that natural dyeing of cloth results in moderate color but poor color fastness. This study points out the correlation between the macroscopic color fastness of a natural dye on cotton fiber and the microscopic binding energy of the dye molecule to the cellulose. With an additive metallic mordant, the newly formed coordination bond bridges the dye to the fiber surface and thus affects the color fastness as well as the color appearance. Density functional theory (DFT) calculations are therefore used to explore the most probable mechanism of the dyeing process. Finally, the experimental results reflect the strong effect of three different metal ions on naturally dyed cloths. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binding%20energy" title="binding energy">binding energy</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20fastness" title=" color fastness"> color fastness</a>, <a href="https://publications.waset.org/abstracts/search?q=density%20functional%20theory%20%28DFT%29" title=" density functional theory (DFT)"> density functional theory (DFT)</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20dyeing" title=" natural dyeing"> natural dyeing</a>, <a href="https://publications.waset.org/abstracts/search?q=metallic%20mordant" title=" metallic mordant"> metallic mordant</a> </p> <a href="https://publications.waset.org/abstracts/37833/the-role-of-metallic-mordant-in-natural-dyeing-process-experimental-and-quantum-study-on-color-fastness" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37833.pdf" target="_blank" class="btn btn-primary 
btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">557</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1068</span> Effect of Color on Anagram Solving Ability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khushi%20Chhajed">Khushi Chhajed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Context: Color has been found to have an impact on cognitive performance. Due to the negative connotation associated with red, it has been found to impair performance on intellectual tasks. Aim: This study aims to assess the effect of color on individuals’ anagram-solving ability. Methodology: An experimental study was conducted on 66 participants in the age group of 18–24 years. A self-made anagram assessment tool was administered. Participants were expected to solve the tool in three colors: red, blue, and grey. Results: A lower score was found when participants were presented with the color blue than with red. The study also found that participants took relatively more time to solve the red-colored sheet. However, these results are inconsistent with pre-existing literature. Conclusion: Hence, an association between color and performance on cognitive tasks can be seen. Future directions and potential limitations are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20psychology" title="color psychology">color psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=experiment" title=" experiment"> experiment</a>, <a href="https://publications.waset.org/abstracts/search?q=anagram" title=" anagram"> anagram</a>, <a href="https://publications.waset.org/abstracts/search?q=performance" title=" performance"> performance</a> </p> <a href="https://publications.waset.org/abstracts/160096/effect-of-color-on-anagram-solving-ability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1067</span> Understanding Perceptual Differences and Preferences of Urban Color in New Taipei City</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuheng%20Tao">Yuheng Tao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rapid urbanization has produced incompatible and excessively homogeneous urban systems, and urban color planning has become one of the most effective ways to restore a city's character. Among the many studies of urban color design, the establishment of urban theme colors has rarely been discussed. This study took the "New Taipei City Environmental Aesthetic Color" project as a research case and conducted mixed-methods research that included expert interviews and quantitative survey data. 
This study introduces how the theme colors were selected by the experts and investigates the public's perception and preference for the selected theme colors. Several findings emerged: 1) urban memory plays a significant role in determining urban theme colors; 2) when establishing urban theme colors, areas/cities with relatively weak urban memory are prioritized for definition; 3) urban theme colors that imply cultural attributes are more widely accepted by the public; 4) a representative city theme color helps conserve culture rather than guide innovation. In addition, this research organizes the symbolism and specific content of urban theme colors and provides a more scientific theme color selection scheme for urban planners. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urban%20theme%20color" title="urban theme color">urban theme color</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20color%20attribute" title=" urban color attribute"> urban color attribute</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20perception" title=" public perception"> public perception</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20preferences" title=" public preferences"> public preferences</a> </p> <a href="https://publications.waset.org/abstracts/156583/understanding-perceptual-differences-and-preferences-of-urban-color-in-new-taipei-city" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156583.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1066</span> Tomato Fruit Color Changes during Ripening of Vine</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.Radzevi%C4%8Dius">A. Radzevičius</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Vi%C5%A1kelis"> P. Viškelis</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Vi%C5%A1kelis"> J. Viškelis</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Karklelien%C4%97"> R. Karklelienė</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Ju%C5%A1kevi%C4%8Dien%C4%97"> D. Juškevičienė</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tomato (Lycopersicon esculentum Mill.) hybrid 'Brooklyn' was investigated at the LRCAF Institute of Horticulture. Five green tomatoes growing on the vine were selected for investigation. Color measurements were made in the greenhouse on the same selected fruits every two days until the fruits were fully ripe; the fruits were not harvested and remained ripening on the vine throughout the experiment. The study showed that color index L tended to decline, with a coefficient of determination (R2) of 0.9504. The hue angle also tended to decline during ripening on the vine, with a coefficient of determination (R2) of 0.9739. The opposite tendency was determined for color index a, which tended to increase during ripening; this was expressed by a polynomial trendline with a coefficient of determination (R2) of 0.9592. 
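The declining hue angle and rising a index described above follow directly from the CIELAB color coordinates; a minimal sketch of how these indices are computed (hypothetical values, not the study's measurements):

```python
import math

def hue_angle(a, b):
    """CIELAB hue angle h_ab in degrees, mapped to [0, 360)."""
    return math.degrees(math.atan2(b, a)) % 360

def chroma(a, b):
    """CIELAB chroma C*_ab."""
    return math.hypot(a, b)

# A green fruit has negative a* (hue angle above 90 degrees);
# as a* rises during ripening, the hue angle declines toward red.
print(round(hue_angle(-15.0, 30.0), 1))  # 116.6 (greenish)
print(round(hue_angle(25.0, 30.0), 1))   # 50.2 (reddish)
```

A polynomial trendline like the one reported for index a could then be fitted over the biweekly measurements, e.g. with numpy.polyfit.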
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color" title="color">color</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20index" title=" color index"> color index</a>, <a href="https://publications.waset.org/abstracts/search?q=ripening" title=" ripening"> ripening</a>, <a href="https://publications.waset.org/abstracts/search?q=tomato" title=" tomato"> tomato</a> </p> <a href="https://publications.waset.org/abstracts/5502/tomato-fruit-color-changes-during-ripening-of-vine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5502.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">487</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1065</span> Contrast Enhancement of Color Images with Color Morphing Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javed%20Khan">Javed Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Aamir%20Saeed%20Malik"> Aamir Saeed Malik</a>, <a href="https://publications.waset.org/abstracts/search?q=Nidal%20Kamel"> Nidal Kamel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarat%20Chandra%20Dass"> Sarat Chandra Dass</a>, <a href="https://publications.waset.org/abstracts/search?q=Azura%20Mohd%20Affandi"> Azura Mohd Affandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Low-contrast images can result from wrong image-acquisition settings or poor illumination conditions. Such images may not be visually appealing and can be difficult for feature extraction. Contrast enhancement of color images can be useful for visual inspection in the medical field. 
In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into the normalized RGB color space. An adaptive histogram equalization (AHE) technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels of the original (low-contrast) image and of the AHE contrast-enhanced image are then morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results are analyzed using cumulative variance and contrast improvement factor measures, and are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20enhacement" title="contrast enhancement">contrast enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=normalized%20RGB" title=" normalized RGB"> normalized RGB</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20histogram%20equalization" title=" adaptive histogram equalization"> adaptive histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=cumulative%20variance." 
title=" cumulative variance."> cumulative variance.</a> </p> <a href="https://publications.waset.org/abstracts/42755/contrast-enhancement-of-color-images-with-color-morphing-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42755.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">376</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1064</span> Design and Development of 5-DOF Color Sorting Manipulator for Industrial Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atef%20A.%20Ata">Atef A. Ata</a>, <a href="https://publications.waset.org/abstracts/search?q=Sohair%20F.%20Rezeka"> Sohair F. Rezeka</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20El-Shenawy"> Ahmed El-Shenawy</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Diab"> Mohammed Diab</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image processing attracts massive attention today, as it opens possibilities for broad application in many high-technology fields. The real challenge is how to improve existing sorting systems, which consist of two integrated stations of processing and handling, with a new image processing feature. Existing color sorting techniques use a set of inductive, capacitive, and optical sensors to differentiate object color. This research presents a mechatronic color sorting system solution based on image processing. A 5-DOF robot arm with a pick-and-place operation is designed and developed as the main part of the color sorting system. 
The image processing procedure detects the circular objects in an image captured in real time by a webcam attached to the end-effector, then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator, which has a pick-and-place mechanism. Performance analysis shows that this color-based object sorting system works very accurately under ideal conditions in terms of adequate illumination and circular object shape and color. The circular objects tested for sorting are red, green, and blue. Under non-ideal conditions, such as an unspecified color, the accuracy drops to 80%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=robotics%20manipulator" title="robotics manipulator">robotics manipulator</a>, <a href="https://publications.waset.org/abstracts/search?q=5-DOF%20manipulator" title=" 5-DOF manipulator"> 5-DOF manipulator</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20sorting" title=" color sorting"> color sorting</a>, <a href="https://publications.waset.org/abstracts/search?q=pick-and-place" title=" pick-and-place"> pick-and-place</a> </p> <a href="https://publications.waset.org/abstracts/1473/design-and-development-of-5-dof-color-sorting-manipulator-for-industrial-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1473.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">374</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1063</span> Clustering Color Space, Time Interest Points for Moving Objects</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Insaf%20Bellamine">Insaf Bellamine</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Tairi"> Hamid Tairi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting moving objects in sequences is an essential step for video analysis. This paper mainly contributes to Color Space-Time Interest Point (CSTIP) extraction and detection, proposing a new method for the detection of moving objects. The proposed method comprises two main steps. First, the CSTIP detection algorithm is applied to both components of a Color Structure-Texture Image Decomposition based on a Partial Differential Equation (PDE): a color geometric structure component and a color texture component. A descriptor is associated with each of these points. In a second stage, the problem of grouping the points (CSTIP) into clusters is addressed. Experiments and comparisons to other motion detection methods on challenging sequences show the performance of the proposed method and its utility for video analysis. Experimental results are obtained from very different types of videos, namely sport videos and animation movies. 
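The grouping stage can be sketched with a plain k-means over the detected space-time points; this is a hypothetical illustration (the abstract does not name a specific clustering algorithm, and synthetic (x, y, t) points stand in for real CSTIP detections):

```python
import numpy as np

def farthest_point_seeds(points, k):
    """Deterministic, well-spread initial centers (greedy farthest-point)."""
    centers = [points[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)])
    return np.array(centers)

def kmeans(points, k, iters=50):
    """Plain k-means: group (x, y, t) interest points into k clusters."""
    centers = farthest_point_seeds(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# two synthetic groups of interest points, far apart in (x, y, t)
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (20, 3)), rng.normal(5.0, 0.1, (20, 3))])
labels, centers = kmeans(pts, 2)
```

In practice each point would carry its descriptor as well, so clusters of CSTIPs can be matched to candidate moving objects.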
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Color%20Space-Time%20Interest%20Points%20%28CSTIP%29" title="Color Space-Time Interest Points (CSTIP)">Color Space-Time Interest Points (CSTIP)</a>, <a href="https://publications.waset.org/abstracts/search?q=Color%20Structure-Texture%20Image%20Decomposition" title=" Color Structure-Texture Image Decomposition"> Color Structure-Texture Image Decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Detection" title=" Motion Detection"> Motion Detection</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a> </p> <a href="https://publications.waset.org/abstracts/21989/clustering-color-space-time-interest-points-for-moving-objects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1062</span> FISCEAPP: FIsh Skin Color Evaluation APPlication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Urban">J. Urban</a>, <a href="https://publications.waset.org/abstracts/search?q=%C3%81.%20S.%20Botella"> Á. S. Botella</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20E.%20Robaina"> L. E. Robaina</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20B%C3%A1rta"> A. Bárta</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sou%C4%8Dek"> P. Souček</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20C%C3%ADsa%C5%99"> P. 
Císař</a>, <a href="https://publications.waset.org/abstracts/search?q=%C5%A0.%20Pap%C3%A1%C4%8Dek"> Š. Papáček</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20M.%20Dom%C3%ADnguez"> L. M. Domínguez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin coloration in fish is of great physiological, behavioral and ecological importance and can be considered an index of animal welfare in aquaculture as well as an important quality factor in the retail value. Currently, to compare color in animals fed different diets, biochemical analysis and colorimetry of caught, mildly anesthetized, or dead fish are very accurate and meaningful measurements. A noninvasive method using digital images of the fish body was developed as a standalone application. The application deals with the computational burden and memory consumption of large input files by optimizing piecewise processing and analysis with the memory/computation time ratio. For the comparison of color distributions across experiments and different color spaces (RGB, CIE L*a*b*), a comparable semi-equidistant binning of the multi-channel representation is introduced; it is derived from the known quantization levels and the Freedman-Diaconis rule. Color calibration and the camera responsivity function were a necessary part of the measurement process. 
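The binning mentioned above can be sketched from the Freedman-Diaconis rule, which sets the bin width from the interquartile range and the sample size; a hypothetical per-channel version (capping the bin count at the channel's quantization levels is an assumption about the method):

```python
import numpy as np

def fd_bin_count(channel, levels=256):
    """Bin count for one color channel from the Freedman-Diaconis rule:
    width = 2 * IQR / n**(1/3), capped at the quantization level count."""
    q25, q75 = np.percentile(channel, [25, 75])
    width = 2.0 * (q75 - q25) / channel.size ** (1.0 / 3.0)
    if width <= 0:
        return 1
    bins = int(np.ceil((channel.max() - channel.min()) / width))
    return max(1, min(bins, levels))

# hypothetical flattened 8-bit channel
rng = np.random.default_rng(0)
ch = rng.integers(0, 256, size=100_000)
hist, edges = np.histogram(ch, bins=fd_bin_count(ch))
```

NumPy exposes the same estimator via `np.histogram_bin_edges(ch, bins='fd')`, though without the quantization-level cap.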
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20distribution" title="color distribution">color distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=fish%20skin%20color" title=" fish skin color"> fish skin color</a>, <a href="https://publications.waset.org/abstracts/search?q=piecewise%20transformation" title=" piecewise transformation"> piecewise transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20to%20background%20segmentation" title=" object to background segmentation"> object to background segmentation</a> </p> <a href="https://publications.waset.org/abstracts/15406/fisceapp-fish-skin-color-evaluation-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15406.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">262</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1061</span> Novel Algorithm for Restoration of Retina Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Subbuthai">P. Subbuthai</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Muruganand"> S. Muruganand</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic retinopathy is a complicated disease caused by changes in the blood vessels of the retina. Retina images captured with a fundus camera sometimes have poor contrast and noise. Because of this noise, detection of blood vessels in the retina is very complicated, so preprocessing is needed. In this paper, a novel algorithm is implemented to remove noisy pixels in the retina image. 
The proposed algorithm, an extended median filter, is applied to the green channel of the retina image because the vessels show the highest contrast against the background in the green channel. The proposed extended median filter is compared with the existing standard median filter using performance metrics such as PSNR, MSE, and RMSE. Experimental results show that the proposed extended median filter gives better results than the existing standard median filter in terms of noise suppression and detail preservation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fundus%20retina%20image" title="fundus retina image">fundus retina image</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20filter" title=" median filter"> median filter</a>, <a href="https://publications.waset.org/abstracts/search?q=microaneurysms" title=" microaneurysms"> microaneurysms</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a> </p> <a href="https://publications.waset.org/abstracts/20819/novel-algorithm-for-restoration-of-retina-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20819.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1060</span> Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Seynabou%20Toure">Seynabou Toure</a>, <a href="https://publications.waset.org/abstracts/search?q=Oumar%20Diop"> Oumar Diop</a>, <a href="https://publications.waset.org/abstracts/search?q=Kidiyo%20Kpalma"> Kidiyo Kpalma</a>, <a href="https://publications.waset.org/abstracts/search?q=Amadou%20S.%20Maiga"> Amadou S. Maiga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color and texture are the two most determinant elements for the perception and recognition of objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspection, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to better characterize images. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure separately different parts of the electromagnetic spectrum: the visible ones and even those that are invisible to the human eye. The amounts of light reflected by the earth in spectral bands are then transformed into grayscale images. The primary natural colors Red (R), Green (G) and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works investigate the effect of using different color spaces for color texture classification. 
However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications like coastline detection, where the detection result is strongly dependent on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to identify the best-performing color space for land-sea segmentation. To this end, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification; this allows segmentation of the land part from the sea. By analyzing the different results of this study, the HSV color space is found to give the best classification performance when using color and texture features, which is consistent with the results presented in the literature. 
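The per-channel feature extraction can be sketched with a hand-rolled one-level Haar decomposition; the sub-band energies below are a common wavelet texture feature, though the study's exact feature set is not reproduced here (in practice the image would first be converted to HSV, e.g. with skimage.color.rgb2hsv):

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar decomposition -> (LL, LH, HL, HH) sub-bands."""
    x = x[: x.shape[0] // 2 * 2, : x.shape[1] // 2 * 2].astype(float)
    lo = (x[0::2, :] + x[1::2, :]) / 2     # vertical low-pass on row pairs
    hi = (x[0::2, :] - x[1::2, :]) / 2     # vertical high-pass on row pairs
    def split_cols(y):
        return (y[:, 0::2] + y[:, 1::2]) / 2, (y[:, 0::2] - y[:, 1::2]) / 2
    LL, LH = split_cols(lo)
    HL, HH = split_cols(hi)
    return LL, LH, HL, HH

def texture_energy(channel):
    """Mean energy of the three detail sub-bands of one channel."""
    _, LH, HL, HH = haar2d(channel)
    return [float(np.mean(s ** 2)) for s in (LH, HL, HH)]

# a flat patch has zero detail energy; a striped patch does not
flat = np.ones((8, 8))
stripes = np.tile([0.0, 1.0], (8, 4))
```

Concatenating these energies across the H, S, and V channels yields one feature vector per region, which the segmentation-fusion classifier can then consume.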
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=coastline" title=" coastline"> coastline</a>, <a href="https://publications.waset.org/abstracts/search?q=color" title=" color"> color</a>, <a href="https://publications.waset.org/abstracts/search?q=sea-land%20segmentation" title=" sea-land segmentation"> sea-land segmentation</a> </p> <a href="https://publications.waset.org/abstracts/84598/best-performing-color-space-for-land-sea-segmentation-using-wavelet-transform-color-texture-features-and-fusion-of-over-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84598.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">247</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=7">7</a></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=36">36</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=37">37</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20fundus&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a 
href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div 
class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
