<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: color vision test</title> <meta name="description" content="Search results for: color vision test"> <meta name="keywords" content="color vision test"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="color vision test" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div 
class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="color vision test"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 11099</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: color vision test</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11099</span> Cone Contrast Sensitivity of Normal Trichromats and Those with Red-Green Dichromats</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tatsuya%20Iizuka">Tatsuya Iizuka</a>, <a href="https://publications.waset.org/abstracts/search?q=Takushi%20Kawamorita"> Takushi Kawamorita</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomoya%20Handa"> Tomoya Handa</a>, <a href="https://publications.waset.org/abstracts/search?q=Hitoshi%20Ishikawa"> Hitoshi Ishikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We report normative cone contrast sensitivity values and sensitivity and specificity values for a computer-based color vision test, the cone contrast test-HD (CCT-HD). 
The participants included 50 phakic eyes with normal color vision (NCV) and 20 dichromatic eyes (ten with protanopia and ten with deuteranopia). The CCT-HD was used to measure L, M, and S-CCT-HD scores (color vision deficiency, L-, M-cone logCS≦1.65, S-cone logCS≦0.425) to investigate the sensitivity and specificity of CCT-HD based on anomalous-type diagnosis with the anomaloscope. The mean ± standard error L-, M-, and S-cone logCS for protanopia were 0.90±0.04, 1.65±0.03, and 0.63±0.02, respectively; for deuteranopia 1.74±0.03, 1.31±0.03, and 0.61±0.06, respectively; and for age-matched NCV 1.89±0.04, 1.84±0.04, and 0.60±0.03, respectively, with significant differences for each group except for S-CCT-HD (Bonferroni corrected α = 0.0167, p < 0.0167). The sensitivity and specificity of CCT-HD were 100% for protan and deutan in diagnosing abnormal types from 20 to 64 years of age, but the specificity decreased to 65% for protan and 55% for deutan in older persons (> 65 years). CCT-HD is comparable to the anomaloscope's anomalous-type diagnostic performance for the 20-64-year-old age group. However, results for those ≥ 65 years should be interpreted cautiously, as older eyes are more susceptible to acquired color vision deficiencies due to yellowing of the crystalline lens and other factors. 
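As a rough illustration of the cutoff-based screening this abstract describes, the sketch below flags a red-green deficiency when the L- or M-cone logCS falls at or below the quoted 1.65 cutoff, then scores the screen against anomaloscope diagnoses. The helper names and the tiny cohort are our own illustrative assumptions, not the study's data.

```python
# Screening rule quoted in the abstract: L- or M-cone logCS <= 1.65
# indicates a red-green color vision deficiency.
def cct_flags_deficiency(l_logcs, m_logcs, cutoff=1.65):
    """Return True if CCT-HD scores indicate a red-green deficiency."""
    return l_logcs <= cutoff or m_logcs <= cutoff

def sensitivity_specificity(subjects):
    """subjects: list of (l_logcs, m_logcs, deficient_by_anomaloscope)."""
    tp = fp = tn = fn = 0
    for l, m, truth in subjects:
        flagged = cct_flags_deficiency(l, m)
        if truth and flagged:
            tp += 1
        elif truth and not flagged:
            fn += 1
        elif not truth and flagged:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative cohort using the reported group means: one protanope,
# one deuteranope, and two normal trichromats.
cohort = [(0.90, 1.65, True), (1.74, 1.31, True),
          (1.89, 1.84, False), (1.95, 1.90, False)]
sens, spec = sensitivity_specificity(cohort)
```

On this toy cohort both sensitivity and specificity come out at 1.0, mirroring the 100% figures the study reports for the 20-64 age group.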
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20test%20HD" title="cone contrast test HD">cone contrast test HD</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20vision%20test" title=" color vision test"> color vision test</a>, <a href="https://publications.waset.org/abstracts/search?q=congenital%20color%20vision%20deficiency" title=" congenital color vision deficiency"> congenital color vision deficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=red-green%20dichromacy" title=" red-green dichromacy"> red-green dichromacy</a>, <a href="https://publications.waset.org/abstracts/search?q=cone%20contrast%20sensitivity" title=" cone contrast sensitivity"> cone contrast sensitivity</a> </p> <a href="https://publications.waset.org/abstracts/159154/cone-contrast-sensitivity-of-normal-trichromats-and-those-with-red-green-dichromats" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159154.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11098</span> Evaluating the Performance of Color Constancy Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Damanjit%20Kaur">Damanjit Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Avani%20Bhatia"> Avani Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color constancy is significant for human vision since color is a pictorial cue that helps in solving different vision tasks such as tracking, object recognition, or categorization. 
Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine the colors of objects independent of the color of the light source. This research work studies most of the well-known color constancy algorithms, such as white patch and gray world. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20constancy" title="color constancy">color constancy</a>, <a href="https://publications.waset.org/abstracts/search?q=gray%20world" title=" gray world"> gray world</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20patch" title=" white patch"> white patch</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20white%20patch" title=" modified white patch "> modified white patch </a> </p> <a href="https://publications.waset.org/abstracts/4799/evaluating-the-performance-of-color-constancy-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4799.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11097</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for the color-blind. We treat crisp, colorful images such as the Tokyo subway map. Each color in the image carries important information. But color-blind viewers cannot distinguish similar colors. If we can convert those colors to distinct gray values, they become distinguishable. Therefore, we convert color images to monochrome images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11096</span> Enhancing the Bionic Eye: A Real-time Image Optimization Framework to Encode Color and Spatial Information Into Retinal 
Prostheses</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=William%20Huang">William Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal prostheses are currently limited to low-resolution grayscale images that lack color and spatial information. This study develops a novel real-time image optimization framework and tools to encode maximum information into the prostheses, which are constrained by the number of electrodes. One key idea is to localize main objects in images while reducing unnecessary background noise through region-contrast saliency maps. A novel color depth mapping technique was developed through MiniBatchKmeans clustering and color space selection. The resulting image was downsampled using bicubic interpolation to reduce image size while preserving color quality. In comparison to current schemes, the proposed framework demonstrated better visual quality in tested images. The use of the region-contrast saliency map showed improvements in efficacy of up to 30%. Finally, the algorithm runs in under 380 ms on tested cases, making real-time retinal prostheses feasible. 
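The color-depth-reduction step this abstract describes can be sketched as palette clustering. The snippet below uses a plain k-means written in NumPy as a stand-in for scikit-learn's MiniBatchKMeans; the image, palette size, and iteration budget are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def quantize_colors(pixels, k=4, iters=10, seed=0):
    """Cluster an (N, 3) array of RGB pixels down to k palette colors."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest palette color.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each palette color to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers[labels]

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))                    # illustrative input image
quant = quantize_colors(img.reshape(-1, 3), k=4).reshape(img.shape)
n_colors = len(np.unique(quant.reshape(-1, 3), axis=0))
```

After quantization the image contains at most `k` distinct colors, which is the property that lets the framework map a rich scene onto a small electrode budget.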
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20implants" title="retinal implants">retinal implants</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20processing%20unit" title=" virtual processing unit"> virtual processing unit</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=saliency%20maps" title=" saliency maps"> saliency maps</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quantization" title=" color quantization"> color quantization</a> </p> <a href="https://publications.waset.org/abstracts/147972/enhancing-the-bionic-eye-a-real-time-image-optimization-framework-to-encode-color-and-spatial-information-into-retinal-prostheses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147972.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11095</span> Automatic Detection of Proliferative Cells in Immunohistochemically Images of Meningioma Using Fuzzy C-Means Clustering and HSV Color Space</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vahid%20Anari">Vahid Anari</a>, <a href="https://publications.waset.org/abstracts/search?q=Mina%20Bakhshi"> Mina Bakhshi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual search and identification of immunohistochemically stained tissue of meningioma was performed manually in pathologic laboratories to detect and diagnose the cancer type of meningioma. 
This task is very tedious and time-consuming. Moreover, because of the cells' complex nature, it remains a challenging task to segment cells from their background and analyze them automatically. In this paper, we develop and test a computerized scheme that can automatically identify cells in microscopic images of meningioma and classify them into positive (proliferative) and negative (normal) cells. A dataset including 150 images is used to test the scheme. The scheme uses the Fuzzy C-means algorithm as a color clustering method based on the perceptually uniform hue, saturation, value (HSV) color space. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively compared through application to a wide variety of real images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=positive%20cell" title="positive cell">positive cell</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20segmentation" title=" color segmentation"> color segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color%20space" title=" HSV color space"> HSV color space</a>, <a href="https://publications.waset.org/abstracts/search?q=immunohistochemistry" title=" immunohistochemistry"> immunohistochemistry</a>, <a href="https://publications.waset.org/abstracts/search?q=meningioma" title=" meningioma"> meningioma</a>, <a href="https://publications.waset.org/abstracts/search?q=thresholding" title=" thresholding"> thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20c-means" title=" fuzzy c-means"> fuzzy c-means</a> </p> <a href="https://publications.waset.org/abstracts/109638/automatic-detection-of-proliferative-cells-in-immunohistochemically-images-of-meningioma-using-fuzzy-c-means-clustering-and-hsv-color-space" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109638.pdf" 
target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11094</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection is recognized as the primary step in most applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection application greatly relies on two components: feature extraction and classification method. Skin color is the most vital information used for skin detection. However, color features alone sometimes cannot handle images whose background shares the same color distribution as skin. Pixel-based color features do not eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color analysis, such as the mean and standard deviation, is exploited as an additional feature to increase the reliability of the skin detector. In this paper, we studied the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzed the integrated color and texture features using eight classifiers with three color spaces of RGB, YCbCr, and HSV. 
The experimental results show that integrating statistical features using a Random Forest classifier achieved significant performance, with an F1-score of 0.969. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11093</span> Hand Detection and Recognition for Malay Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Noah%20A.%20Rahman">Mohd Noah A. Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Afzaal%20H.%20Seyal"> Afzaal H. 
Seyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Norhafilah%20Bara"> Norhafilah Bara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Developing software applications that interface with computers and peripheral devices through human body gestures, such as hand movements, keeps growing in interest. Hand gesture detection and recognition based on computer vision techniques remains a very challenging task. The aim is to provide a more natural, innovative, and sophisticated way of non-verbal communication, such as sign language, in human computer interaction. This paper explores hand detection and hand gesture recognition applying a vision-based approach. Skin color spaces such as HSV and YCrCb are applied for hand detection and recognition. However, there are limitations that need to be considered. Almost all skin color space models are sensitive to quickly changing or mixed lighting circumstances. There are certain restrictions for the hand recognition to give better results, such as the distance of the user’s hand to the webcam and the posture and size of the hand. 
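The HSV skin-color gating mentioned in this abstract can be sketched with the standard library's `colorsys` conversion. The hue/saturation/value bounds below are commonly used illustrative values, not the paper's calibration.

```python
import colorsys

def is_skin_hsv(r, g, b, h_max=50 / 360, s_min=0.23, s_max=0.68, v_min=0.35):
    """Classify one RGB pixel (channels in [0, 1]) as skin or not,
    by thresholding hue, saturation, and value."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h <= h_max and s_min <= s <= s_max and v >= v_min

# A warm, low-saturation tone vs. a strongly blue background pixel.
skin_pixel = (0.9, 0.7, 0.6)
background = (0.1, 0.2, 0.9)
```

Applied per pixel over a webcam frame, the same test yields a binary mask whose connected components are candidate hand regions; as the abstract notes, such fixed thresholds are fragile under quickly changing or mixed lighting.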
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title=" hand gesture"> hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/46765/hand-detection-and-recognition-for-malay-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11092</span> Spectra Analysis in Sunset Color Demonstrations with a White-Color LED as a Light Source</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Makoto%20Hasegawa">Makoto Hasegawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Seika%20Tokumitsu"> Seika Tokumitsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spectra of light beams emitted from white-color LED torches are different from those of conventional electric torches. 
In order to confirm whether white-color LED torches can be used as light sources for popular sunset color demonstrations in spite of such differences, spectra of travelled light beams and scattered light beams with each of a white-color LED torch (composed of a blue LED and yellow-color fluorescent material) and a conventional electric torch as a light source were measured and compared with each other in a 50 cm-long water tank for sunset color demonstration experiments. Suspension liquid was prepared from acryl-emulsion and tap-water in the water tank, and light beams from the white-color LED torch or the conventional electric torch were allowed to travel in this suspension liquid. Sunset-like color was actually observed when the white-color LED torch was used as the light source in sunset color demonstrations. However, the observed colors, when viewed with the naked eye, look slightly different from those obtainable with the conventional electric torch. At the same time, with the white-color LED, changes in colors in short to middle wavelength regions were recognized with careful observations. From those results, white-color LED torches are confirmed to be applicable as light sources in sunset color demonstrations, although certain care must be taken. Further advanced classes can then be successfully performed with white-color LED torches as light sources. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blue%20sky%20demonstration" title="blue sky demonstration">blue sky demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=sunset%20color%20demonstration" title=" sunset color demonstration"> sunset color demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20LED%20torch" title=" white LED torch"> white LED torch</a>, <a href="https://publications.waset.org/abstracts/search?q=physics%20education" title=" physics education"> physics education</a> </p> <a href="https://publications.waset.org/abstracts/47625/spectra-analysis-in-sunset-color-demonstrations-with-a-white-color-led-as-a-light-source" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11091</span> Burnout Recognition for Call Center Agents by Using Skin Color Detection with Hand Poses </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Sayed%20A.%20Sharara">El Sayed A. Sharara</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tsuji"> A. Tsuji</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Call centers have been expanding, and their influence on various markets keeps increasing. A call center’s work is known as one of the most demanding and stressful jobs. 
In this paper, we propose a fatigue detection system to detect burnout of call center agents in cases of neck pain and upper back pain. Our proposed system is based on a computer vision technique that combines skin color detection with the Viola-Jones object detector. To recognize hand poses caused by stress signs, the YCbCr color space is used to detect the skin color region, including the face and hand poses around the areas related to neck ache and upper back pain. A cascade of classifiers by Viola-Jones is used for face recognition, extracted from the skin color region. The detection of hand poses then supports the evaluation of neck pain and upper back pain by combining skin color detection with the face recognition method. The system performance is evaluated using two groups of datasets created in the laboratory to simulate a call center environment. Our call center agent burnout detection system has been implemented with a web camera and processed in MATLAB. From the experimental results, our system achieved 96.3% for upper back pain detection and 94.2% for neck pain detection. 
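The YCbCr skin-color gate this abstract relies on can be sketched as below; the ITU-R BT.601 conversion is standard, but the Cb/Cr skin ranges are commonly cited illustrative values, not the paper's own calibration, and the Viola-Jones face-detection stage is omitted.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range conversion; inputs and outputs in 0..255."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_ycbcr(r, g, b):
    """True when (Cb, Cr) falls inside a commonly used skin region,
    independent of luminance Y."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

skin = is_skin_ycbcr(229, 178, 153)   # warm skin tone
blue = is_skin_ycbcr(25, 50, 230)     # blue background
```

Working in the chroma plane (Cb, Cr) rather than RGB is what makes the mask relatively tolerant to brightness changes, which matters in an office environment with varying lighting.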
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=call%20center%20agents" title="call center agents">call center agents</a>, <a href="https://publications.waset.org/abstracts/search?q=fatigue" title=" fatigue"> fatigue</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20color%20detection" title=" skin color detection"> skin color detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a> </p> <a href="https://publications.waset.org/abstracts/74913/burnout-recognition-for-call-center-agents-by-using-skin-color-detection-with-hand-poses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11090</span> A Neural Approach for Color-Textured Images Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalid%20Salhi">Khalid Salhi</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Miloud%20Jaara"> El Miloud Jaara</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Talibi%20Alaoui"> Mohammed Talibi Alaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a neural approach for unsupervised natural color-texture image segmentation, based on both Kohonen maps and mathematical morphology, using a combination of the texture and color information of the image: fractal features based on the fractal dimension are selected to represent the texture information, and color features are represented in the RGB color space. These features are then used to train the Kohonen network, which represents the underlying probability density function; the segmentation of this map is made by a morphological watershed transformation. The performance of our color-texture segmentation approach is compared first to color-only or texture-only methods, and then to the k-means method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=color-texture" title=" color-texture"> color-texture</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal" title=" fractal"> fractal</a>, <a href="https://publications.waset.org/abstracts/search?q=watershed" title=" watershed"> watershed</a> </p> <a href="https://publications.waset.org/abstracts/51740/a-neural-approach-for-color-textured-images-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11089</span> Domain Adaptation Save Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simiao%20Ren">Simiao Ren</a>, <a href="https://publications.waset.org/abstracts/search?q=En%20Wei"> En Wei</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to the domain shift caused by the visual gap (potentially due to lighting, indoor scene changes, pool floor color, etc.) between the training swimming pool and the test swimming pool, the robustness of such algorithms has been questionable. The annotation cost of labeling each new swimming pool is too expensive for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on the Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method requires only background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drowning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title=" YOLOv8"> YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=swimming%20pool" title=" swimming pool"> swimming pool</a>, <a href="https://publications.waset.org/abstracts/search?q=drowning" title=" drowning"> drowning</a>, <a href="https://publications.waset.org/abstracts/search?q=domain%20adaptation" title=" domain adaptation"> domain adaptation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20network" title=" generative adversarial network"> generative adversarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a>, <a href="https://publications.waset.org/abstracts/search?q=GP-GAN" title=" GP-GAN"> GP-GAN</a> </p> <a href="https://publications.waset.org/abstracts/163443/domain-adaptation-save-lives-drowning-detection-in-swimming-pool-scene-based-on-yolov8-improved-by-gaussian-poisson-generative-adversarial-network-augmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11088</span> Experimental Characterization of the Color Quality and Error Rate for an Red, Green, and Blue-Based Light Emission Diode-Fixture Used in Visible 
Light Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20F.%20Gutierrez">Juan F. Gutierrez</a>, <a href="https://publications.waset.org/abstracts/search?q=Jesus%20M.%20Quintero"> Jesus M. Quintero</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Sandoval"> Diego Sandoval</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An important feature of LED technology is its fast on-off switching, which allows data transmission. Visible Light Communication (VLC) is a wireless method of transmitting data with visible light. Modulation formats such as On-Off Keying (OOK) and Color Shift Keying (CSK) are used in VLC. Since CSK is based on three color bands, it uses red, green, and blue monochromatic LEDs (RGB-LED) to define a pattern of chromaticities. This type of CSK provides poor color quality in the illuminated area. This work presents the design and implementation of a VLC system using RGB-based CSK with 16, 8, and 4 color points, mixed with a steady baseline from a phosphor white LED, to improve the color quality of the LED fixture. The experimental system was assessed in terms of the Color Rendering Index (CRI) and the Symbol Error Rate (SER). Good color quality performance of the LED fixture was obtained with an acceptable SER. The laboratory setup used to characterize and calibrate the LED fixture is described. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=VLC" title="VLC">VLC</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20lighting" title=" indoor lighting"> indoor lighting</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quality" title=" color quality"> color quality</a>, <a href="https://publications.waset.org/abstracts/search?q=symbol%20error%20rate" title=" symbol error rate"> symbol error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20shift%20keying" title=" color shift keying"> color shift keying</a> </p> <a href="https://publications.waset.org/abstracts/158336/experimental-characterization-of-the-color-quality-and-error-rate-for-an-red-green-and-blue-based-light-emission-diode-fixture-used-in-visible-light-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11087</span> Cigarette Smoke Detection Based on YOLOV3</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Li">Wei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuo%20Yang"> Tuo Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to satisfy the real-time and accurate requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technology based on the combination of deep learning and color features was proposed. Firstly, based on the color features of cigarette smoke, the suspicious cigarette smoke area in the image is extracted. 
Secondly, considering both detection efficiency and the problem of network overfitting, a network model for cigarette smoke detection was designed based on the YOLOV3 algorithm to reduce the false detection rate. The experimental results show that the method is feasible and effective: the accuracy of cigarette smoke detection reaches 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=cigarette%20smoke%20detection" title=" cigarette smoke detection"> cigarette smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction" title=" color feature extraction"> color feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/159151/cigarette-smoke-detection-based-on-yolov3" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11086</span> Vision Based People Tracking System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boukerch%20Haroun">Boukerch Haroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Luo%20Qing%20Sheng"> Luo Qing
Sheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Hua%20Shi"> Li Hua Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Boukraa%20Sebti"> Boukraa Sebti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present the design and implementation of a target tracking system where the target is a moving person in a video sequence. The system can easily be applied as a vision system for a mobile robot. It is composed of two major parts: the first is the detection of the person in the video frame, using an SVM classifier based on HOG descriptors; the second is the tracking of the moving person, done by combining a Kalman filter with a modified version of the Camshift tracking algorithm that adds a target motion feature to the color feature. The experimental results show that the new algorithm outperforms the traditional Camshift algorithm in robustness and in cases of occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camshift%20algorithm" title="camshift algorithm">camshift algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a> </p> <a href="https://publications.waset.org/abstracts/2264/vision-based-people-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11085</span> The Impact of the “Cold Ambient Color = Healthy” Intuition on Consumer Food Choice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yining%20Yu">Yining Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bingjie%20Li"> Bingjie Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Miaolei%20Jia"> Miaolei Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Wang"> Lei Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ambient color temperature is one of the most ubiquitous factors in retailing. However, there is limited research regarding the effect of cold versus warm ambient color on consumers’ food consumption. This research investigates an unexplored lay belief named the “cold ambient color = healthy” intuition and its impact on food choice. 
We demonstrate that consumers have built the “cold ambient color = healthy” intuition, such that they infer that a restaurant with a cold-colored ambiance is more likely to sell healthy food than a warm-colored restaurant. This deep-seated intuition also guides consumers’ food choices. We find that using a cold (vs. warm) ambient color increases the choice of healthy food, which offers insights into healthy diet promotion for retailers and policymakers. Theoretically, our work contributes to the literature on color psychology, sensory marketing, and food consumption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ambient%20color%20temperature" title="ambient color temperature">ambient color temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=cold%20ambient%20color" title=" cold ambient color"> cold ambient color</a>, <a href="https://publications.waset.org/abstracts/search?q=food%20choice" title=" food choice"> food choice</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20wellbeing" title=" consumer wellbeing"> consumer wellbeing</a> </p> <a href="https://publications.waset.org/abstracts/148864/the-impact-of-the-cold-ambient-color-healthy-intuition-on-consumer-food-choice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11084</span> Costume Design Influenced by Seventeenth Century Color Palettes on a Contemporary Stage</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michele%20L.%20Dormaier">Michele L. 
Dormaier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the research was to design costumes based on the historic colors used by artists during the seventeenth century. The researcher investigated European art, primarily paintings and portraiture, as well as the color palettes used by the artists. The methodology examined the artists, their work, the color palettes used in their work, and the practices of color usage within their palettes. Examining portraits of historic figures, as well as paintings of ordinary scenes, subjects, and people, revealed further information about the color palettes. Related to these palettes was the use of ‘broken colors’, a relatively new practice dating from the sixteenth century. The color palettes used by seventeenth-century artists had their limitations due to the available pigments. By examining not only the artwork but also the palettes more closely, the researcher discovered the exciting choices the artists made despite those restrictions. The research also considered the historical elements of the era’s clothing, as well as the available materials and dyes. These dyes were limited in much the same manner as the pigments the artists had at their disposal. The color palettes of the paintings have much to tell us about the lives, status, conditions, and relationships of the past. From this research, informed decisions could be made regarding color choices for a contemporary staging of a period piece. The designer’s choices were a historic gesture to the colors that might have been worn by the characters’ real-life counterparts of the era. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=broken%20color%20palette" title="broken color palette">broken color palette</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20color%20research" title=" costume color research"> costume color research</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20design" title=" costume design"> costume design</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20history" title=" costume history"> costume history</a>, <a href="https://publications.waset.org/abstracts/search?q=seventeenth%20century%20color%20palette" title=" seventeenth century color palette"> seventeenth century color palette</a>, <a href="https://publications.waset.org/abstracts/search?q=sixteenth%20century%20color%20palette" title=" sixteenth century color palette"> sixteenth century color palette</a> </p> <a href="https://publications.waset.org/abstracts/87451/costume-design-influenced-by-seventeenth-century-color-palettes-on-a-contemporary-stage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11083</span> Effect of Blanching and Drying Methods on the Degradation Kinetics and Color Stability of Radish (Raphanus sativus) Leaves</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Radha%20Krishnan">K. 
Radha Krishnan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mirajul%20Alom"> Mirajul Alom</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dehydrated powder prepared from fresh radish (Raphanus sativus) leaves was investigated for color stability under different drying methods (tray, sun, and solar). The effects of blanching conditions, drying methods, and drying temperatures (50–90°C) were considered in studying the color degradation kinetics of chlorophyll in the dehydrated powder. The Hunter color parameters (L*, a*, b*) and total color difference (TCD) were determined in order to investigate the color degradation kinetics of chlorophyll. Blanching conditions, drying method, and drying temperature influenced the changes in L*, a*, b* and TCD values. The changes in color values during processing were described by a first-order kinetic model. The temperature dependence of chlorophyll degradation was adequately modeled by the Arrhenius equation. To predict the losses in green color, a mathematical model was developed from the steady-state kinetic parameters. The results of this study indicate the protective effect of blanching conditions on the color stability of dehydrated radish powder. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chlorophyll" title="chlorophyll">chlorophyll</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20stability" title=" color stability"> color stability</a>, <a href="https://publications.waset.org/abstracts/search?q=degradation%20kinetics" title=" degradation kinetics"> degradation kinetics</a>, <a href="https://publications.waset.org/abstracts/search?q=drying" title=" drying"> drying</a> </p> <a href="https://publications.waset.org/abstracts/44880/effect-of-blanching-and-drying-methods-on-the-degradation-kinetics-and-color-stability-of-radish-raphanus-sativus-leaves" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44880.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11082</span> Visual Improvement with Low Vision Aids in Children with Stargardt’s Disease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anum%20Akhter">Anum Akhter</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumaira%20Altaf"> Sumaira Altaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: To study the effect of low vision devices, i.e., telescopes and magnifying glasses, on the distance and near visual acuity of children with Stargardt’s disease. Setting: Low vision department, Alshifa Trust Eye Hospital, Rawalpindi, Pakistan. Methods: 52 children with Stargardt’s disease were included in the study. All children were diagnosed by pediatric ophthalmologists. A comprehensive low vision assessment was carried out in the low vision clinic. 
Visual acuity was measured using an ETDRS chart. Refraction and other supplementary tests were performed. Children with Stargardt’s disease were provided with different telescopes and magnifying glasses to improve far and near vision. Results: Of the 52 children, 17 were male and 35 were female. Distance and near visual acuity improved significantly with the low vision aid trial. All children achieved visual acuity better than 6/19 with a telescope of higher magnification. Improvement in near visual acuity was also significant with the magnifying glass trial. Conclusions: Low vision aids are useful for improving visual acuity in children. Children with Stargardt’s disease who have problems with education and daily life activities can benefit from low vision aids. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stargardt" title="Stargardt">Stargardt</a>, <a href="https://publications.waset.org/abstracts/search?q=s%20disease" title="s disease">s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision%20aids" title=" low vision aids"> low vision aids</a>, <a href="https://publications.waset.org/abstracts/search?q=telescope" title=" telescope"> telescope</a>, <a href="https://publications.waset.org/abstracts/search?q=magnifiers" title=" magnifiers"> magnifiers</a> </p> <a href="https://publications.waset.org/abstracts/24382/visual-improvement-with-low-vision-aids-in-children-with-stargardts-disease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">539</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11081</span> Image
Segmentation Using 2-D Histogram in RGB Color Space in Digital Libraries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Asnaoui%20Khalid">El Asnaoui Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Aksasse%20Brahim"> Aksasse Brahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ouanan%20Mohammed"> Ouanan Mohammed </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an unsupervised color image segmentation method based on a hierarchical analysis of the 2-D histogram in the RGB color space. This histogram minimizes the storage space of images and thus facilitates operations between them. The improved segmentation approach shows a better identification of objects in a color image and, at the same time, the system is fast. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=hierarchical%20analysis" title=" hierarchical analysis"> hierarchical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=2-D%20histogram" title=" 2-D histogram"> 2-D histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/42096/image-segmentation-using-2-d-histogram-in-rgb-color-space-in-digital-libraries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span
class="badge badge-info">11080</span> Parallel Version of Reinhard’s Color Transfer Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abhishek%20Bhardwaj">Abhishek Bhardwaj</a>, <a href="https://publications.waset.org/abstracts/search?q=Manish%20Kumar%20Bajpai"> Manish Kumar Bajpai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An image, with its content and color scheme, is an effective medium for sharing and processing information. By changing an image’s color scheme, users can discover different views and perspectives. This color transfer technique is used by social media and other entertainment channels. The algorithm of Reinhard et al. was the first to solve this color transfer problem. In this paper, we make this algorithm more efficient by introducing domain parallelism among different processors. We also comment on the factors that affect the speedup of this problem. Finally, by analyzing the experimental data, we propose a novel and efficient parallel version of Reinhard’s algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reinhard%20et%20al%E2%80%99s%20algorithm" title="Reinhard et al’s algorithm">Reinhard et al’s algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20transferring" title=" color transferring"> color transferring</a>, <a href="https://publications.waset.org/abstracts/search?q=parallelism" title=" parallelism"> parallelism</a>, <a href="https://publications.waset.org/abstracts/search?q=speedup" title=" speedup"> speedup</a> </p> <a href="https://publications.waset.org/abstracts/21874/parallel-version-of-reinhards-color-transfer-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">614</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11079</span> A Custom Convolutional Neural Network with Hue, Saturation, Value Color for Malaria Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghazala%20Hcini">Ghazala Hcini</a>, <a href="https://publications.waset.org/abstracts/search?q=Imen%20Jdey"> Imen Jdey</a>, <a href="https://publications.waset.org/abstracts/search?q=Hela%20Ltifi"> Hela Ltifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malaria should be considered and handled as a potential medical catastrophe. One of the most challenging tasks in the field of microscopy image processing arises from differences in test design and the variability of cell classifications. In this article, we focus on applying deep learning to classify patients by identifying images of infected and uninfected cells. 
We performed multiple experiments, including a classification approach using the Hue, Saturation, Value (HSV) color space; HSV is used because of its superior ability to represent image brightness. Finally, for classification, a convolutional neural network (CNN) architecture is created. Clusters of interest were used to produce the classification. The extracted features were further processed, and several additional noise types were included in the data. The suggested method has a precision of 99.79%, a recall of 99.55%, and an accuracy of 99.96%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20transformation" title=" color transformation"> color transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color" title=" HSV color"> HSV color</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20diagnosis" title=" malaria diagnosis"> malaria diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20cells%20images" title=" malaria cells images"> malaria cells images</a> </p> <a href="https://publications.waset.org/abstracts/161232/a-custom-convolutional-neural-network-with-hue-saturation-value-color-for-malaria-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11078</span> Bag of Words Representation Based on Fusing Two Color Local Descriptors and Building Multiple Dictionaries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Abdedayem">Fatma Abdedayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose an extension of the well-known Bag of Words (BOW) method, which has played a successful role in the field of image categorization. In practice, this method is based on representing an image with visual words. In this work, we first extract features from images using a Spatial Pyramid Representation (SPR) and two dissimilar color descriptors, opponent-SIFT and transformed-color-SIFT. Secondly, we fuse the color local features by joining the two histograms coming from these descriptors. Thirdly, after collecting all the features, we generate multiple dictionaries from n random feature subsets obtained by dividing all the features into n random groups. Using these dictionaries separately, each image can then be represented by n histograms, which are finally concatenated horizontally to form the final histogram; this allows Multiple Dictionaries to be combined (MDBoW). In the final step, in order to classify images, we apply a Support Vector Machine (SVM) to the generated histograms. Experimentally, we used two different image datasets to test our proposal: Caltech 256 and PASCAL VOC 2007. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20words%20%28BOW%29" title="bag of words (BOW)">bag of words (BOW)</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20descriptors" title=" color descriptors"> color descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-dictionaries" title=" multi-dictionaries"> multi-dictionaries</a>, <a href="https://publications.waset.org/abstracts/search?q=MDBoW" title=" MDBoW"> MDBoW</a> </p> <a href="https://publications.waset.org/abstracts/14637/bag-of-words-representation-based-on-fusing-two-color-local-descriptors-and-building-multiple-dictionaries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14637.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11077</span> Content-Based Image Retrieval Using HSV Color Space Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Qazanfari">Hamed Qazanfari</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Hassanpour"> Hamid Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazem%20Qazanfari"> Kazem Qazanfari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for content-based image retrieval is presented. A content-based image retrieval system searches an image database for images whose visual content is similar to that of a query image. 
In this paper, with the aim of simulating the human visual system's sensitivity to an image's edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to strengthen the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria, so that the final features capture image content most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k, and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title="content-based image retrieval">content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20difference%20histogram" title=" color difference histogram"> color difference histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=efficient%20features%20selection" title=" efficient features selection"> efficient features selection</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a> </p> <a href="https://publications.waset.org/abstracts/75068/content-based-image-retrieval-using-hsv-color-space-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75068.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light 
px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11076</span> The Role of Metallic Mordant in Natural Dyeing Process: Experimental and Quantum Study on Color Fastness</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo-Gaun%20Chen">Bo-Gaun Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Chiung-Hui%20Huang"> Chiung-Hui Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Mei-Ching%20Chiang"> Mei-Ching Chiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuo-Hsing%20Lee"> Kuo-Hsing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Chia-Chen%20Ho"> Chia-Chen Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Chin-Ping%20Huang"> Chin-Ping Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chin-Heng%20Tien"> Chin-Heng Tien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is known that natural dyeing of cloth yields moderate color but poor color fastness. This study points out the correlation between the macroscopic color fastness of a natural dye on cotton fiber and the microscopic binding energy of the dye molecule to the cellulose. With an added metallic mordant, the newly formed coordination bond bridges the dye to the fiber surface and thus affects the color fastness as well as the color appearance. Density functional theory (DFT) calculations are therefore used to explore the most probable mechanism of the dyeing process. Finally, the experimental results reflect the strong effect of three different metal ions on the naturally dyed cloth. 
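Color fastness is commonly quantified by the color difference between a dyed sample before and after a fastness (e.g. washing) test. A minimal sketch of the CIE76 ΔE*ab color difference, with hypothetical CIELAB measurements (our illustration, not the study's data or protocol):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB colors (L*, a*, b*).
    A larger dE means a more visible color change, i.e. poorer fastness."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# hypothetical measurements of a dyed swatch before/after washing
before = (52.0, 18.5, 30.2)
after_no_mordant = (58.0, 12.1, 24.8)
after_mordanted = (53.1, 17.6, 29.4)

print(delta_e_cie76(before, after_no_mordant))  # larger shift: poorer fastness
print(delta_e_cie76(before, after_mordanted))   # smaller shift: better fastness
```

A mordant that improves fastness would show up as a smaller ΔE after the same washing treatment.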
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binding%20energy" title="binding energy">binding energy</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20fastness" title=" color fastness"> color fastness</a>, <a href="https://publications.waset.org/abstracts/search?q=density%20functional%20theory%20%28DFT%29" title=" density functional theory (DFT)"> density functional theory (DFT)</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20dyeing" title=" natural dyeing"> natural dyeing</a>, <a href="https://publications.waset.org/abstracts/search?q=metallic%20mordant" title=" metallic mordant"> metallic mordant</a> </p> <a href="https://publications.waset.org/abstracts/37833/the-role-of-metallic-mordant-in-natural-dyeing-process-experimental-and-quantum-study-on-color-fastness" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37833.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">558</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11075</span> Effect of Color on Anagram Solving Ability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khushi%20Chhajed">Khushi Chhajed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Context: Color has been found to have an impact on cognitive performance. Due to the negative connotation associated with red, it has been found to impair performance on intellectual tasks. Aim: This study aims to assess the effect of color on individuals' anagram solving ability. Methodology: An experimental study was conducted on 66 participants in the age group of 18–24 years. 
A self-made anagram assessment tool was administered, which participants were asked to solve in three colors: red, blue, and grey. Results: A lower score was found when the tool was presented in blue as compared to red. The study also found that participants took relatively longer to solve the red-colored sheet. However, these results are inconsistent with pre-existing literature. Conclusion: Hence, an association between color and performance on cognitive tasks can be seen. Future directions and potential limitations are discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20psychology" title="color psychology">color psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=experiment" title=" experiment"> experiment</a>, <a href="https://publications.waset.org/abstracts/search?q=anagram" title=" anagram"> anagram</a>, <a href="https://publications.waset.org/abstracts/search?q=performance" title=" performance"> performance</a> </p> <a href="https://publications.waset.org/abstracts/160096/effect-of-color-on-anagram-solving-ability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11074</span> Functional Vision of Older People with Cognitive Impairment Living in Galician Nursing Homes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20V%C3%A1zquez">C. Vázquez</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20M.%20Gigirey"> L. M. 
Gigirey</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20P.%20del%20Oro"> C. P. del Oro</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Seoane"> S. Seoane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Poor vision is common among older people, and several studies show connections between visual impairment and cognitive function. 15 older adults live in Galician Government nursing homes, and cognitive decline is one of the main reasons for admission. Objectives: (1) To evaluate the functional far and near vision of older people with cognitive impairment. (2) To determine connections between the visual and cognitive state of “our” residents. Methodology: A total of 364 older adults (aged 65 years or more) underwent visual and cognitive screening. We tested presenting visual acuity (binocular visual acuity with habitual correction, if worn) for distance and near vision (E-Snellen, usual working distance for near vision). Binocular presenting visual acuity of less than 0.3 was used as the cut-off point for diagnosis of visual impairment. Exclusion criteria included immobilized residents unable to reach the USC Dual Sensory Loss Unit for visual screening. To screen cognition, we employed the mini-mental examination test (Spanish version). Analysis of categorical variables was performed using chi-square tests. We used Pearson and Spearman correlation tests and analysis of variance to determine differences between groups of interest (SPSS version 19.0). Results: The percentage of residents with cognitive decline reaches 32.2%. The prevalence of visual impairment for distance and near vision increases among subjects with cognitive impairment compared with those with normal cognition. A correlation exists between distance visual acuity and mini-mental test score (age and sex controlled), and a moderate association was found for near vision (p<0.01). 
Conclusion: First results show that people with cognitive impairment have poorer functional distance and near vision than those with normal cognition. The next step will be to analyse the individual contribution of distance and near vision loss to cognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20impairment" title="visual impairment">visual impairment</a>, <a href="https://publications.waset.org/abstracts/search?q=cognition" title=" cognition"> cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=aging" title=" aging"> aging</a>, <a href="https://publications.waset.org/abstracts/search?q=nursing%20homes" title=" nursing homes"> nursing homes</a> </p> <a href="https://publications.waset.org/abstracts/17992/functional-vision-of-older-people-with-cognitive-impairment-living-in-galician-nursing-homes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17992.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">428</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11073</span> Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joseph%20C.%20Chen">Joseph C. Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine vision systems provide automatic inspection that can reduce manufacturing costs considerably. However, only a few principles have been established for optimizing a machine vision system so that it functions more accurately in industrial practice. Most existing design techniques for improving the accuracy of machine vision systems are complicated and impractical. 
This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system when it is used as a direct measurement technique. This research follows a case study showing how the Six Sigma DMAIC methodology has been put into use. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DMAIC" title="DMAIC">DMAIC</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision%20system" title=" machine vision system"> machine vision system</a>, <a href="https://publications.waset.org/abstracts/search?q=process%20capability" title=" process capability"> process capability</a>, <a href="https://publications.waset.org/abstracts/search?q=Taguchi%20Parameter%20Design" title=" Taguchi Parameter Design"> Taguchi Parameter Design</a> </p> <a href="https://publications.waset.org/abstracts/68243/optimizing-machine-vision-system-setup-accuracy-by-six-sigma-dmaic-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">437</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11072</span> Understanding Perceptual Differences and Preferences of Urban Color in New Taipei City</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuheng%20Tao">Yuheng Tao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rapid urbanization has produced urban systems that are incompatible and excessively homogeneous, and urban color planning has become one of the most effective ways to restore the characteristics of cities. 
Among the many studies of urban color design, the establishment of urban theme colors has rarely been discussed. This study took the “New Taipei City Environmental Aesthetic Color” project as a research case and conducted mixed-method research that included expert interviews and quantitative survey data. This study introduces how theme colors were selected by the experts and investigates the public's perception and preference regarding the selected theme colors. Several findings emerge: 1) urban memory plays a significant role in determining urban theme colors; 2) when establishing urban theme colors, areas/cities with relatively weak urban memory should be defined first; 3) urban theme colors that imply cultural attributes are more widely accepted by the public; 4) a representative city theme color helps conserve culture rather than guide innovation. In addition, this research reorganizes urban color symbolism and the specific content of urban theme colors, and provides a more scientific urban theme color selection scheme for urban planners. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urban%20theme%20color" title="urban theme color">urban theme color</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20color%20attribute" title=" urban color attribute"> urban color attribute</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20perception" title=" public perception"> public perception</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20preferences" title=" public preferences"> public preferences</a> </p> <a href="https://publications.waset.org/abstracts/156583/understanding-perceptual-differences-and-preferences-of-urban-color-in-new-taipei-city" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156583.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11071</span> Comparison of Visio-spatial Intelligence Between Amateur Rugby and Netball Players Using a Hand-Eye Coordination Specific Visual Test Battery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lourens%20Millard">Lourens Millard</a>, <a href="https://publications.waset.org/abstracts/search?q=Gerrit%20Jan%20Breukelman"> Gerrit Jan Breukelman</a>, <a href="https://publications.waset.org/abstracts/search?q=Nonkululeko%20Mathe"> Nonkululeko Mathe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aim: This study investigates differences in visio-spatial skills (VSS) between athletes and non-athletes, as well as variation across sports, areas in which prior research has presented conflicting findings. 
Therefore, the objective of this study was to determine whether significant differences in visio-spatial intelligence skills exist between rugby players and netball players, and whether such disparities persist when comparing both groups to non-athletes. Methods: Participants underwent an optometric assessment, followed by an evaluation of VSS using six established tests: the Hart Near Far Rock, saccadic eye movement, evasion, accumulator, flash memory, and ball wall toss tests. Results: The results revealed that rugby players significantly outperformed netball players in speed of recognition, peripheral awareness, and hand-eye coordination (p=.000). Moreover, both rugby players and netball players performed significantly better than non-athletes in five of the six tests (p=.000), the exception being the visual memory test (p=.809). Conclusion: This discrepancy in performance suggests that certain VSS are superior in athletes compared to non-athletes, highlighting potential implications for theories of vision, test selection, and the development of sport-specific VSS testing batteries. Furthermore, the hand-eye-coordination-specific VSS test battery effectively differentiated between sports. However, this pattern was not consistent across all VSS tests, indicating that further research should explore the training methods employed in both sports, as these may contribute to the observed differences. 
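As an aside on the group comparisons reported above, a minimal sketch of Welch's t statistic for two independent samples, with hypothetical scores (our illustration, not the study's data or its exact statistical procedure):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples, an illustrative
    stand-in for the kind of group comparison reported in the study."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# hypothetical hand-eye coordination scores for the two groups
rugby = [24, 27, 25, 29, 26, 28]
netball = [21, 23, 20, 24, 22, 23]
print(round(welch_t(rugby, netball), 2))
```

A large positive t here would correspond to rugby players outperforming netball players on the test; the p-value would follow from the t distribution with Welch-Satterthwaite degrees of freedom.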
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visio-spatial%20intelligence%20%28VSI%29" title="visio-spatial intelligence (VSI)">visio-spatial intelligence (VSI)</a>, <a href="https://publications.waset.org/abstracts/search?q=rugby%20vision" title=" rugby vision"> rugby vision</a>, <a href="https://publications.waset.org/abstracts/search?q=netball%20vision" title=" netball vision"> netball vision</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20skills" title=" visual skills"> visual skills</a>, <a href="https://publications.waset.org/abstracts/search?q=sport%20vision." title=" sport vision."> sport vision.</a> </p> <a href="https://publications.waset.org/abstracts/188192/comparison-of-visio-spatial-intelligence-between-amateur-rugby-and-netball-players-using-a-hand-eye-coordination-specific-visual-test-battery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188192.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">51</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11070</span> Tomato Fruit Color Changes during Ripening of Vine</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.Radzevi%C4%8Dius">A.Radzevičius</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Vi%C5%A1kelis"> P. Viškelis</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Vi%C5%A1kelis"> J. Viškelis</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Karklelien%C4%97"> R. Karklelienė</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Ju%C5%A1kevi%C4%8Dien%C4%97"> D. 
Juškevičienė</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tomato (Lycopersicon esculentum Mill.) hybrid 'Brooklyn' was investigated at the LRCAF Institute of Horticulture. Five green tomatoes grown on the vine were selected for investigation. Color measurements were made in the greenhouse on the same selected fruits every two days until they were fully ripe; the fruits were not harvested and remained growing and ripening on the vine throughout the experiment. The study showed that color index L tends to decline during ripening, with a coefficient of determination (R2) of 0.9504. Hue angle likewise tends to decline during ripening on the vine, with a coefficient of determination (R2) of 0.9739. The opposite tendency was determined for color index a, which tends to increase during ripening; this was expressed by a polynomial trendline with a coefficient of determination (R2) of 0.9592. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color" title="color">color</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20index" title=" color index"> color index</a>, <a href="https://publications.waset.org/abstracts/search?q=ripening" title=" ripening"> ripening</a>, <a href="https://publications.waset.org/abstracts/search?q=tomato" title=" tomato"> tomato</a> </p> <a href="https://publications.waset.org/abstracts/5502/tomato-fruit-color-changes-during-ripening-of-vine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5502.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">488</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=369">369</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=370">370</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20vision%20test&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a 
href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> 
</div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>