<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: color SIFT</title> <meta name="description" content="Search results for: color SIFT"> <meta name="keywords" content="color SIFT"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science 
Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="color SIFT" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 
mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="color SIFT"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1090</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: color SIFT</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1090</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demands of smart visual thing recognition in various devices have increased rapidly for daily smart production, 
living and learning systems in recent years. This paper proposes a visual thing recognition system that combines the binary scale-invariant feature transform (SIFT), the bag-of-words (BoW) model, and support vector machine (SVM) classifiers using color information. Traditional SIFT features and SVM classifiers use only gray-level information, yet color is an important cue for visual thing recognition. With color-based SIFT features and an SVM, we can discard unreliable matching pairs and increase the robustness of matching. The experimental results show that the proposed object recognition system with the color-assisted SIFT-SVM classifier achieves a higher recognition rate than the traditional gray-level SIFT and SVM classification in various situations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">468</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1089</span> A Speeded up Robust Scale-Invariant Feature 
Transform Currency Recognition Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daliyah%20S.%20Aljutaili">Daliyah S. Aljutaili</a>, <a href="https://publications.waset.org/abstracts/search?q=Redna%20A.%20Almutlaq"> Redna A. Almutlaq</a>, <a href="https://publications.waset.org/abstracts/search?q=Suha%20A.%20Alharbi"> Suha A. Alharbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Dina%20M.%20Ibrahim"> Dina M. Ibrahim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> All currencies around the world look very different from each other. For instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One phase of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. Our proposed SR-SIFT algorithm overcomes the problems of both the SIFT and SURF algorithms. The proposed algorithm aims to speed up the SIFT feature detection algorithm while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimum numbers of best key points, and increases the spread of the best key points over the surface of the currency. Furthermore, the proposed algorithm places the true best points inside the currency edge more accurately than the other two algorithms. 
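The detection stage that SR-SIFT accelerates is, in standard SIFT, a search for extrema in a difference-of-Gaussians (DoG) scale space. The abstract does not give SR-SIFT's internals, so the following NumPy sketch only illustrates what that baseline detection stage computes; the sigma values and contrast threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalized to sum to 1.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur with edge padding.
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), k, mode='valid'),
        1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), k, mode='valid'),
        0, tmp)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.03):
    # Build a difference-of-Gaussians stack and keep pixels that are
    # extrema among their 26 neighbours in scale space and pass a
    # contrast threshold.
    gs = [blur(img, s) for s in sigmas]
    dogs = np.stack([gs[i + 1] - gs[i] for i in range(len(gs) - 1)])
    kps = []
    for s in range(1, dogs.shape[0] - 1):
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                patch = dogs[s-1:s+2, y-1:y+2, x-1:x+2]
                v = dogs[s, y, x]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    kps.append((y, x, s))
    return kps
```

A blob-like structure produces a scale-space extremum at its center, which is exactly the kind of "best key point" whose count and spread the abstract measures.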
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=currency%20recognition" title="currency recognition">currency recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20detection%20and%20description" title=" feature detection and description"> feature detection and description</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT%20algorithm" title=" SIFT algorithm"> SIFT algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF%20algorithm" title=" SURF algorithm"> SURF algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=speeded%20up%20and%20robust%20features" title=" speeded up and robust features"> speeded up and robust features</a> </p> <a href="https://publications.waset.org/abstracts/94315/a-speeded-up-robust-scale-invariant-feature-transform-currency-recognition-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94315.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">235</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1088</span> Bag of Words Representation Based on Fusing Two Color Local Descriptors and Building Multiple Dictionaries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Abdedayem">Fatma Abdedayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose an extension of the well-known bag-of-words (BOW) method, which has proved successful in the field of image categorization. In practice, this method is based on representing an image with visual words. 
In this work, we first extract features from the images using the Spatial Pyramid Representation (SPR) and two dissimilar color descriptors, opponent-SIFT and transformed-color-SIFT. Secondly, we fuse the color local features by joining the two histograms coming from these descriptors. Thirdly, after collecting all features, we generate multiple dictionaries from n random feature subsets obtained by dividing all features into n random groups. By using these dictionaries separately, each image can then be represented by n histograms, which are finally concatenated horizontally to form the final histogram; this makes it possible to combine Multiple Dictionaries (MDBoW). In the final step, in order to classify the images, we apply a Support Vector Machine (SVM) to the generated histograms. Experimentally, we have used two dissimilar image datasets to test our proposition: Caltech 256 and PASCAL VOC 2007. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20words%20%28BOW%29" title="bag of words (BOW)">bag of words (BOW)</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20descriptors" title=" color descriptors"> color descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-dictionaries" title=" multi-dictionaries"> multi-dictionaries</a>, <a href="https://publications.waset.org/abstracts/search?q=MDBoW" title=" MDBoW"> MDBoW</a> </p> <a href="https://publications.waset.org/abstracts/14637/bag-of-words-representation-based-on-fusing-two-color-local-descriptors-and-building-multiple-dictionaries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14637.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">1087</span> Object Detection Based on Plane Segmentation and Features Matching for a Service Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ant%C3%B3nio%20J.%20R.%20Neves">António J. R. Neves</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Garcia"> Rui Garcia</a>, <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Dias"> Paulo Dias</a>, <a href="https://publications.waset.org/abstracts/search?q=Alina%20Trifan"> Alina Trifan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the aging of the world population and the continuous growth in technology, service robots are increasingly explored as alternatives to caregivers or personal assistants for elderly or disabled people. Any service robot should be capable of interacting with its human companion, receiving commands, navigating through the environment, either known or unknown, and recognizing objects. This paper proposes an approach for object recognition based on the use of depth information and color images for a service robot. We present a study of two of the most used methods for object detection, in which 3D data is used to detect the position of the objects to be classified that are found on horizontal surfaces. Since most of the objects of interest accessible to service robots lie on these surfaces, the proposed 3D segmentation reduces the processing time and simplifies the scene for object recognition. The first approach for object recognition is based on color histograms, while the second is based on the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot. 
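The SIFT/SURF branch of the study above hinges on matching local descriptors between a stored object model and the current scene. A minimal sketch of the standard nearest-neighbour matching with Lowe's ratio test, which is the usual way to discard unreliable pairs (an illustration of the general technique, not this paper's exact pipeline), might look like this:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Match each row of desc_a to a row of desc_b, keeping a match only
    when the best distance is clearly smaller than the second best
    (Lowe's ratio test). Returns a list of (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every candidate.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

An ambiguous descriptor (two candidates at nearly the same distance) yields no match at all, which is generally preferable to a wrong match when the matches drive recognition or pose estimation.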
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title="object detection">object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature" title=" feature"> feature</a>, <a href="https://publications.waset.org/abstracts/search?q=descriptors" title=" descriptors"> descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF" title=" SURF"> SURF</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20images" title=" depth images"> depth images</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20robots" title=" service robots"> service robots</a> </p> <a href="https://publications.waset.org/abstracts/39840/object-detection-based-on-plane-segmentation-and-features-matching-for-a-service-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">546</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1086</span> SIFT and Perceptual Zoning Applied to CBIR Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simone%20B.%20K.%20Aires">Simone B. K. Aires</a>, <a href="https://publications.waset.org/abstracts/search?q=Cinthia%20O.%20de%20A.%20Freitas"> Cinthia O. de A. Freitas</a>, <a href="https://publications.waset.org/abstracts/search?q=Luiz%20E.%20S.%20Oliveira"> Luiz E. S. Oliveira</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper contributes to CBIR systems applied to trademark retrieval. 
The proposed model incorporates aspects of the visual perception of shapes by means of a feature extractor associated with a non-symmetrical perceptual zoning mechanism based on the principles of Gestalt. The feature set was computed using the Scale Invariant Feature Transform (SIFT). We carried out experiments using four different zoning strategies (Z = 4, 5H, 5V, 7) for matching and retrieval tasks. Our proposed method achieved a normalized recall (Rn) of 0.84. The experiments show that non-symmetrical zoning can be considered a tool for building more reliable trademark retrieval systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CBIR" title="CBIR">CBIR</a>, <a href="https://publications.waset.org/abstracts/search?q=Gestalt" title=" Gestalt"> Gestalt</a>, <a href="https://publications.waset.org/abstracts/search?q=matching" title=" matching"> matching</a>, <a href="https://publications.waset.org/abstracts/search?q=non-symmetrical%20zoning" title=" non-symmetrical zoning"> non-symmetrical zoning</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a> </p> <a href="https://publications.waset.org/abstracts/15764/sift-and-perceptual-zoning-applied-to-cbir-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15764.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1085</span> Evaluating the Performance of Color Constancy Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Damanjit%20Kaur">Damanjit Kaur</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Avani%20Bhatia"> Avani Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color constancy is significant for human vision since color is a pictorial cue that helps in solving different vision tasks such as tracking, object recognition, or categorization. Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine the colors of objects independent of the color of the light source. This research work studies the best-known color constancy algorithms, such as white patch and gray world. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20constancy" title="color constancy">color constancy</a>, <a href="https://publications.waset.org/abstracts/search?q=gray%20world" title=" gray world"> gray world</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20patch" title=" white patch"> white patch</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20white%20patch" title=" modified white patch "> modified white patch </a> </p> <a href="https://publications.waset.org/abstracts/4799/evaluating-the-performance-of-color-constancy-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4799.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">1084</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for the color-blind. We treat crisp color images such as the Tokyo subway map, in which each color carries important information. For the color-blind, however, similar colors cannot be distinguished. If those colors are converted to distinct gray values, they can be told apart. Therefore, we convert color images to monochrome images. 
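The conversion described above has to guarantee that colors which look alike to a color-blind viewer end up at clearly different gray values. One simple way to sketch that idea (an illustrative assumption, not the paper's actual algorithm) is to rank the image's distinct colors by luminance and spread them evenly over the gray scale:

```python
import numpy as np

def distinct_grays(img):
    """Map each distinct color in an RGB image to an evenly spaced gray
    level, so that colors a color-blind viewer may confuse still receive
    clearly different gray values. img: (H, W, 3) uint8 array."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3)
    colors, inverse = np.unique(flat, axis=0, return_inverse=True)
    n = len(colors)
    # Order the n colors by luminance so the result stays roughly
    # natural-looking, then spread them over 0..255.
    lum = colors @ np.array([0.299, 0.587, 0.114])
    rank = np.argsort(np.argsort(lum))
    levels = (rank * (255 // max(n - 1, 1))).astype(np.uint8)
    return levels[inverse].reshape(h, w)
```

This works for palette-like images such as a subway map, where the number of distinct colors is small; a photograph would first need color quantization, and the abstract's JPEG denoising step plays exactly that role of collapsing near-identical colors.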
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1083</span> Fast and Scale-Adaptive Target Tracking via PCA-SIFT</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang">Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongchang%20Chen"> Hongchang Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li"> Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiangpeng%20Zhang"> Jiangpeng Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the main challenges in target tracking are accounting for target scale change and running in real time, we combine the Mean-Shift and PCA-SIFT algorithms to solve the problem. 
We introduce a similarity comparison method to determine how the target scale changes and adopt different strategies according to the situation. Because a growing target scale causes location error, we employ backward tracking to reduce the error. The Mean-Shift algorithm performs poorly when tracking a scale-changing target because of the fixed bandwidth of its kernel function. To overcome this problem, we introduce PCA-SIFT matching: through key point matching between the target and the template, the scale of the tracking window can be adjusted adaptively. Because this algorithm is sensitive to wrong matches, we introduce RANSAC to reduce mismatches as far as possible. Furthermore, target relocation is triggered when the number of matches is too small. In addition, we take target deformation and error accumulation into comprehensive consideration and put forward a new template update method. Experiments on five image sequences and comparisons with six other algorithms demonstrate the favorable performance of the proposed tracking algorithm. 
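The central step above, resizing the tracking window from key point matches, can be illustrated with a small sketch. The median-of-pairwise-distance-ratios estimator below is an assumption borrowed from Median Flow style trackers, standing in for the paper's PCA-SIFT plus RANSAC scheme; it shares the same goal of tolerating a few wrong matches:

```python
import numpy as np

def estimate_scale(pts_prev, pts_cur):
    """Estimate the scale change of a tracked target from matched key
    point coordinates in the previous and current frames. Taking the
    median of pairwise-distance ratios tolerates a few bad matches, in
    the spirit of RANSAC. Each input: (N, 2) float array, N >= 2."""
    ratios = []
    n = len(pts_prev)
    for i in range(n):
        for j in range(i + 1, n):
            d_prev = np.linalg.norm(pts_prev[i] - pts_prev[j])
            d_cur = np.linalg.norm(pts_cur[i] - pts_cur[j])
            if d_prev > 1e-9:
                ratios.append(d_cur / d_prev)
    return float(np.median(ratios))

def resize_window(window, scale):
    # Scale a (width, height) tracking window, keeping a sane minimum.
    w, h = window
    return (max(8, int(round(w * scale))), max(8, int(round(h * scale))))
```

Pairwise distances are used rather than distances to the window center so that the estimate is unaffected by pure translation of the target between frames.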
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title="target tracking">target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA-SIFT" title=" PCA-SIFT"> PCA-SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=mean-shift" title=" mean-shift"> mean-shift</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-adaptive" title=" scale-adaptive"> scale-adaptive</a> </p> <a href="https://publications.waset.org/abstracts/19009/fast-and-scale-adaptive-target-tracking-via-pca-sift" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1082</span> A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Takanori%20Tanaka">Takanori Tanaka</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Kitao"> Daisuke Kitao</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Ikeda"> Daisuke Ikeda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aurora is an attractive phenomenon, but it is difficult to understand its whole mechanism. A data-intensive science approach might be effective for elucidating such a difficult phenomenon. To do that, we need labeled data showing when auroras appeared and of what types. 
In this paper, we propose an image retrieval system for auroral all-sky images, some of which contain discrete and diffuse auroras while the others contain no aurora. The proposed system retrieves images similar to the query image using a popular image recognition method. Using 300 all-sky images obtained at Tromsø, Norway, we evaluate two image recognition methods, with and without our original color filtering method. The best performance is achieved when SIFT is used with the color filtering; its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data-intensive%20science" title="data-intensive science">data-intensive science</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=aurora" title=" aurora"> aurora</a> </p> <a href="https://publications.waset.org/abstracts/19532/a-similar-image-retrieval-system-for-auroral-all-sky-images-based-on-local-features-and-color-filtering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">449</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1081</span> Classifications of Images for the Recognition of People’s Behaviors by SIFT and SVM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Henni%20Sid%20Ahmed">Henni Sid Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Belbachir%20Mohamed%20Faouzi"> Belbachir Mohamed Faouzi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean%20Caelen"> Jean Caelen </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Behavior recognition has been studied for realizing driver assistance systems and automated navigation, and it is an important field of study for intelligent buildings. In this paper, a method for recognizing behaviors from real images is studied. Images are divided into several categories according to the actual weather, distance, angle of view, etc. SIFT (Scale Invariant Feature Transform) is first used to detect key points and describe them, because SIFT features are invariant to image scale and rotation and are robust to changes in viewpoint and illumination. Our goal is to develop a robust and reliable system composed of two fixed cameras in every room of an intelligent building, connected to a computer for the acquisition of video sequences. With a program that uses these video sequences as inputs, we represent the different images of the sequences with SIFT and use SVM (support vector machine) Light as a programming tool to classify the images, in order to classify people’s behaviors in the intelligent building and to give maximum comfort with optimized energy consumption. 
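A common way to turn the variable number of SIFT descriptors per image into a fixed-length vector that an SVM can consume is a bag-of-visual-words histogram. The sketch below illustrates that standard intermediate step (the visual vocabulary would normally come from k-means clustering of training descriptors; this is the generic technique, not necessarily this paper's exact setup):

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors (N, D) against a visual vocabulary
    (K, D) and return a normalized K-bin occurrence histogram, the
    fixed-length representation usually fed to an SVM."""
    # Squared Euclidean distance from every descriptor to every word.
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

Normalizing the histogram makes images with different numbers of detected key points comparable, which matters here since the number of SIFT key points varies with scene content and lighting.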
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20analysis" title="video analysis">video analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=people%20behavior" title=" people behavior"> people behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20building" title=" intelligent building"> intelligent building</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification "> classification </a> </p> <a href="https://publications.waset.org/abstracts/24738/classifications-of-images-for-the-recognition-of-peoples-behaviors-by-sift-and-svm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24738.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1080</span> Using Scale Invariant Feature Transform Features to Recognize Characters in Natural Scene Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Belaynesh%20Chekol">Belaynesh Chekol</a>, <a href="https://publications.waset.org/abstracts/search?q=Numan%20%C3%87elebi"> Numan Çelebi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main purpose of this work is to recognize individual characters extracted from natural scene images using scale invariant feature transform (SIFT) features as an input to K-nearest neighbor (KNN); a classification learner algorithm. For this task, 1,068 and 78 images of English alphabet characters taken from Chars74k data set is used to train and test the classifier respectively. 
For each character image, descriptive features were generated using the SIFT algorithm. This set of features is fed to the learner so that it can recognize and label new images of English characters. Two types of KNN (fine KNN and weighted KNN) were trained, and the resulting classification accuracies are 56.9% and 56.5%, respectively. The training time was the same for both fine and weighted KNN. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title="character recognition">character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=KNN" title=" KNN"> KNN</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20scene%20image" title=" natural scene image"> natural scene image</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a> </p> <a href="https://publications.waset.org/abstracts/58580/using-scale-invariant-feature-transform-features-to-recognize-characters-in-natural-scene-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">281</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1079</span> A Computer-Aided System for Tooth Shade Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zuhal%20Kurt">Zuhal Kurt</a>, <a href="https://publications.waset.org/abstracts/search?q=Meral%20Kurt"> Meral Kurt</a>, <a href="https://publications.waset.org/abstracts/search?q=Bilge%20T.%20Bal"> Bilge T.
Bal</a>, <a href="https://publications.waset.org/abstracts/search?q=Kemal%20Ozkan"> Kemal Ozkan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was carried out through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices. In addition, results acquired by devices with different measurement principles may be inconsistent with one another. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided system device, has developed rapidly up to the present. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was proposed to compare the color of shade tabs captured by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used in combination.
Recently used feature extraction techniques are based on shape description and do not use color information. However, color is generally an essential property for depicting and extracting features from the objects in the world around us. When a local feature descriptor is extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometrical changes, and variations in image quality. Therefore, local color feature extraction methods are used to extract color features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as K-Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or the matching. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method are compared with other recent studies. It is concluded from the study that the proposed method is a remarkable advance in computer-aided tooth shade determination systems.
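A Color-SIFT-style descriptor of the kind described here can be formed by concatenating a local, spatially unordered color histogram with the shape descriptor. The sketch below is illustrative only (the function name is hypothetical, and a real 128-D SIFT vector would come from a keypoint detector):

```python
import numpy as np

def color_sift_descriptor(shape_desc, patch_rgb, bins=4):
    """Concatenate a shape descriptor (e.g. a 128-D SIFT vector) with a
    local per-channel RGB histogram, Color-SIFT style."""
    hists = []
    for c in range(3):                       # one histogram per channel
        h, _ = np.histogram(patch_rgb[..., c], bins=bins, range=(0, 256))
        hists.append(h)
    color = np.concatenate(hists).astype(float)
    color /= color.sum()                     # photometric normalization
    return np.concatenate([shape_desc, color])

sift = np.zeros(128)                         # stand-in for a real SIFT vector
patch = np.full((8, 8, 3), 200, dtype=np.uint8)
desc = color_sift_descriptor(sift, patch)
```

Because the color part is a histogram, it carries no spatial layout, matching the design choice argued for in the abstract.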
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classifiers" title="classifiers">classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20determination" title=" color determination"> color determination</a>, <a href="https://publications.waset.org/abstracts/search?q=computer-aided%20system" title=" computer-aided system"> computer-aided system</a>, <a href="https://publications.waset.org/abstracts/search?q=tooth%20shade%20matching" title=" tooth shade matching"> tooth shade matching</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/51113/a-computer-aided-system-for-tooth-shade-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51113.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">444</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1078</span> A Trends Analysis of Yatch Simulator</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Neung%20Lee">Jae-Neung Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Keun-Chang%20Kwak"> Keun-Chang Kwak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes an analysis of Yacht Simulator international trends and also explains about Yacht. 
Examples of image processing used in yacht simulators include counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image matching using SIFT (Scale Invariant Feature Transform), and the application of median filtering and thresholding. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=yacht%20simulator" title="yacht simulator">yacht simulator</a>, <a href="https://publications.waset.org/abstracts/search?q=simulator" title=" simulator"> simulator</a>, <a href="https://publications.waset.org/abstracts/search?q=trends%20analysis" title=" trends analysis"> trends analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a> </p> <a href="https://publications.waset.org/abstracts/23888/a-trends-analysis-of-yatch-simulator" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23888.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1077</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection is recognized as the primary step in most applications such as face detection, illicit image filtering, hand recognition and video
surveillance. The performance of any skin detection application relies greatly on two components: the feature extraction and the classification method. Skin color is the most vital information used for skin detection. However, color features alone sometimes cannot handle images whose color distribution is the same as that of skin. A pixel-based color feature cannot eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color analysis, such as the mean and standard deviation, is exploited as an additional feature to increase the reliability of the skin detector. In this paper, we studied the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzed integrated color and texture features using eight classifiers in three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical features with a Random Forest classifier achieved significant performance, with an F1-score of 0.969.
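The statistical color features referred to above (per-channel mean and standard deviation) can be computed for an image block as follows; a minimal numpy sketch, not the paper's full eight-classifier pipeline:

```python
import numpy as np

def statistical_color_features(block):
    """Per-channel mean and standard deviation of a color block,
    used to augment raw pixel color for skin detection."""
    block = block.reshape(-1, block.shape[-1]).astype(float)
    return np.concatenate([block.mean(axis=0), block.std(axis=0)])

# A uniform 'skin-like' block: zero variance, means equal to the color.
block = np.full((5, 5, 3), [220, 180, 160], dtype=np.uint8)
feats = statistical_color_features(block)
```

The 6-D feature vector (three means, three standard deviations) would then be concatenated with the pixel color and fed to a classifier such as Random Forest.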
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1076</span> Change Detection Method Based on Scale-Invariant Feature Transformation Keypoints and Segmentation for Synthetic Aperture Radar Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lan%20Du">Lan Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang"> Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hui%20Dai"> Hui Dai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Synthetic aperture radar (SAR) image change detection has recently become a challenging problem owing to the existence of speckle noises. 
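A standard speckle-robust starting point for comparing two SAR acquisitions is the log-ratio operator; large magnitudes flag candidate changed pixels. A minimal sketch on synthetic intensities (illustrative only):

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio operator commonly used to compare two multitemporal
    SAR images; large magnitudes indicate candidate changed areas."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

t1 = np.array([[1.0, 1.0], [1.0, 1.0]])
t2 = np.array([[1.0, 1.0], [1.0, np.e]])   # one pixel brightened by factor e
lr = log_ratio(t1, t2)
```

Because the ratio (rather than the difference) is taken, multiplicative speckle affects both acquisitions symmetrically, which is why the log-ratio image is a common domain for extracting keypoints.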
In this paper, an unsupervised, distribution-free change detection method for SAR images based on scale-invariant feature transform (SIFT) keypoints and segmentation is proposed. Firstly, noise-robust SIFT keypoints, which reveal the blob-like structures in an image, are extracted from the log-ratio image to reduce the detection range. Then, unlike traditional change detection, which directly obtains the change-detection map from the difference image, segmentation is performed around the extracted keypoints in the two original multitemporal SAR images to obtain accurate changed regions. Finally, the change-detection map is generated by comparing the two segmentations. Experimental results on a real SAR image dataset demonstrate the effectiveness of the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title="change detection">change detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Synthetic%20Aperture%20Radar%20%28SAR%29" title=" Synthetic Aperture Radar (SAR)"> Synthetic Aperture Radar (SAR)</a>, <a href="https://publications.waset.org/abstracts/search?q=Scale-Invariant%20Feature%20Transformation%20%28SIFT%29" title=" Scale-Invariant Feature Transformation (SIFT)"> Scale-Invariant Feature Transformation (SIFT)</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/66992/change-detection-method-based-on-scale-invariant-feature-transformation-keypoints-and-segmentation-for-synthetic-aperture-radar-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66992.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3
mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1075</span> Spectra Analysis in Sunset Color Demonstrations with a White-Color LED as a Light Source</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Makoto%20Hasegawa">Makoto Hasegawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Seika%20Tokumitsu"> Seika Tokumitsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spectra of light beams emitted from white-color LED torches are different from those of conventional electric torches. In order to confirm if white-color LED torches can be used as light sources for popular sunset color demonstrations in spite of such differences, spectra of travelled light beams and scattered light beams with each of a white-color LED torch (composed of a blue LED and yellow-color fluorescent material) and a conventional electric torch as a light source were measured and compared with each other in a 50 cm-long water tank for sunset color demonstration experiments. Suspension liquid was prepared from acryl-emulsion and tap-water in the water tank, and light beams from the white-color LED torch or the conventional electric torch were allowed to travel in this suspension liquid. Sunset-like color was actually observed when the white-color LED torch was used as the light source in sunset color demonstrations. However, the observed colors when viewed with naked eye look slightly different from those obtainable with the conventional electric torch. At the same time, with the white-color LED, changes in colors in short to middle wavelength regions were recognized with careful observations. From those results, white-color LED torches are confirmed to be applicable as light sources in sunset color demonstrations, although certain attentions have to be paid. 
Further advanced classes will be successfully performed with white-color LED torches as light sources. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blue%20sky%20demonstration" title="blue sky demonstration">blue sky demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=sunset%20color%20demonstration" title=" sunset color demonstration"> sunset color demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20LED%20torch" title=" white LED torch"> white LED torch</a>, <a href="https://publications.waset.org/abstracts/search?q=physics%20education" title=" physics education"> physics education</a> </p> <a href="https://publications.waset.org/abstracts/47625/spectra-analysis-in-sunset-color-demonstrations-with-a-white-color-led-as-a-light-source" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1074</span> A Neural Approach for Color-Textured Images Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalid%20Salhi">Khalid Salhi</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Miloud%20Jaara"> El Miloud Jaara</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Talibi%20Alaoui"> Mohammed Talibi Alaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a neural approach for unsupervised natural color-texture image segmentation, which is based on both Kohonen maps and mathematical morphology, using a combination of the 
texture and color information of the image: fractal features based on the fractal dimension are selected to represent the texture information, and the color features are represented in RGB color space. These features are then used to train the Kohonen network, which is represented by the underlying probability density function; the segmentation of this map is performed by a morphological watershed transformation. The performance of our color-texture segmentation approach is compared first to color-based or texture-based methods alone, and then to the k-means method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=color-texture" title=" color-texture"> color-texture</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal" title=" fractal"> fractal</a>, <a href="https://publications.waset.org/abstracts/search?q=watershed" title=" watershed"> watershed</a> </p> <a href="https://publications.waset.org/abstracts/51740/a-neural-approach-for-color-textured-images-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1073</span> Experimental Characterization of the Color Quality and Error Rate for an Red, Green, and Blue-Based Light Emission Diode-Fixture Used in Visible Light Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Juan%20F.%20Gutierrez">Juan F. Gutierrez</a>, <a href="https://publications.waset.org/abstracts/search?q=Jesus%20M.%20Quintero"> Jesus M. Quintero</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Sandoval"> Diego Sandoval</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An important feature of LED technology is its fast on-off switching, which allows data transmission. Visible Light Communication (VLC) is a wireless method of transmitting data with visible light. Modulation formats such as On-Off Keying (OOK) and Color Shift Keying (CSK) are used in VLC. CSK is based on three color bands and uses red, green, and blue monochromatic LEDs (RGB-LED) to define a pattern of chromaticities. This type of CSK provides poor color quality in the illuminated area. This work presents the design and implementation of a VLC system using RGB-based CSK with 16, 8, and 4 color points, mixed with a steady baseline from a phosphor white LED, to improve the color quality of the LED-Fixture. The experimental system was assessed in terms of the Color Rendering Index (CRI) and the Symbol Error Rate (SER). Good color quality performance of the LED-Fixture was obtained with an acceptable SER. The laboratory setup used to characterize and calibrate an LED-Fixture is described.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=VLC" title="VLC">VLC</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20lighting" title=" indoor lighting"> indoor lighting</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quality" title=" color quality"> color quality</a>, <a href="https://publications.waset.org/abstracts/search?q=symbol%20error%20rate" title=" symbol error rate"> symbol error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20shift%20keying" title=" color shift keying"> color shift keying</a> </p> <a href="https://publications.waset.org/abstracts/158336/experimental-characterization-of-the-color-quality-and-error-rate-for-an-red-green-and-blue-based-light-emission-diode-fixture-used-in-visible-light-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1072</span> The Impact of the “Cold Ambient Color = Healthy” Intuition on Consumer Food Choice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yining%20Yu">Yining Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bingjie%20Li"> Bingjie Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Miaolei%20Jia"> Miaolei Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Wang"> Lei Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ambient color temperature is one of the most ubiquitous factors in retailing. 
However, there is limited research regarding the effect of cold versus warm ambient color on consumers’ food consumption. This research investigates an unexplored lay belief named the “cold ambient color = healthy” intuition and its impact on food choice. We demonstrate that consumers have built the “cold ambient color = healthy” intuition, such that they infer that a restaurant with a cold-colored ambiance is more likely to sell healthy food than a warm-colored restaurant. This deep-seated intuition also guides consumers’ food choices. We find that using a cold (vs. warm) ambient color increases the choice of healthy food, which offers insights into healthy diet promotion for retailers and policymakers. Theoretically, our work contributes to the literature on color psychology, sensory marketing, and food consumption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ambient%20color%20temperature" title="ambient color temperature">ambient color temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=cold%20ambient%20color" title=" cold ambient color"> cold ambient color</a>, <a href="https://publications.waset.org/abstracts/search?q=food%20choice" title=" food choice"> food choice</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20wellbeing" title=" consumer wellbeing"> consumer wellbeing</a> </p> <a href="https://publications.waset.org/abstracts/148864/the-impact-of-the-cold-ambient-color-healthy-intuition-on-consumer-food-choice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1071</span> Costume 
Design Influenced by Seventeenth Century Color Palettes on a Contemporary Stage</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michele%20L.%20Dormaier">Michele L. Dormaier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the research was to design costumes based on historic colors used by artists during the seventeenth century. The researcher investigated European art, primarily paintings and portraiture, as well as the color palettes used by the artists. The methodology examined the artists, their work, the color palettes used in their work, and the practices of color usage within their palettes. By examining portraits of historic figures, as well as paintings of ordinary scenes, subjects, and people, further information about color palettes was revealed. Related to the color palettes, was the use of ‘broken colors’ which was a relatively new practice, dating from the sixteenth century. The color palettes used by the artists of the seventeenth century had their limitations due to available pigments. With an examination of not only their artwork, and with a closer look at their palettes, the researcher discovered the exciting choices they made, despite those restrictions. The research was also initiated with the historical elements of the era’s clothing, as well as that of available materials and dyes. These dyes were also limited in much the same manner as the pigments which the artist had at their disposal. The color palettes of the paintings have much to tell us about the lives, status, conditions, and relationships from the past. From this research, informed decisions regarding color choices for a production on a contemporary stage of a period piece could then be made. The designer’s choices were a historic gesture to the colors which might have been worn by the character’s real-life counterparts of the era. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=broken%20color%20palette" title="broken color palette">broken color palette</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20color%20research" title=" costume color research"> costume color research</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20design" title=" costume design"> costume design</a>, <a href="https://publications.waset.org/abstracts/search?q=costume%20history" title=" costume history"> costume history</a>, <a href="https://publications.waset.org/abstracts/search?q=seventeenth%20century%20color%20palette" title=" seventeenth century color palette"> seventeenth century color palette</a>, <a href="https://publications.waset.org/abstracts/search?q=sixteenth%20century%20color%20palette" title=" sixteenth century color palette"> sixteenth century color palette</a> </p> <a href="https://publications.waset.org/abstracts/87451/costume-design-influenced-by-seventeenth-century-color-palettes-on-a-contemporary-stage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1070</span> An Experiment of Three-Dimensional Point Clouds Using GoPro</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jong-Hwa%20Kim">Jong-Hwa Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Mu-Wook%20Pyeon"> Mu-Wook Pyeon</a>, <a href="https://publications.waset.org/abstracts/search?q=Yang-dam%20Eo"> Yang-dam Eo</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Ill-Woong%20Jang"> Ill-Woong Jang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The construction of geo-spatial information has recently tended toward multi-dimensional geo-spatial information. The community constructing spatial information is also expanding from a small group of experts to the general public. In addition, studies using a variety of devices are in progress, with the aim of near real-time updates. In this paper, stereo images are captured using a GoPro, a device widely available to the general public as well as to experts. After correcting the distortion of the images, point clouds are acquired using SIFT and DLT. This experiment demonstrates the possibility of creating a real-time digital map using a video device that is readily available in everyday life. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GoPro" title="GoPro">GoPro</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=DLT" title=" DLT"> DLT</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20clouds" title=" point clouds"> point clouds</a> </p> <a href="https://publications.waset.org/abstracts/5342/an-experiment-of-three-dimensional-point-clouds-using-gopro" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5342.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1069</span> Effect of Blanching and Drying Methods on the Degradation Kinetics and Color Stability of Radish (Raphanus sativus) Leaves</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Radha%20Krishnan">K. Radha Krishnan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mirajul%20Alom"> Mirajul Alom</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dehydrated powder prepared from fresh radish (Raphanus sativus) leaves was investigated for color stability under different drying methods (tray, sun, and solar). The effects of blanching conditions, drying method, and drying temperature (50–90°C) were considered in studying the color degradation kinetics of chlorophyll in the dehydrated powder. The Hunter color parameters (L*, a*, b*) and the total color difference (TCD) were determined in order to investigate the color degradation kinetics of chlorophyll. Blanching conditions, drying method, and drying temperature influenced the changes in L*, a*, b*, and TCD values. The changes in color values during processing were described by a first-order kinetic model. The temperature dependence of chlorophyll degradation was adequately modeled by the Arrhenius equation. To predict the losses in green color, a mathematical model was developed from the steady-state kinetic parameters. The results from this study indicate the protective effect of blanching conditions on the color stability of dehydrated radish powder.
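The first-order kinetic model with Arrhenius temperature dependence mentioned above can be written directly in code: C(t) = C0·exp(-kt), with k = A·exp(-Ea/(R·T)). The parameters below (pre-exponential factor A and activation energy Ea) are hypothetical placeholders for illustration, not the paper's fitted values:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius equation: rate constant at absolute temperature T (K)."""
    return A * np.exp(-Ea / (R * T))

def color_value(C0, k, t):
    """First-order degradation: remaining color value after time t."""
    return C0 * np.exp(-k * t)

# Hypothetical A and Ea, evaluated at the study's temperature extremes.
k50 = rate_constant(A=2.0e5, Ea=60e3, T=323.15)   # 50 deg C
k90 = rate_constant(A=2.0e5, Ea=60e3, T=363.15)   # 90 deg C
```

As expected from the Arrhenius form, the rate constant (and hence color loss at a given time) increases with drying temperature.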
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chlorophyll" title="chlorophyll">chlorophyll</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20stability" title=" color stability"> color stability</a>, <a href="https://publications.waset.org/abstracts/search?q=degradation%20kinetics" title=" degradation kinetics"> degradation kinetics</a>, <a href="https://publications.waset.org/abstracts/search?q=drying" title=" drying"> drying</a> </p> <a href="https://publications.waset.org/abstracts/44880/effect-of-blanching-and-drying-methods-on-the-degradation-kinetics-and-color-stability-of-radish-raphanus-sativus-leaves" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44880.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1068</span> Image Segmentation Using 2-D Histogram in RGB Color Space in Digital Libraries </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Asnaoui%20Khalid">El Asnaoui Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Aksasse%20Brahim"> Aksasse Brahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ouanan%20Mohammed"> Ouanan Mohammed </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an unsupervised color image segmentation method. It is based on a hierarchical analysis of the 2-D histogram in RGB color space. This histogram minimizes the storage space of images and thus facilitates operations between them. 
The improved segmentation approach shows better identification of objects in a color image while keeping the system fast. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=hierarchical%20analysis" title=" hierarchical analysis"> hierarchical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=2-D%20histogram" title=" 2-D histogram"> 2-D histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/42096/image-segmentation-using-2-d-histogram-in-rgb-color-space-in-digital-libraries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1067</span> Parallel Version of Reinhard’s Color Transfer Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abhishek%20Bhardwaj">Abhishek Bhardwaj</a>, <a href="https://publications.waset.org/abstracts/search?q=Manish%20Kumar%20Bajpai"> Manish Kumar Bajpai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An image, with its content and schema of colors, presents an effective mode of information sharing and processing. By changing its color schema, users discover different visions and prospects. This phenomenon of color transfer is used by social media and other entertainment channels. 
Reinhard et al.’s algorithm was the first to solve this color transfer problem. In this paper, we make the algorithm efficient by introducing domain parallelism among processors. We also comment on the factors that affect the speedup of this problem. Finally, by analyzing the experimental data, we propose an efficient parallel version of Reinhard’s algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reinhard%20et%20al%E2%80%99s%20algorithm" title="Reinhard et al’s algorithm">Reinhard et al’s algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20transferring" title=" color transferring"> color transferring</a>, <a href="https://publications.waset.org/abstracts/search?q=parallelism" title=" parallelism"> parallelism</a>, <a href="https://publications.waset.org/abstracts/search?q=speedup" title=" speedup"> speedup</a> </p> <a href="https://publications.waset.org/abstracts/21874/parallel-version-of-reinhards-color-transfer-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">614</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1066</span> Content-Based Image Retrieval Using HSV Color Space Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Qazanfari">Hamed Qazanfari</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Hassanpour"> Hamid Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazem%20Qazanfari"> Kazem Qazanfari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this 
paper, a method for content-based image retrieval is presented. A content-based image retrieval system searches an image database using the visual content of a query image to retrieve similar images. In this paper, with the aim of simulating the human visual system&#39;s sensitivity to an image&#39;s edge and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The selected features capture the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k, and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods. 
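As a rough illustration of the CDH idea described above, neighboring-pixel color differences can be histogrammed by quantized color. This is a much-simplified sketch, not the published CDH formulation (which also weights edge orientations and uses a perceptual difference measure); the function name and the uniform quantization are assumptions of the sketch:

```python
def color_difference_histogram(image, bins=8):
    """Simplified color-difference-histogram sketch: each pixel's quantized
    color indexes a bin, and that bin accumulates the Euclidean color
    difference to the pixel's right and lower neighbors.
    `image` is a list of rows of (h, s, v) tuples with components in [0, 1]."""
    hist = [0.0] * (bins ** 3)
    rows, cols = len(image), len(image[0])

    def bin_index(p):
        h, s, v = (min(int(c * bins), bins - 1) for c in p)
        return (h * bins + s) * bins + v

    def diff(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    for r in range(rows):
        for c in range(cols):
            p = image[r][c]
            if c + 1 < cols:
                hist[bin_index(p)] += diff(p, image[r][c + 1])
            if r + 1 < rows:
                hist[bin_index(p)] += diff(p, image[r + 1][c])
    return hist
```

A uniform image yields an all-zero histogram, while strong edges concentrate mass in the bins of the colors along those edges, which is what makes such a descriptor edge-sensitive.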
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title="content-based image retrieval">content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20difference%20histogram" title=" color difference histogram"> color difference histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=efficient%20features%20selection" title=" efficient features selection"> efficient features selection</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a> </p> <a href="https://publications.waset.org/abstracts/75068/content-based-image-retrieval-using-hsv-color-space-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75068.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1065</span> The Role of Metallic Mordant in Natural Dyeing Process: Experimental and Quantum Study on Color Fastness</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo-Gaun%20Chen">Bo-Gaun Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Chiung-Hui%20Huang"> Chiung-Hui Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Mei-Ching%20Chiang"> Mei-Ching Chiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuo-Hsing%20Lee"> Kuo-Hsing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Chia-Chen%20Ho"> Chia-Chen Ho</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Chin-Ping%20Huang"> Chin-Ping Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chin-Heng%20Tien"> Chin-Heng Tien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is known that natural dyeing of cloth results in moderate color but poor color fastness. This study points out the correlation between the macroscopic color fastness of a natural dye on cotton fiber and the microscopic binding energy of the dye molecule to the cellulose. With an added metallic mordant, the newly formed coordination bond bridges the dye to the fiber surface and thus affects the color fastness as well as the color appearance. Density functional theory (DFT) calculations are therefore used to explore the most probable mechanism of the dyeing process. Finally, the experimental results reflect the strong effect of three different metal ions on naturally dyed cloth. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binding%20energy" title="binding energy">binding energy</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20fastness" title=" color fastness"> color fastness</a>, <a href="https://publications.waset.org/abstracts/search?q=density%20functional%20theory%20%28DFT%29" title=" density functional theory (DFT)"> density functional theory (DFT)</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20dyeing" title=" natural dyeing"> natural dyeing</a>, <a href="https://publications.waset.org/abstracts/search?q=metallic%20mordant" title=" metallic mordant"> metallic mordant</a> </p> <a href="https://publications.waset.org/abstracts/37833/the-role-of-metallic-mordant-in-natural-dyeing-process-experimental-and-quantum-study-on-color-fastness" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37833.pdf" target="_blank" class="btn btn-primary 
btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">558</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1064</span> Effect of Color on Anagram Solving Ability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khushi%20Chhajed">Khushi Chhajed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Context: Color has been found to have an impact on cognitive performance. Due to the negative connotation associated with red, it has been found to impair performance on intellectual tasks. Aim: This study aims to assess the effect of color on individuals' anagram solving ability. Methodology: An experimental study was conducted on 66 participants in the age group of 18–24 years. A self-made anagram assessment tool was administered. Participants were expected to solve the tool in three colors: red, blue, and grey. Results: A lower score was found when the tool was presented in blue than in red. The study also found that participants took relatively more time to solve the red-colored sheet. However, these results are inconsistent with the pre-existing literature. Conclusion: Hence, an association between color and performance on cognitive tasks can be seen. Future directions and potential limitations are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20psychology" title="color psychology">color psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=experiment" title=" experiment"> experiment</a>, <a href="https://publications.waset.org/abstracts/search?q=anagram" title=" anagram"> anagram</a>, <a href="https://publications.waset.org/abstracts/search?q=performance" title=" performance"> performance</a> </p> <a href="https://publications.waset.org/abstracts/160096/effect-of-color-on-anagram-solving-ability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1063</span> Understanding Perceptual Differences and Preferences of Urban Color in New Taipei City</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuheng%20Tao">Yuheng Tao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rapid urbanization has brought incompatibility and excessive homogeneity to urban systems, and urban color planning has become one of the most effective ways to restore the characteristics of cities. Among the many studies of urban color design, the establishment of urban theme colors has rarely been discussed. This study took the “New Taipei City Environmental Aesthetic Color” project as a research case and conducted mixed-method research that included expert interviews and a quantitative survey. 
This study describes how theme colors were selected by the experts and investigates the public’s perception and preference of the selected theme colors. Several findings include: 1) urban memory plays a significant role in determining urban theme colors; 2) when establishing urban theme colors, areas/cities with relatively weak urban memory should be defined first; 3) urban theme colors that imply cultural attributes are more widely accepted by the public; 4) a representative city theme color helps conserve culture rather than guide innovation. In addition, this research organizes the symbolism and specific content of urban theme colors and provides a more scientific urban theme color selection scheme for urban planners. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=urban%20theme%20color" title="urban theme color">urban theme color</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20color%20attribute" title=" urban color attribute"> urban color attribute</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20perception" title=" public perception"> public perception</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20preferences" title=" public preferences"> public preferences</a> </p> <a href="https://publications.waset.org/abstracts/156583/understanding-perceptual-differences-and-preferences-of-urban-color-in-new-taipei-city" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156583.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1062</span> Tomato Fruit Color Changes during Ripening of Vine</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.Radzevi%C4%8Dius">A. Radzevičius</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Vi%C5%A1kelis"> P. Viškelis</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Vi%C5%A1kelis"> J. Viškelis</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Karklelien%C4%97"> R. Karklelienė</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Ju%C5%A1kevi%C4%8Dien%C4%97"> D. Juškevičienė</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tomato (Lycopersicon esculentum Mill.) hybrid 'Brooklyn' was investigated at the LRCAF Institute of Horticulture. Five green tomatoes growing on the vine were selected for investigation. Color measurements were made in the greenhouse on the same selected fruits every two days until they were fully ripe (the fruits were not harvested and remained growing and ripening on the vine throughout the experiment). The study showed that the color index L tends to decline during ripening, with a coefficient of determination (R²) of 0.9504. The hue angle also tends to decline during ripening on the vine, with a coefficient of determination (R²) of 0.9739. The opposite tendency was determined for color index a, which tends to increase during ripening; this was expressed by a polynomial trendline whose coefficient of determination (R²) reached 0.9592. 
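The trendline fits reported above (a color index regressed against ripening time, with R² as the goodness of fit) follow from ordinary least squares. A minimal sketch, with an invented function name and hypothetical data rather than the authors' measurements:

```python
def linear_fit_r2(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b, r2), where r2 is
    the coefficient of determination of the fitted trendline."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

A negative slope with R² close to 1 corresponds to the declining trend reported for the L index and hue angle; a polynomial trendline, as used for index a, would fit higher-order terms in the same least-squares fashion.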
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color" title="color">color</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20index" title=" color index"> color index</a>, <a href="https://publications.waset.org/abstracts/search?q=ripening" title=" ripening"> ripening</a>, <a href="https://publications.waset.org/abstracts/search?q=tomato" title=" tomato"> tomato</a> </p> <a href="https://publications.waset.org/abstracts/5502/tomato-fruit-color-changes-during-ripening-of-vine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5502.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">488</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1061</span> Contrast Enhancement of Color Images with Color Morphing Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javed%20Khan">Javed Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Aamir%20Saeed%20Malik"> Aamir Saeed Malik</a>, <a href="https://publications.waset.org/abstracts/search?q=Nidal%20Kamel"> Nidal Kamel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarat%20Chandra%20Dass"> Sarat Chandra Dass</a>, <a href="https://publications.waset.org/abstracts/search?q=Azura%20Mohd%20Affandi"> Azura Mohd Affandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Low-contrast images can result from wrong image acquisition settings or poor illumination conditions. Such images may not be visually appealing, and feature extraction from them can be difficult. Contrast enhancement of color images can be useful in the medical area for visual inspection. 
In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into the normalized RGB color space. An adaptive histogram equalization technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels of the original (low-contrast) image and of the contrast-enhanced image produced by adaptive histogram equalization (AHE) are morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients, and the results are analyzed using cumulative variance and contrast improvement factor measures. The results are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20enhacement" title="contrast enhancement">contrast enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=normalized%20RGB" title=" normalized RGB"> normalized RGB</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20histogram%20equalization" title=" adaptive histogram equalization"> adaptive histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=cumulative%20variance." 
title=" cumulative variance."> cumulative variance.</a> </p> <a href="https://publications.waset.org/abstracts/42755/contrast-enhancement-of-color-images-with-color-morphing-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42755.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=36">36</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=37">37</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=color%20SIFT&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a 
target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> 
jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
