
Search results for: vehicle color recognition

aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="vehicle color recognition"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4052</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: vehicle color recognition</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4052</span> SCNet: A Vehicle Color Classification Network Based on Spatial Cluster Loss and Channel Attention Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fei%20Gao">Fei Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xinyang%20Dong"> Xinyang Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Yisu%20Ge"> Yisu Ge</a>, <a href="https://publications.waset.org/abstracts/search?q=Shufang%20Lu"> Shufang Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Libo%20Weng"> Libo Weng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vehicle color recognition plays an important role in traffic accident investigation. However, due to the influence of illumination, weather, and noise, vehicle color recognition still faces challenges. In this paper, a vehicle color classification network based on spatial cluster loss and channel attention mechanism (SCNet) is proposed for vehicle color recognition. A channel attention module is applied to extract the features of vehicle color representative regions and reduce the weight of nonrepresentative color regions in the channel. The proposed loss function, called spatial clustering loss (SC-loss), consists of two channel-specific components, such as a concentration component and a diversity component. The concentration component forces all feature channels belonging to the same class to be concentrated through the channel cluster. The diversity components impose additional constraints on the channels through the mean distance coefficient, making them mutually exclusive in spatial dimensions. 
4051. An Ensemble-based Method for Vehicle Color Recognition
Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi
Abstract: The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods, which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper four CNNs with different architectures, each performing well on different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stack generalization algorithm as an ensemble technique. As a result, the final model performs better in vehicle color identification than each CNN individually. Evaluation results in terms of overall average accuracy and accuracy variance show that the proposed method outperforms its state-of-the-art rivals.
Keywords: vehicle color recognition, ensemble algorithm, stack generalization, convolutional neural network
Procedia: https://publications.waset.org/abstracts/146909/an-ensemble-based-method-for-vehicle-color-recognition | PDF: https://publications.waset.org/abstracts/146909.pdf | Downloads: 85
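Stack generalization (stacking) as used above can be illustrated with a minimal sketch: a level-1 meta-learner is trained on the concatenated class-probability outputs of the base models. The four base CNNs are stood in for by random probability arrays, and the logistic-regression combiner and class count are illustrative assumptions.

```python
# Sketch of stack generalization over four base classifiers, assuming each
# CNN exposes per-class probabilities; base outputs are simulated with random
# arrays purely to show the data flow, not to reproduce the paper's models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_classes = 500, 8                 # e.g. 8 vehicle colors (assumption)

# Out-of-fold probability predictions from four base CNNs (placeholders).
base_probs = [rng.dirichlet(np.ones(n_classes), n_train) for _ in range(4)]
X_meta = np.hstack(base_probs)              # (n_train, 4 * n_classes)
y = rng.integers(0, n_classes, n_train)     # true labels

meta = LogisticRegression(max_iter=1000).fit(X_meta, y)   # level-1 combiner

# At inference, the four models' probabilities are stacked the same way.
x_new = np.hstack([rng.dirichlet(np.ones(n_classes), 1) for _ in range(4)])
print(meta.predict(x_new))
```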
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vehicle%20Color%20Recognition" title="Vehicle Color Recognition">Vehicle Color Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Ensemble%20Algorithm" title="Ensemble Algorithm">Ensemble Algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Stack%20Generalization" title="Stack Generalization">Stack Generalization</a>, <a href="https://publications.waset.org/abstracts/search?q=Convolutional%20Neural%20Network" title="Convolutional Neural Network">Convolutional Neural Network</a> </p> <a href="https://publications.waset.org/abstracts/146909/an-ensemble-based-method-for-vehicle-color-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">85</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4050</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demands of smart visual thing recognition in various devices have been increased rapidly for daily smart production, living and learning systems in recent years. This paper proposed a visual thing recognition system, which combines binary scale-invariant feature transform (SIFT), bag of words model (BoW), and support vector machine (SVM) by using color information. Since the traditional SIFT features and SVM classifiers only use the gray information, color information is still an important feature for visual thing recognition. With color-based SIFT features and SVM, we can discard unreliable matching pairs and increase the robustness of matching tasks. The experimental results show that the proposed object recognition system with color-assistant SIFT SVM classifier achieves higher recognition rate than that with the traditional gray SIFT and SVM classification in various situations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4049</span> Evaluating the Performance of Color Constancy Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Damanjit%20Kaur">Damanjit Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Avani%20Bhatia"> Avani Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color constancy is significant for human vision since color is a pictorial cue that helps in solving different visions tasks such as tracking, object recognition, or categorization. Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine colors of objects independent of the color of the light source. This research work studies the most of the well-known color constancy algorithms like white point and gray world. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20constancy" title="color constancy">color constancy</a>, <a href="https://publications.waset.org/abstracts/search?q=gray%20world" title=" gray world"> gray world</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20patch" title=" white patch"> white patch</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20white%20patch" title=" modified white patch "> modified white patch </a> </p> <a href="https://publications.waset.org/abstracts/4799/evaluating-the-performance-of-color-constancy-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4799.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4048</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection recognized as the primary step in most of the applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection applications greatly relies on the two components: feature extraction and classification method. Skin color is the most vital information used for skin detection purpose. However, color feature alone sometimes could not handle images with having same color distribution with skin color. A color feature of pixel-based does not eliminate the skin-like color due to the intensity of skin and skin-like color fall under the same distribution. Hence, the statistical color analysis will be exploited such mean and standard deviation as an additional feature to increase the reliability of skin detector. In this paper, we studied the effectiveness of statistical color feature for human skin detection. Furthermore, the paper analyzed the integrated color and texture using eight classifiers with three color spaces of RGB, YCbCr, and HSV. The experimental results show that the integrating statistical feature using Random Forest classifier achieved a significant performance with an F1-score 0.969. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4047</span> Burnout Recognition for Call Center Agents by Using Skin Color Detection with Hand Poses </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Sayed%20A.%20Sharara">El Sayed A. Sharara</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tsuji"> A. Tsuji</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Call centers have been expanding and they have influence on activation in various markets increasingly. A call center&rsquo;s work is known as one of the most demanding and stressful jobs. In this paper, we propose the fatigue detection system in order to detect burnout of call center agents in the case of a neck pain and upper back pain. Our proposed system is based on the computer vision technique combined skin color detection with the Viola-Jones object detector. To recognize the gesture of hand poses caused by stress sign, the YCbCr color space is used to detect the skin color region including face and hand poses around the area related to neck ache and upper back pain. A cascade of clarifiers by Viola-Jones is used for face recognition to extract from the skin color region. The detection of hand poses is given by the evaluation of neck pain and upper back pain by using skin color detection and face recognition method. The system performance is evaluated using two groups of dataset created in the laboratory to simulate call center environment. Our call center agent burnout detection system has been implemented by using a web camera and has been processed by MATLAB. From the experimental results, our system achieved 96.3% for upper back pain detection and 94.2% for neck pain detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=call%20center%20agents" title="call center agents">call center agents</a>, <a href="https://publications.waset.org/abstracts/search?q=fatigue" title=" fatigue"> fatigue</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20color%20detection" title=" skin color detection"> skin color detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a> </p> <a href="https://publications.waset.org/abstracts/74913/burnout-recognition-for-call-center-agents-by-using-skin-color-detection-with-hand-poses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">293</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4046</span> Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Zhang">Qiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojian%20Hu"> Xiaojian Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multi-vehicle tracking, based on feature selection, the tracking system efficiently tracks vehicles in a video with minimal error in combination with color attribution, which focuses on presenting a simple and fast, yet accurate and robust solution to the problem such as inaccurately and untimely responses of statistics-based adaptive traffic control system in the intersection scenario. In this study, a real-time tracking system is proposed for multi-vehicle tracking in the intersection scene. Considering the complexity and application feasibility of the algorithm, in the object detection step, the detection result provided by virtual loops were post-processed and then used as the input for the tracker. For the tracker, lightweight methods were designed to extract and select features and incorporate them into the adaptive color tracking (ACT) framework. And the approbatory online feature selection algorithms are integrated on the mature ACT system with good compatibility. The proposed feature selection methods and multi-vehicle tracking method are evaluated on KITTI datasets and show efficient vehicle tracking performance when compared to the other state-of-the-art approaches in the same category. And the system performs excellently on the video sequences recorded at the intersection. Furthermore, the presented vehicle tracking system is suitable for surveillance applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time" title="real-time">real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-vehicle%20tracking" title=" multi-vehicle tracking"> multi-vehicle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20attribution" title=" color attribution"> color attribution</a> </p> <a href="https://publications.waset.org/abstracts/136438/real-time-multi-vehicle-tracking-application-at-intersections-based-on-feature-selection-in-combination-with-color-attribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4045</span> Hand Detection and Recognition for Malay Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Noah%20A.%20Rahman">Mohd Noah A. Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Afzaal%20H.%20Seyal"> Afzaal H. Seyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Norhafilah%20Bara"> Norhafilah Bara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Developing a software application using an interface with computers and peripheral devices using gestures of human body such as hand movements keeps growing in interest. A review on this hand gesture detection and recognition based on computer vision technique remains a very challenging task. This is to provide more natural, innovative and sophisticated way of non-verbal communication, such as sign language, in human computer interaction. Nevertheless, this paper explores hand detection and hand gesture recognition applying a vision based approach. The hand detection and recognition used skin color spaces such as HSV and YCrCb are applied. However, there are limitations that are needed to be considered. Almost all of skin color space models are sensitive to quickly changing or mixed lighting circumstances. There are certain restrictions in order for the hand recognition to give better results such as the distance of user’s hand to the webcam and the posture and size of the hand. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title=" hand gesture"> hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/46765/hand-detection-and-recognition-for-malay-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4044</span> Colour Recognition Pen Technology in Dental Technique and Dental Laboratories</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Dabirinezhad">M. Dabirinezhad</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Bayat%20Pour"> M. Bayat Pour</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Dabirinejad"> A. Dabirinejad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of the color spectrum of the teeth plays a significant role in the dental laboratories to produce dentures. Since there are various types and colours of teeth for each patient, there is a need to specify the exact and the most suitable colour to produce a denture. Usually, dentists utilize pallets to identify the color that suits a patient based on the color of the adjacent teeth. Consistent with this, there can be human errors by dentists to recognize the optimum colour for the patient, and it can be annoying for the patient. According to the statistics, there are some claims from the patients that they are not satisfied by the colour of their dentures after the installation of the denture in their mouths. This problem emanates from the lack of sufficient accuracy during the colour recognition process of denture production. The colour recognition pen (CRP) is a technology to distinguish the colour spectrum of the intended teeth with the highest accuracy. CRP is equipped with a sensor that is capable to read and analyse a wide range of spectrums. It is also connected to a database that contains all the spectrum ranges, which exist in the market. The database is editable and updatable based on market requirements. Another advantage of this invention can be mentioned as saving time for the patients since there is no need to redo the denture production in case of failure on the first try. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colour%20recognition%20pen" title="colour recognition pen">colour recognition pen</a>, <a href="https://publications.waset.org/abstracts/search?q=colour%20spectrum" title=" colour spectrum"> colour spectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=dental%20laboratory" title=" dental laboratory"> dental laboratory</a>, <a href="https://publications.waset.org/abstracts/search?q=denture" title=" denture"> denture</a> </p> <a href="https://publications.waset.org/abstracts/132064/colour-recognition-pen-technology-in-dental-technique-and-dental-laboratories" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4043</span> Traffic Light Detection Using Image Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaishnavi%20Shivde">Vaishnavi Shivde</a>, <a href="https://publications.waset.org/abstracts/search?q=Shrishti%20Sinha"> Shrishti Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Trapti%20Mishra"> Trapti Mishra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions as well as for autonomous driving in the city. This paper proposed a deep-learning-based traffic light recognition method that consists of a pixel-wise image segmentation technique and a fully convolutional network i.e., UNET architecture. This paper has used a method for detecting the position and recognizing the state of the traffic lights in video sequences is presented and evaluated using Traffic Light Dataset which contains masked traffic light image data. The first stage is the detection, which is accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, segmentation of possible traffic lights. The second stage is the recognition, which means identifying the color of the traffic light or knowing the state of traffic light which is achieved by using a Convolutional Neural Network (UNET architecture). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20light%20detection" title="traffic light detection">traffic light detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/137254/traffic-light-detection-using-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">173</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4042</span> RGB Color Based Real Time Traffic Sign Detection and Feature Extraction System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kay%20Thinzar%20Phu">Kay Thinzar Phu</a>, <a href="https://publications.waset.org/abstracts/search?q=Lwin%20Lwin%20Oo"> Lwin Lwin Oo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In an intelligent transport system and advanced driver assistance system, the developing of real-time traffic sign detection and recognition (TSDR) system plays an important part in recent research field. There are many challenges for developing real-time TSDR system due to motion artifacts, variable lighting and weather conditions and situations of traffic signs. Researchers have already proposed various methods to minimize the challenges problem. The aim of the proposed research is to develop an efficient and effective TSDR in real time. This system proposes an adaptive thresholding method based on RGB color for traffic signs detection and new features for traffic signs recognition. In this system, the RGB color thresholding is used to detect the blue and yellow color traffic signs regions. The system performs the shape identify to decide whether the output candidate region is traffic sign or not. Lastly, new features such as termination points, bifurcation points, and 90’ angles are extracted from validated image. This system uses Myanmar Traffic Sign dataset. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adaptive%20thresholding%20based%20on%20RGB%20color" title="adaptive thresholding based on RGB color">adaptive thresholding based on RGB color</a>, <a href="https://publications.waset.org/abstracts/search?q=blue%20color%20detection" title=" blue color detection"> blue color detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=yellow%20color%20detection" title=" yellow color detection"> yellow color detection</a> </p> <a href="https://publications.waset.org/abstracts/77127/rgb-color-based-real-time-traffic-sign-detection-and-feature-extraction-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77127.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4041</span> A Background Subtraction Based Moving Object Detection Around the Host Vehicle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyojin%20Lim">Hyojin Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuong%20Nguyen%20Khac"> Cuong Nguyen Khac</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose moving object detection method which is helpful for driver to safely take his/her car out of parking lot. When moving objects such as motorbikes, pedestrians, the other cars and some obstacles are detected at the rear-side of host vehicle, the proposed algorithm can provide to driver warning. We assume that the host vehicle is just before departure. Gaussian Mixture Model (GMM) based background subtraction is basically applied. Pre-processing such as smoothing and post-processing as morphological filtering are added.We examine “which color space has better performance for detection of moving objects?” Three color spaces including RGB, YCbCr, and Y are applied and compared, in terms of detection rate. Through simulation, we prove that RGB space is more suitable for moving object detection based on background subtraction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gaussian%20mixture%20model" title="gaussian mixture model">gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title=" background subtraction"> background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20object%20detection" title=" moving object detection"> moving object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20space" title=" color space"> color space</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20filtering" title=" morphological filtering"> morphological filtering</a> </p> <a href="https://publications.waset.org/abstracts/32650/a-background-subtraction-based-moving-object-detection-around-the-host-vehicle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">617</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4040</span> Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seynabou%20Toure">Seynabou Toure</a>, <a href="https://publications.waset.org/abstracts/search?q=Oumar%20Diop"> Oumar Diop</a>, <a href="https://publications.waset.org/abstracts/search?q=Kidiyo%20Kpalma"> Kidiyo Kpalma</a>, <a href="https://publications.waset.org/abstracts/search?q=Amadou%20S.%20Maiga"> Amadou S. Maiga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color and texture are the two most determinant elements for perception and recognition of the objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. But, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspections, food science medical imaging among others. Taking into account color in the definition of these descriptors makes it possible to better characterize images. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure separately different parts of the electromagnetic spectrum; the visible ones and even those that are invisible to the human eye. The amounts of light reflected by the earth in spectral bands are then transformed into grayscale images. The primary natural colors Red (R) Green (G) and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works investigate the effect of using different color space for color texture classification. 
4039. A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the Part of the Tokyo Subway Map
Authors: Katsuhiro Narikiyo, Shota Hashikawa
Abstract: This paper proposes a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of the project is to convert color images to monochrome images for the color-blind. We treat sharply colored images such as the Tokyo subway map, in which each color carries important information. Color-blind viewers cannot distinguish similar colors; if those colors are converted to distinct gray values, they become distinguishable. We therefore convert the color images to monochrome images.
Keywords: color-blind, JPEG, monochrome image, denoise
Procedia: https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map | PDF: https://publications.waset.org/abstracts/2968.pdf | Downloads: 355
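A hedged sketch of the conversion idea: the image's dominant colors are quantized and each is mapped to a well-separated gray level, so colors that are confusable for a color-blind reader become distinguishable grays. The cluster count and the intensity-ordered mapping are assumptions, not the paper's algorithm.

```python
# Sketch: quantize dominant colors with k-means and map each cluster to an
# evenly spaced gray level so similar colors become distinct grays.
import numpy as np
from sklearn.cluster import KMeans

def distinct_grays(img_rgb: np.ndarray, k: int = 8) -> np.ndarray:
    pixels = img_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=4).fit_predict(pixels)
    grays = np.linspace(0, 255, k)            # evenly spread gray values
    # Order clusters by mean intensity so the mapping stays roughly intuitive.
    intensity = np.array([pixels[labels == i].mean() for i in range(k)])
    grays = grays[np.argsort(np.argsort(intensity))]
    return grays[labels].reshape(img_rgb.shape[:2]).astype(np.uint8)

img = np.random.randint(0, 256, (32, 32, 3), np.uint8)
print(distinct_grays(img).dtype, distinct_grays(img).shape)
```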
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4038</span> Color Fusion of Remote Sensing Images for Imparting Fluvial Geomorphological Features of River Yamuna and Ganga over Doon Valley </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20S.%20Jagadeesh%20Kumar">P. S. Jagadeesh Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Tracy%20Lin%20Huan"> Tracy Lin Huan</a>, <a href="https://publications.waset.org/abstracts/search?q=Rebecca%20K.%20Rossi"> Rebecca K. Rossi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yanmin%20Yuan"> Yanmin Yuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianpei%20Li"> Xianpei Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The fiscal growth of any country hinges on the prudent administration of water resources. The river Yamuna and Ganga are measured as the life line of India as it affords the needs for life to endure. Earth observation over remote sensing images permits the precise description and identification of ingredients on the superficial from space and airborne platforms. Multiple and heterogeneous image sources are accessible for the same geographical section; multispectral, hyperspectral, radar, multitemporal, and multiangular images. In this paper, a taxonomical learning of the fluvial geomorphological features of river Yamuna and Ganga over doon valley using color fusion of multispectral remote sensing images was performed. Experimental results exhibited that the segmentation based colorization technique stranded on pattern recognition, and color mapping fashioned more colorful and truthful colorized images for geomorphological feature extraction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20fusion" title="color fusion">color fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=geomorphology" title=" geomorphology"> geomorphology</a>, <a href="https://publications.waset.org/abstracts/search?q=fluvial%20processes" title=" fluvial processes"> fluvial processes</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral%20images" title=" multispectral images"> multispectral images</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/87961/color-fusion-of-remote-sensing-images-for-imparting-fluvial-geomorphological-features-of-river-yamuna-and-ganga-over-doon-valley" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87961.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4037</span> Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiao%20Chen">Xiao Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoying%20Kong"> Xiaoying Kong</a>, <a href="https://publications.waset.org/abstracts/search?q=Min%20Xu"> Min Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a road vehicle detection approach for the intelligent transportation system. This approach mainly uses low-cost magnetic sensor and associated data collection system to collect magnetic signals. This system can measure the magnetic field changing, and it also can detect and count vehicles. We extend Mel Frequency Cepstral Coefficients to analyze vehicle magnetic signals. Vehicle type features are extracted using representation of cepstrum, frame energy, and gap cepstrum of magnetic signals. We design a 2-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features to four typical types of vehicles in Australian suburbs: sedan, VAN, truck, and bus. Experiments results show that our approach achieves a high level of accuracy for vehicle detection and classification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vehicle%20classification" title="vehicle classification">vehicle classification</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20traffic%20model" title=" road traffic model"> road traffic model</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetic%20sensing" title=" magnetic sensing"> magnetic sensing</a> </p> <a href="https://publications.waset.org/abstracts/86644/road-vehicle-recognition-using-magnetic-sensing-feature-extraction-and-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4036</span> Design of Speed Bump Recognition System Integrated with Adjustable Shock Absorber Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ming-Yen%20Chang">Ming-Yen Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sheng-Hung%20Ke"> Sheng-Hung Ke</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research focuses on the development of a speed bump identification system for real-time control of adjustable shock absorbers in vehicular suspension systems. The study initially involved the collection of images of various speed bumps, and rubber speed bump profiles found on roadways. These images were utilized for training and recognition purposes through the deep learning object detection algorithm YOLOv5. Subsequently, the trained speed bump identification program was integrated with an in-vehicle camera system for live image capture during driving. These images were instantly transmitted to a computer for processing. Using the principles of monocular vision ranging, the distance between the vehicle and an approaching speed bump was determined. The appropriate control distance was established through both practical vehicle measurements and theoretical calculations. Collaboratively, with the electronically adjustable shock absorbers equipped in the vehicle, a shock absorber control system was devised to dynamically adapt the damping force just prior to encountering a speed bump. This system effectively mitigates passenger discomfort and enhances ride quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adjustable%20shock%20absorbers" title="adjustable shock absorbers">adjustable shock absorbers</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=monocular%20vision%20ranging" title=" monocular vision ranging"> monocular vision ranging</a>, <a href="https://publications.waset.org/abstracts/search?q=ride" title=" ride"> ride</a> </p> <a href="https://publications.waset.org/abstracts/175109/design-of-speed-bump-recognition-system-integrated-with-adjustable-shock-absorber-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175109.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4035</span> A Unified Deep Framework for Joint 3d Pose Estimation and Action Recognition from a Single Color Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huy%20Hieu%20Pham">Huy Hieu Pham</a>, <a href="https://publications.waset.org/abstracts/search?q=Houssam%20Salmane"> Houssam Salmane</a>, <a href="https://publications.waset.org/abstracts/search?q=Louahdi%20Khoudour"> Louahdi Khoudour</a>, <a href="https://publications.waset.org/abstracts/search?q=Alain%20Crouzil"> Alain Crouzil</a>, <a href="https://publications.waset.org/abstracts/search?q=Pablo%20Zegers"> Pablo Zegers</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergio%20Velastin"> Sergio Velastin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from color video sequences. Our approach proceeds along two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel location of important key points of the body. A two-stream neural network is then designed and trained to map detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that is used for modeling the Spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performing action recognition. Experiments on Human3.6M, Microsoft Research Redmond (MSR) Action3D, and Stony Brook University (SBU) Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20action%20recognition" title="human action recognition">human action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=D-CNN" title=" D-CNN"> D-CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/115449/a-unified-deep-framework-for-joint-3d-pose-estimation-and-action-recognition-from-a-single-color-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4034</span> A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Takanori%20Tanaka">Takanori Tanaka</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Kitao"> Daisuke Kitao</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisuke%20Ikeda"> Daisuke Ikeda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aurora is an attractive phenomenon but it is difficult to understand the whole mechanism of it. An approach of data-intensive science might be an effective approach to elucidate such a difficult phenomenon. To do that we need labeled data, which shows when and what types of auroras, have appeared. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse aurora, and the other do not any aurora. The proposed system retrieves images which are similar to the query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromso Norway, we evaluate two methods of image recognition methods with or without our original color filtering method. The best performance is achieved when SIFT with the color filtering is used and its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data-intensive%20science" title="data-intensive science">data-intensive science</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=aurora" title=" aurora"> aurora</a> </p> <a href="https://publications.waset.org/abstracts/19532/a-similar-image-retrieval-system-for-auroral-all-sky-images-based-on-local-features-and-color-filtering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">449</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4033</span> Object Recognition Approach Based on Generalized Hough Transform and Color Distribution Serving in Generating Arabic Sentences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Farhani">Nada Farhani</a>, <a href="https://publications.waset.org/abstracts/search?q=Naim%20Terbeh"> Naim Terbeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Zrigui"> Mounir Zrigui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The recognition of the objects contained in images has always presented a challenge in the field of research because of several difficulties that the researcher can envisage because of the variability of shape, position, contrast of objects, etc. In this paper, we will be interested in the recognition of objects. The classical Hough Transform (HT) presented a tool for detecting straight line segments in images. The technique of HT has been generalized (GHT) for the detection of arbitrary forms. With GHT, the forms sought are not necessarily defined analytically but rather by a particular silhouette. For more precision, we proposed to combine the results from the GHT with the results from a calculation of similarity between the histograms and the spatiograms of the images. The main purpose of our work is to use the concepts from recognition to generate sentences in Arabic that summarize the content of the image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recognition%20of%20shape" title="recognition of shape">recognition of shape</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20hough%20transformation" title=" generalized hough transformation"> generalized hough transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiogram" title=" spatiogram"> spatiogram</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/101706/object-recognition-approach-based-on-generalized-hough-transform-and-color-distribution-serving-in-generating-arabic-sentences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101706.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4032</span> Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fidel%20L%C3%B2pez%20Saca">Fidel Lòpez Saca</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Avil%C3%A9s-Cruz"> Carlos Avilés-Cruz</a>, <a href="https://publications.waset.org/abstracts/search?q=Miguel%20Magos-Rivera"> Miguel Magos-Rivera</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Antonio%20Lara-Ch%C3%A1vez"> José Antonio Lara-Chávez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shapes-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The used methodology involve stages such as: image capturing, color filtering, location of object mass centers, horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real-time. The implementation was done on a Raspberry Pi 3 system and on a Jetson-Nano device. The proposal is used in an assembly course of bachelor&rsquo;s degree in industrial engineering. The results presented include studying the efficiency of the recognition and processing time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title="deep-learning">deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20identification" title=" image identification"> image identification</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20engineering." title=" industrial engineering."> industrial engineering.</a> </p> <a href="https://publications.waset.org/abstracts/126071/automatic-product-identification-based-on-deep-learning-theory-in-an-assembly-line" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126071.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4031</span> A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Zheng">Yuan Zheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 3D model-based vehicle matching provides a new way for vehicle recognition, localization and tracking. Its key is to construct an evaluation function, also called fitness function, to measure the degree of vehicle matching. The existing fitness functions often poorly perform when the clutter and occlusion exist in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike the existing evaluation functions, the proposed fitness function is to study the vehicle matching problem from both local and global perspectives, which exploits the pixel gradient information as well as the silhouette information. In view of the discrepancy between 3D vehicle model and real vehicle, a weighting strategy is introduced to differently treat the fitting of the model&rsquo;s wireframes. Additionally, a normalization operation for the model&rsquo;s projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to the cluttered background and partial occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D-2D%20matching" title="3D-2D matching">3D-2D matching</a>, <a href="https://publications.waset.org/abstracts/search?q=fitness%20function" title=" fitness function"> fitness function</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20vehicle%20model" title=" 3D vehicle model"> 3D vehicle model</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20image%20gradient" title=" local image gradient"> local image gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouette%20information" title=" silhouette information"> silhouette information</a> </p> <a href="https://publications.waset.org/abstracts/45357/a-practical-and-efficient-evaluation-function-for-3d-model-based-vehicle-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45357.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4030</span> Detecting Characters as Objects Towards Character Recognition on Licence Plates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alden%20Boby">Alden Boby</a>, <a href="https://publications.waset.org/abstracts/search?q=Dane%20Brown"> Dane Brown</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Connan"> James Connan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Character recognition is a well-researched topic across disciplines. Regardless, creating a solution that can cater to multiple situations is still challenging. Vehicle licence plates lack an international standard, meaning that different countries and regions have their own licence plate format. A problem that arises from this is that the typefaces and designs from different regions make it difficult to create a solution that can cater to a wide range of licence plates. The main issue concerning detection is the character recognition stage. This paper aims to create an object detection-based character recognition model trained on a custom dataset that consists of typefaces of licence plates from various regions. Given that characters have featured consistently maintained across an array of fonts, YOLO can be trained to recognise characters based on these features, which may provide better performance than OCR methods such as Tesseract OCR. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=licence%20plate%20recognition" title=" licence plate recognition"> licence plate recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a> </p> <a href="https://publications.waset.org/abstracts/155443/detecting-characters-as-objects-towards-character-recognition-on-licence-plates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4029</span> Spectra Analysis in Sunset Color Demonstrations with a White-Color LED as a Light Source</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Makoto%20Hasegawa">Makoto Hasegawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Seika%20Tokumitsu"> Seika Tokumitsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spectra of light beams emitted from white-color LED torches are different from those of conventional electric torches. In order to confirm if white-color LED torches can be used as light sources for popular sunset color demonstrations in spite of such differences, spectra of travelled light beams and scattered light beams with each of a white-color LED torch (composed of a blue LED and yellow-color fluorescent material) and a conventional electric torch as a light source were measured and compared with each other in a 50 cm-long water tank for sunset color demonstration experiments. Suspension liquid was prepared from acryl-emulsion and tap-water in the water tank, and light beams from the white-color LED torch or the conventional electric torch were allowed to travel in this suspension liquid. Sunset-like color was actually observed when the white-color LED torch was used as the light source in sunset color demonstrations. However, the observed colors when viewed with naked eye look slightly different from those obtainable with the conventional electric torch. At the same time, with the white-color LED, changes in colors in short to middle wavelength regions were recognized with careful observations. From those results, white-color LED torches are confirmed to be applicable as light sources in sunset color demonstrations, although certain attentions have to be paid. Further advanced classes will be successfully performed with white-color LED torches as light sources. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blue%20sky%20demonstration" title="blue sky demonstration">blue sky demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=sunset%20color%20demonstration" title=" sunset color demonstration"> sunset color demonstration</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20LED%20torch" title=" white LED torch"> white LED torch</a>, <a href="https://publications.waset.org/abstracts/search?q=physics%20education" title=" physics education"> physics education</a> </p> <a href="https://publications.waset.org/abstracts/47625/spectra-analysis-in-sunset-color-demonstrations-with-a-white-color-led-as-a-light-source" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4028</span> Object Detection Based on Plane Segmentation and Features Matching for a Service Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ant%C3%B3nio%20J.%20R.%20Neves">António J. R. Neves</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Garcia"> Rui Garcia</a>, <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Dias"> Paulo Dias</a>, <a href="https://publications.waset.org/abstracts/search?q=Alina%20Trifan"> Alina Trifan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the aging of the world population and the continuous growth in technology, service robots are more and more explored nowadays as alternatives to healthcare givers or personal assistants for the elderly or disabled people. Any service robot should be capable of interacting with the human companion, receive commands, navigate through the environment, either known or unknown, and recognize objects. This paper proposes an approach for object recognition based on the use of depth information and color images for a service robot. We present a study on two of the most used methods for object detection, where 3D data is used to detect the position of objects to classify that are found on horizontal surfaces. Since most of the objects of interest accessible for service robots are on these surfaces, the proposed 3D segmentation reduces the processing time and simplifies the scene for object recognition. The first approach for object recognition is based on color histograms, while the second is based on the use of the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title="object detection">object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature" title=" feature"> feature</a>, <a href="https://publications.waset.org/abstracts/search?q=descriptors" title=" descriptors"> descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF" title=" SURF"> SURF</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20images" title=" depth images"> depth images</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20robots" title=" service robots"> service robots</a> </p> <a href="https://publications.waset.org/abstracts/39840/object-detection-based-on-plane-segmentation-and-features-matching-for-a-service-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">545</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4027</span> A Neural Approach for Color-Textured Images Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalid%20Salhi">Khalid Salhi</a>, <a href="https://publications.waset.org/abstracts/search?q=El%20Miloud%20Jaara"> El Miloud Jaara</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Talibi%20Alaoui"> Mohammed Talibi Alaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a neural approach for unsupervised natural color-texture image segmentation, which is based on both Kohonen maps and mathematical morphology, using a combination of the texture and the image color information of the image, namely, the fractal features based on fractal dimension are selected to present the information texture, and the color features presented in RGB color space. These features are then used to train the network Kohonen, which will be represented by the underlying probability density function, the segmentation of this map is made by morphological watershed transformation. The performance of our color-texture segmentation approach is compared first, to color-based methods or texture-based methods only, and then to k-means method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=color-texture" title=" color-texture"> color-texture</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal" title=" fractal"> fractal</a>, <a href="https://publications.waset.org/abstracts/search?q=watershed" title=" watershed"> watershed</a> </p> <a href="https://publications.waset.org/abstracts/51740/a-neural-approach-for-color-textured-images-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4026</span> An Automated System for the Detection of Citrus Greening Disease Based on Visual Descriptors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sidra%20Naeem">Sidra Naeem</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayesha%20Naeem"> Ayesha Naeem</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahar%20Rahim"> Sahar Rahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Nawaz%20Qadri"> Nadia Nawaz Qadri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Citrus greening is a bacterial disease that causes considerable damage to citrus fruits worldwide. Efficient method for this disease detection must be carried out to minimize the production loss. This paper presents a pattern recognition system that comprises three stages for the detection of citrus greening from Orange leaves: segmentation, feature extraction and classification. Image segmentation is accomplished by adaptive thresholding. The feature extraction stage comprises of three visual descriptors i.e. shape, color and texture. From shape feature we have used asymmetry index, from color feature we have used histogram of Cb component from YCbCr domain and from texture feature we have used local binary pattern. Classification was done using support vector machines and k nearest neighbors. The best performances of the system is Accuracy = 88.02% and AUROC = 90.1% was achieved by automatic segmented images. Our experiments validate that: (1). Segmentation is an imperative preprocessing step for computer assisted diagnosis of citrus greening, and (2). The combination of shape, color and texture features form a complementary set towards the identification of citrus greening disease. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=citrus%20greening" title="citrus greening">citrus greening</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/98969/an-automated-system-for-the-detection-of-citrus-greening-disease-based-on-visual-descriptors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98969.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4025</span> Experimental Characterization of the Color Quality and Error Rate for an Red, Green, and Blue-Based Light Emission Diode-Fixture Used in Visible Light Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20F.%20Gutierrez">Juan F. Gutierrez</a>, <a href="https://publications.waset.org/abstracts/search?q=Jesus%20M.%20Quintero"> Jesus M. Quintero</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Sandoval"> Diego Sandoval</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An important feature of LED technology is the fast on-off commutation, which allows data transmission. Visible Light Communication (VLC) is a wireless method to transmit data with visible light. Modulation formats such as On-Off Keying (OOK) and Color Shift Keying (CSK) are used in VLC. Since CSK is based on three color bands uses red, green, and blue monochromatic LED (RGB-LED) to define a pattern of chromaticities. This type of CSK provides poor color quality in the illuminated area. This work presents the design and implementation of a VLC system using RGB-based CSK with 16, 8, and 4 color points, mixing with a steady baseline of a phosphor white-LED, to improve the color quality of the LED-Fixture. The experimental system was assessed in terms of the Color Rendering Index (CRI) and the Symbol Error Rate (SER). Good color quality performance of the LED-Fixture was obtained with an acceptable SER. The laboratory setup used to characterize and calibrate an LED-Fixture is described. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=VLC" title="VLC">VLC</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20lighting" title=" indoor lighting"> indoor lighting</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quality" title=" color quality"> color quality</a>, <a href="https://publications.waset.org/abstracts/search?q=symbol%20error%20rate" title=" symbol error rate"> symbol error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20shift%20keying" title=" color shift keying"> color shift keying</a> </p> <a href="https://publications.waset.org/abstracts/158336/experimental-characterization-of-the-color-quality-and-error-rate-for-an-red-green-and-blue-based-light-emission-diode-fixture-used-in-visible-light-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158336.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4024</span> The Impact of the “Cold Ambient Color = Healthy” Intuition on Consumer Food Choice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yining%20Yu">Yining Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bingjie%20Li"> Bingjie Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Miaolei%20Jia"> Miaolei Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Wang"> Lei Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ambient color temperature is one of the most ubiquitous factors in retailing. However, there is limited research regarding the effect of cold versus warm ambient color on consumers’ food consumption. This research investigates an unexplored lay belief named the “cold ambient color = healthy” intuition and its impact on food choice. We demonstrate that consumers have built the “cold ambient color = healthy” intuition, such that they infer that a restaurant with a cold-colored ambiance is more likely to sell healthy food than a warm-colored restaurant. This deep-seated intuition also guides consumers’ food choices. We find that using a cold (vs. warm) ambient color increases the choice of healthy food, which offers insights into healthy diet promotion for retailers and policymakers. Theoretically, our work contributes to the literature on color psychology, sensory marketing, and food consumption. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ambient%20color%20temperature" title="ambient color temperature">ambient color temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=cold%20ambient%20color" title=" cold ambient color"> cold ambient color</a>, <a href="https://publications.waset.org/abstracts/search?q=food%20choice" title=" food choice"> food choice</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20wellbeing" title=" consumer wellbeing"> consumer wellbeing</a> </p> <a href="https://publications.waset.org/abstracts/148864/the-impact-of-the-cold-ambient-color-healthy-intuition-on-consumer-food-choice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4023</span> Design and Construction of Vehicle Tracking System with Global Positioning System/Global System for Mobile Communication Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bala%20Adamu%20Malami">Bala Adamu Malami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The necessity of low-cost electronic vehicle/car security designed in coordination with other security measures is always there in our society to reduce the risk of vehicle intrusion. Keeping this problem in mind, we are designing an automatic GPS system which is technology to build an integrated and fully customized vehicle to detect the movement of the vehicle and also serve as a security system at a reasonable cost. Users can locate the vehicle's position via GPS by using the Google Maps application to show vehicle coordinates on a smartphone. The tracking system uses a Global System for Mobile Communication (GSM) modem for communication between the mobile station and the microcontroller to send and receive commands. Further design can be improved to capture the vehicle movement range and alert the vehicle owner when the vehicle is out of range. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electronic" title="electronic">electronic</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=GSM%20modem" title=" GSM modem"> GSM modem</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle" title=" vehicle"> vehicle</a> </p> <a href="https://publications.waset.org/abstracts/159657/design-and-construction-of-vehicle-tracking-system-with-global-positioning-systemglobal-system-for-mobile-communication-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159657.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=135">135</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=136">136</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=vehicle%20color%20recognition&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10