
Search results for: gesture recognition

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="gesture recognition"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1723</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: gesture recognition</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1723</span> Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keyi%20Wang">Keyi Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Similar to the touchscreen, hand gesture based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method of an image-based hand gesture image and video clip recognition system using a CNN (Convolutional Neural Network) with a dataset. A dataset containing 6 hand gesture images is used to train a 2D CNN model. ~98% accuracy is achieved. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with pre-trained models is able to recognize static and dynamic hand gestures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title=" hand gesture recognition"> hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/132854/static-and-dynamic-hand-gesture-recognition-using-convolutional-neural-network-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132854.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1722</span> Hand Detection and Recognition for Malay Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Noah%20A.%20Rahman">Mohd Noah A. Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Afzaal%20H.%20Seyal"> Afzaal H. Seyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Norhafilah%20Bara"> Norhafilah Bara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Developing a software application using an interface with computers and peripheral devices using gestures of human body such as hand movements keeps growing in interest. A review on this hand gesture detection and recognition based on computer vision technique remains a very challenging task. This is to provide more natural, innovative and sophisticated way of non-verbal communication, such as sign language, in human computer interaction. Nevertheless, this paper explores hand detection and hand gesture recognition applying a vision based approach. The hand detection and recognition used skin color spaces such as HSV and YCrCb are applied. However, there are limitations that are needed to be considered. Almost all of skin color space models are sensitive to quickly changing or mixed lighting circumstances. There are certain restrictions in order for the hand recognition to give better results such as the distance of user’s hand to the webcam and the posture and size of the hand. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title=" hand gesture"> hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/46765/hand-detection-and-recognition-for-malay-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1721</span> Defect Localization and Interaction on Surfaces with Projection Mapping and Gesture Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Wang">Qiang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=MingRong%20Lai"> MingRong Lai</a>, <a href="https://publications.waset.org/abstracts/search?q=Miao%20Luo"> Miao Luo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a method for accurately localizing and interacting with known surface defects by overlaying patterns onto real-world surfaces using a projection system. Given the world coordinates of the defects, we project corresponding patterns onto the surfaces, providing an intuitive visualization of the specific defect locations. To enable users to interact with and retrieve more information about individual defects, we implement a gesture recognition system based on a pruned and optimized version of YOLOv6. This lightweight model achieves an accuracy of 82.8% and is suitable for deployment on low-performance devices. Our approach demonstrates the potential for enhancing defect identification, inspection processes, and user interaction in various applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=defect%20localization" title="defect localization">defect localization</a>, <a href="https://publications.waset.org/abstracts/search?q=projection%20mapping" title=" projection mapping"> projection mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv6" title=" YOLOv6"> YOLOv6</a> </p> <a href="https://publications.waset.org/abstracts/165856/defect-localization-and-interaction-on-surfaces-with-projection-mapping-and-gesture-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165856.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1720</span> Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maha%20Jazouli">Maha Jazouli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Suicide is one of the most important causes of death in the prison environment, both in Canada and internationally. Rates of attempts of suicide and self-harm have been on the rise in recent years, with hangings being the most frequent method resorted to. The objective of this article is to propose a method to automatically detect in real time suicidal behaviors. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Our proposed system gives us satisfactory results. This smart video surveillance system can help assist staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=suicide%20detection" title="suicide detection">suicide detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect%20azure" title=" Kinect azure"> Kinect azure</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20camera" title=" RGB-D camera"> RGB-D camera</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a> </p> <a href="https://publications.waset.org/abstracts/143744/automatic-detection-of-suicidal-behaviors-using-an-rgb-d-camera-azure-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143744.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">188</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1719</span> Hand Gestures Based Emotion Identification Using Flex Sensors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Ali">S. Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Yunus"> R. Yunus</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Arif"> A. Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Ayaz"> Y. Ayaz</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Baber%20Sial"> M. Baber Sial</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Asif"> R. Asif</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Naseer"> N. Naseer</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jawad%20Khan"> M. Jawad Khan </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we have proposed a gesture to emotion recognition method using flex sensors mounted on metacarpophalangeal joints. The flex sensors are fixed in a wearable glove. The data from the glove are sent to PC using Wi-Fi. Four gestures: finger pointing, thumbs up, fist open and fist close are performed by five subjects. Each gesture is categorized into sad, happy, and excited class based on the velocity and acceleration of the hand gesture. Seventeen inspectors observed the emotions and hand gestures of the five subjects. The emotional state based on the investigators assessment and acquired movement speed data is compared. Overall, we achieved 77% accurate results. Therefore, the proposed design can be used for emotional state detection applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20identification" title="emotion identification">emotion identification</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20models" title=" emotion models"> emotion models</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20perception" title=" user perception"> user perception</a> </p> <a href="https://publications.waset.org/abstracts/98297/hand-gestures-based-emotion-identification-using-flex-sensors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98297.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">285</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1718</span> Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saad%20M.%20Darwish">Saad M. Darwish</a>, <a href="https://publications.waset.org/abstracts/search?q=Magda%20M.%20Madbouly"> Magda M. Madbouly</a>, <a href="https://publications.waset.org/abstracts/search?q=Murad%20B.%20Khorsheed"> Murad B. Khorsheed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sign Languages (SL) are the most accomplished forms of gestural communication. Therefore, their automatic analysis is a real challenge, which is interestingly implied to their lexical and syntactic organization levels. Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMM (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of Eigen decomposition to suit non-square matrices to reduce multi attribute hand gesture data to feature vectors. SVD optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by some adequate Type-2 fuzzy operators that permits us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both random and fuzzy uncertainties existing universally in the sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals besides a better classification performance than the classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title="hand gesture recognition">hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title=" hand detection"> hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=type-2%20fuzzy%20logic" title=" type-2 fuzzy logic"> type-2 fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=hidden%20Markov%20Model" title=" hidden Markov Model "> hidden Markov Model </a> </p> <a href="https://publications.waset.org/abstracts/18565/hand-gesture-recognition-for-sign-language-a-new-higher-order-fuzzy-hmm-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18565.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1717</span> CONDUCTHOME: Gesture Interface Control of Home Automation Boxes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Branstett">J. Branstett</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Gagneux"> V. Gagneux</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Leleu"> A. Leleu</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Levadoux"> B. Levadoux</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Pascale"> J. Pascale</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the interface CONDUCTHOME which controls home automation systems with a Leap Motion using ‘invariant gesture protocols’. The function of this interface is to simplify the interaction of the user with its environment. A hardware part allows the Leap Motion to be carried around the house. A software part interacts with the home automation box and displays the useful information for the user. An objective of this work is the development a natural/invariant/simple gesture control interface to help elder people/people with disabilities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automation" title="automation">automation</a>, <a href="https://publications.waset.org/abstracts/search?q=ergonomics" title=" ergonomics"> ergonomics</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=interoperability" title=" interoperability"> interoperability</a> </p> <a href="https://publications.waset.org/abstracts/38302/conducthome-gesture-interface-control-of-home-automation-boxes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38302.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1716</span> Gesture-Controlled Interface Using Computer Vision and Python</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vedant%20Vardhan%20Rathour">Vedant Vardhan Rathour</a>, <a href="https://publications.waset.org/abstracts/search?q=Anant%20Agrawal"> Anant Agrawal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the Speech Recognition library allows for seamless execution of tasks like web searches, location navigation and gesture control on the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/193844/gesture-controlled-interface-using-computer-vision-and-python" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1715</span> Hand Gesture Recognition Interface Based on IR Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang-Keun%20Ahn">Yang-Keun Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwang-Soon%20Choi"> Kwang-Soon Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Young-Choong%20Park"> Young-Choong Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwang-Mo%20Jung"> Kwang-Mo Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vision based user interfaces to control TVs and PCs have the advantage of being able to perform natural control without being limited to a specific device. Accordingly, various studies on hand gesture recognition using RGB cameras or depth cameras have been conducted. However, such cameras have the disadvantage of lacking in accuracy or the construction cost being large. The proposed method uses a low cost IR camera to accurately differentiate between the hand and the background. Also, complicated learning and template matching methodologies are not used, and the correlation between the fingertips extracted through curvatures is utilized to recognize Click and Move gestures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recognition" title="recognition">recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gestures" title=" hand gestures"> hand gestures</a>, <a href="https://publications.waset.org/abstracts/search?q=infrared%20camera" title=" infrared camera"> infrared camera</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20cameras" title=" RGB cameras"> RGB cameras</a> </p> <a href="https://publications.waset.org/abstracts/13373/hand-gesture-recognition-interface-based-on-ir-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13373.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1714</span> Vision-Based Hand Segmentation Techniques for Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jebali">M. Jebali</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jemni"> M. Jemni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work is the part of vision based hand gesture recognition system for Natural Human Computer Interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system. The aim of this paper is to develop robust and efficient hand segmentation algorithm such as an input to another system which attempt to bring the HCI performance nearby the human-human interaction, by modeling an intelligent sign language recognition system based on prediction in the context of dialogue between the system (avatar) and the interlocutor. For the purpose of hand segmentation, an overcoming occlusion approach has been proposed for superior results for detection of hand from an image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HCI" title="HCI">HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20segmentation" title=" hand segmentation"> hand segmentation</a> </p> <a href="https://publications.waset.org/abstracts/26490/vision-based-hand-segmentation-techniques-for-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1713</span> Interactive Shadow Play Animation System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo%20Wan">Bo Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiu%20Wen"> Xiu Wen</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingling%20An"> Lingling An</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoling%20Ding"> Xiaoling Ding</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hadow%20play%20animation" title="hadow play animation">hadow play animation</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect" title=" Kinect"> Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=VRPN" title=" VRPN"> VRPN</a>, <a href="https://publications.waset.org/abstracts/search?q=HCI" title=" HCI"> HCI</a> </p> <a href="https://publications.waset.org/abstracts/19293/interactive-shadow-play-animation-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1712</span> Visualization-Based Feature Extraction for Classification in Real-Time Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=%C3%81goston%20Nagy">Ágoston Nagy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a method of using unsupervised machine learning to visualize the feature space of a dataset in 2D, in order to find most characteristic segments in the set. After dimension reduction, users can select clusters by manual drawing. Selected clusters are recorded into a data model that is used for later predictions, based on realtime data. Predictions are made with supervised learning, using Gesture Recognition Toolkit. The paper introduces two example applications: a semantic audio organizer for analyzing incoming sounds, and a gesture database organizer where gestural data (recorded by a Leap motion) is visualized for further manipulation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20interaction" title=" real-time interaction"> real-time interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/68382/visualization-based-feature-extraction-for-classification-in-real-time-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1711</span> Integrated Gesture and Voice-Activated Mouse Control System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dev%20Pratap%20Singh">Dev Pratap Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshika%20Hasija"> Harshika Hasija</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashwini%20S."> Ashwini S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computers using hand gestures and voice commands. The system leverages advanced computer vision techniques using the Media Pipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the speech recognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control in the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20assistant" title=" voice assistant"> voice assistant</a> </p> <a href="https://publications.waset.org/abstracts/193896/integrated-gesture-and-voice-activated-mouse-control-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">10</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1710</span> Hand Gesture Detection via EmguCV Canny Pruning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20N.%20Mosola">N. N. Mosola</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Molete"> S. J. Molete</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20S.%20Masoebe"> L. S. Masoebe</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Letsae"> M. Letsae</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf societies and those with speech disorder. Communication barriers exist when societies with speech disorder interact with others. This research aims to build a hand recognition system for Lesotho&rsquo;s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned societies. The system has various processing modules. The modules consist of a hand detection engine, image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Canny pruning Haar and Haarcascade detection algorithms. Canny pruning implements the Canny edge detection. This is an optimal image processing algorithm. It is used to detect edges of an object. The system employs a skin detection algorithm. The skin detection performs background subtraction, computes the convex hull, and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real-time. The system was tested using various experiments. 
The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera, and different lighting conditions were considered: the greater the light intensity, the faster the detection rate. Based on these results, the applied methodologies are efficient and provide a plausible path toward a lightweight, inexpensive system for sign language interpretation.
Keywords: Canny pruning, hand recognition, machine learning, skin tracking
Procedia PDF: https://publications.waset.org/abstracts/91296.pdf | Downloads: 185

1709. Patient-Friendly Hand Gesture Recognition Using AI
Authors: K. Prabhu, K. Dinesh, M. Ranjani, M. Suhitha
Abstract: During the difficult times of COVID, hospitalized patients often found it hard to convey what they wanted or needed to an attendee, and sometimes no attendee was present. In such cases, patients can use simple hand gestures to control electrical appliances (for example, switching a zero-watt bulb) and three further gestures for voice-note intimation. In this AI-based hand recognition project, a NodeMCU is used to control the relay; it is connected to Firebase for storing the value in the cloud and is interfaced with the Python code via a Raspberry Pi. For the three intimation gestures, a voice clip is played for the attendee, implemented with Google's text-to-speech and the built-in audio file option of the Raspberry Pi 4. All five gestures are detected when shown to the webcam placed for gesture detection, and a personal computer is used for displaying the gestures and running the code in the Raspberry Pi imager.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nodeMCU" title="nodeMCU">nodeMCU</a>, <a href="https://publications.waset.org/abstracts/search?q=AI%20technology" title=" AI technology"> AI technology</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture" title=" gesture"> gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=patient" title=" patient"> patient</a> </p> <a href="https://publications.waset.org/abstracts/144943/patient-friendly-hand-gesture-recognition-using-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1708</span> Users’ Preferences for Map Navigation Gestures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Y.%20Pang">Y. Y. Pang</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20A.%20Ismail"> N. A. Ismail</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The map is a powerful and convenient tool in helping us to navigate to different places, but the use of indirect devices often makes its usage cumbersome. This study intends to propose a new map navigation dialogue that uses hand gesture. A set of dialogue was developed from users’ perspective to provide users complete freedom for panning, zooming, rotate, and find direction operations. A participatory design experiment was involved here where one hand gesture and two hand gesture dialogues had been analysed in the forms of hand gestures to develop a set of usable dialogues. The major finding was that users prefer one-hand gesture compared to two-hand gesture in map navigation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title="hand gesture">hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=map%20navigation" title=" map navigation"> map navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=participatory%20design" title=" participatory design"> participatory design</a>, <a href="https://publications.waset.org/abstracts/search?q=intuitive%20interaction" title=" intuitive interaction"> intuitive interaction</a> </p> <a href="https://publications.waset.org/abstracts/19455/users-preferences-for-map-navigation-gestures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19455.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1707</span> Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zahra%20Alipour">Zahra Alipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirreza%20Moheb%20Afzali"> Amirreza Moheb Afzali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title="YOLOv8">YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=mediapipe" title=" mediapipe"> mediapipe</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20tracking" title=" finger tracking"> finger tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=joint%20estimation" title=" joint estimation"> joint estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=human-computer%20interaction%20%28HCI%29" title=" human-computer interaction (HCI)"> human-computer interaction (HCI)</a> </p> <a href="https://publications.waset.org/abstracts/194650/real-time-finger-tracking-evaluating-yolov8-and-mediapipe-for-enhanced-hci" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">5</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1706</span> Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marie%20Alaghband">Marie Alaghband</a>, <a href="https://publications.waset.org/abstracts/search?q=Niloofar%20Yousefi"> Niloofar Yousefi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivan%20Garibay"> Ivan Garibay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public tv-station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions of &quot;sad&quot;, &quot;surprise&quot;, &quot;fear&quot;, &quot;angry&quot;, &quot;neutral&quot;, &quot;disgust&quot;, and &quot;happy&quot;. We also considered the &quot;None&quot; class if the image&rsquo;s facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and Human Computer Interaction (HCI) systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=annotated%20facial%20expression%20dataset" title="annotated facial expression dataset">annotated facial expression dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sequenced%20facial%20expression%20dataset" title=" sequenced facial expression dataset"> sequenced facial expression dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a> </p> <a href="https://publications.waset.org/abstracts/129717/facial-expression-phoenix-feph-an-annotated-sequenced-dataset-for-facial-and-emotion-specified-expressions-in-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1705</span> Real-Time Gesture Recognition System Using Microsoft Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankita%20Wadhawan">Ankita Wadhawan</a>, <a href="https://publications.waset.org/abstracts/search?q=Parteek%20Kumar"> Parteek Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Umesh%20Kumar"> Umesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gesture is any body movement that expresses some attitude or any sentiment. Gestures as a sign language are used by deaf people for conveying messages which helps in eliminating the communication barrier between deaf people and normal persons. Nowadays, everybody is using mobile phone and computer as a very important gadget in their life. But there are some physically challenged people who are blind/deaf and the use of mobile phone or computer like device is very difficult for them. So, there is an immense need of a system which works on body gesture or sign language as input. In this research, Microsoft Kinect Sensor, SDK V2 and Hidden Markov Toolkit (HTK) are used to recognize the object, motion of object and human body joints through Touch less NUI (Natural User Interface) in real-time. The depth data collected from Microsoft Kinect has been used to recognize gestures of Indian Sign Language (ISL). The recorded clips are analyzed using depth, IR and skeletal data at different angles and positions. The proposed system has an average accuracy of 85%. The developed Touch less NUI provides an interface to recognize gestures and controls the cursor and click operation in computer just by waving hand gesture. This research will help deaf people to make use of mobile phones, computers and socialize among other persons in the society. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20user%20interface" title=" natural user interface"> natural user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/88362/real-time-gesture-recognition-system-using-microsoft-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1704</span> A Novel Combined Finger Counting and Finite State Machine Technique for ASL Translation Using Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rania%20Ahmed%20Kadry%20Abdel%20Gawad%20Birry">Rania Ahmed Kadry Abdel Gawad Birry</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El-Habrouk"> Mohamed El-Habrouk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a brief survey of the techniques used for sign language recognition along with the types of sensors used to perform the task. It presents a modified method for identification of an isolated sign language gesture using Microsoft Kinect with the OpenNI framework. It presents the way of extracting robust features from the depth image provided by Microsoft Kinect and the OpenNI interface and to use them in creating a robust and accurate gesture recognition system, for the purpose of ASL translation. The Prime Sense’s Natural Interaction Technology for End-user - NITE™ - was also used in the C++ implementation of the system. The algorithm presents a simple finger counting algorithm for static signs as well as directional Finite State Machine (FSM) description of the hand motion in order to help in translating a sign language gesture. This includes both letters and numbers performed by a user, which in-turn may be used as an input for voice pronunciation systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title="American sign language">American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20counting" title=" finger counting"> finger counting</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a> </p> <a href="https://publications.waset.org/abstracts/43466/a-novel-combined-finger-counting-and-finite-state-machine-technique-for-asl-translation-using-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1703</span> Prototyping a Portable, Affordable Sign Language Glove</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vidhi%20Jain">Vidhi Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication between speakers and non-speakers of American Sign Language (ASL) can be problematic, inconvenient, and expensive. This project attempts to bridge the communication gap by designing a portable glove that captures the user’s ASL gestures and outputs the translated text on a smartphone. The glove is equipped with flex sensors, contact sensors, and a gyroscope to measure the flexion of the fingers, the contact between fingers, and the rotation of the hand. The glove’s Arduino UNO microcontroller analyzes the sensor readings to identify the gesture from a library of learned gestures. The Bluetooth module transmits the gesture to a smartphone. Using this device, one day speakers of ASL may be able to communicate with others in an affordable and convenient way. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title="sign language">sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=morse%20code" title=" morse code"> morse code</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title=" American sign language"> American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a> </p> <a href="https://publications.waset.org/abstracts/183474/prototyping-a-portable-affordable-sign-language-glove" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">63</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1702</span> Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daleesha%20M.%20Viswanathan">Daleesha M. Viswanathan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumam%20Mary%20Idicula"> Sumam Mary Idicula</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dynamic hand gestures are an intrinsic component in sign language communication. Extracting spatial temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on the orientation feature is the main concern of this paper. Kalman filter algorithm and Hidden Markov Models (HMM) models are incorporated with this recognition system for hand trajectory tracking and for spatial temporal classification, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=orientation%20features" title="orientation features">orientation features</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20feature%20vector" title=" discrete feature vector"> discrete feature vector</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM." 
title=" HMM."> HMM.</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a> </p> <a href="https://publications.waset.org/abstracts/35653/hand-motion-trajectory-analysis-for-dynamic-hand-gestures-used-in-indian-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1701</span> Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jazouli">M. Jazouli</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Elhoufi"> S. Elhoufi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Majda"> A. Majda</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Zarghili"> A. Zarghili</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Aalouane"> R. Aalouane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autism spectrum disorder is a complex developmental disability. It is defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior. Our study provides a clinical tool which facilitates for doctors the diagnosis of ASD. We focus on automatic identification of five repetitive gestures among autistic children in real time: body rocking, hand flapping, fingers flapping, hand on the face and hands behind back. In this paper, we present a gesture recognition system for children with autism, which consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using artificial neural network (ANN). The first one uses the Microsoft Kinect sensor, the second one chooses points of interest from the 3D skeleton to characterize the gestures, and the last one proposes a neural connectionist model to perform the supervised classification of data. The experimental results show that our system can achieve above 93.3% recognition rate. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ASD" title="ASD">ASD</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect" title=" kinect"> kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=stereotypical%20motor%20movements" title=" stereotypical motor movements"> stereotypical motor movements</a> </p> <a href="https://publications.waset.org/abstracts/49346/stereotypical-motor-movement-recognition-using-microsoft-kinect-with-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1700</span> Development of a Computer Vision System for the Blind and Visually Impaired Person</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20C.%20Belleza">Rodrigo C. Belleza</a>, <a href="https://publications.waset.org/abstracts/search?q=Jr."> Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Roselyn%20A.%20Maa%C3%B1o"> Roselyn A. Maaño</a>, <a href="https://publications.waset.org/abstracts/search?q=Karl%20Patrick%20E.%20Camota"> Karl Patrick E. Camota</a>, <a href="https://publications.waset.org/abstracts/search?q=Darwin%20Kim%20Q.%20Bulawan"> Darwin Kim Q. Bulawan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes are an essential and conspicuous organ of the human body. Human eyes are outward and inward portals of the body that allows to see the outside world and provides glimpses into ones inner thoughts and feelings. Inevitable blindness and visual impairments may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. The study emphasizes innovative tools that will serve as an aid to the blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection and indoor environment navigation. Open Computer Vision (OpenCV) performs image analysis, and gesture tracking to transform Kinect data to the desired output. A computer vision technology device provides greater accessibility for those with vision impairments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithms" title="algorithms">algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20systems" title=" embedded systems"> embedded systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a> </p> <a href="https://publications.waset.org/abstracts/2016/development-of-a-computer-vision-system-for-the-blind-and-visually-impaired-person" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1699</span> Human Gesture Recognition for Real-Time Control of Humanoid Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Aswath">S. Aswath</a>, <a href="https://publications.waset.org/abstracts/search?q=Chinmaya%20Krishna%20Tilak"> Chinmaya Krishna Tilak</a>, <a href="https://publications.waset.org/abstracts/search?q=Amal%20Suresh"> Amal Suresh</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Udupa"> Ganesh Udupa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are technologies to control a humanoid robot in many ways. But the use of Electromyogram (EMG) electrodes has its own importance in setting up the control system. The EMG based control system helps to control robotic devices with more fidelity and precision. In this paper, development of an electromyogram based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generates an analog signal due to the effect of nerve impulses generated on moving muscles of the human being. The analog signals taken up from the muscles are supplied to a differential muscle sensor that processes the given signal to generate a signal suitable for the microcontroller to get the control over a humanoid robot. The signal from the differential muscle sensor is converted to a digital form using the ADC of the microcontroller and outputs its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The output decision of the CM-530 processor is sent to a motor driver in order to control the servo motors in required direction for human like actions. This method for gaining control of a humanoid robot could be used for performing actions with more accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electromyogram" title="electromyogram">electromyogram</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture" title=" gesture"> gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=muscle%20sensor" title=" muscle sensor"> muscle sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=humanoid%20robot" title=" humanoid robot"> humanoid robot</a>, <a href="https://publications.waset.org/abstracts/search?q=microcontroller" title=" microcontroller"> microcontroller</a>, <a href="https://publications.waset.org/abstracts/search?q=Zigbee" title=" Zigbee"> Zigbee</a> </p> <a href="https://publications.waset.org/abstracts/7288/human-gesture-recognition-for-real-time-control-of-humanoid-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">407</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1698</span> Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aqsa%20Ali">Aqsa Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Aleem%20Mushtaq"> Aleem Mushtaq</a>, <a href="https://publications.waset.org/abstracts/search?q=Attaullah%20Memon"> Attaullah Memon</a>, <a href="https://publications.waset.org/abstracts/search?q=Monna"> Monna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a low cost design for a smart glove that can perform sign language recognition to assist the speech impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (<em>TFT</em>&nbsp;LCD) screen as well as synthetic speech. Linear Bayes Classifiers and Multilayer Neural Networks have been used to classify 11 feature vectors obtained from the sensors on the glove into one of the 27 ASL alphabets and a predefined gesture for space. Three types of features are used; bending using six bend sensors, orientation in three dimensions using accelerometers and contacts at vital points using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for target user. The solution combines electronics, e-textile technology, sensor technology, embedded system and machine learning techniques to build a low cost wearable glove that is scrupulous, elegant and portable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title="American sign language">American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=assistive%20hand%20gesture%20interpreter" title=" assistive hand gesture interpreter"> assistive hand gesture interpreter</a>, <a href="https://publications.waset.org/abstracts/search?q=human-machine%20interface" title=" human-machine interface"> human-machine interface</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=sensing%20glove" title=" sensing glove"> sensing glove</a> </p> <a href="https://publications.waset.org/abstracts/52683/hand-gesture-interpretation-using-sensing-glove-integrated-with-machine-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">301</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1697</span> Information Retrieval from Internet Using Hand Gestures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aniket%20S.%20Joshi">Aniket S. Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Aditya%20R.%20Mane"> Aditya R. Mane</a>, <a href="https://publications.waset.org/abstracts/search?q=Arjun%20Tukaram"> Arjun Tukaram </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the 21st century, in the era of e-world, people are continuously getting updated by daily information such as weather conditions, news, stock exchange market updates, new projects, cricket updates, sports and other such applications. In the busy situation, they want this information on the little use of keyboard, time. Today in order to get such information user have to repeat same mouse and keyboard actions which includes time and inconvenience. In India due to rural background many people are not much familiar about the use of computer and internet also. Also in small clinics, small offices, and hotels and in the airport there should be a system which retrieves daily information with the minimum use of keyboard and mouse actions. We plan to design application based project that can easily retrieve information with minimum use of keyboard and mouse actions and make our task more convenient and easier. This can be possible with an image processing application which takes real time hand gestures which will get matched by system and retrieve information. Once selected the functions with hand gestures, the system will report action information to user. In this project we use real time hand gesture movements to select required option which is stored on the screen in the form of RSS Feeds. Gesture will select the required option and the information will be popped and we got the information. A real time hand gesture makes the application handier and easier to use. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title=" hand gesture recognition"> hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color%20model" title=" HSV color model"> HSV color model</a>, <a href="https://publications.waset.org/abstracts/search?q=Blob%20detection" title=" Blob detection"> Blob detection</a> </p> <a href="https://publications.waset.org/abstracts/29069/information-retrieval-from-internet-using-hand-gestures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29069.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1696</span> Haptic Cycle: Designing Enhanced Museum Learning Activities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Menelaos%20N.%20Katsantonis">Menelaos N. Katsantonis</a>, <a href="https://publications.waset.org/abstracts/search?q=Athanasios%20Manikas"> Athanasios Manikas</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandros%20Chatzis"> Alexandros Chatzis</a>, <a href="https://publications.waset.org/abstracts/search?q=Stavros%20Doropoulos"> Stavros Doropoulos</a>, <a href="https://publications.waset.org/abstracts/search?q=Anastasios%20Avramis"> Anastasios Avramis</a>, <a href="https://publications.waset.org/abstracts/search?q=Ioannis%20Mavridis"> Ioannis Mavridis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Museums enhance their potential by adopting new technologies and techniques to appeal to more visitors and engage them in creative and joyful activities. In this study, the Haptic Cycle is presented, a cycle of museum activities proposed for the development of museum learning approaches with optimized effectiveness and engagement. Haptic Cycle envisages the improvement of the museum’s services by offering a wide range of activities. Haptic Cycle activities make the museum’s exhibitions more approachable by bringing them closer to the visitors. Visitors can interact with the museum’s artifacts and explore them haptically and sonically. Haptic Cycle proposes constructivist learning activities in which visitors actively construct their knowledge by exploring the artifacts, experimenting with them and realizing their importance. Based on the Haptic Cycle, we developed the HapticSOUND system, an innovative virtual reality system that includes an advanced user interface that employs gesture-based technology. HapticSOUND’s interface utilizes the leap motion gesture recognition controller and a 3D-printed traditional Cretan lute, utilized by visitors to perform various activities such as exploring the lute and playing notes and songs. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=haptic%20cycle" title="haptic cycle">haptic cycle</a>, <a href="https://publications.waset.org/abstracts/search?q=HapticSOUND" title=" HapticSOUND"> HapticSOUND</a>, <a href="https://publications.waset.org/abstracts/search?q=museum%20learning" title=" museum learning"> museum learning</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture-based" title=" gesture-based"> gesture-based</a>, <a href="https://publications.waset.org/abstracts/search?q=leap%20motion" title=" leap motion"> leap motion</a> </p> <a href="https://publications.waset.org/abstracts/165300/haptic-cycle-designing-enhanced-museum-learning-activities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165300.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1695</span> Proposed Solutions Based on Affective Computing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Diego%20Adrian%20Cardenas%20Jorge">Diego Adrian Cardenas Jorge</a>, <a href="https://publications.waset.org/abstracts/search?q=Gerardo%20Mirando%20Guisado"> Gerardo Mirando Guisado</a>, <a href="https://publications.waset.org/abstracts/search?q=Alfredo%20Barrientos%20Padilla"> Alfredo Barrientos Padilla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A system based on Affective Computing can detect and interpret human information like voice, facial expressions and body movement to detect emotions and execute a corresponding response. This data is important due to the fact that a person can communicate more effectively with emotions than can be possible with words. This information can be processed through technological components like Facial Recognition, Gait Recognition or Gesture Recognition. As of now, solutions proposed using this technology only consider one component at a given moment. This research investigation proposes two solutions based on Affective Computing taking into account more than one component for emotion detection. The proposals reflect the levels of dependency between hardware devices and software, as well as the interaction process between the system and the user which implies the development of scenarios where both proposals will be put to the test in a live environment. Both solutions are to be developed in code by software engineers to prove the feasibility. To validate the impact on society and business interest, interviews with stakeholders are conducted with an investment mind set where each solution is labeled on a scale of 1 through 5, being one a minimum possible investment and 5 the maximum. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=affective%20computing" title="affective computing">affective computing</a>, <a href="https://publications.waset.org/abstracts/search?q=emotions" title=" emotions"> emotions</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=gait%20recognition" title=" gait recognition"> gait recognition</a> </p> <a href="https://publications.waset.org/abstracts/43577/proposed-solutions-based-on-affective-computing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43577.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">369</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1694</span> An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Insaf%20Ajili">Insaf Ajili</a>, <a href="https://publications.waset.org/abstracts/search?q=Malik%20Mallem"> Malik Mallem</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Yves%20Didier"> Jean-Yves Didier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human motion recognition has been extensively increased in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on Laban Movement Analysis technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use Discrete Hidden Markov Model (DHMM) for training and classification motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. Such modification allows avoiding the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture data set (MSRC-12) which is a widely used dataset for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures(Introduce yourself, waving, Dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our descriptor vector based on LMA with basic DHMM method and comparing the recognition results of the modified DHMM with the original one. 
Experimental results demonstrate that our method outperforms most of the existing methods evaluated on the MSRC-12 dataset and achieves a near-perfect classification rate on our own dataset. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20recognition" title="human motion recognition">human motion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20representation" title=" motion representation"> motion representation</a>, <a href="https://publications.waset.org/abstracts/search?q=Laban%20Movement%20Analysis" title=" Laban Movement Analysis"> Laban Movement Analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=Discrete%20Hidden%20Markov%20Model" title=" Discrete Hidden Markov Model"> Discrete Hidden Markov Model</a> </p> <a href="https://publications.waset.org/abstracts/87469/an-efficient-motion-recognition-system-based-on-lma-technique-and-a-discrete-hidden-markov-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87469.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">207</span> </span> </div> </div>
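<p class="card-text">The forward/backward modification above can be pictured with a toy discrete-HMM scorer that keeps one model pair per class and sums the log-likelihoods of the sequence read in both directions; the hand-made two-state parameters below are placeholders, not models trained on MSRC-12 or the authors' dataset:</p>
<pre><code>
# Minimal sketch of the two-direction idea: every class keeps one discrete HMM
# for the observation sequence read forwards and one for it read backwards,
# and the class with the best combined log-likelihood wins.
import numpy as np

def log_forward(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM."""
    alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(trans), axis=0) + np.log(emit[:, o])
    return np.logaddexp.reduce(alpha)

def make_hmm(bias):
    """Two-state HMM whose emissions favour symbol `bias` (toy parameters)."""
    start = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3], [0.3, 0.7]])
    emit = np.full((2, 3), 0.1)
    emit[:, bias] = 0.8
    return start, trans, emit

# One (forward, backward) model pair per gesture class.
MODELS = {"wave": (make_hmm(0), make_hmm(2)), "stop": (make_hmm(1), make_hmm(1))}

def classify(obs):
    scores = {}
    for name, (fwd, bwd) in MODELS.items():
        scores[name] = log_forward(obs, *fwd) + log_forward(obs[::-1], *bwd)
    return max(scores, key=scores.get)

print(classify([0, 0, 2, 2]))  # "wave": fits the forward model early, the backward model late
print(classify([1, 1, 1, 1]))  # "stop"
</code></pre>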
</div> </main> </body> </html>