Search results for: hand gesture
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="hand gesture"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3775</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: hand gesture</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3775</span> Users’ Preferences for Map Navigation Gestures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Y.%20Pang">Y. Y. Pang</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20A.%20Ismail"> N. A. Ismail</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The map is a powerful and convenient tool in helping us to navigate to different places, but the use of indirect devices often makes its usage cumbersome. This study intends to propose a new map navigation dialogue that uses hand gesture. A set of dialogue was developed from users’ perspective to provide users complete freedom for panning, zooming, rotate, and find direction operations. A participatory design experiment was involved here where one hand gesture and two hand gesture dialogues had been analysed in the forms of hand gestures to develop a set of usable dialogues. The major finding was that users prefer one-hand gesture compared to two-hand gesture in map navigation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title="hand gesture">hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=map%20navigation" title=" map navigation"> map navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=participatory%20design" title=" participatory design"> participatory design</a>, <a href="https://publications.waset.org/abstracts/search?q=intuitive%20interaction" title=" intuitive interaction"> intuitive interaction</a> </p> <a href="https://publications.waset.org/abstracts/19455/users-preferences-for-map-navigation-gestures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19455.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3774</span> Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keyi%20Wang">Keyi Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Similar to the touchscreen, hand gesture based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method of an image-based hand gesture image and video clip recognition system using a CNN (Convolutional Neural Network) with a dataset. A dataset containing 6 hand gesture images is used to train a 2D CNN model. ~98% accuracy is achieved. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with pre-trained models is able to recognize static and dynamic hand gestures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title=" hand gesture recognition"> hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/132854/static-and-dynamic-hand-gesture-recognition-using-convolutional-neural-network-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132854.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3773</span> Hand Detection and Recognition for Malay Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Noah%20A.%20Rahman">Mohd Noah A. 

3773. Hand Detection and Recognition for Malay Sign Language
Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Norhafilah Bara
Abstract: Interest keeps growing in developing software applications that interface with computers and peripheral devices through gestures of the human body, such as hand movements. Hand gesture detection and recognition based on computer vision remains a very challenging task, but it offers a more natural, innovative, and sophisticated form of non-verbal communication, such as sign language, in human-computer interaction. This paper explores hand detection and hand gesture recognition using a vision-based approach in which skin colour spaces such as HSV and YCrCb are applied. Several limitations need to be considered: almost all skin colour models are sensitive to rapidly changing or mixed lighting conditions, and better recognition results depend on restrictions such as the distance of the user's hand from the webcam and the posture and size of the hand.
Keywords: hand detection, hand gesture, hand recognition, sign language
Procedia: https://publications.waset.org/abstracts/46765/hand-detection-and-recognition-for-malay-sign-language | PDF: https://publications.waset.org/abstracts/46765.pdf | Downloads: 306
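
As a rough illustration of the kind of skin-colour segmentation the abstract describes, the following OpenCV sketch thresholds a frame in both HSV and YCrCb space. The threshold ranges are common generic values, not the authors' calibration.

```python
# Skin-colour hand segmentation sketch using HSV and YCrCb thresholds (OpenCV).
# The threshold ranges below are generic defaults, not the paper's calibration.
import cv2
import numpy as np

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask_hsv = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    mask_ycrcb = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    mask = cv2.bitwise_and(mask_hsv, mask_ycrcb)           # agree in both spaces
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise

# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# hand = cv2.bitwise_and(frame, frame, mask=skin_mask(frame))
```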

3772. Hand Gestures Based Emotion Identification Using Flex Sensors
Authors: S. Ali, R. Yunus, A. Arif, Y. Ayaz, M. Baber Sial, R. Asif, N. Naseer, M. Jawad Khan
Abstract: In this study, we propose a gesture-to-emotion recognition method using flex sensors mounted on the metacarpophalangeal joints. The flex sensors are fixed in a wearable glove, and the data from the glove are sent to a PC over Wi-Fi. Four gestures (finger pointing, thumbs up, fist open, and fist close) were performed by five subjects. Each gesture is categorized into a sad, happy, or excited class based on the velocity and acceleration of the hand movement. Seventeen inspectors observed the emotions and hand gestures of the five subjects, and the emotional states derived from the inspectors' assessments were compared with those derived from the acquired movement speed data. Overall, we achieved 77% accuracy, so the proposed design can be used for emotional state detection applications.
Keywords: emotion identification, emotion models, gesture recognition, user perception
Procedia: https://publications.waset.org/abstracts/98297/hand-gestures-based-emotion-identification-using-flex-sensors | PDF: https://publications.waset.org/abstracts/98297.pdf | Downloads: 285
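
A minimal sketch of the speed-based labelling step the abstract describes: finite differences give velocity and acceleration from the streamed glove readings, and thresholds assign an emotion class. The thresholds and sampling rate are assumptions, not the authors' values.

```python
# Sketch: derive velocity/acceleration from a stream of glove readings and map
# motion energy to an emotion class. Thresholds and sample rate are assumptions.
import numpy as np

def classify_emotion(samples: np.ndarray, fs: float = 50.0) -> str:
    """samples: (T, n_sensors) array of flex-sensor readings sampled at fs Hz."""
    velocity = np.gradient(samples, 1.0 / fs, axis=0)
    acceleration = np.gradient(velocity, 1.0 / fs, axis=0)
    v = np.mean(np.abs(velocity))
    a = np.mean(np.abs(acceleration))
    if v < 5.0 and a < 20.0:        # slow, gentle motion
        return "sad"
    if v < 15.0:                    # moderate motion
        return "happy"
    return "excited"                # fast, energetic motion

# label = classify_emotion(np.random.rand(200, 5) * 100)
```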
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HCI" title="HCI">HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20segmentation" title=" hand segmentation"> hand segmentation</a> </p> <a href="https://publications.waset.org/abstracts/26490/vision-based-hand-segmentation-techniques-for-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3770</span> Patient-Friendly Hand Gesture Recognition Using AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Prabhu">K. Prabhu</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Dinesh"> K. Dinesh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Ranjani"> M. Ranjani</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Suhitha"> M. Suhitha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the tough times of covid, those people who were hospitalized found it difficult to always convey what they wanted to or needed to the attendee. Sometimes the attendees might also not be there. In that case, the patients can use simple hand gestures to control electrical appliances (like its set it for a zero watts bulb)and three other gestures for voice note intimation. In this AI-based hand recognition project, NodeMCU is used for the control action of the relay, and it is connected to the firebase for storing the value in the cloud and is interfaced with the python code via raspberry pi. For three hand gestures, a voice clip is added for intimation to the attendee. This is done with the help of Google’s text to speech and the inbuilt audio file option in the raspberry pi 4. All the five gestures will be detected when shown with their hands via the webcam, which is placed for gesture detection. The personal computer is used for displaying the gestures and for running the code in the raspberry pi imager. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nodeMCU" title="nodeMCU">nodeMCU</a>, <a href="https://publications.waset.org/abstracts/search?q=AI%20technology" title=" AI technology"> AI technology</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture" title=" gesture"> gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=patient" title=" patient"> patient</a> </p> <a href="https://publications.waset.org/abstracts/144943/patient-friendly-hand-gesture-recognition-using-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3769</span> Hand Motion and Gesture Control of Laboratory Test Equipment Using the Leap Motion Controller</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ian%20A.%20Grout">Ian A. Grout</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the design and development of a system to provide hand motion and gesture control of laboratory test equipment is considered and discussed. The Leap Motion controller is used to provide an input to control a laboratory power supply as part of an electronic circuit experiment. By suitable hand motions and gestures, control of the power supply is provided remotely and without the need to physically touch the equipment used. As such, it provides an alternative manner in which to control electronic equipment via a PC and is considered here within the field of human computer interaction (HCI). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=control" title="control">control</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title=" hand gesture"> hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=test%20equipment" title=" test equipment"> test equipment</a> </p> <a href="https://publications.waset.org/abstracts/72099/hand-motion-and-gesture-control-of-laboratory-test-equipment-using-the-leap-motion-controller" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72099.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3768</span> Gesture-Controlled Interface Using Computer Vision and Python</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vedant%20Vardhan%20Rathour">Vedant Vardhan Rathour</a>, <a href="https://publications.waset.org/abstracts/search?q=Anant%20Agrawal"> Anant Agrawal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the Speech Recognition library allows for seamless execution of tasks like web searches, location navigation and gesture control on the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/193844/gesture-controlled-interface-using-computer-vision-and-python" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3767</span> Integrated Gesture and Voice-Activated Mouse Control System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dev%20Pratap%20Singh">Dev Pratap Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshika%20Hasija"> Harshika Hasija</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashwini%20S."> Ashwini S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computers using hand gestures and voice commands. The system leverages advanced computer vision techniques using the Media Pipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the speech recognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control in the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20assistant" title=" voice assistant"> voice assistant</a> </p> <a href="https://publications.waset.org/abstracts/193896/integrated-gesture-and-voice-activated-mouse-control-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">10</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3766</span> Hand Gesture Recognition Interface Based on IR Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang-Keun%20Ahn">Yang-Keun Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwang-Soon%20Choi"> Kwang-Soon Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Young-Choong%20Park"> Young-Choong Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwang-Mo%20Jung"> Kwang-Mo Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vision based user interfaces to control TVs and PCs have the advantage of being able to perform natural control without being limited to a specific device. Accordingly, various studies on hand gesture recognition using RGB cameras or depth cameras have been conducted. However, such cameras have the disadvantage of lacking in accuracy or the construction cost being large. The proposed method uses a low cost IR camera to accurately differentiate between the hand and the background. Also, complicated learning and template matching methodologies are not used, and the correlation between the fingertips extracted through curvatures is utilized to recognize Click and Move gestures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recognition" title="recognition">recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gestures" title=" hand gestures"> hand gestures</a>, <a href="https://publications.waset.org/abstracts/search?q=infrared%20camera" title=" infrared camera"> infrared camera</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20cameras" title=" RGB cameras"> RGB cameras</a> </p> <a href="https://publications.waset.org/abstracts/13373/hand-gesture-recognition-interface-based-on-ir-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13373.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3765</span> Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saad%20M.%20Darwish">Saad M. Darwish</a>, <a href="https://publications.waset.org/abstracts/search?q=Magda%20M.%20Madbouly"> Magda M. Madbouly</a>, <a href="https://publications.waset.org/abstracts/search?q=Murad%20B.%20Khorsheed"> Murad B. Khorsheed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sign Languages (SL) are the most accomplished forms of gestural communication. Therefore, their automatic analysis is a real challenge, which is interestingly implied to their lexical and syntactic organization levels. Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMM (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of Eigen decomposition to suit non-square matrices to reduce multi attribute hand gesture data to feature vectors. SVD optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by some adequate Type-2 fuzzy operators that permits us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both random and fuzzy uncertainties existing universally in the sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals besides a better classification performance than the classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title="hand gesture recognition">hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title=" hand detection"> hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=type-2%20fuzzy%20logic" title=" type-2 fuzzy logic"> type-2 fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=hidden%20Markov%20Model" title=" hidden Markov Model "> hidden Markov Model </a> </p> <a href="https://publications.waset.org/abstracts/18565/hand-gesture-recognition-for-sign-language-a-new-higher-order-fuzzy-hmm-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18565.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3764</span> Information Retrieval from Internet Using Hand Gestures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aniket%20S.%20Joshi">Aniket S. Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Aditya%20R.%20Mane"> Aditya R. Mane</a>, <a href="https://publications.waset.org/abstracts/search?q=Arjun%20Tukaram"> Arjun Tukaram </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the 21st century, in the era of e-world, people are continuously getting updated by daily information such as weather conditions, news, stock exchange market updates, new projects, cricket updates, sports and other such applications. In the busy situation, they want this information on the little use of keyboard, time. Today in order to get such information user have to repeat same mouse and keyboard actions which includes time and inconvenience. In India due to rural background many people are not much familiar about the use of computer and internet also. Also in small clinics, small offices, and hotels and in the airport there should be a system which retrieves daily information with the minimum use of keyboard and mouse actions. We plan to design application based project that can easily retrieve information with minimum use of keyboard and mouse actions and make our task more convenient and easier. This can be possible with an image processing application which takes real time hand gestures which will get matched by system and retrieve information. Once selected the functions with hand gestures, the system will report action information to user. In this project we use real time hand gesture movements to select required option which is stored on the screen in the form of RSS Feeds. Gesture will select the required option and the information will be popped and we got the information. A real time hand gesture makes the application handier and easier to use. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title=" hand gesture recognition"> hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color%20model" title=" HSV color model"> HSV color model</a>, <a href="https://publications.waset.org/abstracts/search?q=Blob%20detection" title=" Blob detection"> Blob detection</a> </p> <a href="https://publications.waset.org/abstracts/29069/information-retrieval-from-internet-using-hand-gestures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29069.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3763</span> Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daleesha%20M.%20Viswanathan">Daleesha M. Viswanathan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumam%20Mary%20Idicula"> Sumam Mary Idicula</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dynamic hand gestures are an intrinsic component in sign language communication. Extracting spatial temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on the orientation feature is the main concern of this paper. Kalman filter algorithm and Hidden Markov Models (HMM) models are incorporated with this recognition system for hand trajectory tracking and for spatial temporal classification, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=orientation%20features" title="orientation features">orientation features</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20feature%20vector" title=" discrete feature vector"> discrete feature vector</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM." 
title=" HMM."> HMM.</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a> </p> <a href="https://publications.waset.org/abstracts/35653/hand-motion-trajectory-analysis-for-dynamic-hand-gestures-used-in-indian-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3762</span> Hand Gesture Detection via EmguCV Canny Pruning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20N.%20Mosola">N. N. Mosola</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Molete"> S. J. Molete</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20S.%20Masoebe"> L. S. Masoebe</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Letsae"> M. Letsae</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf societies and those with speech disorder. Communication barriers exist when societies with speech disorder interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned societies. The system has various processing modules. The modules consist of a hand detection engine, image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Canny pruning Haar and Haarcascade detection algorithms. Canny pruning implements the Canny edge detection. This is an optimal image processing algorithm. It is used to detect edges of an object. The system employs a skin detection algorithm. The skin detection performs background subtraction, computes the convex hull, and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real-time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered. The more the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a light-weight, inexpensive system which can be used for sign language interpretation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=canny%20pruning" title="canny pruning">canny pruning</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20tracking" title=" skin tracking"> skin tracking</a> </p> <a href="https://publications.waset.org/abstracts/91296/hand-gesture-detection-via-emgucv-canny-pruning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">185</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3761</span> Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aqsa%20Ali">Aqsa Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Aleem%20Mushtaq"> Aleem Mushtaq</a>, <a href="https://publications.waset.org/abstracts/search?q=Attaullah%20Memon"> Attaullah Memon</a>, <a href="https://publications.waset.org/abstracts/search?q=Monna"> Monna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a low cost design for a smart glove that can perform sign language recognition to assist the speech impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (<em>TFT</em> LCD) screen as well as synthetic speech. Linear Bayes Classifiers and Multilayer Neural Networks have been used to classify 11 feature vectors obtained from the sensors on the glove into one of the 27 ASL alphabets and a predefined gesture for space. Three types of features are used; bending using six bend sensors, orientation in three dimensions using accelerometers and contacts at vital points using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for target user. The solution combines electronics, e-textile technology, sensor technology, embedded system and machine learning techniques to build a low cost wearable glove that is scrupulous, elegant and portable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title="American sign language">American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=assistive%20hand%20gesture%20interpreter" title=" assistive hand gesture interpreter"> assistive hand gesture interpreter</a>, <a href="https://publications.waset.org/abstracts/search?q=human-machine%20interface" title=" human-machine interface"> human-machine interface</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=sensing%20glove" title=" sensing glove"> sensing glove</a> </p> <a href="https://publications.waset.org/abstracts/52683/hand-gesture-interpretation-using-sensing-glove-integrated-with-machine-learning-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">301</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3760</span> Hand Gesture Interface for PC Control and SMS Notification Using MEMS Sensors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keerthana%20E.">Keerthana E.</a>, <a href="https://publications.waset.org/abstracts/search?q=Lohithya%20S."> Lohithya S.</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshavardhini%20K.%20S."> Harshavardhini K. S.</a>, <a href="https://publications.waset.org/abstracts/search?q=Saranya%20G."> Saranya G.</a>, <a href="https://publications.waset.org/abstracts/search?q=Suganthi%20S."> Suganthi S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In an epoch of expanding human-machine interaction, the development of innovative interfaces that bridge the gap between physical gestures and digital control has gained significant momentum. This study introduces a distinct solution that leverages a combination of MEMS (Micro-Electro-Mechanical Systems) sensors, an Arduino Mega microcontroller, and a PC to create a hand gesture interface for PC control and SMS notification. The core of the system is an ADXL335 MEMS accelerometer sensor integrated with an Arduino Mega, which communicates with a PC via a USB cable. The ADXL335 provides real-time acceleration data, which is processed by the Arduino to detect specific hand gestures. These gestures, such as left, right, up, down, or custom patterns, are interpreted by the Arduino, and corresponding actions are triggered. In the context of SMS notifications, when a gesture indicative of a new SMS is recognized, the Arduino relays this information to the PC through the serial connection. The PC application, designed to monitor the Arduino's serial port, displays these SMS notifications in the serial monitor. This study offers an engaging and interactive means of interfacing with a PC by translating hand gestures into meaningful actions, opening up opportunities for intuitive computer control. Furthermore, the integration of SMS notifications adds a practical dimension to the system, notifying users of incoming messages as they interact with their computers. 

3760. Hand Gesture Interface for PC Control and SMS Notification Using MEMS Sensors
Authors: Keerthana E., Lohithya S., Harshavardhini K. S., Saranya G., Suganthi S.
Abstract: In an epoch of expanding human-machine interaction, the development of innovative interfaces that bridge the gap between physical gestures and digital control has gained significant momentum. This study introduces a distinct solution that leverages a combination of MEMS (micro-electro-mechanical systems) sensors, an Arduino Mega microcontroller, and a PC to create a hand gesture interface for PC control and SMS notification. The core of the system is an ADXL335 MEMS accelerometer integrated with the Arduino Mega, which communicates with the PC via a USB cable. The ADXL335 provides real-time acceleration data, which the Arduino processes to detect specific hand gestures. These gestures, such as left, right, up, down, or custom patterns, are interpreted by the Arduino, and corresponding actions are triggered. For SMS notifications, when a gesture indicative of a new SMS is recognized, the Arduino relays this information to the PC through the serial connection, and the PC application, designed to monitor the Arduino's serial port, displays these SMS notifications in the serial monitor. This study offers an engaging and interactive means of interfacing with a PC by translating hand gestures into meaningful actions, opening up opportunities for intuitive computer control. Furthermore, the integration of SMS notifications adds a practical dimension to the system, notifying users of incoming messages as they interact with their computers. The use of MEMS sensors, Arduino, and serial communication serves as a promising foundation for expanding the capabilities of gesture-based control systems.
Keywords: hand gestures, multiple cables, serial communication, sms notification
Procedia: https://publications.waset.org/abstracts/184620/hand-gesture-interface-for-pc-control-and-sms-notification-using-mems-sensors | PDF: https://publications.waset.org/abstracts/184620.pdf | Downloads: 69
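
On the PC side, a minimal monitoring loop with pySerial might look like the sketch below; the port name, baud rate, and the line format emitted by the Arduino are assumptions.

```python
# Sketch of the PC-side monitor: read gesture/SMS tokens sent by the Arduino
# over USB serial. Port name, baud rate and message format are assumptions.
import serial

with serial.Serial("COM3", 9600, timeout=1) as port:   # e.g. '/dev/ttyACM0' on Linux
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        if line.startswith("GESTURE:"):
            print("detected gesture:", line.split(":", 1)[1])
        elif line.startswith("SMS:"):
            print("new SMS notification:", line.split(":", 1)[1])
```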
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human-machine%20interface%20%28HCI%29" title="human-machine interface (HCI)">human-machine interface (HCI)</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20robot" title=" mobile robot"> mobile robot</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20control" title=" hand control"> hand control</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20environment" title=" virtual environment"> virtual environment</a> </p> <a href="https://publications.waset.org/abstracts/75711/hand-controlled-mobile-robot-applied-in-virtual-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75711.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">298</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3758</span> Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zahra%20Alipour">Zahra Alipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirreza%20Moheb%20Afzali"> Amirreza Moheb Afzali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title="YOLOv8">YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=mediapipe" title=" mediapipe"> mediapipe</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20tracking" title=" finger tracking"> finger tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=joint%20estimation" title=" joint estimation"> joint estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=human-computer%20interaction%20%28HCI%29" title=" human-computer interaction (HCI)"> human-computer interaction (HCI)</a> </p> <a href="https://publications.waset.org/abstracts/194650/real-time-finger-tracking-evaluating-yolov8-and-mediapipe-for-enhanced-hci" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">5</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3757</span> Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Linghui%20Meng">Linghui Meng</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Atlas"> James Atlas</a>, <a href="https://publications.waset.org/abstracts/search?q=Deborah%20Munro"> Deborah Munro</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two-time series feature extraction algorithms: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels from one to eight to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information for both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=EMG" title="EMG">EMG</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=prosthetic%20control" title=" prosthetic control"> prosthetic control</a>, <a href="https://publications.waset.org/abstracts/search?q=electromyographic%20prosthetics" title=" electromyographic prosthetics"> electromyographic prosthetics</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20classification" title=" hand gesture classification"> hand gesture classification</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20neural%20networks" title=" computational neural networks"> computational neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=TSFE" title=" TSFE"> TSFE</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series%20feature%20extraction" title=" time series feature extraction"> time series feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20count" title=" channel count"> channel count</a>, <a href="https://publications.waset.org/abstracts/search?q=logistic%20regression" title=" logistic regression"> logistic regression</a>, <a href="https://publications.waset.org/abstracts/search?q=ninapro" title=" ninapro"> ninapro</a>, <a href="https://publications.waset.org/abstracts/search?q=classifiers" title=" classifiers"> classifiers</a> </p> <a href="https://publications.waset.org/abstracts/193326/preliminary-study-of-hand-gesture-classification-in-upper-limb-prosthetics-using-machine-learning-with-emg-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193326.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">31</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3756</span> Prototyping a Portable, Affordable Sign Language Glove</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vidhi%20Jain">Vidhi Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication between speakers and non-speakers of American Sign Language (ASL) can be problematic, inconvenient, and expensive. This project attempts to bridge the communication gap by designing a portable glove that captures the user’s ASL gestures and outputs the translated text on a smartphone. The glove is equipped with flex sensors, contact sensors, and a gyroscope to measure the flexion of the fingers, the contact between fingers, and the rotation of the hand. The glove’s Arduino UNO microcontroller analyzes the sensor readings to identify the gesture from a library of learned gestures. The Bluetooth module transmits the gesture to a smartphone. Using this device, one day speakers of ASL may be able to communicate with others in an affordable and convenient way. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title="sign language">sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=morse%20code" title=" morse code"> morse code</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title=" American sign language"> American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a> </p> <a href="https://publications.waset.org/abstracts/183474/prototyping-a-portable-affordable-sign-language-glove" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">63</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3755</span> A Novel Combined Finger Counting and Finite State Machine Technique for ASL Translation Using Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rania%20Ahmed%20Kadry%20Abdel%20Gawad%20Birry">Rania Ahmed Kadry Abdel Gawad Birry</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El-Habrouk"> Mohamed El-Habrouk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a brief survey of the techniques used for sign language recognition along with the types of sensors used to perform the task. It presents a modified method for identification of an isolated sign language gesture using Microsoft Kinect with the OpenNI framework. It presents the way of extracting robust features from the depth image provided by Microsoft Kinect and the OpenNI interface and to use them in creating a robust and accurate gesture recognition system, for the purpose of ASL translation. The Prime Sense’s Natural Interaction Technology for End-user - NITE™ - was also used in the C++ implementation of the system. The algorithm presents a simple finger counting algorithm for static signs as well as directional Finite State Machine (FSM) description of the hand motion in order to help in translating a sign language gesture. This includes both letters and numbers performed by a user, which in-turn may be used as an input for voice pronunciation systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title="American sign language">American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20counting" title=" finger counting"> finger counting</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a> </p> <a href="https://publications.waset.org/abstracts/43466/a-novel-combined-finger-counting-and-finite-state-machine-technique-for-asl-translation-using-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3754</span> CONDUCTHOME: Gesture Interface Control of Home Automation Boxes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Branstett">J. Branstett</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Gagneux"> V. Gagneux</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Leleu"> A. Leleu</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Levadoux"> B. Levadoux</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Pascale"> J. Pascale</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the interface CONDUCTHOME which controls home automation systems with a Leap Motion using ‘invariant gesture protocols’. The function of this interface is to simplify the interaction of the user with its environment. A hardware part allows the Leap Motion to be carried around the house. A software part interacts with the home automation box and displays the useful information for the user. An objective of this work is the development a natural/invariant/simple gesture control interface to help elder people/people with disabilities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automation" title="automation">automation</a>, <a href="https://publications.waset.org/abstracts/search?q=ergonomics" title=" ergonomics"> ergonomics</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=interoperability" title=" interoperability"> interoperability</a> </p> <a href="https://publications.waset.org/abstracts/38302/conducthome-gesture-interface-control-of-home-automation-boxes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38302.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3753</span> Real-Time Gesture Recognition System Using Microsoft Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankita%20Wadhawan">Ankita Wadhawan</a>, <a href="https://publications.waset.org/abstracts/search?q=Parteek%20Kumar"> Parteek Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Umesh%20Kumar"> Umesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gesture is any body movement that expresses some attitude or any sentiment. Gestures as a sign language are used by deaf people for conveying messages which helps in eliminating the communication barrier between deaf people and normal persons. Nowadays, everybody is using mobile phone and computer as a very important gadget in their life. But there are some physically challenged people who are blind/deaf and the use of mobile phone or computer like device is very difficult for them. So, there is an immense need of a system which works on body gesture or sign language as input. In this research, Microsoft Kinect Sensor, SDK V2 and Hidden Markov Toolkit (HTK) are used to recognize the object, motion of object and human body joints through Touch less NUI (Natural User Interface) in real-time. The depth data collected from Microsoft Kinect has been used to recognize gestures of Indian Sign Language (ISL). The recorded clips are analyzed using depth, IR and skeletal data at different angles and positions. The proposed system has an average accuracy of 85%. The developed Touch less NUI provides an interface to recognize gestures and controls the cursor and click operation in computer just by waving hand gesture. This research will help deaf people to make use of mobile phones, computers and socialize among other persons in the society. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20user%20interface" title=" natural user interface"> natural user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/88362/real-time-gesture-recognition-system-using-microsoft-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3752</span> Hands-off Parking: Deep Learning Gesture-based System for Individuals with Mobility Needs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javier%20Romera">Javier Romera</a>, <a href="https://publications.waset.org/abstracts/search?q=Alberto%20Justo"> Alberto Justo</a>, <a href="https://publications.waset.org/abstracts/search?q=Ignacio%20Fidalgo"> Ignacio Fidalgo</a>, <a href="https://publications.waset.org/abstracts/search?q=Joshue%20Perez"> Joshue Perez</a>, <a href="https://publications.waset.org/abstracts/search?q=Javier%20Araluce"> Javier Araluce</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, individuals with mobility needs face a significant challenge when docking vehicles. In many cases, after parking, they encounter insufficient space to exit, leading to two undesired outcomes: either avoiding parking in that spot or settling for improperly placed vehicles. To address this issue, the following paper presents a parking control system employing gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The initial phase is centered around the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracing body markers, with a special emphasis on hand gestures. Hands detection is done by generating 21 reference points for each hand. Subsequently, after data capture, the project employs the MultiPerceptron Layer (MPL) for indepth gesture classification. This tandem of MediaPipe's extraction prowess and MPL's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated within a built-in dataset. To prove the domain adaptation, a framework based on the Robot Operating System (ROS), as a communication backbone, alongside CARLA Simulator, is used. Following successful simulations, the system is transitioned to a real-world platform, marking a significant milestone in the project. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20detection" title="gesture detection">gesture detection</a>, <a href="https://publications.waset.org/abstracts/search?q=mediapipe" title=" mediapipe"> mediapipe</a>, <a href="https://publications.waset.org/abstracts/search?q=multiperceptron%20layer" title=" multiperceptron layer"> multiperceptron layer</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20operating%20system" title=" robot operating system"> robot operating system</a> </p> <a href="https://publications.waset.org/abstracts/174862/hands-off-parking-deep-learning-gesture-based-system-for-individuals-with-mobility-needs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174862.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3751</span> Authoring Tactile Gestures: Case Study for Emotion Stimulation </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20Lentini">Rodrigo Lentini</a>, <a href="https://publications.waset.org/abstracts/search?q=Beatrice%20Ionascu"> Beatrice Ionascu</a>, <a href="https://publications.waset.org/abstracts/search?q=Friederike%20A.%20Eyssel"> Friederike A. Eyssel</a>, <a href="https://publications.waset.org/abstracts/search?q=Scandar%20Copti"> Scandar Copti</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamad%20Eid"> Mohamad Eid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The haptic modality has brought a new dimension to human-computer interaction by engaging the human sense of touch. However, designing appropriate haptic stimuli, and in particular tactile stimuli, for various applications is still challenging. To tackle this issue, we present an intuitive system that facilitates the authoring of tactile gestures for various applications. The system transforms a hand gesture into a tactile gesture that can be rendered using a home-made haptic jacket. A case study is presented to demonstrate the ability of the system to develop tactile gestures that are recognizable by human subjects. Four tactile gestures are identified and tested to intensify the following four emotional responses: high valence – high arousal, high valence – low arousal, low valence – high arousal, and low valence – low arousal. A usability study with 20 participants demonstrated a high correlation between the selected tactile gestures and the intended emotional reactions. Results from this study can be used in a wide spectrum of applications ranging from gaming to interpersonal communication and multimodal simulations. 
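<p class="card-text">An illustrative sketch only, showing one way a captured hand trajectory could be turned into per-actuator vibration intensities on a jacket-like grid. The 4x4 actuator layout, the Gaussian footprint and the speed-to-amplitude mapping are assumptions, not the authors' authoring tool.</p>
<pre><code>
# Sketch: map a normalised gesture point plus its speed to actuator intensities.
import numpy as np

GRID_ROWS, GRID_COLS = 4, 4   # hypothetical actuator layout on the jacket

def frame_intensities(x, y, speed, sigma=0.15):
    """Return a 4x4 intensity map for one gesture sample (x, y in [0, 1])."""
    rows = (np.arange(GRID_ROWS) + 0.5) / GRID_ROWS
    cols = (np.arange(GRID_COLS) + 0.5) / GRID_COLS
    cy, cx = np.meshgrid(rows, cols, indexing="ij")
    footprint = np.exp(-((cx - x) ** 2 + (cy - y) ** 2) / (2 * sigma ** 2))
    amplitude = min(1.0, speed / 2.0)      # faster gesture, stronger vibration
    return np.round(amplitude * footprint, 2)

# A slow stroke near the lower-left of the grid:
print(frame_intensities(x=0.2, y=0.8, speed=0.6))
</code></pre>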
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tactile%20stimulation" title="tactile stimulation">tactile stimulation</a>, <a href="https://publications.waset.org/abstracts/search?q=tactile%20gesture" title=" tactile gesture"> tactile gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20reactions" title=" emotion reactions"> emotion reactions</a>, <a href="https://publications.waset.org/abstracts/search?q=arousal" title=" arousal"> arousal</a>, <a href="https://publications.waset.org/abstracts/search?q=valence" title=" valence"> valence</a> </p> <a href="https://publications.waset.org/abstracts/52327/authoring-tactile-gestures-case-study-for-emotion-stimulation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52327.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3750</span> Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jazouli">M. Jazouli</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Elhoufi"> S. Elhoufi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Majda"> A. Majda</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Zarghili"> A. Zarghili</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Aalouane"> R. Aalouane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autism spectrum disorder is a complex developmental disability. It is defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior. Our study provides a clinical tool which facilitates for doctors the diagnosis of ASD. We focus on automatic identification of five repetitive gestures among autistic children in real time: body rocking, hand flapping, fingers flapping, hand on the face and hands behind back. In this paper, we present a gesture recognition system for children with autism, which consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using artificial neural network (ANN). The first one uses the Microsoft Kinect sensor, the second one chooses points of interest from the 3D skeleton to characterize the gestures, and the last one proposes a neural connectionist model to perform the supervised classification of data. The experimental results show that our system can achieve above 93.3% recognition rate. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ASD" title="ASD">ASD</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect" title=" kinect"> kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=stereotypical%20motor%20movements" title=" stereotypical motor movements"> stereotypical motor movements</a> </p> <a href="https://publications.waset.org/abstracts/49346/stereotypical-motor-movement-recognition-using-microsoft-kinect-with-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3749</span> Investigating the Online Effect of Language on Gesture in Advanced Bilinguals of Two Structurally Different Languages in Comparison to L1 Native Speakers of L2 and Explores Whether Bilinguals Will Follow Target L2 Patterns in Speech and Co-speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Armita%20Ghobadi">Armita Ghobadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Samantha%20Emerson"> Samantha Emerson</a>, <a href="https://publications.waset.org/abstracts/search?q=Seyda%20Ozcaliskan"> Seyda Ozcaliskan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Being a bilingual involves mastery of both speech and gesture patterns in a second language (L2). We know from earlier work in first language (L1) production contexts that speech and co-speech gesture form a tightly integrated system: co-speech gesture mirrors the patterns observed in speech, suggesting an online effect of language on nonverbal representation of events in gesture during the act of speaking (i.e., “thinking for speaking”). Relatively less is known about the online effect of language on gesture in bilinguals speaking structurally different languages. The few existing studies—mostly with small sample sizes—suggests inconclusive findings: some show greater achievement of L2 patterns in gesture with more advanced L2 speech production, while others show preferences for L1 gesture patterns even in advanced bilinguals. In this study, we focus on advanced bilingual speakers of two structurally different languages (Spanish L1 with English L2) in comparison to L1 English speakers. We ask whether bilingual speakers will follow target L2 patterns not only in speech but also in gesture, or alternatively, follow L2 patterns in speech but resort to L1 patterns in gesture. We examined this question by studying speech and gestures produced by 23 advanced adult Spanish (L1)-English (L2) bilinguals (Mage=22; SD=7) and 23 monolingual English speakers (Mage=20; SD=2). Participants were shown 16 animated motion event scenes that included distinct manner and path components (e.g., "run over the bridge"). We recorded and transcribed all participant responses for speech and segmented it into sentence units that included at least one motion verb and its associated arguments. 
We also coded all gestures that accompanied each sentence unit. We focused on motion event descriptions as it shows strong crosslinguistic differences in the packaging of motion elements in speech and co-speech gesture in first language production contexts. English speakers synthesize manner and path into a single clause or gesture (he runs over the bridge; running fingers forward), while Spanish speakers express each component separately (manner-only: el corre=he is running; circle arms next to body conveying running; path-only: el cruza el puente=he crosses the bridge; trace finger forward conveying trajectory). We tallied all responses by group and packaging type, separately for speech and co-speech gesture. Our preliminary results (n=4/group) showed that productions in English L1 and Spanish L1 differed, with greater preference for conflated packaging in L1 English and separated packaging in L1 Spanish—a pattern that was also largely evident in co-speech gesture. Bilinguals’ production in L2 English, however, followed the patterns of the target language in speech—with greater preference for conflated packaging—but not in gesture. Bilinguals used separated and conflated strategies in gesture in roughly similar rates in their L2 English, showing an effect of both L1 and L2 on co-speech gesture. Our results suggest that online production of L2 language has more limited effects on L2 gestures and that mastery of native-like patterns in L2 gesture might take longer than native-like L2 speech patterns. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bilingualism" title="bilingualism">bilingualism</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-linguistic%20variation" title=" cross-linguistic variation"> cross-linguistic variation</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture" title=" gesture"> gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20acquisition" title=" second language acquisition"> second language acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=thinking%20for%20speaking%20hypothesis" title=" thinking for speaking hypothesis"> thinking for speaking hypothesis</a> </p> <a href="https://publications.waset.org/abstracts/166389/investigating-the-online-effect-of-language-on-gesture-in-advanced-bilinguals-of-two-structurally-different-languages-in-comparison-to-l1-native-speakers-of-l2-and-explores-whether-bilinguals-will-follow-target-l2-patterns-in-speech-and-co-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166389.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3748</span> A Holographic Infotainment System for Connected and Driverless Cars: An Exploratory Study of Gesture Based Interaction </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nicholas%20Lambert">Nicholas Lambert</a>, <a href="https://publications.waset.org/abstracts/search?q=Seungyeon%20Ryu"> Seungyeon Ryu</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehmet%20Mulla"> Mehmet Mulla</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Albert%20Kim"> Albert Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an interactive in-car interface called HoloDash is presented. It is intended to provide information and infotainment in both autonomous vehicles and ‘connected cars’, vehicles equipped with Internet access via cellular services. The research focuses on the development of interactive avatars for this system and its gesture-based control system. This is a case study for the development of a possible human-centred means of presenting a connected or autonomous vehicle’s On-Board Diagnostics through a projected ‘holographic’ infotainment system. This system is termed a Holographic Human Vehicle Interface (HHIV), as it utilises a dashboard projection unit and gesture detection. The research also examines the suitability for gestures in an automotive environment, given that it might be used in both driver-controlled and driverless vehicles. Using Human Centred Design methods, questions were posed to test subjects and preferences discovered in terms of the gesture interface and the user experience for passengers within the vehicle. These affirm the benefits of this mode of visual communication for both connected and driverless cars. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture" title="gesture">gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=holographic%20interface" title=" holographic interface"> holographic interface</a>, <a href="https://publications.waset.org/abstracts/search?q=human-computer%20interaction" title=" human-computer interaction"> human-computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=user-centered%20design" title=" user-centered design"> user-centered design</a> </p> <a href="https://publications.waset.org/abstracts/87789/a-holographic-infotainment-system-for-connected-and-driverless-cars-an-exploratory-study-of-gesture-based-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3747</span> Sound Selection for Gesture Sonification and Manipulation of Virtual Objects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Benjamin%20Bressolette">Benjamin Bressolette</a>, <a href="https://publications.waset.org/abstracts/search?q=S%C2%B4ebastien%20Denjean"> S´ebastien Denjean</a>, <a href="https://publications.waset.org/abstracts/search?q=Vincent%20Roussarie"> Vincent Roussarie</a>, <a href="https://publications.waset.org/abstracts/search?q=Mitsuko%20Aramaki"> Mitsuko Aramaki</a>, <a href="https://publications.waset.org/abstracts/search?q=S%C3%B8lvi%20Ystad"> Sølvi Ystad</a>, <a href="https://publications.waset.org/abstracts/search?q=Richard%20Kronland-Martinet"> Richard Kronland-Martinet</a> </p> <p class="card-text"><strong>Abstract:</strong></p> New sensors and technologies – such as microphones, touchscreens or infrared sensors – are currently making their appearance in the automotive sector, introducing new kinds of Human-Machine Interfaces 
(HMIs). Interactions with such tools might be cognitively expensive and thus unsuitable for driving tasks. It could, for instance, be dangerous to use touchscreens with visual feedback while driving, as this draws the driver’s visual attention away from the road. Furthermore, new technologies in car cockpits modify the users’ interactions with the central system. In particular, touchscreens are preferred to arrays of buttons for space improvement and design purposes. However, the buttons’ tactile feedback is no longer available to the driver, which makes such interfaces more difficult to manipulate while driving. Gestures combined with auditory feedback might therefore constitute an interesting alternative for interacting with the HMI. Indeed, gestures can be performed without vision, which means that the driver’s visual attention can be totally dedicated to the driving task. In fact, the auditory feedback can inform the driver both about the task performed on the interface and about the performed gesture, which might compensate for the lack of tactile information. As audition is a relatively unused sense in automotive contexts, gesture sonification can contribute to reducing the cognitive load thanks to this multisensory exploitation. Our approach consists of using a virtual object (VO) to sonify the consequences of the gesture rather than the gesture itself. This approach is motivated by an ecological point of view: gestures do not make sound, but their consequences do. In this experiment, the aim was to identify efficient sound strategies for transmitting dynamic information about VOs to users through sound. The swipe gesture was chosen for this purpose, as it is commonly used in current and new interfaces. We chose two VO parameters to sonify: the hand-VO distance and the VO velocity. Two kinds of sound parameters can be chosen to sonify the VO behavior: spectral or temporal parameters. Pitch and brightness were tested as spectral parameters, and amplitude modulation as a temporal parameter. Performance showed a positive effect of sound compared to a no-sound situation, revealing the usefulness of sound in accomplishing the task. 
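<p class="card-text">A minimal sketch of the sonification strategies compared above: a tone whose pitch tracks the hand-to-virtual-object distance and whose amplitude-modulation rate tracks the object's velocity. The mapping ranges and the raw-synthesis approach are illustrative, not the study's actual sound design.</p>
<pre><code>
# Sketch: distance drives pitch, velocity drives amplitude-modulation rate.
import numpy as np

SR = 44100  # sample rate in Hz

def sonify_frame(distance, velocity, dur=0.05):
    """distance, velocity normalised to [0, 1]; returns one short audio frame."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    pitch = 220.0 * 2.0 ** (2.0 * (1.0 - distance))   # closer hand, higher pitch (220-880 Hz)
    am_rate = 2.0 + 18.0 * velocity                   # faster object, faster tremolo
    carrier = np.sin(2 * np.pi * pitch * t)
    modulator = 0.5 * (1 + np.sin(2 * np.pi * am_rate * t))
    return (carrier * modulator).astype(np.float32)

# Concatenate frames while the hand approaches an accelerating virtual object;
# the resulting array could be auditioned with an audio library such as sounddevice.
frames = [sonify_frame(d, v) for d, v in zip(np.linspace(1, 0, 20), np.linspace(0, 1, 20))]
signal = np.concatenate(frames)
</code></pre>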
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auditory%20feedback" title="auditory feedback">auditory feedback</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20sonification" title=" gesture sonification"> gesture sonification</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20perception" title=" sound perception"> sound perception</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20object" title=" virtual object"> virtual object</a> </p> <a href="https://publications.waset.org/abstracts/54374/sound-selection-for-gesture-sonification-and-manipulation-of-virtual-objects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54374.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3746</span> Reverence Posture at Darius’ Relief in Persepolis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Behzad%20Moeini%20Sam">Behzad Moeini Sam</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Mohammadi%20Avendi"> Sara Mohammadi Avendi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The beliefs of the ancient peoples about gods and kings and how to perform rituals played an active part in the ancient civilizations. One of them in the ancient Near Eastern civilizations, which were accomplished, was paying homage to the gods and kings. The reverence posture during the Achaemenid period consisted of raising one right hand with the palm and the extended fingers facing the mouth. It is worth paying attention to the fact that the ancient empires such as Akkadian, Assyrian, Babylonian, and Persian should be regarded as successive versions of the same multinational power structure, each resulting from an internal power struggle within this structure. This article tries to show the reverence gesture with those of the ancient Near East. The working method is to study Darius one in Persepolis and pay homage to him and his similarities to those of the ancient Near East. Thus, it is logical to assume that the Reverence gesture follows the Sumerian and Akkadian ones. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Darius" title="Darius">Darius</a>, <a href="https://publications.waset.org/abstracts/search?q=Persepolis" title=" Persepolis"> Persepolis</a>, <a href="https://publications.waset.org/abstracts/search?q=Achaemenid" title=" Achaemenid"> Achaemenid</a>, <a href="https://publications.waset.org/abstracts/search?q=Proskynesis" title=" Proskynesis"> Proskynesis</a> </p> <a href="https://publications.waset.org/abstracts/181860/reverence-posture-at-darius-relief-in-persepolis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181860.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">50</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=125">125</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=126">126</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20gesture&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a 
href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>