Search results for: kinect
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="kinect"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 46</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: kinect</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">46</span> Kinect Station: Using Microsoft Kinect V2 as a Total Station Theodolite for Distance and Angle Determination in a 3D Cartesian Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amin%20Amini">Amin Amini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A Kinect sensor has been utilized as a cheap and accurate alternative to 3D laser scanners and electronic distance measurement (EDM) systems. This research presents an inexpensive and easy-to-setup system that utilizes the Microsoft Kinect v2 sensor as a surveying and measurement tool and investigates the possibility of using such a device as a replacement for conventional theodolite systems. The system was tested in an indoor environment where its accuracy in distance and angle measurements was tested using virtual markers in a 3D Cartesian environment. The system has shown an average accuracy of 97.94 % in measuring distances and 99.11 % and 98.84 % accuracy for area and perimeter, respectively, within the Kinect’s surveying range of 1.5 to 6 meters. The research also tested the system competency for relative angle determination between two objects. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=kinect%20v2" title="kinect v2">kinect v2</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20measurement" title=" 3D measurement"> 3D measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20map" title=" depth map"> depth map</a>, <a href="https://publications.waset.org/abstracts/search?q=ToF" title=" ToF"> ToF</a> </p> <a href="https://publications.waset.org/abstracts/172734/kinect-station-using-microsoft-kinect-v2-as-a-total-station-theodolite-for-distance-and-angle-determination-in-a-3d-cartesian-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">67</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">45</span> Automated Human Balance Assessment Using Contactless Sensors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Justin%20Tang">Justin Tang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Balance tests are frequently used to diagnose concussions on the sidelines of sporting events. Manual scoring, however, is labor intensive and subjective, and many concussions go undetected. This study institutes a novel approach to conducting the Balance Error Scoring System (BESS) more quantitatively using Microsoft’s gaming system Kinect, which uses a contactless sensor and several cameras to receive data and estimate body limb positions. Using a machine learning approach, Visual Gesture Builder, and a deterministic approach, MATLAB, we tested whether the Kinect can differentiate between “correct” and erroneous stances of the BESS. We created the two separate solutions by recording test videos to teach the Kinect correct stances and by developing a code using Java. Twenty-two subjects were asked to perform a series of BESS tests while the Kinect was collecting data. The Kinect recorded the subjects and mapped key joints onto their bodies to obtain angles and measurements that are interpreted by the software. Through VGB and MATLAB, the videos are analyzed to enumerate the number of errors committed during testing. The resulting statistics demonstrate a high correlation between manual scoring and the Kinect approaches, indicating the viability of the use of remote tracking devices in conducting concussion tests. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated" title="automated">automated</a>, <a href="https://publications.waset.org/abstracts/search?q=concussion%20detection" title=" concussion detection"> concussion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=contactless%20sensors" title=" contactless sensors"> contactless sensors</a>, <a href="https://publications.waset.org/abstracts/search?q=microsoft%20kinect" title=" microsoft kinect"> microsoft kinect</a> </p> <a href="https://publications.waset.org/abstracts/40866/automated-human-balance-assessment-using-contactless-sensors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">317</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">44</span> Laban Movement Analysis Using Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bernstein%20Ran">Bernstein Ran</a>, <a href="https://publications.waset.org/abstracts/search?q=Shafir%20Tal"> Shafir Tal</a>, <a href="https://publications.waset.org/abstracts/search?q=Tsachor%20Rachelle"> Tsachor Rachelle</a>, <a href="https://publications.waset.org/abstracts/search?q=Studd%20Karen"> Studd Karen</a>, <a href="https://publications.waset.org/abstracts/search?q=Schuster%20Assaf"> Schuster Assaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data might be significantly leveraged if the Laban qualities will be recognized automatically. This paper presents an automated recognition method of Laban qualities from motion capture skeletal recordings and it is demonstrated on the output of Microsoft’s Kinect V2 sensor. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laban%20movement%20analysis" title="Laban movement analysis">Laban movement analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=multitask%20learning" title=" multitask learning"> multitask learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect%20sensor" title=" Kinect sensor"> Kinect sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/25365/laban-movement-analysis-using-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25365.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">341</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">43</span> Interactive Shadow Play Animation System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo%20Wan">Bo Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiu%20Wen"> Xiu Wen</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingling%20An"> Lingling An</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoling%20Ding"> Xiaoling Ding</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hadow%20play%20animation" title="hadow play animation">hadow play animation</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect" title=" Kinect"> Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=VRPN" title=" VRPN"> VRPN</a>, <a href="https://publications.waset.org/abstracts/search?q=HCI" title=" HCI"> HCI</a> </p> <a href="https://publications.waset.org/abstracts/19293/interactive-shadow-play-animation-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42</span> A Novel Combined Finger Counting and Finite State Machine Technique for ASL Translation Using Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rania%20Ahmed%20Kadry%20Abdel%20Gawad%20Birry">Rania Ahmed Kadry Abdel Gawad Birry</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El-Habrouk"> Mohamed El-Habrouk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a brief survey of the techniques used for sign language recognition along with the types of sensors used to perform the task. It presents a modified method for identification of an isolated sign language gesture using Microsoft Kinect with the OpenNI framework. It presents the way of extracting robust features from the depth image provided by Microsoft Kinect and the OpenNI interface and to use them in creating a robust and accurate gesture recognition system, for the purpose of ASL translation. The Prime Sense’s Natural Interaction Technology for End-user - NITE™ - was also used in the C++ implementation of the system. The algorithm presents a simple finger counting algorithm for static signs as well as directional Finite State Machine (FSM) description of the hand motion in order to help in translating a sign language gesture. This includes both letters and numbers performed by a user, which in-turn may be used as an input for voice pronunciation systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title="American sign language">American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20counting" title=" finger counting"> finger counting</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a> </p> <a href="https://publications.waset.org/abstracts/43466/a-novel-combined-finger-counting-and-finite-state-machine-technique-for-asl-translation-using-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">41</span> Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shih-Wen%20Hsiao">Shih-Wen Hsiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Yi-Cheng%20Tsao"> Yi-Cheng Tsao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of reverse engineering and creative industries, applying 3D scanning process to obtain geometric forms of the objects is a mature and common technique. For instance, organic objects such as faces and nonorganic objects such as products could be scanned to acquire the geometric information for further application. However, although the data resolution of 3D scanning device is increasing and there are more and more abundant complementary applications, the penetration rate of 3D scanning for the public is still limited by the relative high price of the devices. On the other hand, Kinect, released by Microsoft, is known for its powerful functions, considerably low price, and complete technology and database support. Therefore, related studies can be done with the applying of Kinect under acceptable cost and data precision. Due to the fact that Kinect utilizes optical mechanism to extracting depth information, limitations are found due to the reason of the straight path of the light. Thus, various angles are required sequentially to obtain the complete 3D information of the object when applying a single Kinect for 3D scanning. The integration process which combines the 3D data from different angles by certain algorithms is also required. This sequential scanning process costs much time and the complex integration process often encounter some technical problems. Therefore, this paper aimed to apply multiple Kinects simultaneously on the field of developing a rapid 3D mannequin scan platform and proposed suggestions on the number and angles of Kinects. In the content, a method of establishing the coordination based on the relation between mannequin and the specifications of Kinect is proposed, and a suggestion of angles and number of Kinects is also described. 
40. Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract: In fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of the design platform and system, so it is critical to develop a 3D mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with the Kinect. Ergonomic measurements of a subject's body features are attained in real time through the Kinect's depth camera, and mesh morphing is then implemented by transforming the locations of control points on the model according to the ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the points scanned by the Kinect are revised for accuracy and smoothed, a complete human figure is reconstructed by the ICP algorithm together with image processing methods. The figure can then be analyzed to obtain real measurements, and the ergonomic measurements can be applied to shape morphing of the 3D mannequin, divided by feature curves. Since subdivision generates a standardized yet customer-oriented 3D mannequin, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. To examine the practicality of the research structure, a 3D mannequin system was implemented in Java, and its practicability was confirmed through iterative experiments.
Keywords: 3D mannequin, kinect scanner, iterative closest point, shape morphing, subdivision
Procedia: https://publications.waset.org/abstracts/26060/applying-kinect-on-the-development-of-a-customized-3d-mannequin | PDF: https://publications.waset.org/abstracts/26060.pdf | Downloads: 306
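The abstract invokes the ICP algorithm without detail; a minimal point-to-point ICP, using SciPy's cKDTree for nearest-neighbour matching, looks like this (a sketch, not the authors' Java implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rigid motion src -> dst for matched points (Kabsch)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(scan, template, iters=30, tol=1e-6):
    """Minimal point-to-point ICP: match each scanned point to its
    nearest template point, solve for the rigid motion, and repeat
    until the mean residual stops improving. Returns the aligned scan."""
    tree = cKDTree(template)
    scan = scan.copy()
    prev = np.inf
    for _ in range(iters):
        dist, idx = tree.query(scan)
        R, t = best_rigid(scan, template[idx])
        scan = scan @ R.T + t
        if prev - dist.mean() < tol:
            break
        prev = dist.mean()
    return scan
```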
39. Effective Use of X-Box Kinect in Rehabilitation Centers of Riyadh
Authors: Reem Alshiha, Tanzila Saba
Abstract: Physical rehabilitation is the process of helping people recover and return to activities that have been interrupted by factors such as car accidents, strokes, old age, chronic diseases, and sports injuries. Hiring a personal nurse or driving the patient to and from the hospital can be costly and time-consuming, and other factors such as forgetfulness, boredom, and lack of motivation must also be taken into account. To address this, rehabilitation software has been developed for use with the Microsoft Kinect to help patients and their families with in-home rehabilitation. In-home rehabilitation software is becoming increasingly popular, since it is more convenient for everyone involved in the patient's care. In contrast to costly market-based systems with no portability, the Kinect is a portable motion sensor that reads and interprets body movements, and new software has made rehabilitation games available for home use, saving patients time and money. Among the many software packages that work with the Kinect for rehabilitation, this research uses Kinectotherapy, deployed in Riyadh clinics to test its acceptance by patients and their physicians. The Kinect was chosen because it is affordable, portable, and easy to access, in contrast to expensive market-based motion sensors. This paper explores the importance of in-home rehabilitation using the Kinect with Kinectotherapy, which targets both upper and lower limbs; the main focus here is on upper-limb functionality. In-home rehabilitation requires some self-reliance, so the targeted subjects are patients with minor motor impairment who are somewhat independent in their mobility. The presented work is the first to consider in-home rehabilitation with real-time feedback to the patient and physician, and it proposes implementing in-home rehabilitation in Riyadh, Saudi Arabia. The findings show that most patients are interested in and motivated to use the in-home rehabilitation system in the future. The main value of the software lies in three factors: it improves patient engagement through stimulating rehabilitation, it is a low-cost rehabilitation tool, and it reduces the need for expensive one-to-one clinical contact. Rehabilitation is a crucial treatment that can improve quality of life and the patient's confidence and self-esteem.
Keywords: x-box, rehabilitation, physical therapy, rehabilitation software, kinect
Procedia: https://publications.waset.org/abstracts/69835/effective-use-of-x-box-kinect-in-rehabilitation-centers-of-riyadh | PDF: https://publications.waset.org/abstracts/69835.pdf | Downloads: 342
38. Real-Time Gesture Recognition System Using Microsoft Kinect
Authors: Ankita Wadhawan, Parteek Kumar, Umesh Kumar
Abstract: A gesture is any body movement that expresses an attitude or sentiment. As sign language, gestures let deaf people convey messages, helping to eliminate the communication barrier between deaf and hearing persons. Mobile phones and computers are now essential gadgets in everyday life, yet such devices are difficult to use for people who are blind or deaf, so there is an immense need for systems that take body gestures or sign language as input. In this research, the Microsoft Kinect sensor, SDK V2, and the Hidden Markov Model Toolkit (HTK) are used to recognize objects, object motion, and human body joints through a touchless NUI (natural user interface) in real time. Depth data collected from the Kinect is used to recognize gestures of Indian Sign Language (ISL), and the recorded clips are analyzed using depth, IR, and skeletal data at different angles and positions. The proposed system has an average accuracy of 85%. The developed touchless NUI provides an interface that recognizes gestures and controls cursor and click operations on a computer simply by waving hand gestures. This research will help deaf people make use of mobile phones and computers and socialize with other people in society.
Keywords: gesture recognition, Indian sign language, Microsoft Kinect, natural user interface, sign language
Procedia: https://publications.waset.org/abstracts/88362/real-time-gesture-recognition-system-using-microsoft-kinect | PDF: https://publications.waset.org/abstracts/88362.pdf | Downloads: 306
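HTK's role in such a pipeline is to train one hidden Markov model per sign and score incoming sequences against each model. A compact stand-in for that scoring step, with discrete observation symbols standing in for quantized skeleton features and all model parameters hypothetical:

```python
import numpy as np

def viterbi_loglik(obs, log_pi, log_A, log_B):
    """Best-path log-likelihood of a discrete observation sequence under
    one HMM (log_pi: initial, log_A: transition, log_B: emission)."""
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
    return delta.max()

def classify(obs, models):
    """HTK-style isolated recognition: score the sequence against every
    sign's model and return the best-scoring sign. `models` maps a sign
    name to its (log_pi, log_A, log_B) parameter triple."""
    return max(models, key=lambda sign: viterbi_loglik(obs, *models[sign]))
```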
37. PostureCheck with the Kinect and Proficio: Posture Modeling for Exercise Assessment
Authors: Elham Saraee, Saurabh Singh, Margrit Betke
Abstract: Evaluating a person's posture while exercising is important in physical therapy. During a therapy session, a physical therapist or a monitoring system must ensure that the person is performing an exercise correctly to achieve the desired therapeutic effect. In this work, we introduce POSTURECHECK, a system for exercise assessment in physical therapy. POSTURECHECK assesses the posture of a person who is exercising with the Proficio robotic arm while being recorded by the Microsoft Kinect interface. It extracts unique features from the person's upper body during the exercise and classifies the sequence of postures as correct or incorrect using Bayesian estimation and majority voting. If POSTURECHECK recognizes an incorrect posture, it specifies what the user can do to correct it. Our experiment shows that POSTURECHECK is capable of recognizing incorrect postures in real time while the user is performing an exercise.
Keywords: Bayesian estimation, majority voting, Microsoft Kinect, PostureCheck, Proficio robotic arm, upper body physical therapy
Procedia: https://publications.waset.org/abstracts/56218/posturecheck-with-the-kinect-and-proficio-posture-modeling-for-exercise-assessment | PDF: https://publications.waset.org/abstracts/56218.pdf | Downloads: 284
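The abstract names Bayesian estimation plus majority voting but gives no formulas. A minimal reading of that combination: per-frame Gaussian naive-Bayes labels pooled by a vote over the repetition (the class statistics would come from training data; nothing below is the authors' actual model):

```python
import numpy as np

def frame_label(x, means, variances, log_prior):
    """Gaussian naive-Bayes label for one frame's upper-body feature
    vector x: 0 = correct posture, 1 = incorrect."""
    log_post = log_prior.copy()
    for c in (0, 1):
        log_post[c] -= 0.5 * np.sum(np.log(2 * np.pi * variances[c])
                                    + (x - means[c]) ** 2 / variances[c])
    return int(np.argmax(log_post))

def assess(frames, means, variances, prior=(0.5, 0.5)):
    """Label every frame, then majority-vote over the whole repetition."""
    log_prior = np.log(np.asarray(prior))
    votes = [frame_label(x, means, variances, log_prior) for x in frames]
    return "correct" if votes.count(0) > len(votes) // 2 else "incorrect"
```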
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20estimation" title="Bayesian estimation">Bayesian estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=majority%20voting" title=" majority voting"> majority voting</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=PostureCheck" title=" PostureCheck"> PostureCheck</a>, <a href="https://publications.waset.org/abstracts/search?q=Proficio%20robotic%20arm" title=" Proficio robotic arm"> Proficio robotic arm</a>, <a href="https://publications.waset.org/abstracts/search?q=upper%20body%20physical%20therapy" title=" upper body physical therapy"> upper body physical therapy</a> </p> <a href="https://publications.waset.org/abstracts/56218/posturecheck-with-the-kinect-and-proficio-posture-modeling-for-exercise-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">36</span> Proprioceptive Neuromuscular Facilitation Exercises of Upper Extremities Assessment Using Microsoft Kinect Sensor and Color Marker in a Virtual Reality Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Owlia">M. Owlia</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20H.%20Azarsa"> M. H. Azarsa</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Khabbazan"> M. Khabbazan</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Mirbagheri"> A. Mirbagheri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Proprioceptive neuromuscular facilitation exercises are a series of stretching techniques that are commonly used in rehabilitation and exercise therapy. Assessment of these exercises for true maneuvering requires extensive experience in this field and could not be down with patients themselves. In this paper, we developed software that uses Microsoft Kinect sensor, a spherical color marker, and real-time image processing methods to evaluate patient’s performance in generating true patterns of movements. The software also provides the patient with a visual feedback by showing his/her avatar in a Virtual Reality environment along with the correct path of moving hand, wrist and marker. Primary results during PNF exercise therapy of a patient in a room environment shows the ability of the system to identify any deviation of maneuvering path and direction of the hand from the one that has been performed by an expert physician. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=proprioceptive%20neuromuscular%20facilitation" title=" proprioceptive neuromuscular facilitation"> proprioceptive neuromuscular facilitation</a>, <a href="https://publications.waset.org/abstracts/search?q=upper%20extremities%20assessment" title=" upper extremities assessment"> upper extremities assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a> </p> <a href="https://publications.waset.org/abstracts/53955/proprioceptive-neuromuscular-facilitation-exercises-of-upper-extremities-assessment-using-microsoft-kinect-sensor-and-color-marker-in-a-virtual-reality-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53955.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">273</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">35</span> Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jazouli">M. Jazouli</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Elhoufi"> S. Elhoufi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Majda"> A. Majda</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Zarghili"> A. Zarghili</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Aalouane"> R. Aalouane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autism spectrum disorder is a complex developmental disability. It is defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior. Our study provides a clinical tool which facilitates for doctors the diagnosis of ASD. We focus on automatic identification of five repetitive gestures among autistic children in real time: body rocking, hand flapping, fingers flapping, hand on the face and hands behind back. In this paper, we present a gesture recognition system for children with autism, which consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using artificial neural network (ANN). The first one uses the Microsoft Kinect sensor, the second one chooses points of interest from the 3D skeleton to characterize the gestures, and the last one proposes a neural connectionist model to perform the supervised classification of data. The experimental results show that our system can achieve above 93.3% recognition rate. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ASD" title="ASD">ASD</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect" title=" kinect"> kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=stereotypical%20motor%20movements" title=" stereotypical motor movements"> stereotypical motor movements</a> </p> <a href="https://publications.waset.org/abstracts/49346/stereotypical-motor-movement-recognition-using-microsoft-kinect-with-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> Natural Interaction Game-Based Learning of Elasticity with Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Savari">Maryam Savari</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamad%20Nizam%20Ayub"> Mohamad Nizam Ayub</a>, <a href="https://publications.waset.org/abstracts/search?q=Ainuddin%20Wahid%20Abdul%20Wahab"> Ainuddin Wahid Abdul Wahab</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Game-based Learning (GBL) is an alternative that provides learners with an opportunity to experience a volatile environment in a safe and secure place. A volatile environment requires a different technique to facilitate learning and prevent injury and other hazards. Subjects involving elasticity are always considered hazardous and can cause injuries,for instance a bouncing ball. Elasticity is a topic that necessitates hands-on practicality for learners to experience the effects of elastic objects. In this paper the scope is to investigate the natural interaction between learners and elastic objects in a safe environment using GBL. During interaction, the potentials of natural contact in the process of learning were explored and gestures exhibited during the learning process were identified. GBL was developed using Kinect technology to teach elasticity to primary school children aged 7 to 12. The system detects body gestures and defines the meanings of motions exhibited during the learning process. The qualitative approach was deployed to constantly monitor the interaction between the student and the system. Based on the results, it was found that Natural Interaction GBL (Ni-GBL) is engaging for students to learn, making their learning experience more active and joyful. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=elasticity" title="elasticity">elasticity</a>, <a href="https://publications.waset.org/abstracts/search?q=Game-Based%20Learning%20%28GBL%29" title=" Game-Based Learning (GBL)"> Game-Based Learning (GBL)</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect%20technology" title=" kinect technology"> kinect technology</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20interaction" title=" natural interaction "> natural interaction </a> </p> <a href="https://publications.waset.org/abstracts/22347/natural-interaction-game-based-learning-of-elasticity-with-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22347.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">484</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Application of Adaptive Particle Filter for Localizing a Mobile Robot Using 3D Camera Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maysam%20Shahsavari">Maysam Shahsavari</a>, <a href="https://publications.waset.org/abstracts/search?q=Seyed%20Jamalaldin%20Haddadi"> Seyed Jamalaldin Haddadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are several methods to localize a mobile robot such as relative, absolute and probabilistic. In this paper, particle filter due to its simple implementation and the fact that it does not need to know to the starting position will be used. This method estimates the position of the mobile robot using a probabilistic distribution, relying on a known map of the environment instead of predicting it. Afterwards, it updates this estimation by reading input sensors and control commands. To receive information from the surrounding world, distance to obstacles, for example, a Kinect is used which is much cheaper than a laser range finder. Finally, after explaining the Adaptive Particle Filter method and its implementation in detail, we will compare this method with the dead reckoning method and show that this method is much more suitable for situations in which we have a map of the environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title="particle filter">particle filter</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=methods" title=" methods"> methods</a>, <a href="https://publications.waset.org/abstracts/search?q=odometry" title=" odometry"> odometry</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect" title=" kinect "> kinect </a> </p> <a href="https://publications.waset.org/abstracts/53041/application-of-adaptive-particle-filter-for-localizing-a-mobile-robot-using-3d-camera-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maha%20Jazouli">Maha Jazouli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Suicide is one of the most important causes of death in the prison environment, both in Canada and internationally. Rates of attempts of suicide and self-harm have been on the rise in recent years, with hangings being the most frequent method resorted to. The objective of this article is to propose a method to automatically detect in real time suicidal behaviors. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Our proposed system gives us satisfactory results. This smart video surveillance system can help assist staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=suicide%20detection" title="suicide detection">suicide detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect%20azure" title=" Kinect azure"> Kinect azure</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20camera" title=" RGB-D camera"> RGB-D camera</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a> </p> <a href="https://publications.waset.org/abstracts/143744/automatic-detection-of-suicidal-behaviors-using-an-rgb-d-camera-azure-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143744.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">188</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> Constrained RGBD SLAM with a Prior Knowledge of the Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kathia%20Melbouci">Kathia Melbouci</a>, <a href="https://publications.waset.org/abstracts/search?q=Sylvie%20Naudet%20Collette"> Sylvie Naudet Collette</a>, <a href="https://publications.waset.org/abstracts/search?q=Vincent%20Gay-Bellile"> Vincent Gay-Bellile</a>, <a href="https://publications.waset.org/abstracts/search?q=Omar%20Ait-Aider"> Omar Ait-Aider</a>, <a href="https://publications.waset.org/abstracts/search?q=Michel%20Dhome"> Michel Dhome</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we handle the problem of real time localization and mapping in indoor environment assisted by a partial prior 3D model, using an RGBD sensor. The proposed solution relies on a feature-based RGBD SLAM algorithm to localize the camera and update the 3D map of the scene. To improve the accuracy and the robustness of the localization, we propose to combine in a local bundle adjustment process, geometric information provided by a prior coarse 3D model of the scene (e.g. generated from the 2D floor plan of the building) along with RGBD data from a Kinect camera. The proposed approach is evaluated on a public benchmark dataset as well as on real scene acquired by a Kinect sensor. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SLAM" title="SLAM">SLAM</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20localization" title=" global localization"> global localization</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20sensor" title=" 3D sensor"> 3D sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=bundle%20adjustment" title=" bundle adjustment"> bundle adjustment</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20model" title=" 3D model"> 3D model</a> </p> <a href="https://publications.waset.org/abstracts/44987/constrained-rgbd-slam-with-a-prior-knowledge-of-the-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44987.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sanjana%20S.%20Mallya">Sanjana S. Mallya</a>, <a href="https://publications.waset.org/abstracts/search?q=Roshan%20Arvind%20Sivakumar"> Roshan Arvind Sivakumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for the limb. Custom-made orthoses provide more comfort and can correct issues better than those available over-the-counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920px x 1080px while the resolution of the Depth frame is 512px x 424px. As the resolution of the frames is not equal, RGB pixels are mapped onto the Depth pixels to make sure data is not lost even if the resolution is lower. The resulting RGB-D frames are collected and using the depth coordinates, a three dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system. 
29. Integrating Neural Linguistic Programming with Exergaming
Authors: Shyam Sajan, Kamal Bijlani
Abstract: Digital media lets people explore the world and be entertained with little effort, and many have grown fond of the resulting sedentary lifestyle. The increase in sedentary time and decrease in physical activity have negative impacts on human health. Although exergames exploit the addictiveness of video games to make people exercise and enjoy game challenges, their contribution is restricted to physical wellness. This paper proposes the creation and implementation of a game in a virtual environment that combines ideas from Neural Linguistic Programming and the Stroop effect, which can also be used to identify a person's mental state, improve concentration, and eliminate various phobias. The multiplayer game is played in a virtual environment created with the Kinect sensor, making it more motivating and interactive.
Keywords: exergaming, Kinect Sensor, Neural Linguistic Programming, Stroop Effect
Procedia: https://publications.waset.org/abstracts/29670/integrating-neural-linguistic-programming-with-exergaming | PDF: https://publications.waset.org/abstracts/29670.pdf | Downloads: 436
The multiplayer game is played in a virtual environment created with a Kinect sensor to make it more motivating and interactive. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=exergaming" title="exergaming">exergaming</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect%20Sensor" title=" Kinect Sensor"> Kinect Sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=Neural%20Linguistic%20Programming" title=" Neural Linguistic Programming"> Neural Linguistic Programming</a>, <a href="https://publications.waset.org/abstracts/search?q=Stroop%20Effect" title=" Stroop Effect"> Stroop Effect</a> </p> <a href="https://publications.waset.org/abstracts/29670/integrating-neural-linguistic-programming-with-exergaming" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div>
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Natural User Interface Adapter: Enabling Natural User Interface for Non-Natural User Interface Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vijay%20Kumar%20Kolagani">Vijay Kumar Kolagani</a>, <a href="https://publications.waset.org/abstracts/search?q=Yingcai%20Xiao"> Yingcai Xiao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Adoption of the Natural User Interface (NUI) has been slow and limited. NUI devices like Microsoft’s Kinect and Ultraleap’s Leap Motion can only interact with the handful of applications that were specifically designed and implemented for them. A NUI device simply can’t be used to directly control the millions of applications that were not designed to take NUI input. The situation resembles the early adoption of color TV. In the early days of color TV, the broadcasting format was RGB, which was not viewable on black-and-white TVs. TV broadcasters were reluctant to produce color programs due to limited viewership, and TV viewers were reluctant to buy color TVs because there were few programs to watch. Color TV’s breakthrough moment came with the adoption of the NTSC standard, which made color broadcasts compatible with the millions of existing black-and-white TVs. This research presents a framework for using NUI devices to control existing non-NUI applications without reprogramming them. The methodology is to create an adapter that converts input from NUI devices into input compatible with that generated by CLI (Command Line Input) and GUI (Graphical User Interface) devices. The CLI/GUI-compatible input is then sent to the active application through the operating system, just like input from any CLI/GUI device, to control the non-NUI program the user is operating. A sample adapter has been created to convert Kinect input into keyboard strokes, so the Kinect can be used to control any application that takes keyboard input, such as Microsoft’s PowerPoint. When users control their PowerPoint presentations with the adapter, they are freed from standing behind a computer to use its keyboard and can roam around in front of the audience, using hand gestures to drive the presentation.
It is hoped that such adapters can accelerate the adoption of NUI devices. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=command%20line%20input" title="command line input">command line input</a>, <a href="https://publications.waset.org/abstracts/search?q=graphical%20user%20interface" title=" graphical user interface"> graphical user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20user%20interface" title=" natural user interface"> natural user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=NUI%20adapter" title=" NUI adapter"> NUI adapter</a> </p> <a href="https://publications.waset.org/abstracts/193556/natural-user-interface-adapter-enabling-natural-user-interface-for-non-natural-user-interface-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193556.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">14</span> </span> </div> </div>
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> Early Detection of Lymphedema in Post-Surgery Oncology Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sneha%20Noble">Sneha Noble</a>, <a href="https://publications.waset.org/abstracts/search?q=Rahul%20Krishnan"> Rahul Krishnan</a>, <a href="https://publications.waset.org/abstracts/search?q=Uma%20G."> Uma G.</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20K.%20Vijaykumar"> D. K. Vijaykumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast-cancer-related lymphedema is a major problem that affects many women. Lymphedema is the swelling that generally occurs in the arms or legs, caused by the removal of, or damage to, lymph nodes as a part of cancer treatment. Treating it at the earliest possible stage is the best way to manage the condition and prevent pain, recurrent infection, reduced mobility, and impaired function. This project therefore focuses on multi-modal approaches to identify the risk of lymphedema in post-surgical oncology patients and prevent it at the earliest stage. The Kinect IR sensor is used to capture images of the body, and after image processing, the region of interest is obtained. Voxelization then provides volume measurements of the limb in the pre-operative and post-operative periods, and a mathematical model allows these values to be compared. Clinical and pathological data of patients will be investigated to assess the factors responsible for the development of lymphedema and its risks.
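<p class="card-text">To make the voxelization step concrete, a minimal sketch follows (illustrative only, not the authors’ implementation; the <code>points</code> array and the 5 mm voxel size are assumptions): the segmented point cloud of the limb is snapped to a regular grid, and the limb volume is approximated by counting occupied voxels.</p>
<pre><code># Sketch: approximate limb volume from a segmented point cloud by voxel counting.
# Assumes `points` is an (N, 3) NumPy array of metric coordinates (metres)
# for the limb region of interest; the 5 mm voxel size is illustrative.
import numpy as np

def voxel_volume(points, voxel_size=0.005):
    # Snap each point to the index of the voxel that contains it.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Each distinct occupied voxel contributes voxel_size**3 of volume.
    occupied = np.unique(idx, axis=0)
    return occupied.shape[0] * voxel_size ** 3

# Pre- vs. post-operative comparison as described in the abstract:
# swelling appears as an increase in the estimated volume.
# delta = voxel_volume(post_points) - voxel_volume(pre_points)
</code></pre>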
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kinect%20IR%20sensor" title="Kinect IR sensor">Kinect IR sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=Lymphedema" title=" Lymphedema"> Lymphedema</a>, <a href="https://publications.waset.org/abstracts/search?q=voxelization" title=" voxelization"> voxelization</a>, <a href="https://publications.waset.org/abstracts/search?q=lymph%20nodes" title=" lymph nodes"> lymph nodes</a> </p> <a href="https://publications.waset.org/abstracts/159744/early-detection-of-lymphedema-in-post-surgery-oncology-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159744.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Development of a Computer Vision System for the Blind and Visually Impaired Person</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20C.%20Belleza">Rodrigo C. Belleza</a>, <a href="https://publications.waset.org/abstracts/search?q=Jr."> Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Roselyn%20A.%20Maa%C3%B1o"> Roselyn A. Maaño</a>, <a href="https://publications.waset.org/abstracts/search?q=Karl%20Patrick%20E.%20Camota"> Karl Patrick E. Camota</a>, <a href="https://publications.waset.org/abstracts/search?q=Darwin%20Kim%20Q.%20Bulawan"> Darwin Kim Q. Bulawan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes are an essential and conspicuous organ of the human body. Human eyes are outward and inward portals of the body that allows to see the outside world and provides glimpses into ones inner thoughts and feelings. Inevitable blindness and visual impairments may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. The study emphasizes innovative tools that will serve as an aid to the blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection and indoor environment navigation. Open Computer Vision (OpenCV) performs image analysis, and gesture tracking to transform Kinect data to the desired output. A computer vision technology device provides greater accessibility for those with vision impairments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithms" title="algorithms">algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20systems" title=" embedded systems"> embedded systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a> </p> <a href="https://publications.waset.org/abstracts/2016/development-of-a-computer-vision-system-for-the-blind-and-visually-impaired-person" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Improved Acoustic Source Sensing and Localization Based On Robot Locomotion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=V.%20Ramu%20Reddy">V. Ramu Reddy</a>, <a href="https://publications.waset.org/abstracts/search?q=Parijat%20Deshpande"> Parijat Deshpande</a>, <a href="https://publications.waset.org/abstracts/search?q=Ranjan%20Dasgupta"> Ranjan Dasgupta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents different methodology for an acoustic source sensing and localization in an unknown environment. The developed methodology includes an acoustic based sensing and localization system, a converging target localization based on the recursive direction of arrival (DOA) error minimization, and a regressive obstacle avoidance function. Our method is able to augment the existing proven localization techniques and improve results incrementally by utilizing robot locomotion and is capable of converging to a position estimate with greater accuracy using fewer measurements. The results also evinced the DOA error minimization at each iteration, improvement in time for reaching the destination and the efficiency of this target localization method as gradually converging to the real target position. Initially, the system is tested using Kinect mounted on turntable with DOA markings which serve as a ground truth and then our approach is validated using a FireBird VI (FBVI) mobile robot on which Kinect is used to obtain bearing information. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acoustic%20source%20localization" title="acoustic source localization">acoustic source localization</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20sensing" title=" acoustic sensing"> acoustic sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=recursive%20direction%20of%20arrival" title=" recursive direction of arrival"> recursive direction of arrival</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20locomotion" title=" robot locomotion"> robot locomotion</a> </p> <a href="https://publications.waset.org/abstracts/43889/improved-acoustic-source-sensing-and-localization-based-on-robot-locomotion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">492</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Autonomous Kuka Youbot Navigation Based on Machine Learning and Path Planning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Gordon">Carlos Gordon</a>, <a href="https://publications.waset.org/abstracts/search?q=Patricio%20Encalada"> Patricio Encalada</a>, <a href="https://publications.waset.org/abstracts/search?q=Henry%20Lema"> Henry Lema</a>, <a href="https://publications.waset.org/abstracts/search?q=Diego%20Leon"> Diego Leon</a>, <a href="https://publications.waset.org/abstracts/search?q=Dennis%20Chicaiza"> Dennis Chicaiza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The following work presents a proposal of autonomous navigation of mobile robots implemented in an omnidirectional robot Kuka Youbot. We have been able to perform the integration of robotic operative system (ROS) and machine learning algorithms. ROS mainly provides two distributions; ROS hydro and ROS Kinect. ROS hydro allows managing the nodes of odometry, kinematics, and path planning with statistical and probabilistic, global and local algorithms based on Adaptive Monte Carlo Localization (AMCL) and Dijkstra. Meanwhile, ROS Kinect is responsible for the detection block of dynamic objects which can be in the points of the planned trajectory obstructing the path of Kuka Youbot. The detection is managed by artificial vision module under a trained neural network based on the single shot multibox detector system (SSD), where the main dynamic objects for detection are human beings and domestic animals among other objects. When the objects are detected, the system modifies the trajectory or wait for the decision of the dynamic obstacle. Finally, the obstacles are skipped from the planned trajectory, and the Kuka Youbot can reach its goal thanks to the machine learning algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20navigation" title="autonomous navigation">autonomous navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=path%20planning" title=" path planning"> path planning</a>, <a href="https://publications.waset.org/abstracts/search?q=robotic%20operative%20system" title=" robotic operative system"> robotic operative system</a>, <a href="https://publications.waset.org/abstracts/search?q=open%20source%20computer%20vision%20library" title=" open source computer vision library"> open source computer vision library</a> </p> <a href="https://publications.waset.org/abstracts/101726/autonomous-kuka-youbot-navigation-based-on-machine-learning-and-path-planning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101726.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Sign Language Recognition of Static Gestures Using Kinect™ and Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rohit%20Semwal">Rohit Semwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Shivam%20Arora"> Shivam Arora</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurav"> Saurav</a>, <a href="https://publications.waset.org/abstracts/search?q=Sangita%20Roy"> Sangita Roy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work proposes a supervised framework with deep convolutional neural networks (CNNs) for vision-based sign language recognition of static gestures. Our approach addresses the acquisition and segmentation of correct inputs for the CNN-based classifier. Microsoft Kinect™ sensor, despite complex environmental conditions, can track hands efficiently. Skin Colour based segmentation is applied on cropped images of hands in different poses, used to depict different sign language gestures. The segmented hand images are used as an input for our classifier. The CNN classifier proposed in the paper is able to classify the input images with a high degree of accuracy. The system was trained and tested on 39 static sign language gestures, including 26 letters of the alphabet and 13 commonly used words. This paper includes a problem definition for building the proposed system, which acts as a sign language translator between deaf/mute and the rest of the society. It is then followed by a focus on reviewing existing knowledge in the area and work done by other researchers. It also describes the working principles behind different components of CNNs in brief. The architecture and system design specifications of the proposed system are discussed in the subsequent sections of the paper to give the reader a clear picture of the system in terms of the capability required. The design then gives the top-level details of how the proposed system meets the requirements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title="sign language">sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=HCI" title=" HCI"> HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/150342/sign-language-recognition-of-static-gestures-using-kinect-and-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150342.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">157</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> UKIYO-E: User Knowledge Improvement Based on Youth Oriented Entertainment, Art Appreciation Support by Interacting with Picture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haruya%20Tamaki">Haruya Tamaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Tsugunosuke%20Sakai"> Tsugunosuke Sakai</a>, <a href="https://publications.waset.org/abstracts/search?q=Ryuichi%20Yoshida"> Ryuichi Yoshida</a>, <a href="https://publications.waset.org/abstracts/search?q=Ryohei%20Egusa"> Ryohei Egusa</a>, <a href="https://publications.waset.org/abstracts/search?q=Shigenori%20Inagaki"> Shigenori Inagaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Etsuji%20Yamaguchi"> Etsuji Yamaguchi</a>, <a href="https://publications.waset.org/abstracts/search?q=Fusako%20Kusunoki"> Fusako Kusunoki</a>, <a href="https://publications.waset.org/abstracts/search?q=Miki%20Namatame"> Miki Namatame</a>, <a href="https://publications.waset.org/abstracts/search?q=Masanori%20Sugimoto"> Masanori Sugimoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Hiroshi%20Mizoguchi"> Hiroshi Mizoguchi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Art appreciation is important as part of children education. Art appreciation can enrich sensibility and creativity. To enrich sensibility and creativity, the children have to learning knowledge of picture such as social and historical backgrounds and author intention. High learning effect can acquire by actively learning. In short, it is important that encourage learning of the knowledge about pictures actively. It is necessary that children feel like interest to encourage learning of the knowledge about pictures actively. In a general art museum, comments on pictures are done through writing. Thus, we expect that this method cannot arouse the interest of the children in pictures, because children feel like boring. In brief, learning about the picture information is difficult. Therefore, we are developing an art-appreciation support system that will encourage learning of the knowledge about pictures actively by children feel like interest. This system uses that Interacting with Pictures to learning of the knowledge about pictures. To Interacting with Pictures, children have to utterance by themselves. We expect that will encourage learning of the knowledge about pictures actively by Interacting with Pictures. 
To make learning even more active, children can choose which picture to talk with, and the system identifies their choice from information about the children’s location and movement. The system must therefore be able to acquire the location, movement, and voice of the children in real time. We utilize Microsoft’s Kinect v2 sensor and its libraries, namely the Kinect for Windows SDK and the Speech Platform SDK v11, for this purpose; using them, we can determine the location, movement, and voice of the children. As the first step of this system, we developed an ukiyo-e game that uses ukiyo-e prints as the objects of appreciation. Ukiyo-e is a traditional Japanese graphic art that has influenced Western society, so we believe that the ukiyo-e game will be well received. In this study, we applied talking to pictures as the way of learning information about them, because we believe that learning by talking to the pictures is more interesting than commentary using only text. However, this assumption cannot simply be taken for granted. Thus, we evaluated through electrodermal activity (EDA) measurement whether users develop an interest in the pictures while talking to them using voice recognition rather than reading text-only comments. In addition, we quantitatively evaluated whether primary schoolchildren enjoyed the game and learned information about the pictures. In this paper, we summarize these two evaluation results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=actively%20learning" title="actively learning">actively learning</a>, <a href="https://publications.waset.org/abstracts/search?q=art%20appreciation" title=" art appreciation"> art appreciation</a>, <a href="https://publications.waset.org/abstracts/search?q=EDA" title=" EDA"> EDA</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect%20V2" title=" Kinect V2"> Kinect V2</a> </p> <a href="https://publications.waset.org/abstracts/48592/ukiyo-e-user-knowledge-improvement-based-on-youth-oriented-entertainment-art-appreciation-support-by-interacting-with-picture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48592.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">285</span> </span> </div> </div>
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alexandros%20Lioulemes">Alexandros Lioulemes</a>, <a href="https://publications.waset.org/abstracts/search?q=Michail%20Theofanidis"> Michail Theofanidis</a>, <a href="https://publications.waset.org/abstracts/search?q=Varun%20Kanal"> Varun Kanal</a>, <a href="https://publications.waset.org/abstracts/search?q=Konstantinos%20Tsiakas"> Konstantinos Tsiakas</a>, <a href="https://publications.waset.org/abstracts/search?q=Maher%20Abujelala"> Maher Abujelala</a>, <a href="https://publications.waset.org/abstracts/search?q=Chris%20Collander"> Chris Collander</a>, <a href="https://publications.waset.org/abstracts/search?q=William%20B.%20Townsend"> William B. Townsend</a>, <a href="https://publications.waset.org/abstracts/search?q=Angie%20Boisselle"> Angie Boisselle</a>, <a href="https://publications.waset.org/abstracts/search?q=Fillia%20Makedon"> Fillia Makedon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a home-based robot-rehabilitation instrument, called “MAGNI Dynamics”, that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, correlating the user’s performance to different stiffness factors. The vision module uses the Kinect’s skeletal tracking to monitor the user’s effort in an unobtrusive and safe way, by estimating the torque that affects the user’s arm. The system’s torque estimations are validated against electromyographic data captured from primitive motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user to improve his/her motor skills. Finally, potential areas for future research are discussed, addressing how a rehabilitation robotics framework may incorporate multi-sensing data to improve the user’s recovery process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human-robot%20interaction" title="human-robot interaction">human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect" title=" kinect"> kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=kinematics" title=" kinematics"> kinematics</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamics" title=" dynamics"> dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=haptic%20control" title=" haptic control"> haptic control</a>, <a href="https://publications.waset.org/abstracts/search?q=rehabilitation%20robotics" title=" rehabilitation robotics"> rehabilitation robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a> </p> <a href="https://publications.waset.org/abstracts/58367/magni-dynamics-a-vision-based-kinematic-and-dynamic-upper-limb-model-for-intelligent-robotic-rehabilitation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58367.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div>
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> A Three-modal Authentication Method for Industrial Robots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Luo%20Jiaoyang">Luo Jiaoyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Hongyang"> Yu Hongyang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we explore a method that can be used in the working scenes of intelligent industrial robots to confirm the identity of operators, so as to ensure that the robot executes instructions in a sufficiently safe environment. This approach uses three information modalities, namely visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally adopted a joint feature learning method, which improves the performance of the model under noise compared with the single-modal case: even at the maximum noise level in the experiment, it maintains an accuracy rate of more than 90%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect" title=" kinect"> kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20image" title=" distance image"> distance image</a> </p> <a href="https://publications.waset.org/abstracts/163879/a-three-modal-authentication-method-for-industrial-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div>
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Kinetic Façade Design Using 3D Scanning to Convert Physical Models into Digital Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Do-Jin%20Jang">Do-Jin Jang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Ah%20Kim"> Sung-Ah Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In designing a kinetic façade, it is hard for the designer to make digital models because of the complex geometry in motion. This paper presents a methodology for converting a point cloud of a physical model into a single digital model with a certain topology and motion. The method uses a Microsoft Kinect sensor; color markers were defined and applied to three paper-folding-inspired designs. Although the resulting digital model cannot represent the whole folding range of the physical model, the method supports the designer in conducting a performance-oriented design process with the rough physical model over the reduced folding range.
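<p class="card-text">One way to picture the marker-driven conversion (a speculative sketch, not the authors’ pipeline; the HSV bounds, hinge point, and function names are assumptions, and the actual method also uses Kinect depth data): track a colour marker in the RGB stream and map the hinge-to-marker angle onto the digital model’s fold parameter.</p>
<pre><code># Sketch: recover a fold angle from a colour marker tracked in the RGB
# stream; the digital model can map this angle onto its fold parameter.
import cv2
import numpy as np

def marker_centroid(bgr, hsv_lo, hsv_hi):
    mask = cv2.inRange(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV), hsv_lo, hsv_hi)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                      # marker not visible in this frame
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def fold_angle(bgr, hinge_xy, hsv_lo, hsv_hi):
    tip = marker_centroid(bgr, hsv_lo, hsv_hi)
    if tip is None:
        return None
    v = tip - hinge_xy                   # vector from fold hinge to marker
    return float(np.degrees(np.arctan2(v[1], v[0])))

# e.g. angle = fold_angle(frame, np.array([320.0, 240.0]),
#                         (0, 120, 80), (10, 255, 255))   # red-ish marker
</code></pre>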
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=design%20media" title="design media">design media</a>, <a href="https://publications.waset.org/abstracts/search?q=kinetic%20facades" title=" kinetic facades"> kinetic facades</a>, <a href="https://publications.waset.org/abstracts/search?q=tangible%20user%20interface" title=" tangible user interface"> tangible user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20scanning" title=" 3D scanning"> 3D scanning</a> </p> <a href="https://publications.waset.org/abstracts/70846/kinetic-facade-design-using-3d-scanning-to-convert-physical-models-into-digital-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70846.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Video Games Technologies Approach for Their Use in the Classroom</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Vargas-Herrera">Daniel Vargas-Herrera</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivette%20Caldelas"> Ivette Caldelas</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernando%20Brambila-Paz"> Fernando Brambila-Paz</a>, <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20Montufar-Chaveznava"> Rodrigo Montufar-Chaveznava</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present the advances corresponding to the implementation of a set of educational materials based on video games technologies. Essentially these materials correspond to projects developed and under development as bachelor thesis of some Computer Engineering students of the Engineering School. All materials are based on the Unity SDK; integrating some devices such as kinect, leap motion, oculus rift, data gloves and Google cardboard. In detail, we present a virtual reality application for neurosciences students (suitable for neural rehabilitation), and virtual scenes for the Google cardboard, which will be used by the psychology students for phobias treatment. The objective is these materials will be located at a server to be available for all students, in the classroom or in the cloud, considering the use of smartphones has been widely extended between students. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title="virtual reality">virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20technologies" title=" interactive technologies"> interactive technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=educational%20materials" title=" educational materials"> educational materials</a> </p> <a href="https://publications.waset.org/abstracts/55917/video-games-technologies-approach-for-their-use-in-the-classroom" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55917.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">657</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Global Based Histogram for 3D Object Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somar%20Boubou">Somar Boubou</a>, <a href="https://publications.waset.org/abstracts/search?q=Tatsuo%20Narikiyo"> Tatsuo Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Michihiro%20Kawanishi"> Michihiro Kawanishi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we address the problem of 3D object recognition with depth sensors such as Kinect or Structure sensor. Compared with traditional approaches based on local descriptors, which depends on local information around the object key points, we propose a global features based descriptor. Proposed descriptor, which we name as Differential Histogram of Normal Vectors (DHONV), is designed particularly to capture the surface geometric characteristics of the 3D objects represented by depth images. We describe the 3D surface of an object in each frame using a 2D spatial histogram capturing the normalized distribution of differential angles of the surface normal vectors. The object recognition experiments on the benchmark RGB-D object dataset and a self-collected dataset show that our proposed descriptor outperforms two others descriptors based on spin-images and histogram of normal vectors with linear-SVM classifier. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vision%20in%20control" title="vision in control">vision in control</a>, <a href="https://publications.waset.org/abstracts/search?q=robotics" title=" robotics"> robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20histogram%20of%20normal%20vectors" title=" differential histogram of normal vectors"> differential histogram of normal vectors</a> </p> <a href="https://publications.waset.org/abstracts/47486/global-based-histogram-for-3d-object-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=kinect&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=kinect&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>