Search results for: Kinect

Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 20

20. Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study
Authors: Regius Asiimwe, Amir Anvar
Abstract: This paper describes the process used to automate maritime UAV commands using the Kinect sensor. The AR Drone is a quadrocopter manufactured by Parrot [1], designed to be controlled from Apple devices such as iPhones and iPads. This project instead uses the Microsoft Kinect SDK and Microsoft Visual Studio C# (C sharp), which are compatible with the Windows operating system, to automate the navigation and control of the AR Drone. The navigation and control software for the quadrocopter runs on a Windows 7 computer. The project is divided into two sections: the quadrocopter control system and the Kinect sensor control system. The Kinect sensor is connected to the computer by a USB cable, over which commands are sent to and received from the sensor. The AR Drone has Wi-Fi capability, through which it connects to the computer for the transfer of commands to and from the quadrocopter. The project was implemented in C#, a programming language commonly used in automation systems; it was chosen because established libraries already exist in C# for both the AR Drone and the Kinect sensor. The study contributes toward research in the automation of systems that use the quadrocopter and the Kinect sensor for navigation with a human operator in the loop. The prototype has numerous applications, including the inspection of vessels such as ships and airplanes and of areas that are not accessible to human operators.
Keywords: UAV, AR drone, Kinect Sensors, Automation, Real time, C sharp, Microsoft Kinect SDK
Downloads: 2931
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=UAV" title="UAV">UAV</a>, <a href="https://publications.waset.org/search?q=AR%20drone" title=" AR drone"> AR drone</a>, <a href="https://publications.waset.org/search?q=Kinect%20Sensors" title=" Kinect Sensors"> Kinect Sensors</a>, <a href="https://publications.waset.org/search?q=Automation" title=" Automation"> Automation</a>, <a href="https://publications.waset.org/search?q=Real%0Atime" title=" Real time"> Real time</a>, <a href="https://publications.waset.org/search?q=C%20sharp" title=" C sharp"> C sharp</a>, <a href="https://publications.waset.org/search?q=Microsoft%20Kinect%20SDK." title=" Microsoft Kinect SDK."> Microsoft Kinect SDK.</a> </p> <a href="https://publications.waset.org/5700/automation-of-the-maritime-uav-command-control-navigation-operations-simulated-in-real-time-using-kinect-sensor-a-feasibility-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5700/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5700/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5700/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5700/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5700/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5700/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5700/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5700/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5700/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5700/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2931</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Capture and Feedback in Flying Disc Throw with use of Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Yasuhisa%20Tamura">Yasuhisa Tamura</a>, <a href="https://publications.waset.org/search?q=Koji%20Yamaoka"> Koji Yamaoka</a>, <a href="https://publications.waset.org/search?q=Masataka%20Uehara"> Masataka Uehara</a>, <a href="https://publications.waset.org/search?q=Takeshi%20Shima"> Takeshi Shima</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper proposes a three-dimensional motion capture and feedback system of flying disc throwing action learners with use of Kinect device. Rather than conventional 3-D motion capture system, Kinect has advantages of cost merit, easy system development and operation. 
18. Laban Movement Analysis Using Kinect
Authors: Ran Bernstein, Tal Shafir, Rachelle Tsachor, Karen Studd, Assaf Schuster
Abstract: Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data could be significantly leveraged if Laban qualities were recognized automatically. This paper presents an automated method for recognizing Laban qualities from skeletal motion capture recordings, demonstrated on the output of Microsoft's Kinect V2 sensor.
Keywords: Laban Movement Analysis, Kinect, Machine Learning
Downloads: 2833
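Recognition pipelines of this kind typically start from kinematic features computed on joint trajectories. The snippet below computes three quantities often used as rough proxies for Laban Effort qualities; the specific features, the single-joint input, and the 30 fps rate are illustrative assumptions, not the paper's feature set.

```python
# Kinematic proxies for Effort-like qualities from one joint's trajectory.
import numpy as np

def effort_proxies(positions, fps=30.0):
    """positions: (T, 3) array of a joint's x, y, z over time (metres)."""
    dt = 1.0 / fps
    vel = np.diff(positions, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt
    return {
        "mean_speed": float(np.linalg.norm(vel, axis=1).mean()),
        "mean_acceleration": float(np.linalg.norm(acc, axis=1).mean()),
        "mean_jerk": float(np.linalg.norm(jerk, axis=1).mean()),
    }

if __name__ == "__main__":
    t = np.linspace(0, 1, 30)[:, None]
    demo = np.hstack([np.sin(2 * np.pi * t), t, np.zeros_like(t)])  # a synthetic hand path
    print(effort_proxies(demo))
```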
17. Interactive Shadow Play Animation System
Authors: Bo Wan, Xiu Wen, Lingling An, Xiaoling Ding
Abstract: The paper describes a Chinese shadow play animation system based on Kinect. Users without any professional training can manipulate the shadow characters with their own body actions to give a shadow play performance, and can obtain a video of the performance by issuing a record command to the system. Kinect captures the human movement and voice command data, and a gesture recognition module controls scene changes in the shadow play. After the Kinect data and the recognition results are packaged, VRPN transmits them to the server side, which uses the information to control the motion of the shadow characters and the video recording. The system not only achieves human-computer interaction but also enables interaction between people; it offers users an entertaining experience and is easy to operate for all ages. More importantly, applying this technology to Chinese shadow play helps protect the art of shadow play animation.
Keywords: Gesture recognition, Kinect, shadow play animation, VRPN
Downloads: 2706
16. Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform
Authors: Shih-Wen Hsiao, Yi-Cheng Tsao
Abstract: In reverse engineering and the creative industries, 3D scanning to obtain the geometric form of an object is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further applications. However, although the data resolution of 3D scanning devices keeps increasing and complementary applications are increasingly abundant, public adoption of 3D scanning is still limited by the relatively high price of the devices. Kinect, released by Microsoft, is known for its powerful functions, considerably lower price, and complete technology and database support, so related studies can be carried out with Kinect at acceptable cost and data precision. Because Kinect extracts depth information optically, along the straight path of light, a single Kinect must scan the object sequentially from various angles to obtain its complete 3D information, and an integration process that combines the 3D data from the different angles with suitable algorithms is also required. This sequential scanning costs much time, and the complex integration process often runs into technical problems. This paper therefore applies multiple Kinects simultaneously to develop a rapid 3D mannequin scan platform and proposes suggestions on the number and angles of the Kinects. A method of establishing the coordinate relation between the mannequin and the specifications of the Kinects is proposed, and a recommended number of Kinects and their angles are described. An experiment applying multiple Kinects to scan a 3D mannequin was built with the Microsoft API, and the results show that the scanning time and the technical threshold can be reduced for the fashion and garment design industries.
Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple Kinect sensor
Downloads: 2276
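The core of the multi-Kinect arrangement is registering each sensor's data into one mannequin-centred coordinate frame. The sketch below shows that merging step under the stated idea; the per-sensor rotations and translations are hypothetical calibration values, not the paper's.

```python
# Merge point clouds from several Kinects into a common mannequin frame.
import numpy as np

def to_common_frame(points, R, t):
    """points: (N, 3) in a sensor frame; R: (3, 3) rotation; t: (3,) translation."""
    return points @ R.T + t

def merge_scans(scans):
    """scans: iterable of (points, R, t) triples, one per Kinect."""
    return np.vstack([to_common_frame(p, R, t) for p, R, t in scans])

if __name__ == "__main__":
    front = np.array([[0.0, 1.0, 2.0], [0.1, 1.2, 2.0]])          # front-facing sensor
    back = np.array([[0.0, 1.0, 2.0]])                            # sensor behind the mannequin
    R_back = np.array([[-1.0, 0, 0], [0, 1.0, 0], [0, 0, -1.0]])  # 180 deg about the y axis
    cloud = merge_scans([(front, np.eye(3), np.zeros(3)),
                         (back, R_back, np.array([0.0, 0.0, 4.0]))])
    print(cloud)
```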
title=" multiple kinect sensor."> multiple kinect sensor.</a> </p> <a href="https://publications.waset.org/10004786/applying-multiple-kinect-on-the-development-of-a-rapid-3d-mannequin-scan-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004786/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004786/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004786/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004786/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004786/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004786/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004786/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004786/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004786/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004786/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004786.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2276</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Applying Kinect on the Development of a Customized 3D Mannequin</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Shih-Wen%20Hsiao">Shih-Wen Hsiao</a>, <a href="https://publications.waset.org/search?q=Rong-Qi%20Chen"> Rong-Qi Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of fashion design, 3D Mannequin is a kind of assisting tool which could rapidly realize the design concepts. While the concept of 3D Mannequin is applied to the computer added fashion design, it will connect with the development and the application of design platform and system. Thus, the situation mentioned above revealed a truth that it is very critical to develop a module of 3D Mannequin which would correspond with the necessity of fashion design. This research proposes a concrete plan that developing and constructing a system of 3D Mannequin with Kinect. In the content, ergonomic measurements of objective human features could be attained real-time through the implement with depth camera of Kinect, and then the mesh morphing can be implemented through transformed the locations of the control-points on the model by inputting those ergonomic data to get an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothening, a complete human feature would be reconstructed by the ICP algorithm with the method of image processing. Also, the objective human feature could be recognized to analyze and get real measurements. 
14. Proprioceptive Neuromuscular Facilitation Exercises of Upper Extremities Assessment Using Microsoft Kinect Sensor and Color Marker in a Virtual Reality Environment
Authors: M. Owlia, M. H. Azarsa, M. Khabbazan, A. Mirbagheri
Abstract: Proprioceptive neuromuscular facilitation (PNF) exercises are a series of stretching techniques commonly used in rehabilitation and exercise therapy. Assessing whether these exercises are maneuvered correctly requires extensive experience in this field and cannot be done by patients themselves. In this paper, we developed software that uses the Microsoft Kinect sensor, a spherical color marker, and real-time image processing methods to evaluate a patient's performance in generating the correct movement patterns. The software also provides the patient with visual feedback by showing his or her avatar in a virtual reality environment along with the correct path of the moving hand, wrist, and marker. Preliminary results from PNF exercise therapy with a patient in a room environment show the ability of the system to identify any deviation of the maneuvering path and direction of the hand from the one performed by an expert physician.
Keywords: Image processing, Microsoft Kinect, proprioceptive neuromuscular facilitation, upper extremities assessment, virtual reality
Downloads: 1937
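Judging a patient's movement against the physician's reference pattern amounts to comparing two recorded marker paths. One simple deviation measure is sketched below; the data layout is assumed, and the thresholds a therapist would apply are not part of the sketch.

```python
# Distance of each recorded marker position to the closest point of the expert's path.
import numpy as np

def path_deviation(recorded, reference):
    """recorded: (N, 3), reference: (M, 3) marker positions in metres."""
    d = np.linalg.norm(recorded[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(nearest.mean()), float(nearest.max())

if __name__ == "__main__":
    t = np.linspace(0, np.pi, 50)
    reference = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
    recorded = reference + 0.02 * np.random.randn(*reference.shape)   # noisy patient attempt
    print("mean / max deviation (m):", path_deviation(recorded, reference))
```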
13. Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network
Authors: M. Jazouli, S. Elhoufi, A. Majda, A. Zarghili, R. Aalouane
Abstract: Autism spectrum disorder is a complex developmental disability defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior, providing a clinical tool that helps doctors diagnose ASD. We focus on the automatic identification, in real time, of five repetitive gestures among autistic children: body rocking, hand flapping, fingers flapping, hand on the face, and hands behind the back. We present a gesture recognition system for children with autism that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using an artificial neural network (ANN). The first module uses the Microsoft Kinect sensor, the second chooses points of interest from the 3D skeleton to characterize the gestures, and the last applies a neural connectionist model to perform supervised classification of the data. The experimental results show that our system can achieve a recognition rate above 93.3%.
Keywords: ASD, stereotypical motor movements, repetitive gesture, Kinect, artificial neural network
Downloads: 1906
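The classification stage described above, a feed-forward network over features chosen from the 3D skeleton, can be prototyped in a few lines. The sketch below uses synthetic Gaussian blobs in place of real skeleton features; the feature dimension, the network size, and the data are all stand-ins, so only the structure of the approach is illustrated.

```python
# Toy version of the ANN gesture classifier: five gesture classes, synthetic features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_per_class, n_features = 200, 12           # e.g. distances/angles between joints
classes = ["body_rocking", "hand_flapping", "fingers_flapping",
           "hand_on_face", "hands_behind_back"]

# One Gaussian blob of feature vectors per gesture class (synthetic stand-in data).
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(n_per_class, n_features))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```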
12. Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect
Authors: Maha Jazouli
Abstract: Suicide is one of the leading causes of death among prisoners, both in Canada and internationally. In recent years, rates of suicide attempts and self-harm have increased, with hanging being the most frequently used method. The objective of this article is to propose a method to automatically detect suicidal behaviors in real time. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Tests show that the proposed system gives satisfactory results. This smart video surveillance system can assist the staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives.
Keywords: Suicide detection, Kinect Azure, RGB-D camera, SVM, gesture recognition
Downloads: 449
11. Development of a Computer Vision System for the Blind and Visually Impaired Person
Authors: Roselyn A. Maaño
Abstract: Eyes are an essential and conspicuous organ of the human body. They are outward and inward portals that allow us to see the outside world and provide glimpses into one's inner thoughts and feelings. Blindness and visual impairment may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. This study emphasizes innovative tools that serve as an aid to blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and an Arduino microcontroller board. The prototype provides gesture recognition, voice recognition, obstacle detection, and indoor environment navigation. Open Computer Vision (OpenCV) performs the image analysis and gesture tracking that transform Kinect data into the desired output. Such a computer vision device provides greater accessibility for those with vision impairments.
Keywords: Algorithms, Blind, Computer Vision, Embedded Systems, Image Analysis
Downloads: 3610
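A Kinect depth frame can be reduced to a coarse navigation cue with very little processing. The sketch below flags which third of the field of view contains something closer than a threshold; the depth format, the 1 m threshold, and the warning scheme are assumptions for illustration, not details taken from the prototype.

```python
# Turn a depth image into a left/centre/right obstacle warning.
import numpy as np

def obstacle_warning(depth_m, near=1.0, min_fraction=0.05):
    """depth_m: (H, W) depth image in metres; 0 marks invalid pixels."""
    h, w = depth_m.shape
    warnings = []
    for name, band in (("left", depth_m[:, : w // 3]),
                       ("centre", depth_m[:, w // 3: 2 * w // 3]),
                       ("right", depth_m[:, 2 * w // 3:])):
        valid = band > 0
        close = np.logical_and(valid, band < near)
        if valid.any() and close.sum() / valid.sum() > min_fraction:
            warnings.append(name)
    return warnings or ["clear"]

if __name__ == "__main__":
    depth = np.full((480, 640), 3.0)
    depth[200:400, 450:600] = 0.7            # something close in the right third
    print(obstacle_warning(depth))
```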
10. Autonomous Robots' Visual Perception in Underground Terrains Using Statistical Region Merging
Authors: Omowunmi E. Isafiade, Isaac O. Osunmakinde, Antoine B. Bagula
Abstract: Robots' visual perception is a field gaining increasing attention from researchers, partly due to the growing commercial availability of 3D scanning systems and devices that produce highly accurate information for a variety of applications. In the history of mining, the mortality rate of mine workers has been alarming, and robots show great potential for tackling safety issues in mines. However, an effective vision system is crucial to safe autonomous navigation in underground terrains. This work investigates robot perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model. SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope, and gallery, using publicly available mine frames together with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames using the Xbox Kinect 3D sensors, which produce streams of red, green, and blue (RGB) and depth images at 640 x 480 resolution and 30 frames per second. Integrating the depth information into the drivability analysis gives a strong cue, producing 3D results that augment the drivable and non-drivable regions found in 2D. The results of the 2D and 3D experiments on different terrains, mines, and tunnels, together with the qualitative and quantitative evaluation, reveal that a good drivable region can be detected in dynamic underground terrains.
Keywords: Drivable Region Detection, Kinect Sensor, Robots' Perception, SRM, Underground Terrains
Downloads: 1838
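SRM segments an image by visiting neighbouring pixel pairs in order of intensity difference and merging their regions when a statistical test passes. The toy below keeps that structure but replaces SRM's statistical bound with a fixed mean-difference threshold, so it is a simplified stand-in for the model named above, not the SRM predicate itself.

```python
# Simplified region merging on a grayscale image (fixed-threshold stand-in for SRM).
import numpy as np

def merge_regions(img, threshold=16.0):
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)    # one region per pixel to start
    parent = np.arange(h * w)

    def find(x):                               # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []                                 # 4-neighbour pairs, keyed by intensity difference
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                pairs.append((abs(int(img[y, x]) - int(img[y, x + 1])),
                              labels[y, x], labels[y, x + 1]))
            if y + 1 < h:
                pairs.append((abs(int(img[y, x]) - int(img[y + 1, x])),
                              labels[y, x], labels[y + 1, x]))

    sums = img.astype(float).ravel().copy()    # per-region intensity sums and pixel counts
    counts = np.ones(h * w)
    for _, a, b in sorted(pairs):
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        if abs(sums[ra] / counts[ra] - sums[rb] / counts[rb]) <= threshold:
            parent[rb] = ra
            sums[ra] += sums[rb]
            counts[ra] += counts[rb]
    return np.vectorize(find)(labels)

if __name__ == "__main__":
    img = np.zeros((20, 20), dtype=np.uint8)
    img[:, 10:] = 200                          # two flat regions side by side
    print("regions found:", len(np.unique(merge_regions(img))))
```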
title=" Underground Terrains."> Underground Terrains.</a> </p> <a href="https://publications.waset.org/994/autonomous-robots-visual-perception-in-underground-terrains-using-statistical-region-merging" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/994/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/994/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/994/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/994/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/994/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/994/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/994/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/994/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/994/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/994/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/994.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1838</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Alexandros%20Lioulemes">Alexandros Lioulemes</a>, <a href="https://publications.waset.org/search?q=Michail%20Theofanidis"> Michail Theofanidis</a>, <a href="https://publications.waset.org/search?q=Varun%20Kanal"> Varun Kanal</a>, <a href="https://publications.waset.org/search?q=Konstantinos%20Tsiakas"> Konstantinos Tsiakas</a>, <a href="https://publications.waset.org/search?q=Maher%20Abujelala"> Maher Abujelala</a>, <a href="https://publications.waset.org/search?q=Chris%20Collander"> Chris Collander</a>, <a href="https://publications.waset.org/search?q=William%20B.%20Townsend"> William B. Townsend</a>, <a href="https://publications.waset.org/search?q=Angie%20Boisselle"> Angie Boisselle</a>, <a href="https://publications.waset.org/search?q=Fillia%20Makedon"> Fillia Makedon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a home-based robot-rehabilitation instrument, called &rdquo;MAGNI Dynamics&rdquo;, that utilized a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, which correlates the user&rsquo;s performance to different stiffness factors. 
8. Kinetic Façade Design Using 3D Scanning to Convert Physical Models into Digital Models
Authors: Do-Jin Jang, Sung-Ah Kim
Abstract: In designing a kinetic façade, it is hard for the designer to make digital models because of the façade's complex geometry and motion. This paper presents a methodology for converting a point cloud of a physical model into a single digital model with a certain topology and motion. The method uses a Microsoft Kinect sensor, and color markers were defined and applied to three paper folding-inspired designs. Although the resulting digital model cannot represent the whole folding range of the physical model, the method supports the designer in conducting a performance-oriented design process with the rough physical model within the reduced folding range.
Keywords: Design media, kinetic façades, tangible user interface, 3D scanning
Downloads: 1420
<div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Augmented Reality Interaction System in 3D Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sunhyoung%20Lee">Sunhyoung Lee</a>, <a href="https://publications.waset.org/search?q=Askar%20Akshabayev"> Askar Akshabayev</a>, <a href="https://publications.waset.org/search?q=Beisenbek%20Baisakov"> Beisenbek Baisakov</a>, <a href="https://publications.waset.org/search?q=Youngjoon%20Han"> Youngjoon Han</a>, <a href="https://publications.waset.org/search?q=Hernsoo%20Hahn"> Hernsoo Hahn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In an augmented reality system, it is important to provide input without requiring an additional device. One solution is to use the hand itself as the interface, and many researchers have proposed hand-interface methods for augmented reality, for example histogram analysis and connected-component approaches. Multi-directional searching is one of the more robust ways to recognize the hand, but it requires too much computation time, and the background must still be distinguished from skin color. This paper proposes a hand tracking method for controlling a 3D object in augmented reality using a depth device and skin color. The work also discusses the relationship between several markers, which is derived from the relationship between the camera and each marker. One marker is used for displaying the virtual object, and three markers are used for detecting hand gestures and manipulating the virtual object.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Augmented%20Reality" title="Augmented Reality">Augmented Reality</a>, <a href="https://publications.waset.org/search?q=depth%20map" title=" depth map"> depth map</a>, <a href="https://publications.waset.org/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/search?q=kinect" title=" kinect"> kinect</a>, <a href="https://publications.waset.org/search?q=marker" title=" marker"> marker</a>, <a href="https://publications.waset.org/search?q=YCbCr%20color%20model."
title=" YCbCr color model."> YCbCr color model.</a> </p> <a href="https://publications.waset.org/9801/augmented-reality-interaction-system-in-3d-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9801/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9801/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9801/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9801/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9801/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9801/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9801/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9801/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9801/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9801/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9801.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1873</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> BECOME: Body Experience-Based Co-Operation between Juveniles through Mutually Excited Team Gameplay</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tsugunosuke%20Sakai">Tsugunosuke Sakai</a>, <a href="https://publications.waset.org/search?q=Haruya%20Tamaki"> Haruya Tamaki</a>, <a href="https://publications.waset.org/search?q=Ryuichi%20Yoshida"> Ryuichi Yoshida</a>, <a href="https://publications.waset.org/search?q=Ryohei%20Egusa"> Ryohei Egusa</a>, <a href="https://publications.waset.org/search?q=Etsuji%20Yamaguchi"> Etsuji Yamaguchi</a>, <a href="https://publications.waset.org/search?q=Shigenori%20Inagaki"> Shigenori Inagaki</a>, <a href="https://publications.waset.org/search?q=Fusako%20Kusunoki"> Fusako Kusunoki</a>, <a href="https://publications.waset.org/search?q=Miki%20Namatame"> Miki Namatame</a>, <a href="https://publications.waset.org/search?q=Masanori%20Sugimoto"> Masanori Sugimoto</a>, <a href="https://publications.waset.org/search?q=Hiroshi%20Mizoguchi"> Hiroshi Mizoguchi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We aim to develop a full-body interaction game that could let children cooperate and interact with other children in small groups. As the first step for our aim, the objective of the full-body interaction game developed in this study is to make interaction between children. The game requires two children to jump together with the same timing. We let children experience the game and answer the questionnaires. The children using several strategies to coordinate the timing of their jumps were observed. 
These included shouting time, watching each other, and jumping in a constant rhythm as if they were skipping rope. In this manner, we observed the children playing the game while cooperating with each other. The results of a questionnaire to evaluate the proposed interactive game indicate that the jumping game was a very enjoyable experience in which the participants could immerse themselves. Therefore, the game enabled children to experience cooperation with others by using body movements. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Children" title="Children">Children</a>, <a href="https://publications.waset.org/search?q=cooperation" title=" cooperation"> cooperation</a>, <a href="https://publications.waset.org/search?q=full-body%20interaction%20game" title=" full-body interaction game"> full-body interaction game</a>, <a href="https://publications.waset.org/search?q=kinect%20sensor." title=" kinect sensor."> kinect sensor.</a> </p> <a href="https://publications.waset.org/10005290/become-body-experience-based-co-operation-between-juveniles-through-mutually-excited-team-gameplay" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10005290/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10005290/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10005290/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10005290/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10005290/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10005290/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10005290/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10005290/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10005290/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10005290/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10005290.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1345</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Bruce%20X.%20B.%20Yu">Bruce X. B. Yu</a>, <a href="https://publications.waset.org/search?q=Yan%20Liu"> Yan Liu</a>, <a href="https://publications.waset.org/search?q=Keith%20C.%20C.%20Chan"> Keith C. C. Chan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose to record Activities of Daily Living (ADLs) of elderly people using a vision-based system so as to provide better assistive and personalization technologies. 
Current ADL-related research is based on data collected with help from non-elderly subjects in laboratory environments, and the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs for the elderly with data collected from a real-world environment involving real elderly subjects. Motivated by the need to collect data for more effective research related to elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed a Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs in the morning every day. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created a HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms and propose the use of a transfer learning algorithm for HAR. We compared the performance of these algorithms in terms of accuracy and training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes such as evidence-based diagnosis and treatment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Daily%20activity%20recognition" title="Daily activity recognition">Daily activity recognition</a>, <a href="https://publications.waset.org/search?q=healthcare" title=" healthcare"> healthcare</a>, <a href="https://publications.waset.org/search?q=IoT%20sensors" title=" IoT sensors"> IoT sensors</a>, <a href="https://publications.waset.org/search?q=transfer%20learning." title=" transfer learning."> transfer learning.</a> </p> <a href="https://publications.waset.org/10011309/vision-based-daily-routine-recognition-for-healthcare-with-transfer-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011309/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011309/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011309/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011309/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011309/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011309/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011309/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011309/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011309/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011309/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011309.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">895</span> </span> </div> </div>
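<p>The abstract above names the transfer-learning idea but not its implementation. Purely as an illustrative sketch of the general approach (reusing a network pre-trained on a large image dataset and retraining only a small classification head on the 12 morning activities), here is a PyTorch-style example; the backbone choice, layer freezing, and hyper-parameters are assumptions for the example, not the HARELCARE configuration.</p>
<pre><code>import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIVITIES = 12  # morning activities identified in the study

# Start from an ImageNet pre-trained backbone and freeze its weights, so
# only the new classification head is learned from the small ADL dataset.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ACTIVITIES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(frames, labels):
    """One optimisation step on a batch of (3, 224, 224) camera frames."""
    optimizer.zero_grad()
    logits = backbone(frames)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for labelled Kinect frames in this example.
dummy_frames = torch.randn(4, 3, 224, 224)
dummy_labels = torch.randint(0, NUM_ACTIVITIES, (4,))
print(train_step(dummy_frames, dummy_labels))</code></pre>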
<div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Near Field Focusing Behaviour of Airborne Ultrasonic Phased Arrays Influenced by Airflows</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=D.%20Sun">D. Sun</a>, <a href="https://publications.waset.org/search?q=T.%20F.%20Lu"> T. F. Lu</a>, <a href="https://publications.waset.org/search?q=A.%20Zander"> A. Zander</a>, <a href="https://publications.waset.org/search?q=M.%20Trinkle"> M. Trinkle</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper investigates the potential use of airborne ultrasonic phased arrays for imaging in outdoor environments as a means of overcoming the limitations experienced by Kinect sensors, which may fail to work in outdoor environments due to oversaturation of their infrared photodiodes. Ultrasonic phased arrays have been well studied for static media, yet there appears to be no comparable examination in the literature of the impact of a flowing medium on the focusing behaviour of near field focused ultrasonic arrays. This paper presents a method for predicting the sound pressure fields produced by a single ultrasound element or an ultrasonic phased array influenced by airflows. The approach can be used to determine the actual focal point location of an array exposed to a known flow field. From the presented simulation results based upon this model, it can be concluded that uniform flows in the direction orthogonal to the acoustic propagation have a noticeable influence on the sound pressure field, which is reflected in the twisting of the steering angle of the array. Uniform flows in the same direction as the acoustic propagation have negligible influence on the array. For an array impacted by a turbulent flow, determining the location of the focused sound field becomes difficult due to the irregular and continuously changing direction and speed of the turbulent flow. In some circumstances, ultrasonic phased arrays impacted by turbulent flows may not be capable of producing a focused sound field.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Airborne" title="Airborne">Airborne</a>, <a href="https://publications.waset.org/search?q=airflow" title=" airflow"> airflow</a>, <a href="https://publications.waset.org/search?q=focused%20sound%20field" title=" focused sound field"> focused sound field</a>, <a href="https://publications.waset.org/search?q=ultrasonic%20phased%20array." title=" ultrasonic phased array. "> ultrasonic phased array.
</a> </p> <a href="https://publications.waset.org/10004271/near-field-focusing-behaviour-of-airborne-ultrasonic-phased-arrays-influenced-by-airflows" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004271/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004271/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004271/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004271/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004271/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004271/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004271/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004271/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004271/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004271/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004271.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1626</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=David%20L.%20Olson">David L. Olson</a>, <a href="https://publications.waset.org/search?q=Stephen%20B.%20H.%20Bruder"> Stephen B. H. Bruder</a>, <a href="https://publications.waset.org/search?q=Adam%20S.%20Watkins"> Adam S. Watkins</a>, <a href="https://publications.waset.org/search?q=Cleon%20E.%20Davis"> Cleon E. Davis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS and no other a priori information is known about the environment, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. 
Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding sensor method, initial expectations of the performance benefit via simulation, and a hardware implementation verifying its efficacy. Hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a Vector-Nav VN-200™ IMU and Kinect™ camera from Microsoft. </p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Autonomous%20mobile%20robotics" title="Autonomous mobile robotics">Autonomous mobile robotics</a>, <a href="https://publications.waset.org/search?q=dead%20reckoning" title=" dead reckoning"> dead reckoning</a>, <a href="https://publications.waset.org/search?q=depth%20camera" title=" depth camera"> depth camera</a>, <a href="https://publications.waset.org/search?q=inertial%20navigation" title=" inertial navigation"> inertial navigation</a>, <a href="https://publications.waset.org/search?q=Kalman%20filtering" title=" Kalman filtering"> Kalman filtering</a>, <a href="https://publications.waset.org/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/search?q=sensor%20fusion." title=" sensor fusion."> sensor fusion.</a> </p> <a href="https://publications.waset.org/10012561/depth-camera-aided-dead-reckoning-localization-of-autonomous-mobile-robots-in-unstructured-global-navigation-satellite-system-denied-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012561/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012561/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012561/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012561/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012561/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012561/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012561/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012561/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012561/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012561/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012561.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">720</span> </span> </div> </div>
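<p>As a simplified, hypothetical illustration of the aiding idea described above (deriving attitude from planar surfaces seen by the depth camera), the following sketch fits a plane to floor points with an SVD-based least-squares step and converts the plane normal into roll and pitch angles that could serve as an aiding measurement in a Kalman or complementary filter. The sensor-frame conventions and the synthetic data are assumptions; this is not the implementation used on the Qbot 2 platform.</p>
<pre><code>import numpy as np

def plane_normal(points):
    """Least-squares plane normal of an (N, 3) point cloud via SVD.

    The normal is the right singular vector associated with the smallest
    singular value of the mean-centred points.
    """
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    # Orient the normal upward; assumes camera-space y roughly points up.
    if normal[1] > 0.0:
        oriented = normal
    else:
        oriented = -normal
    return oriented / np.linalg.norm(oriented)

def roll_pitch_from_floor_normal(normal):
    """Roll and pitch (radians) of the sensor relative to a level floor.

    Assumes the floor normal would be (0, 1, 0) in the sensor frame if the
    robot were perfectly level; deviations from that encode roll and pitch.
    """
    nx, ny, nz = normal
    return np.arctan2(nx, ny), np.arctan2(nz, ny)

# Synthetic, slightly tilted floor patch standing in for Kinect depth data.
rng = np.random.default_rng(0)
xz = rng.uniform(-1.0, 1.0, size=(500, 2))
y = -1.2 + 0.05 * xz[:, 0] + rng.normal(scale=0.003, size=500)
floor = np.column_stack([xz[:, 0], y, xz[:, 1]])
print(roll_pitch_from_floor_normal(plane_normal(floor)))</code></pre>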
<div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=R.%20Estrada">R. Estrada</a>, <a href="https://publications.waset.org/search?q=A.%20Henriquez"> A. Henriquez</a>, <a href="https://publications.waset.org/search?q=R.%20Becerra"> R. Becerra</a>, <a href="https://publications.waset.org/search?q=C.%20Laguna"> C. Laguna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Spatial Augmented Reality is a variation of Augmented Reality where the Head-Mounted Display is not required. This variation of Augmented Reality is useful in cases where the need for a Head-Mounted Display itself is a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality; the virtual world is projected onto a physical surface. To create an interactive spatial augmented reality experience, the application must be aware of the spatial relations that exist between its core elements. In this case, the core elements are referred to as a projection system and an input system, and the process to achieve this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual world scale is similar to the real-world scale, meaning that a virtual object will maintain its perceived dimensions when projected to the real world. Also, the input system is calibrated if the application knows the position of a point in the projection plane relative to the RGB-depth sensor origin. Any kind of projection technology can be used (light-based projectors, close-range projectors, and screens) as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests are made using a Kinect V2 as an input sensor and several projection devices. In order to test the method, the defined constraints were applied to a variety of physical configurations; once the method was executed, some variables were obtained to measure the method's performance. It was demonstrated that the method can handle different arrangements, giving the user a wide range of setup possibilities.</p>
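<p class="card-text">As a schematic, planar illustration of the kind of projector-to-sensor mapping such a calibration establishes (not the markerless procedure of the paper, and without the depth-based scale recovery), the sketch below estimates a homography between points detected in the colour-depth sensor image and the projector pixels that produced them, using OpenCV; all point coordinates are made-up placeholders.</p>
<pre><code>import numpy as np
import cv2

# Projector pixels that were displayed (e.g., corners of a projected pattern)
# and the pixel coordinates at which the RGB-D sensor observed them.
# All values below are invented placeholders for illustration only.
projector_pts = np.array([[100, 100], [1820, 100],
                          [1820, 980], [100, 980]], dtype=np.float32)
sensor_pts = np.array([[312, 245], [1604, 229],
                       [1631, 918], [295, 934]], dtype=np.float32)

# Homography mapping sensor-image coordinates to projector coordinates.
H, _ = cv2.findHomography(sensor_pts, projector_pts, method=0)

def sensor_to_projector(u, v):
    """Map a point seen by the RGB-D sensor into projector pixel space."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# A touch detected at sensor pixel (900, 540) lands at this projector pixel:
print(sensor_to_projector(900.0, 540.0))</code></pre>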
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Color%20depth%20sensor" title="Color depth sensor">Color depth sensor</a>, <a href="https://publications.waset.org/search?q=human%20computer%20interface" title=" human computer interface"> human computer interface</a>, <a href="https://publications.waset.org/search?q=interactive%20surface" title=" interactive surface"> interactive surface</a>, <a href="https://publications.waset.org/search?q=spatial%20augmented%20reality." title=" spatial augmented reality. "> spatial augmented reality. </a> </p> <a href="https://publications.waset.org/10011682/method-for-auto-calibrate-projector-and-color-depth-systems-for-spatial-augmented-reality-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011682/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011682/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011682/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011682/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011682/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011682/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011682/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011682/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011682/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011682/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011682.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">599</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Insaf%20Ajili">Insaf Ajili</a>, <a href="https://publications.waset.org/search?q=Malik%20Mallem"> Malik Mallem</a>, <a href="https://publications.waset.org/search?q=Jean-Yves%20Didier"> Jean-Yves Didier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Interest in human motion recognition has increased considerably in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassification that can happen when recognizing similar motions. Two experiments are conducted.
In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture data set (MSRC-12), which is a widely used dataset for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experiment results demonstrate that our method outperforms most existing methods evaluated on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Human%20Motion%20Recognition" title="Human Motion Recognition">Human Motion Recognition</a>, <a href="https://publications.waset.org/search?q=Motion%20representation" title=" Motion representation"> Motion representation</a>, <a href="https://publications.waset.org/search?q=Laban%20Movement%20Analysis" title=" Laban Movement Analysis"> Laban Movement Analysis</a>, <a href="https://publications.waset.org/search?q=Discrete%20Hidden%20Markov%20Model." title=" Discrete Hidden Markov Model."> Discrete Hidden Markov Model.</a> </p> <a href="https://publications.waset.org/10009500/an-efficient-motion-recognition-system-based-on-lma-technique-and-a-discrete-hidden-markov-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10009500/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10009500/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10009500/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10009500/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10009500/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10009500/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10009500/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10009500/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10009500/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10009500/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10009500.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">728</span> </span> </div> </div>
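<p>To make the classification step above more concrete, here is a small, self-contained sketch of how per-class discrete HMMs can score a quantised motion sequence with the forward algorithm and pick the class with the highest log-likelihood. Training (e.g., Baum-Welch), the LMA descriptor, and the paper&rsquo;s forward/backward dual-model refinement are not reproduced, and the toy model parameters are invented for illustration.</p>
<pre><code>import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM.

    obs:     1-D array of observation symbol indices.
    start_p: (S,) initial state probabilities.
    trans_p: (S, S) state transition matrix.
    emit_p:  (S, V) emission matrix over V discrete symbols.
    Implements the scaled forward algorithm.
    """
    alpha = start_p * emit_p[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for symbol in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, symbol]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

def classify(obs, models):
    """Pick the motion class whose HMM gives the highest log-likelihood."""
    scores = {name: forward_log_likelihood(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get), scores

# Two toy 2-state HMMs over a 3-symbol codebook (all numbers invented).
models = {
    "wave": (np.array([0.6, 0.4]),
             np.array([[0.7, 0.3], [0.4, 0.6]]),
             np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])),
    "stop": (np.array([0.5, 0.5]),
             np.array([[0.9, 0.1], [0.2, 0.8]]),
             np.array([[0.1, 0.2, 0.7], [0.3, 0.4, 0.3]])),
}
print(classify(np.array([0, 0, 1, 2, 2, 2]), models))</code></pre>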
</div> </main> </body> </html>
