<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: motion capture</title> <meta name="description" content="Search results for: motion capture"> <meta name="keywords" content="motion capture"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open 
Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="motion capture" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div 
class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="motion capture"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2486</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: motion capture</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2486</span> Human Motion Capture: New Innovations in the Field of Computer Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Najm%20Alotaibi">Najm Alotaibi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human motion capture has become one of the major areas of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include advanced human interfaces, virtual reality, and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which deals with analyzing human motion in the form of mathematical formulations.
The major contribution of this research is that it classifies the computer-vision-based techniques of human motion capture according to a taxonomy, and then breaks it down into four systematically different categories: tracking, initialization, pose estimation, and recognition. Detailed descriptions, and the relationships between them, are given for the techniques of tracking and pose estimation. The subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20capture" title="human motion capture">human motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=vision-based" title=" vision-based"> vision-based</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/22770/human-motion-capture-new-innovations-in-the-field-of-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22770.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2485</span> ISME: Integrated Style Motion Editor for 3D Humanoid Character</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Ismahafezi%20Ismail">Ismahafezi Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Shahrizal%20Sunar"> Mohd Shahrizal Sunar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The motion of a realistic 3D humanoid character is very important, especially for the industries developing computer animations and games. However, this type of motion involves very complex, high-dimensional data describing body position, orientation, and joint rotation. Integrated Style Motion Editor (ISME) is a method used to alter the 3D humanoid motion capture data utilised in computer animation and games development. This study therefore demonstrates a method that can manipulate and deform different motion styles by integrating the Key Pose Deformation Technique and the Trajectory Control Technique. This motion editing method allows the user to generate new motions from the original motion capture data using a simple interface control. Unlike previous methods, our method produces a realistic humanoid motion style in real time.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20animation" title="computer animation">computer animation</a>, <a href="https://publications.waset.org/abstracts/search?q=humanoid%20motion" title=" humanoid motion"> humanoid motion</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20editing" title=" motion editing"> motion editing</a> </p> <a href="https://publications.waset.org/abstracts/54401/isme-integrated-style-motion-editor-for-3d-humanoid-character" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54401.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2484</span> A Study on the Establishment of a 4-Joint Based Motion Capture System and Data Acquisition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kyeong-Ri%20Ko">Kyeong-Ri Ko</a>, <a href="https://publications.waset.org/abstracts/search?q=Seong%20Bong%20Bae"> Seong Bong Bae</a>, <a href="https://publications.waset.org/abstracts/search?q=Jang%20Sik%20Choi"> Jang Sik Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung%20Bum%20Pan"> Sung Bum Pan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A simple method for testing the posture imbalance of the human body is to check for differences in the bilateral shoulder and pelvic height of the target. 
In this paper, to check for spinal disorders, the authors have studied ways to establish a motion capture system that obtains and expresses the motions of four joints, and to acquire data based on this system. The four sensors are attached to both shoulders and the pelvis. To verify the established system, the normal and abnormal postures of targets listening to a lecture were obtained using the established 4-joint based motion capture system. From the results, it was confirmed that the motions taken by the target were identical to the 3-dimensional simulation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inertial%20sensor" title="inertial sensor">inertial sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20data%20acquisition" title=" motion data acquisition"> motion data acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=posture%20imbalance" title=" posture imbalance"> posture imbalance</a> </p> <a href="https://publications.waset.org/abstracts/4802/a-study-on-the-establishment-of-a-4-joint-based-motion-capture-system-and-data-acquisition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4802.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">515</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2483</span> Motion Capture Based Wizard of Oz Technique for Humanoid Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rafal%20Stegierski">Rafal Stegierski</a>, <a
href="https://publications.waset.org/abstracts/search?q=Krzysztof%20Dmitruk"> Krzysztof Dmitruk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper focuses on a robotic telepresence system built around a humanoid robot operated with a controller-less Wizard of Oz technique. The proposed solution makes it possible to quickly start acting as an operator with little, if any, initial training. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=robotics" title="robotics">robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=Wizard%20of%20Oz" title=" Wizard of Oz"> Wizard of Oz</a>, <a href="https://publications.waset.org/abstracts/search?q=humanoid%20robots" title=" humanoid robots"> humanoid robots</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20robot%20interaction" title=" human robot interaction"> human robot interaction</a> </p> <a href="https://publications.waset.org/abstracts/16596/motion-capture-based-wizard-of-oz-technique-for-humanoid-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16596.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">481</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2482</span> Development and Evaluation of Virtual Basketball Game Using Motion Capture Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shunsuke%20Aoki">Shunsuke Aoki</a>, <a href="https://publications.waset.org/abstracts/search?q=Taku%20Ri"> Taku Ri</a>, <a
href="https://publications.waset.org/abstracts/search?q=Tatsuya%20Yamazaki"> Tatsuya Yamazaki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> These days, along with the development of e-sports, video games as a competitive sport are attracting attention. In many cases, however, the action on the screen does not match the real motion of the player. Incorporating player motion is needed to increase the realism and excitement of sports games. Therefore, in this study, the authors propose a method to recognize player motion by using motion capture technology and develop a virtual basketball game. The virtual basketball game consists of a screen with nine targets, players, depth sensors, and no ball. The players mimic a two-handed basketball shot, without a ball, aiming at one of the nine targets on the screen. Time-series data of the three-dimensional coordinates of the player's joints are captured by the depth sensor. Data for 20 joints are measured for each player to estimate the shooting motion in real time. The trajectory of the thrown virtual ball is calculated based on the time-series data, and a hit on the target is judged as a success or failure. The virtual basketball game can be played by 2 to 4 players as a competitive game among the players. The developed game was exhibited to the public for evaluation on the authors' university open campus days. Over the two days, 339 visitors participated in the exhibition and enjoyed the virtual basketball game. A questionnaire survey on the developed game was conducted with the visitors who experienced the game. As a result of the survey, about 97.3% of the players found the game interesting, regardless of whether they had experienced actual basketball before or not. In addition, it was found that women found the shooting motion easy and comfortable. The virtual game with motion capture technology has the potential to become a universal form of entertainment bridging e-sports and actual sports.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=basketball" title="basketball">basketball</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=questionnaire%20survey" title=" questionnaire survey"> questionnaire survey</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20ga" title=" video ga"> video ga</a> </p> <a href="https://publications.waset.org/abstracts/108335/development-and-evaluation-of-virtual-basketball-game-using-motion-capture-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108335.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2481</span> A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yaghoubi%20N.">Yaghoubi N.</a>, <a href="https://publications.waset.org/abstracts/search?q=Moore%20Z."> Moore Z.</a>, <a href="https://publications.waset.org/abstracts/search?q=Van%20Der%20Veen%20S.%20M."> Van Der Veen S. M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Pidcoe%20P.%20E."> Pidcoe P. E.</a>, <a href="https://publications.waset.org/abstracts/search?q=Thomas%20J.%20S."> Thomas J. 
S.</a>, <a href="https://publications.waset.org/abstracts/search?q=Dexheimer%20B."> Dexheimer B.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools with &lt;2 mm average error. However, these systems are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha'Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from .047 cm ± .02 SE (p = .03) at 30 degrees of hip flexion to .194 cm ± .024 SE (p &lt; .0001) at 45 degrees of hip flexion.
In the Y axis, we found a minimal difference for 15 degrees of hip flexion only (.743 cm ± .275 SE; p = .007). This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lumbar" title="lumbar">lumbar</a>, <a href="https://publications.waset.org/abstracts/search?q=vivetracker" title=" vivetracker"> vivetracker</a>, <a href="https://publications.waset.org/abstracts/search?q=viconsystem" title=" viconsystem"> viconsystem</a>, <a href="https://publications.waset.org/abstracts/search?q=3dmotion" title=" 3dmotion"> 3dmotion</a>, <a href="https://publications.waset.org/abstracts/search?q=ROM" title=" ROM"> ROM</a> </p> <a href="https://publications.waset.org/abstracts/170963/a-preliminary-kinematic-comparison-of-vive-and-vicon-systems-for-the-accurate-tracking-of-lumbar-motion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2480</span> Laban Movement Analysis Using Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bernstein%20Ran">Bernstein Ran</a>, <a href="https://publications.waset.org/abstracts/search?q=Shafir%20Tal"> Shafir Tal</a>, <a href="https://publications.waset.org/abstracts/search?q=Tsachor%20Rachelle"> Tsachor Rachelle</a>, <a
href="https://publications.waset.org/abstracts/search?q=Studd%20Karen"> Studd Karen</a>, <a href="https://publications.waset.org/abstracts/search?q=Schuster%20Assaf"> Schuster Assaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data could be significantly leveraged if Laban qualities were recognized automatically. This paper presents an automated method for recognizing Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft's Kinect V2 sensor. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laban%20movement%20analysis" title="Laban movement analysis">Laban movement analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=multitask%20learning" title=" multitask learning"> multitask learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect%20sensor" title=" Kinect sensor"> Kinect sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/25365/laban-movement-analysis-using-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25365.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">341</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2479</span> Inertial Motion Capture System for Biomechanical Analysis in
Rehabilitation and Sports</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mario%20Sandro%20F.%20Rocha">Mario Sandro F. Rocha</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20S.%20Ande"> Carlos S. Ande</a>, <a href="https://publications.waset.org/abstracts/search?q=Anderson%20A.%20Oliveira"> Anderson A. Oliveira</a>, <a href="https://publications.waset.org/abstracts/search?q=Felipe%20M.%20Bersotti"> Felipe M. Bersotti</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucas%20O.%20Venzel"> Lucas O. Venzel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Inertial motion capture (mocap) systems are among the most suitable tools for quantitative clinical analysis in rehabilitation and sports medicine. The inertial measuring units (IMUs), composed of accelerometers, gyroscopes, and magnetometers, are able to measure spatial orientations and calculate displacements with sufficient precision for applications in the biomechanical analysis of movement. Furthermore, this type of system is relatively affordable and has the advantages of portability and independence from external references. In this work, we present the latest version of our inertial motion capture system, based on the foregoing technology, with a unity interface designed for rehabilitation and sports. In our hardware architecture, only one serial port is required. First, the board client must be connected to the computer by a USB cable. Next, an available serial port is configured and opened to establish the communication between the client and the application, and then the client starts scanning for active MOCAP_S servers nearby.
The servers play the role of the inertial measuring units that capture the movements of the body and send the data to the client, which in turn creates a package composed of the ID of the server, the current timestamp, and the motion capture data defined in the client's pre-configuration of the capture session. In the current version, we can measure the game rotation vector (grv) and linear acceleration (lacc), and we also have a step detector that can be enabled or disabled. The grv data are processed and directly linked to the bones of the 3D model and, along with the lacc and step detector data, are also used to calculate displacements and other variables shown on the graphical user interface. Our user interface was designed to calculate and present variables that are important for rehabilitation and sports, such as cadence, speed, total gait cycle, gait cycle length, obliquity and rotation, and center of gravity displacement. Our goal is to present a low-cost, portable, and wearable system with a friendly interface for application in biomechanics and sports, which also performs with high precision and low energy consumption.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biomechanics" title="biomechanics">biomechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=inertial%20sensors" title=" inertial sensors"> inertial sensors</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=rehabilitation" title=" rehabilitation"> rehabilitation</a> </p> <a href="https://publications.waset.org/abstracts/112465/inertial-motion-capture-system-for-biomechanical-analysis-in-rehabilitation-and-sports" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112465.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2478</span> Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azadeh%20Rouhandeh">Azadeh Rouhandeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Chris%20Joslin"> Chris Joslin</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhen%20Qu"> Zhen Qu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuu%20Ono"> Yuu Ono</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The centre of rotation of the hip joint is needed for an accurate simulation of the joint performance in many applications such as pre-operative planning simulation, human gait analysis, and hip joint disorders. 
In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is the soft tissue artefact caused by relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have measured soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hip%20joint%20center" title="hip joint center">hip joint center</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=soft%20tissue%20artefact" title=" soft tissue artefact"> soft tissue artefact</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound%20depth%20measurement" title=" ultrasound depth measurement"> ultrasound depth measurement</a> </p> <a href="https://publications.waset.org/abstracts/10347/quantification-of-soft-tissue-artefacts-using-motion-capture-data-and-ultrasound-depth-measurements" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10347.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">281</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2477</span> Stress Evaluation at Lower Extremity during Walking with Unstable Shoe</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sangbaek%20Park">Sangbaek Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Seungju%20Lee"> Seungju Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Soo-Won%20Chae"> Soo-Won Chae</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Unstable shoes are known to strengthen lower extremity muscles, improve gait ability, and change the user's gait pattern. The change in gait pattern affects the human body enormously because walking is a repetitive and steady form of locomotion in daily life.
It is possible to estimate joint motion, including joint moments, forces, and inertia effects, using kinematic and kinetic analysis. However, it has not been possible to estimate the change of internal stress in the articular cartilage. The purpose of this research is to evaluate the internal stress of the human body during gait with unstable shoes. In this study, FE analysis was combined with a motion capture experiment to obtain the boundary and loading conditions during walking. Motion capture experiments were performed with a participant walking with normal shoes and with unstable shoes. Inverse kinematics and inverse kinetics analyses were performed with OpenSim. The joint angles and muscle forces were estimated as results of the inverse kinematics and kinetics analyses. A detailed finite element (FE) lower extremity model was constructed. A joint coordinate system was added to the FE model and aligned with the OpenSim model's coordinate system. Finally, the joint angles at each phase of gait were used to transform the FE model's posture to match the actual posture from motion capture. The FE model was transformed into the postures of three major phases (1st peak of ground reaction force, mid-stance, and 2nd peak of ground reaction force). The direction and magnitude of each muscle force were estimated by OpenSim and applied to the FE model's attachment point of that muscle. Then FE analysis was performed to compare the stress at the knee cartilage during gait with normal shoes and with unstable shoes. 
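The reposing step described above, applying inverse-kinematics joint angles to the FE mesh, amounts to a rigid rotation of each segment's nodes about the joint axis. A hedged sketch using Rodrigues' rotation formula (NumPy assumed; the knee center, axis, and node coordinates are made-up illustrative values, not the actual model):

```python
import numpy as np

def rotate_about_axis(points, origin, axis, angle_rad):
    """Rigidly rotate a set of FE nodes about a joint axis through `origin`
    using Rodrigues' formula: v' = v c + (k x v) s + k (k.v)(1 - c)."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    p = np.asarray(points, dtype=float) - origin
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotated = p * c + np.cross(k, p) * s + np.outer(p @ k, k) * (1.0 - c)
    return rotated + origin

# Pose the shank nodes with a knee flexion angle from inverse kinematics.
knee_center = np.array([0.0, 0.45, 0.0])    # metres, illustrative
flexion_axis = np.array([1.0, 0.0, 0.0])    # medio-lateral axis, illustrative
shank_nodes = np.array([[0.0, 0.40, 0.0],
                        [0.0, 0.10, 0.0]])  # two sample mesh nodes
posed = rotate_about_axis(shank_nodes, knee_center, flexion_axis,
                          np.deg2rad(20.0)) # 20 deg knee flexion
```

Because the transform is rigid, every node keeps its distance from the joint center; in the actual workflow one such transform per joint, per gait phase, poses the whole lower-extremity mesh.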
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=finite%20element%20analysis" title="finite element analysis">finite element analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=gait%20analysis" title=" gait analysis"> gait analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20model" title=" human model"> human model</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a> </p> <a href="https://publications.waset.org/abstracts/51809/stress-evaluation-at-lower-extremity-during-walking-with-unstable-shoe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51809.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">323</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2476</span> Effect of the Cross-Sectional Geometry on Heat Transfer and Particle Motion of Circulating Fluidized Bed Riser for CO2 Capture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seungyeong%20Choi">Seungyeong Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Namkyu%20Lee"> Namkyu Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Il%20Shim"> Dong Il Shim</a>, <a href="https://publications.waset.org/abstracts/search?q=Young%20Mun%20Lee"> Young Mun Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong-Ki%20Park"> Yong-Ki Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyung%20Hee%20Cho"> Hyung Hee Cho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Effect of the cross-sectional geometry on heat 
transfer and particle motion of a circulating fluidized bed riser for CO<sub>2</sub> capture was investigated. Numerical simulation using the Eulerian-Eulerian method with the kinetic theory of granular flow was adopted to analyze the gas-solid flow in the circulating fluidized bed riser. Circular, square, and rectangular cross-sectional geometries of the same area were studied. Rectangular cross sections were analyzed at aspect ratios of 1:2, 1:4, 1:8, and 1:16. The cross-sectional geometry significantly influenced the particle motion and heat transfer. The downward flow pattern of solid particles near the wall changed, and the gas-solid mixing degree was lowest for the rectangular cross section with the highest aspect ratio. There were also differences in the bed-to-wall heat transfer coefficient among the rectangular geometries with different aspect ratios. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bed%20geometry" title="bed geometry">bed geometry</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20fluid%20dynamics" title=" computational fluid dynamics"> computational fluid dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=circulating%20fluidized%20bed%20riser" title=" circulating fluidized bed riser"> circulating fluidized bed riser</a>, <a href="https://publications.waset.org/abstracts/search?q=heat%20transfer" title=" heat transfer"> heat transfer</a> </p> <a href="https://publications.waset.org/abstracts/80529/effect-of-the-cross-sectional-geometry-on-heat-transfer-and-particle-motion-of-circulating-fluidized-bed-riser-for-co2-capture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80529.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> 
</div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2475</span> Ontology as Knowledge Capture Tool in Organizations: A Literature Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Margaretha">Maria Margaretha</a>, <a href="https://publications.waset.org/abstracts/search?q=Dana%20Indra%20Sensuse"> Dana Indra Sensuse</a>, <a href="https://publications.waset.org/abstracts/search?q=Lukman"> Lukman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Knowledge capture is a step in the knowledge life cycle for acquiring knowledge in the organization. Tacit and explicit knowledge need to be organized in a structured way so that the organization can easily choose which knowledge to use. There are many challenges to capturing knowledge in the organization: the researcher must know which knowledge has been validated by an expert, how to elicit tacit knowledge from experts and make it explicit, and so on. Besides that, technology can be a reliable tool to help the researcher capture knowledge. Several papers describe how ontology in knowledge management can be used in a proposed framework to capture and reuse knowledge. An organization has to manage its knowledge; how it captures and shares that knowledge will decide its position in the business arena. This paper describes, through a literature review, how ontology as a tool can help the organization to capture its knowledge. 
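The core idea of ontology-based knowledge capture, recording statements as subject-predicate-object triples so that validated knowledge can be queried later, can be sketched in a few lines of Python. The class and the example facts below are purely illustrative, not a framework from the reviewed literature:

```python
class TripleStore:
    """Minimal subject-predicate-object store: the data model underlying
    ontology-based knowledge capture (RDF-style triples)."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the given pattern; None is a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

# Capturing a piece of expert-validated organisational knowledge
# (all identifiers below are hypothetical).
kb = TripleStore()
kb.add("welding-procedure-7", "validatedBy", "expert-chen")
kb.add("welding-procedure-7", "type", "TacitKnowledge")
kb.add("expert-chen", "type", "DomainExpert")
```

A query such as `kb.query(p="validatedBy")` then answers exactly the challenge the abstract raises: which captured knowledge has already been validated by an expert.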
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=knowledge%20capture" title="knowledge capture">knowledge capture</a>, <a href="https://publications.waset.org/abstracts/search?q=ontology" title=" ontology"> ontology</a>, <a href="https://publications.waset.org/abstracts/search?q=technology" title=" technology"> technology</a>, <a href="https://publications.waset.org/abstracts/search?q=organization" title=" organization"> organization</a> </p> <a href="https://publications.waset.org/abstracts/20921/ontology-as-knowledge-capture-tool-in-organizations-a-literature-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20921.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2474</span> Classification of Equations of Motion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amritpal%20Singh%20Nafria">Amritpal Singh Nafria</a>, <a href="https://publications.waset.org/abstracts/search?q=Rohit%20Sharma"> Rohit Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=Md.%20Shami%20Ansari"> Md. Shami Ansari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Up to now, only five different equations of motion can be derived from a velocity-time graph without needing to know the normal and frictional forces acting at the point of contact. In this paper, we obtain all the requisite conditions for considering an equation to be an equation of motion. 
After that we classified equations of motion by considering two equations as fundamental kinematical equations of motion and other three as additional kinematical equations of motion. After deriving these five equations of motion, we examine the easiest way of solving a wide variety of useful numerical problems. At the end of the paper, we discussed the importance and educational benefits of classification of equations of motion. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=velocity-time%20graph" title="velocity-time graph">velocity-time graph</a>, <a href="https://publications.waset.org/abstracts/search?q=fundamental%20equations" title=" fundamental equations"> fundamental equations</a>, <a href="https://publications.waset.org/abstracts/search?q=additional%20equations" title=" additional equations"> additional equations</a>, <a href="https://publications.waset.org/abstracts/search?q=requisite%20conditions" title=" requisite conditions"> requisite conditions</a>, <a href="https://publications.waset.org/abstracts/search?q=importance%20and%20educational%20benefits" title=" importance and educational benefits"> importance and educational benefits</a> </p> <a href="https://publications.waset.org/abstracts/15102/classification-of-equations-of-motion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">787</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2473</span> Quantitative Analysis of Camera Setup for Optical Motion Capture Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20T.%20Pitale">J. T. 
Pitale</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghassab"> S. Ghassab</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Ay"> H. Ay</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Berme"> N. Berme</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically require each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, ceilings, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become popular, there is a need for smaller and smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among camera distance from the subject, pixel density, and field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel decreases, thus increasing the image resolution. However, the cross section of the capture volume also decreases, reducing the visible area. Due to this reduction, additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and set of subject postures using three-dimensional computer-aided design (CAD) tools. 
We modeled a 4.9m x 3.7m x 2.4m (LxWxH) MoCap volume and designed a mounting structure for cameras using SOLIDWORKS (Dassault Systems, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full body capture. The cameras were placed around the capture volume at a maximum distance of 2.7m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility, and it allows different camera setup options to be compared on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. 
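The pixel-density/FOV tradeoff discussed above can be made concrete with pinhole-camera geometry: the width of the viewed area at distance d is 2·d·tan(FOV/2), so the size of the area covered by one pixel grows linearly with camera distance. A small sketch (the FOV and resolution values are assumed, not those of the study's cameras):

```python
import math

def mm_per_pixel(distance_m, fov_deg, pixels):
    """Ground-sample size for a pinhole camera: the width of the viewed
    area at `distance_m` is 2*d*tan(FOV/2); dividing by the horizontal
    resolution gives the millimetres covered by one pixel."""
    width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return width_m * 1000.0 / pixels

# The tradeoff from the abstract: a closer camera gives finer pixels but a
# smaller visible area. 2.7 m matches the study's maximum camera distance;
# the 56-degree FOV and 2048-pixel width are illustrative camera parameters.
res = {d: mm_per_pixel(d, fov_deg=56.0, pixels=2048) for d in (1.5, 2.7, 4.0)}
```

Tabulating `res` for candidate mounting positions is a numeric complement to the CAD-based marker-visibility counts described in the abstract.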
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title="motion capture">motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=cameras" title=" cameras"> cameras</a>, <a href="https://publications.waset.org/abstracts/search?q=biomechanics" title=" biomechanics"> biomechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=gait%20analysis" title=" gait analysis"> gait analysis</a> </p> <a href="https://publications.waset.org/abstracts/40226/quantitative-analysis-of-camera-setup-for-optical-motion-capture-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40226.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2472</span> Modeling Flow and Deposition Characteristics of Solid CO2 during Choked Flow of CO2 Pipeline in CCS</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teng%20lin">Teng lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Yuxing"> Li Yuxing</a>, <a href="https://publications.waset.org/abstracts/search?q=Han%20Hui">Han Hui</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhao%20Pengfei"> Zhao Pengfei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Datong"> Zhang Datong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the development of carbon capture and storage (CCS), the flow assurance of CO2 transportation becomes more important, particularly for supercritical CO2 pipelines. 
A relieving system using a choke valve is applied to control the pressure in the CO2 pipeline. However, the temperature of the fluid drops rapidly because of Joule-Thomson cooling (JTC), which may cause solid CO2 to form and block the pipe. In this paper, a Computational Fluid Dynamics (CFD) model, using a modified Lagrangian method, the Reynolds Stress Transport model (RSM) for turbulence, and a stochastic tracking model (STM) for particle trajectories, was developed to predict the deposition characteristics of solid carbon dioxide. The model predictions were in good agreement with experimental data published in the literature. It can be observed that the particle distribution affected the deposition behavior. In the region of the sudden expansion, the smaller particles accumulated tightly on the wall were dominant for pipe blockage. On the contrary, the solid CO2 particles deposited near the outlet were usually bigger, and the stacked structure was looser. According to the calculation results, the movement of the particles can be regarded as four main types: turbulent motion close to the sudden expansion structure, balanced motion in the region between the sudden expansion and the middle, inertial motion near the outlet, and escape. Furthermore, the particle deposits accumulated primarily in the sudden expansion region, the reattachment region, and the outlet region because of these four types of motion. The Stokes number also had an effect on the deposition ratio, and it is recommended to avoid Stokes numbers in the range of 3-8. 
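The Stokes number criterion mentioned at the end can be checked with its standard definition, St = ρₚdₚ²U/(18μL). The property values below are illustrative assumptions (the dry-ice density is a textbook value; the velocity, gas viscosity, and length scale are made up), not the paper's operating conditions:

```python
def stokes_number(rho_p, d_p, velocity, mu, length):
    """Particle Stokes number: St = rho_p * d_p**2 * U / (18 * mu * L),
    the ratio of the particle response time to the flow time scale."""
    return rho_p * d_p ** 2 * velocity / (18.0 * mu * length)

# A solid CO2 particle in cold CO2 gas downstream of the choke (illustrative).
st = stokes_number(rho_p=1562.0,   # kg/m^3, solid CO2 (dry ice) density
                   d_p=50e-6,      # m, 50-micron particle (assumed)
                   velocity=30.0,  # m/s, gas velocity (assumed)
                   mu=1.1e-5,      # Pa.s, gas viscosity (assumed)
                   length=0.05)    # m, sudden-expansion scale (assumed)
in_deposition_band = 3.0 <= st <= 8.0   # the range the paper says to avoid
```

Such a screening calculation shows how particle size and flow velocity push a design into or out of the 3-8 band the abstract recommends avoiding.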
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=carbon%20capture%20and%20storage" title="carbon capture and storage">carbon capture and storage</a>, <a href="https://publications.waset.org/abstracts/search?q=carbon%20dioxide%20pipeline" title=" carbon dioxide pipeline"> carbon dioxide pipeline</a>, <a href="https://publications.waset.org/abstracts/search?q=gas-particle%20flow" title=" gas-particle flow"> gas-particle flow</a>, <a href="https://publications.waset.org/abstracts/search?q=deposition" title=" deposition "> deposition </a> </p> <a href="https://publications.waset.org/abstracts/41429/modeling-flow-and-deposition-characteristics-of-solid-co2-during-choked-flow-of-co2-pipeline-in-ccs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41429.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">369</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2471</span> Adaptive Motion Planning for 6-DOF Robots Based on Trigonometric Functions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jincan%20Li">Jincan Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Mingyu%20Gao"> Mingyu Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhiwei%20He"> Zhiwei He</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuxiang%20Yang"> Yuxiang Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhongfei%20Yu"> Zhongfei Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Liu"> Yuanyuan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Building an appropriate motion model is crucial for 
trajectory planning of robots and determines the operational quality directly. An adaptive acceleration and deceleration motion planning based on trigonometric functions for the end-effector of 6-DOF robots in Cartesian coordinate system is proposed in this paper. This method not only achieves the smooth translation motion and rotation motion by constructing a continuous jerk model, but also automatically adjusts the parameters of trigonometric functions according to the variable inputs and the kinematic constraints. The results of computer simulation show that this method is correct and effective to achieve the adaptive motion planning for linear trajectories. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=kinematic%20constraints" title="kinematic constraints">kinematic constraints</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20planning" title=" motion planning"> motion planning</a>, <a href="https://publications.waset.org/abstracts/search?q=trigonometric%20function" title=" trigonometric function"> trigonometric function</a>, <a href="https://publications.waset.org/abstracts/search?q=6-DOF%20robots" title=" 6-DOF robots"> 6-DOF robots</a> </p> <a href="https://publications.waset.org/abstracts/87082/adaptive-motion-planning-for-6-dof-robots-based-on-trigonometric-functions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87082.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">271</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2470</span> Stereo Motion Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yudhajit%20Datta">Yudhajit 
Datta</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamsi%20Iyer"> Hamsi Iyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Bandi"> Jonathan Bandi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sethia"> Ankit Sethia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software packages that combine the two approaches to perform stereo motion tracking typically employ complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study explores a strategy for combining two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of the object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. 
The approach may find application in settings ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons, or other detention facilities, to low-cost applications in supermarkets and car parking lots. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=kalman%20filter" title="kalman filter">kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20vision" title=" stereo vision"> stereo vision</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title=" motion tracking"> motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=matlab" title=" matlab"> matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title=" camera calibration"> camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision%20system%20toolbox" title=" computer vision system toolbox "> computer vision system toolbox </a> </p> <a href="https://publications.waset.org/abstracts/18999/stereo-motion-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18999.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">327</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2469</span> A Motion Dictionary to Real-Time Recognition of Sign Language Alphabet Using Dynamic Time Warping and Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marcio%20Leal">Marcio Leal</a>, <a
href="https://publications.waset.org/abstracts/search?q=Marta%20Villamil"> Marta Villamil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Computational recognition of sign languages aims to allow greater social and digital inclusion of deaf people through the interpretation of their language by computer. This article presents a model for recognizing two of the global parameters of sign languages: hand configuration and hand movement. Hand motion is captured with infrared technology, and its joints are reconstructed in a virtual three-dimensional space. A Multilayer Perceptron Neural Network (MLP) was used to classify hand configurations, and Dynamic Time Warping (DTW) recognizes hand motion. Beyond the recognition method, we provide a dataset of hand configurations and motion captures built with the help of professionals fluent in sign languages. Although this technology can be used to translate signs from any sign dictionary, Brazilian Sign Language (Libras) was used as a case study. The model presented in this paper achieved a recognition rate of 80.4%. 
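The DTW step described above can be sketched with the classic dynamic-programming recurrence. This toy version uses 1-D traces and absolute difference as the local cost, whereas the paper matches three-dimensional joint trajectories:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two 1-D
    sequences, with |x - y| as the local cost. A warping path may stretch
    or compress either sequence in time, which is why a gesture performed
    at a different speed still matches its template."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A time-stretched copy of a gesture trace matches perfectly; a different
# gesture does not (all traces are made-up illustrative values).
gesture = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
stretched = [0.0, 0.0, 0.2, 0.2, 0.6, 1.0, 1.0, 0.6, 0.2, 0.0]
other = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]
```

Thresholding `dtw_distance` against a dictionary of recorded templates is the standard way such a recognizer decides which sign was performed.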
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20time%20warping" title=" dynamic time warping"> dynamic time warping</a>, <a href="https://publications.waset.org/abstracts/search?q=infrared" title=" infrared"> infrared</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a> </p> <a href="https://publications.waset.org/abstracts/94322/a-motion-dictionary-to-real-time-recognition-of-sign-language-alphabet-using-dynamic-time-warping-and-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94322.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">216</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2468</span> Design of a Low Cost Motion Data Acquisition Setup for Mechatronic Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Baris%20Can%20Yalcin">Baris Can Yalcin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion sensors have been commonly used as a valuable component in mechatronic systems; however, many mechatronic designs and applications that need motion sensors, especially high-tech systems, cost an enormous amount of money. 
Designing the software for the communication protocol between the data acquisition card and the motion sensor is another issue that has to be solved. This study presents how to design a low-cost motion data acquisition setup consisting of an MPU 6050 motion sensor (gyroscope and accelerometer in 3 axes) and an Arduino Mega2560 microcontroller. The design covers calibration of the sensor, identification of and communication between the sensor and the data acquisition card, and interpretation of the data collected by the sensor. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=design" title="design">design</a>, <a href="https://publications.waset.org/abstracts/search?q=mechatronics" title=" mechatronics"> mechatronics</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20sensor" title=" motion sensor"> motion sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20acquisition" title=" data acquisition"> data acquisition</a> </p> <a href="https://publications.waset.org/abstracts/10243/design-of-a-low-cost-motion-data-acquisition-setup-for-mechatronic-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">588</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2467</span> Motion Effects of Arabic Typography on Screen-Based Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20Hassan">Ibrahim Hassan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion typography is one of the most important types of display-based visual communication. 
Through digital display media, we can control text properties (size, direction, weight, color, etc.). The use of motion typography in visual communication has taken several forms. We need to adjust the terminology and clarify the differences between these forms, as relying on the general term motion typography is not enough to separate the different communicative functions of moving text. In this paper, we discuss the different effects of motion typography on Arabic writing and how harmony can be achieved between the movement and the letterform, and during our experiments we present a new type of text movement. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabic%20typography" title="Arabic typography">Arabic typography</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20typography" title=" motion typography"> motion typography</a>, <a href="https://publications.waset.org/abstracts/search?q=kinetic%20typography" title=" kinetic typography"> kinetic typography</a>, <a href="https://publications.waset.org/abstracts/search?q=fluid%20typography" title=" fluid typography"> fluid typography</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20typography" title=" temporal typography"> temporal typography</a> </p> <a href="https://publications.waset.org/abstracts/142182/motion-effects-of-arabic-typography-on-screen-based-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2466</span> Identification of Knee Dynamic Profiles in High Performance Athletes 
with the Use of Motion Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G.%20Espri%C3%BA-P%C3%A9rez">G. Espriú-Pérez</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20A.%20Vargas-Oviedo"> F. A. Vargas-Oviedo</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20Zenteno-Aguirrez%C3%A1bal"> I. Zenteno-Aguirrezábal</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20D.%20Moya-Bencomo"> M. D. Moya-Bencomo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the injuries with the highest incidence among university-level athletes in the north of Mexico occurs in the knee. This injury causes absence from training and competition for at least 8 weeks. There is no quantitative methodology or protocol in active use that directly supports the clinical evaluation performed by medical personnel for knee injuries. The main objective is to contribute a quantitative tool that allows further development of preventive and corrective measures for these injuries. The study analyzed 55 athletes over 6 weeks, belonging to the disciplines of basketball, volleyball, soccer, and swimming. Using a motion capture system (Nexus®, Vicon®), a three-dimensional analysis was developed that allows the measurement of the range of movement of the joint. To focus on the performance of the lower limb, eleven different movements were chosen from the Functional Performance Test, Functional Movement Screen, and the Cincinnati Jump Test. The research identifies the profile of the natural movement of a healthy knee, with the use of medical guidance, and its differences between each sport. The data recovered by the single-leg crossover hop managed to differentiate the type of knee movement among athletes. 
A maximum difference of 60° of offset was found in the adduction movement between male and female athletes of the same discipline. The research also seeks to serve as a guideline for the implementation of protocols that help identify the recovery level of such injuries. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cincinnati%20jump%20test" title="Cincinnati jump test">Cincinnati jump test</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20movement%20screen" title=" functional movement screen"> functional movement screen</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20performance%20test" title=" functional performance test"> functional performance test</a>, <a href="https://publications.waset.org/abstracts/search?q=knee" title=" knee"> knee</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture%20system" title=" motion capture system"> motion capture system</a> </p> <a href="https://publications.waset.org/abstracts/109927/identification-of-knee-dynamic-profiles-in-high-performance-athletes-with-the-use-of-motion-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109927.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2465</span> A New Center of Motion in Cabling Robots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alireza%20Abbasi%20Moshaii">Alireza Abbasi Moshaii</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Najafi"> Farshid Najafi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 
In this paper, a new model for creating a centre of motion is proposed. The new method uses cables, which makes it very useful in robots because it is light and easy to assemble. It is particularly well suited to robots that need to remain in contact with objects, as will be described in the following. The accuracy of the idea is proved by an experiment. This system could be used in robots that need a fixed contact point while making a circular motion, such as dancing, medical, or repair robots. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=centre%20of%20motion" title="centre of motion">centre of motion</a>, <a href="https://publications.waset.org/abstracts/search?q=robotic%20cables" title=" robotic cables"> robotic cables</a>, <a href="https://publications.waset.org/abstracts/search?q=permanent%20touching" title=" permanent touching"> permanent touching</a>, <a href="https://publications.waset.org/abstracts/search?q=mechatronics%20engineering" title=" mechatronics engineering"> mechatronics engineering</a> </p> <a href="https://publications.waset.org/abstracts/24087/a-new-center-of-motion-in-cabling-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24087.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2464</span> Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emily%20Rowe">Emily Rowe</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Introduction: Balance assessments can be used to help evaluate a person's risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off and foot placement error as an outcome measure of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measure foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants who each attended two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data was collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out. 
Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task and brings into question the suitability of average foot placement error as an outcome measure. Additionally, results suggest the pressure mat is not a valid measure of this parameter since it was statistically significantly different from and much less precise than the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure for anteroposterior and mediolateral COM velocities (AP velocity: p=0.000, ML velocity target 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development, the COM velocity measure validity could be improved. Possible options which could be investigated include whether there is an effect of inertial sensor placement with respect to pelvic marker placement or implementing more complex methods of data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed. 
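For readers reproducing the reliability analysis above, the mean-rating (k = 2), absolute-agreement, two-way ICC can be computed directly from the standard ANOVA mean squares. The following is a minimal numpy sketch of that formula (an illustration under the Shrout–Fleiss two-way decomposition, not the authors' analysis code; the sample session values are hypothetical):

```python
import numpy as np

def icc_agreement_avg(ratings: np.ndarray) -> float:
    """Mean-rating, absolute-agreement, two-way ICC:
    (MSR - MSE) / (MSR + (MSC - MSE) / n), for an n-subjects x k-sessions matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    rows = ratings.mean(axis=1)   # per-subject means
    cols = ratings.mean(axis=0)   # per-session means
    msr = k * np.sum((rows - grand) ** 2) / (n - 1)   # between-subjects mean square
    msc = n * np.sum((cols - grand) ** 2) / (k - 1)   # between-sessions mean square
    resid = ratings - rows[:, None] - cols[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (msc - mse) / n)

# Hypothetical example: six COM-velocity measurements repeated in two sessions.
sessions = np.array([[0.31, 0.33], [0.42, 0.40], [0.55, 0.57],
                     [0.28, 0.27], [0.61, 0.64], [0.47, 0.45]])
print(round(icc_agreement_avg(sessions), 3))
```

Values near 1 indicate good between-session agreement; wide confidence intervals such as those reported above arise when within-subject variability is large relative to between-subject spread.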
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20balance" title="dynamic balance">dynamic balance</a>, <a href="https://publications.waset.org/abstracts/search?q=inertial%20sensors" title=" inertial sensors"> inertial sensors</a>, <a href="https://publications.waset.org/abstracts/search?q=portable" title=" portable"> portable</a>, <a href="https://publications.waset.org/abstracts/search?q=pressure%20mat" title=" pressure mat"> pressure mat</a>, <a href="https://publications.waset.org/abstracts/search?q=reliability" title=" reliability"> reliability</a>, <a href="https://publications.waset.org/abstracts/search?q=stepping" title=" stepping"> stepping</a>, <a href="https://publications.waset.org/abstracts/search?q=validity" title=" validity"> validity</a>, <a href="https://publications.waset.org/abstracts/search?q=wearables" title=" wearables"> wearables</a> </p> <a href="https://publications.waset.org/abstracts/160542/reliability-and-validity-of-a-portable-inertial-sensor-and-pressure-mat-system-for-measuring-dynamic-balance-parameters-during-stepping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160542.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2463</span> Comparing the Motion of Solar System with Water Droplet Motion to Predict the Future of Solar System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Areena%20Bhatti">Areena Bhatti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The geometric arrangement of planets and moons is the result of a self-organizing system. 
In our solar system, the planets and moons constantly orbit the sun. The aim of this theory is to compare the motion of the solar system with the motion of a water droplet poured into a body of water. The basic methodology is to compare both motions to understand how they are related to each other. The difference between the two systems is that one is extremely fast and the other extremely slow. The role of this theory is that, by observing the fast system, we can infer how the slow system will come to an end. Just as ripples form around a water droplet, move away from it, and the droplet that formed them shrinks in size, the solar system can be expected to behave in the same way. It is concluded that large and small systems can work under the same process but on different time scales, and that the motion of the solar system is the slowest form of water droplet motion. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion" title="motion">motion</a>, <a href="https://publications.waset.org/abstracts/search?q=water" title=" water"> water</a>, <a href="https://publications.waset.org/abstracts/search?q=sun" title=" sun"> sun</a>, <a href="https://publications.waset.org/abstracts/search?q=time" title=" time"> time</a> </p> <a href="https://publications.waset.org/abstracts/111769/comparing-the-motion-of-solar-system-with-water-droplet-motion-to-predict-the-future-of-solar-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111769.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">151</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2462</span> Relevant LMA Features for Human Motion Recognition</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Insaf%20Ajili">Insaf Ajili</a>, <a href="https://publications.waset.org/abstracts/search?q=Malik%20Mallem"> Malik Mallem</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Yves%20Didier"> Jean-Yves Didier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We propose discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discriminative%20LMA%20features" title="discriminative LMA features">discriminative LMA features</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20reduction" title=" features reduction"> features reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20recognition" title=" human motion recognition"> human motion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a> </p> <a href="https://publications.waset.org/abstracts/96299/relevant-lma-features-for-human-motion-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96299.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing 
mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2461</span> Mixed Sub-Fractional Brownian Motion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Zili">Mounir Zili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We will introduce a new extension of Brownian motion that could serve as a good model of many natural phenomena. It is a linear combination of a finite number of sub-fractional Brownian motions; that is why we will call it the mixed sub-fractional Brownian motion. We will present some basic properties of this process. Among others, we will check that our process is non-Markovian and that it has non-stationary increments. We will also give the conditions under which it is a semimartingale. Finally, the main features of its sample paths will be specified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mixed%20Gaussian%20processes" title="mixed Gaussian processes">mixed Gaussian processes</a>, <a href="https://publications.waset.org/abstracts/search?q=Sub-fractional%20Brownian%20motion" title=" Sub-fractional Brownian motion"> Sub-fractional Brownian motion</a>, <a href="https://publications.waset.org/abstracts/search?q=sample%20paths" title=" sample paths"> sample paths</a> </p> <a href="https://publications.waset.org/abstracts/32479/mixed-sub-fractional-brownian-motion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32479.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">488</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2460</span> Influence of Bra Band Tension and Underwire 
Angles on Breast Motion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cheuk%20Wing%20Lee">Cheuk Wing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Kit%20Lun%20Yick"> Kit Lun Yick</a>, <a href="https://publications.waset.org/abstracts/search?q=Sun%20Pui%20Ng"> Sun Pui Ng</a>, <a href="https://publications.waset.org/abstracts/search?q=Joanne%20Yip"> Joanne Yip</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Daily activities and exercise may result in large displacements of the breasts, which lead to breast pain and discomfort. A proper bra design and fit can therefore help to control excessive breast motion to prevent the over-stretching of the connective tissues. Nevertheless, bra fit problems, such as excessively high tension of the shoulder straps and a tight underband, could have substantial negative effects on the wear comfort and health of the wearer. The purpose of this study is, therefore, to examine the effects of bra band tension on breast displacement. Usually, human wear trials are carried out, but there are inconsistencies during testing. Therefore, a soft manikin torso is used to examine breast displacement at walking speeds of 2.30 km/h and 4.08 km/h. The breast displacement itself is determined by using a VICON motion capture system. The 3D geometric changes of the underwire bra band tension and the corresponding control of breast movement are also analyzed by using a 3D handheld scanner along with Rapidform software. The results indicate that an appropriate bra band tension can help to reduce breast displacement and provide a comfortable angle for the underwire. The findings can be used by designers and bra engineers as a reference source to advance bra design and development. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bra%20band" title="bra band">bra band</a>, <a href="https://publications.waset.org/abstracts/search?q=bra%20features" title=" bra features"> bra features</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20displacement" title=" breast displacement"> breast displacement</a>, <a href="https://publications.waset.org/abstracts/search?q=underwire%20angle" title=" underwire angle"> underwire angle</a> </p> <a href="https://publications.waset.org/abstracts/93789/influence-of-bra-band-tension-and-underwire-angles-on-breast-motion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2459</span> Motion Planning of SCARA Robots for Trajectory Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Giovanni%20Incerti">Giovanni Incerti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents a method for simple and immediate motion planning of a SCARA robot whose end-effector has to move along a given trajectory; the calculation procedure requires the user to define the trajectory to be followed, in analytical form or by points, and to assign the curvilinear abscissa as a function of time. 
On the basis of the geometrical characteristics of the robot, a specifically developed program determines the motion laws of the actuators that enable the robot to generate the required movement; this software can be used in all industrial applications for which a SCARA robot has to be frequently reprogrammed, in order to generate various types of trajectories with different motion times. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20planning" title="motion planning">motion planning</a>, <a href="https://publications.waset.org/abstracts/search?q=SCARA%20robot" title=" SCARA robot"> SCARA robot</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20tracking" title=" trajectory tracking"> trajectory tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=analytical%20form" title=" analytical form"> analytical form</a> </p> <a href="https://publications.waset.org/abstracts/19726/motion-planning-of-scara-robots-for-trajectory-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19726.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2458</span> Mixed-Sub Fractional Brownian Motion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Zili">Mounir Zili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We will introduce a new extension of Brownian motion that could serve as a good model of many natural phenomena. 
It is a linear combination of a finite number of sub-fractional Brownian motions; that is why we will call it the mixed sub-fractional Brownian motion. We will present some basic properties of this process. Among others, we will check that our process is non-Markovian and that it has non-stationary increments. We will also give the conditions under which it is a semimartingale. Finally, the main features of its sample paths will be specified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractal%20dimensions" title="fractal dimensions">fractal dimensions</a>, <a href="https://publications.waset.org/abstracts/search?q=mixed%20gaussian%20processes" title=" mixed gaussian processes"> mixed gaussian processes</a>, <a href="https://publications.waset.org/abstracts/search?q=sample%20paths" title=" sample paths"> sample paths</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-fractional%20brownian%20motion" title=" sub-fractional brownian motion "> sub-fractional brownian motion </a> </p> <a href="https://publications.waset.org/abstracts/36677/mixed-sub-fractional-brownian-motion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">420</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2457</span> Detection of Image Blur and Its Restoration for Image Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. 
Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image restoration in the process of communication is one of the emerging fields in image processing. Motion analysis is the simplest case of detecting motion in an image. Applications of motion analysis are widespread in many areas such as surveillance, remote sensing, the film industry, navigation of autonomous vehicles, etc. The scene may contain multiple moving objects; by using motion analysis techniques, the image degraded by the movement of the objects can be enhanced by filling in occluded regions and reconstructing transparent objects, and the motion blurring can also be removed. This paper presents the design and comparison of various motion detection and enhancement filters. A median filter, linear image deconvolution, an inverse filter, a pseudoinverse filter, a Wiener filter, a Lucy-Richardson filter, and blind deconvolution filters are used to remove the blur. In this work, we have considered different types and different amounts of blur for the analysis. Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) are used to evaluate the performance of the filters. The designed system has been implemented in Matlab and tested on synthetic and real-time images. 
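Among the filters compared above, the Wiener filter has a particularly compact frequency-domain form. The following is a minimal numpy sketch of Wiener deconvolution (an illustration, not the paper's Matlab implementation; the constant K stands in for the noise-to-signal power ratio, and the toy image and PSF are hypothetical):

```python
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, K: float = 1e-2) -> np.ndarray:
    """Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G."""
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the PSF
    G = np.fft.fft2(blurred)                # spectrum of the degraded image
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# Toy example: blur a synthetic image with a 3x3 box PSF (circular
# convolution via FFT), then restore it.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
restored = wiener_deconvolve(blurred, psf, K=1e-3)

mse_blur = np.mean((blurred - image) ** 2)
mse_rest = np.mean((restored - image) ** 2)
```

With noise-free synthetic blur and a small K, the restored image recovers most frequencies; larger K values trade sharpness for robustness to noise, which is the tuning the MSE/PSNR comparison above quantifies.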
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20analysis" title=" motion analysis"> motion analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title=" motion detection"> motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20estimation" title=" motion estimation"> motion estimation</a> </p> <a href="https://publications.waset.org/abstracts/59485/detection-of-image-blur-and-its-restoration-for-image-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">287</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=7">7</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=82">82</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=83">83</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20capture&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button 
type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>