<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: motion tracking</title> <meta name="description" content="Search results for: motion tracking"> <meta name="keywords" content="motion tracking"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" 
alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="motion tracking" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> 
<div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="motion tracking"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2132</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: motion tracking</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2132</span> Object Tracking in Motion Blurred Images with Adaptive Mean Shift and Wavelet Feature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iman%20Iraei">Iman Iraei</a>, <a href="https://publications.waset.org/abstracts/search?q=Mina%20Sharifi"> Mina Sharifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A method for object tracking in motion-blurred images is proposed in this article, and we show that object tracking can be improved with this approach. We use the mean shift algorithm as the main tracker to follow different objects. The problem, however, is that mean shift cannot track the selected object accurately in blurred scenes.
So, to obtain a better tracking result and increase tracking accuracy, the wavelet transform is used. We use a feature called blur extent, which helps us achieve better tracking results; it is calculated with the Haar wavelet. This feature can be viewed from two angles: it determines both whether an image is blurred and to what extent. In fact, the feature adjusts the covariance matrix of the mean shift algorithm, leading to better tracking performance. The method concentrates mostly on the motion blur parameter. The results demonstrate the ability of our method to track more accurately. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mean%20shift" title="mean shift">mean shift</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=blur%20extent" title=" blur extent"> blur extent</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20transform" title=" wavelet transform"> wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20blur" title=" motion blur"> motion blur</a> </p> <a href="https://publications.waset.org/abstracts/81408/object-tracking-in-motion-blurred-images-with-adaptive-mean-shift-and-wavelet-feature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2131</span> Motion Planning of SCARA Robots for Trajectory Tracking</h5> <div
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Giovanni%20Incerti">Giovanni Incerti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents a method for simple and immediate motion planning of a SCARA robot whose end-effector has to move along a given trajectory. The calculation procedure requires the user to define the trajectory to be followed, in analytical form or by points, and to assign the curvilinear abscissa as a function of time. On the basis of the geometrical characteristics of the robot, a specifically developed program determines the motion laws of the actuators that enable the robot to generate the required movement. This software can be used in all industrial applications in which a SCARA robot has to be reprogrammed frequently to generate various types of trajectories with different motion times. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20planning" title="motion planning">motion planning</a>, <a href="https://publications.waset.org/abstracts/search?q=SCARA%20robot" title=" SCARA robot"> SCARA robot</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20tracking" title=" trajectory tracking"> trajectory tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=analytical%20form" title=" analytical form"> analytical form</a> </p> <a href="https://publications.waset.org/abstracts/19726/motion-planning-of-scara-robots-for-trajectory-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19726.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span
class="badge badge-info">2130</span> Human Motion Capture: New Innovations in the Field of Computer Vision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Najm%20Alotaibi">Najm Alotaibi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human motion capture has become one of the major areas of interest in the field of computer vision. Among the major application areas that have been evolving rapidly are advanced human interfaces, virtual reality, and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which analyzes human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer-vision-based techniques of human motion capture according to a taxonomy and then breaks them down into four systematically different categories: tracking, initialization, pose estimation, and recognition. Detailed descriptions, and descriptions of the relationships between them, are given for the tracking and pose estimation techniques, and the subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20capture" title="human motion capture">human motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=vision-based" title=" vision-based"> vision-based</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/22770/human-motion-capture-new-innovations-in-the-field-of-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22770.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2129</span> Stereo Motion Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yudhajit%20Datta">Yudhajit Datta</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamsi%20Iyer"> Hamsi Iyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Bandi"> Jonathan Bandi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sethia"> Ankit Sethia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software packages that combine the two approaches to perform stereo motion tracking typically employ complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches.
The study explores a strategy that combines two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection using stereo vision. In conventional approaches, objects in the scene of interest are observed with a single camera. For stereo motion tracking, however, the scene is observed through video feeds from two calibrated cameras. From two simultaneous measurements, the depth of an object from the plane containing the cameras is computed. The approach attempts to capture the full three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the camera plane and updates the object's perpendicular distance from that plane as its depth. The ability to efficiently track the motion of objects in three-dimensional space with a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in high-security scenes such as the premises of bank vaults, prisons, or other detention facilities, as well as in low-cost settings such as supermarkets and car parking lots.
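The depth calculation described above follows the standard pinhole relation for rectified stereo, Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity between the two views. A minimal sketch of that step (the calibration numbers are illustrative assumptions, not values from the study):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched point seen by two rectified cameras,
    via the pinhole-stereo relation Z = f * B / d."""
    disparity = x_left - x_right  # horizontal pixel offset between views
    if disparity <= 0:
        raise ValueError("point must lie in front of the cameras (d > 0)")
    return focal_px * baseline_m / disparity

# Hypothetical calibration: 700 px focal length, 0.12 m baseline.
z = depth_from_disparity(x_left=420.0, x_right=385.0,
                         focal_px=700.0, baseline_m=0.12)
print(round(z, 2))  # depth in metres
```

A larger disparity between the two views means the object is closer to the camera plane, which is why the estimator can maintain depth alongside the in-plane Kalman track.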
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=kalman%20filter" title="kalman filter">kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20vision" title=" stereo vision"> stereo vision</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title=" motion tracking"> motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=matlab" title=" matlab"> matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title=" camera calibration"> camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision%20system%20toolbox" title=" computer vision system toolbox "> computer vision system toolbox </a> </p> <a href="https://publications.waset.org/abstracts/18999/stereo-motion-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18999.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">327</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2128</span> Specified Human Motion Recognition and Unknown Hand-Held Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinsiang%20Shaw">Jinsiang Shaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Pik-Hoe%20Chen"> Pik-Hoe Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model 
for motion recognition or the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine whether a user’s hand is approaching or departing from an object. It uses a background subtraction method to separate the human and the object from the background, and employs behavior features to interpret human object-grabbing actions effectively. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object’s trajectory can be recorded and the object itself can be located. This technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance and thus prevent theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automatic%20Tracking" title="Automatic Tracking">Automatic Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Back%20Projection" title=" Back Projection"> Back Projection</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Recognition" title=" Motion Recognition"> Motion Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoplifting" title=" Shoplifting"> Shoplifting</a> </p> <a href="https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5
class="card-header" style="font-size:.9rem"><span class="badge badge-info">2127</span> Motion-Based Detection and Tracking of Multiple Pedestrians</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Harras">A. Harras</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tsuji"> A. Tsuji</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tracking of moving people has gained great importance due to rapid technological advancements in the field of computer vision. The objective of this study is to design a motion-based method for detecting and tracking multiple pedestrians walking randomly in different directions. In our proposed method, a Gaussian mixture model (GMM) is used to detect moving persons in image sequences; it adapts to changes in the scene such as varying illumination and objects that start and stop frequently. Background noise in the scene is eliminated by applying morphological operations, and the motion of tracked people is estimated with a Kalman filter. The Kalman filter predicts the tracked location in each frame and determines the likelihood of each detection. We evaluated the method on a benchmark data set recorded by a stationary side-wall camera. The scenes in the data set are taken on a street with up to eight people in front of the camera, in two different scenes lasting 53 and 35 seconds, respectively. For pedestrians walking in close proximity, the proposed method achieved a detection ratio of 87% and a tracking ratio of 77%. When the pedestrians are separated from each other, the detection ratio increases to 90% and the tracking ratio to 79%.
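The Kalman predict/update cycle used for the tracked pedestrians can be sketched as follows; the constant-velocity state model and the noise covariances here are illustrative assumptions, not the values used in the study:

```python
import numpy as np

# Constant-velocity Kalman filter over image coordinates, sketching the
# predict/update cycle described above. State is [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # we observe position only
Q = 0.01 * np.eye(4)                         # process noise
R = 1.0 * np.eye(2)                          # measurement noise

def predict(x, P):
    """Project the state and its covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a detected position z."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.array([0., 0., 1., 0.]), np.eye(4)
x, P = predict(x, P)                         # predicted position: (1, 0)
x, P = update(x, P, np.array([1.2, 0.]))     # pull toward the detection
```

The innovation covariance S also yields the per-detection likelihood the abstract mentions, which is how detections from the GMM foreground mask can be assigned to existing tracks.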
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20detection" title="automatic detection">automatic detection</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=pedestrians" title=" pedestrians"> pedestrians</a>, <a href="https://publications.waset.org/abstracts/search?q=counting" title=" counting"> counting</a> </p> <a href="https://publications.waset.org/abstracts/82912/motion-based-detection-and-tracking-of-multiple-pedestrians" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">257</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2126</span> Object Trajectory Extraction by Using Mean of Motion Vectors Form Compressed Video Bitstream</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ching-Ting%20Hsu">Ching-Ting Hsu</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hua%20Ho"> Wei-Hua Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Yi-Chun%20Chang"> Yi-Chun Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video object tracking is a popular research topic in the computer graphics area. Trajectories can be applied in security, traffic control, and even sports training, where they can be used to analyze an athlete’s performance without traditional sensors. There are many related works that utilize the mean shift algorithm with background subtraction.
These schemes require selecting a kernel function, which may affect accuracy and performance. In this paper, we instead consider the motion information already present in the pre-coded bitstream. The proposed algorithm extracts the trajectory by composing the motion vectors from the pre-coded bitstream: we gather the motion vectors in the area overlapping the object and calculate the mean of these overlapped motion vectors. We implement and simulate the proposed algorithm in the H.264 video codec. The performance is better than that of related works while the accuracy of the object trajectory is preserved. The experimental results show that the proposed method extracts trajectories from the pre-coded bitstream with high accuracy and achieves higher performance than other related works. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.264" title="H.264">H.264</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20bitstream" title=" video bitstream"> video bitstream</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20object%20tracking" title=" video object tracking"> video object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=sports%20training" title=" sports training"> sports training</a> </p> <a href="https://publications.waset.org/abstracts/34740/object-trajectory-extraction-by-using-mean-of-motion-vectors-form-compressed-video-bitstream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">428</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2125</span> Iterative Linear Quadratic Regulator (iLQR) vs LQR Controllers for Quadrotor Path Tracking</h5> <div
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wesam%20Jasim">Wesam Jasim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dongbing%20Gu"> Dongbing Gu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an iterative linear quadratic regulator (iLQR) optimal control technique to solve the quadrotor path-tracking problem. The dynamic motion equations are represented in a unit-quaternion formulation and include some modelled aerodynamic effects as a nonlinear part. Simulation results prove the ability and effectiveness of iLQR to stabilize the quadrotor and successfully track different paths; they also show that the iLQR controller outperforms the LQR controller in convergence speed and tracking error. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iLQR%20controller" title="iLQR controller">iLQR controller</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20control" title=" optimal control"> optimal control</a>, <a href="https://publications.waset.org/abstracts/search?q=path%20tracking" title=" path tracking"> path tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=quadrotor%20UAVs" title=" quadrotor UAVs"> quadrotor UAVs</a> </p> <a href="https://publications.waset.org/abstracts/51436/iterative-linear-quadratic-regulator-ilqr-vs-lqr-controllers-for-quadrotor-path-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2124</span> Vision Based People Tracking System</h5>
<div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boukerch%20Haroun">Boukerch Haroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Luo%20Qing%20Sheng"> Luo Qing Sheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Hua%20Shi"> Li Hua Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Boukraa%20Sebti"> Boukraa Sebti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present the design and implementation of a target tracking system where the target is a moving person in a video sequence. The system can easily be applied as a vision system for a mobile robot. It is composed of two major parts: the first is the detection of the person in the video frame using an SVM learning machine based on &ldquo;HOG&rdquo; descriptors; the second is the tracking of the moving person, done with a combination of the Kalman filter and a modified version of the Camshift tracking algorithm that adds a target motion feature to the color feature. The experimental results show that the new algorithm outperforms the traditional Camshift algorithm in robustness and in the case of occlusion.
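The color-feature half of a Camshift-style tracker rests on histogram back projection: each pixel is scored by the probability of its hue under the target's hue histogram, and the tracker shifts toward the region of highest score. A minimal numpy sketch (the 8-bin quantization and the tiny synthetic "scene" are purely illustrative, not the authors' implementation):

```python
import numpy as np

def back_project(hue_img, target_hist, bins=8, hue_max=180):
    """Score every pixel by the target-histogram probability of its hue."""
    edges = np.linspace(0, hue_max, bins + 1)
    # Map each hue to its histogram bin, then look up the bin's weight.
    idx = np.clip(np.digitize(hue_img, edges) - 1, 0, bins - 1)
    return target_hist[idx]

# Build the target model from a patch that is mostly hue ~ 30.
target_patch = np.full((4, 4), 30)
hist, _ = np.histogram(target_patch, bins=8, range=(0, 180))
hist = hist / hist.sum()

scene = np.array([[30, 100],
                  [30, 170]])
scores = back_project(scene, hist)   # high where the scene matches the target
```

Adding a motion feature, as the abstract describes, amounts to weighting this color score with a motion cue so the tracker does not drift onto a static, similarly colored background region.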
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camshift%20algorithm" title="camshift algorithm">camshift algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a> </p> <a href="https://publications.waset.org/abstracts/2264/vision-based-people-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2123</span> Automated Tracking and Statistics of Vehicles at the Signalized Intersection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Zhang">Qiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojian%20Hu1"> Xiaojian Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An intersection is a place where vehicles and pedestrians must pass through, turn, and disperse. Obtaining the motion data of vehicles near an intersection is of great significance for transportation research. Since there are usually many targets and frequent conflicts between them, it is difficult to obtain vehicle motion parameters from traffic videos of intersections.
Exploiting the characteristics of traffic videos, this paper applies video-analysis techniques to automatically track and count vehicles and extract their trajectories from roadside surveillance cameras installed near intersections. Based on video recognition, the vehicles in each lane near the intersection are tracked, their trajectories are extracted, and they are counted under various degrees of occlusion and visibility. The performance is compared with recognized CPU-based algorithms for real-time tracking-by-detection. The presented system is faster than the others and has better real-time performance. The accuracy of direction estimation reaches about 94.99% on average, and the accuracy of classification and statistics reaches about 75.12% on average. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tracking%20and%20statistics" title="tracking and statistics">tracking and statistics</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle" title=" vehicle"> vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=signalized%20intersection" title=" signalized intersection"> signalized intersection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20parameter" title=" motion parameter"> motion parameter</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory" title=" trajectory"> trajectory</a> </p> <a href="https://publications.waset.org/abstracts/136436/automated-tracking-and-statistics-of-vehicles-at-the-signalized-intersection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5
class="card-header" style="font-size:.9rem"><span class="badge badge-info">2122</span> Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Fang">Cheng Fang</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingwei%20Quan"> Lingwei Quan</a>, <a href="https://publications.waset.org/abstracts/search?q=Cunyue%20Lu"> Cunyue Lu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human pose estimation and tracking aim to accurately identify and locate the positions of human joints in video. This computer vision task is of great significance for human motion recognition, behavior understanding, and scene analysis. There has been remarkable progress on human pose estimation in recent years; however, more research is needed on human pose tracking, especially online tracking. In this paper, a framework called PoseSRPN is proposed for online single-person pose estimation and tracking. We use a Siamese network with an attached pose estimation branch to incorporate single-person pose tracking (SPT) and visual object tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling-and-convolution structure with deconvolution. By augmenting the loss of the fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN relies only on a single bounding-box initialization to produce human joint locations. The experimental results show that, while maintaining good pose estimation accuracy on the COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frames/s, superior to other pose tracking frameworks.
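The deconvolution (transposed convolution) upsampling mentioned above follows the standard size rule out = (in − 1)·stride − 2·pad + kernel. A small sketch with assumed layer parameters (the actual PoseSRPN dimensions are not stated here):

```python
def deconv_out(size_in, kernel, stride, padding, output_padding=0):
    """Spatial output size of a transposed convolution layer."""
    return (size_in - 1) * stride - 2 * padding + kernel + output_padding

# Three stride-2 deconv layers double an 8x8 feature map each time.
size = 8
for _ in range(3):
    size = deconv_out(size, kernel=4, stride=2, padding=1)
print(size)  # 64
```

Stacked deconvolutions like this let the pose branch recover a joint-heatmap resolution directly from the Siamese backbone features, avoiding a separate upsample-then-convolve stage.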
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20tracking" title=" pose tracking"> pose tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/112839/online-pose-estimation-and-tracking-approach-with-siamese-region-proposal-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2121</span> Tracking Filtering Algorithm Based on ConvLSTM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ailing%20Yang">Ailing Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Penghan%20Song"> Penghan Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Aihua%20Cai"> Aihua Cai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The nonlinear maneuvering target tracking problem is mainly a state estimation problem when the target motion model is uncertain. Traditional solutions include Kalman filtering based on Bayesian filtering framework and extended Kalman filtering. 
However, these methods require prior knowledge such as the kinematic model and the distribution of the state system, and they perform poorly when estimating the states of complex dynamic systems for which no priors are available. Therefore, in view of the problems with traditional algorithms, a convolutional LSTM target state estimation (SAConvLSTM-SE) algorithm based on Self-Attention Memory (SAM) is proposed to learn the historical motion state of the target and the error distribution of the measurements at the current time. Measured track-point data from airborne radar are processed into data sets. After supervised training, the data-driven deep neural network based on SAConvLSTM can directly obtain the target state at the next moment. Through experiments on two different maneuvering targets, we find that the network has stronger robustness and better tracking accuracy than existing tracking methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=maneuvering%20target" title="maneuvering target">maneuvering target</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20estimation" title=" state estimation"> state estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a> </p> <a href="https://publications.waset.org/abstracts/164893/tracking-filtering-algorithm-based-on-convlstm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164893.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header"
style="font-size:.9rem"><span class="badge badge-info">2120</span> A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mulugeta%20K.%20Tefera">Mulugeta K. Tefera</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaolong%20Yang"> Xiaolong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Liu"> Jian Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection and tracking of moving objects are very important in many application contexts, such as the detection and recognition of people, visual surveillance, and the automatic generation of video effects. However, detecting the real shape of an object in motion becomes tricky due to various challenges such as dynamic scene changes, the presence of shadows, and illumination variations caused by light switching. For such systems, once the moving object is detected, tracking is also a crucial step for applications used in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive mixture-of-Gaussians analysis of the video sequences. The detected moving object is then tracked using region-based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target through enhancement and post-processing morphological operations. Secondly, region-based moving object tracking and inter-frame differencing are applied to the extracted object to improve the tracking speed of real-time moving objects in different video frames.
Finally, a plotting method was applied to display the detected moving objects and describe the motion being tracked. Experiments were performed on image sequences acquired in both indoor and outdoor environments, using one stationary camera and one web camera. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20modeling" title="background modeling">background modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-frame%20difference" title=" inter-frame difference"> inter-frame difference</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20and%20tracking" title=" object detection and tracking"> object detection and tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2119</span> Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuan-Hsiang%20Chang">Yuan-Hsiang Chang</a>, <a
href="https://publications.waset.org/abstracts/search?q=Pin-Chi%20Lin"> Pin-Chi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Li-Der%20Jeng"> Li-Der Jeng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Advance in techniques of image and video processing has enabled the development of intelligent video surveillance systems. This study was aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. The adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, the motion trajectory analysis was developed to distinguish between the interaction and non-interaction events based on derivatives of trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system could achieve the classification accuracy of 80% in interaction events and 95% in non-interaction events, respectively. In summary, we have explored the idea to investigate a system for the automatic classification of events for interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated in an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatch, fighting, etc.). 
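The occlusion-bridging step described above can be sketched with a constant-velocity Kalman filter: when a detection is missing, only the prediction step runs, so each object keeps a complete trajectory. The motion model and noise values below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over 2D positions; `None` entries mark
    occluded frames where only the prediction step runs."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                      # x += vx*dt, y += vy*dt
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])            # we observe position only
    Q = q * np.eye(4)                           # process noise (assumed)
    R = r * np.eye(2)                           # measurement noise (assumed)
    x = np.zeros(4)
    P = np.eye(4) * 10.0
    track = []
    for z in measurements:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        if z is not None:                       # update only when visible
            y = np.asarray(z, float) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

For a target moving on a straight line, the filter coasts through a five-frame occlusion on its estimated velocity and re-locks once detections resume.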
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20detection" title="motion detection">motion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title=" motion tracking"> motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20analysis" title=" trajectory analysis"> trajectory analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/13650/automatic-motion-trajectory-analysis-for-dual-human-interaction-using-video-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">548</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2118</span> Two Degree of Freedom Spherical Mechanism Design for Exact Sun Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Osman%20Acar">Osman Acar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sun tracking systems follow the sun's rays at a right angle or at a predetermined angle. In this study, we used the theoretical trajectory of the sun at the latitude of central Anatolia in Turkey. A two-degree-of-freedom spherical mechanism was designed with a workspace large enough to follow the sun's theoretical motion at a right angle throughout the whole year. An inverse kinematic analysis was performed to find the positions of the mechanism links for the predicted trajectory.
Force and torque analyses are presented for the first day of the year. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sun%20tracking" title="sun tracking">sun tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=theoretical%20sun%20trajectory" title=" theoretical sun trajectory"> theoretical sun trajectory</a>, <a href="https://publications.waset.org/abstracts/search?q=spherical%20mechanism" title=" spherical mechanism"> spherical mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20kinematic%20analysis" title=" inverse kinematic analysis"> inverse kinematic analysis</a> </p> <a href="https://publications.waset.org/abstracts/37062/two-degree-of-freedom-spherical-mechanism-design-for-exact-sun-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37062.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2117</span> Augmented ADRC for Trajectory Tracking of a Novel Hydraulic Spherical Motion Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bin%20Bian">Bin Bian</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Wang"> Liang Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A hydraulic spherical motion mechanism (HSMM) is proposed. Unlike traditional systems using serial or parallel mechanisms for multi-DOF rotations, the HSMM is capable of implementing continuous 2-DOF rotational motions in a single joint without intermediate transmission mechanisms.
It has the advantages of a compact structure, low inertia, and high stiffness. However, as the HSMM is a nonlinear multivariable system, accurate control is complicated to realize. Therefore, an augmented active disturbance rejection controller (ADRC) is proposed in this paper. Compared with the traditional PD control method, three compensation terms, i.e., a dynamics compensation term, a disturbance compensation term, and a nonlinear error elimination term, are added to the proposed algorithm to improve the control performance. The ADRC algorithm aims at offsetting the effects of external disturbances and realizing accurate control. Euler angles are used to describe the orientation of the rotor. Lagrange equations are utilized to establish the dynamic model of the HSMM. The stability of the algorithm is validated with a detailed derivation. A simulation model is formulated in Matlab/Simulink. The results show that the proposed control algorithm achieves better trajectory tracking in the presence of uncertainties.
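The disturbance-compensation idea behind ADRC can be illustrated on a much simpler plant than the HSMM: a first-order system with an unknown constant disturbance, where a linear extended state observer (ESO) estimates the total disturbance and the control law cancels it. The plant, gains, and time step below are illustrative assumptions, not the paper's controller.

```python
def simulate_adrc(r=1.0, d=0.8, dt=1e-3, steps=10_000, wo=20.0, kp=5.0):
    """First-order plant y' = u + d with an unknown constant disturbance d.
    A linear ESO estimates z1 ~= y and the total disturbance z2 ~= d;
    the control law cancels z2 (the core ADRC idea)."""
    l1, l2 = 2 * wo, wo ** 2           # observer gains: both poles at -wo
    y = z1 = z2 = 0.0
    u = 0.0
    for _ in range(steps):
        e = y - z1
        z1 += dt * (z2 + u + l1 * e)   # ESO state estimate
        z2 += dt * (l2 * e)            # ESO disturbance estimate
        u = kp * (r - z1) - z2         # feedback plus disturbance rejection
        y += dt * (u + d)              # plant, Euler integration
    return y, z2
```

After ten simulated seconds the output settles on the setpoint and the observer's second state has converged to the true disturbance, which is exactly the "offsetting the effects of external disturbance" behavior the abstract describes.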
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hydraulic%20spherical%20motion%20mechanism" title="hydraulic spherical motion mechanism">hydraulic spherical motion mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20model" title=" dynamic model"> dynamic model</a>, <a href="https://publications.waset.org/abstracts/search?q=active%20disturbance%20rejection%20control" title=" active disturbance rejection control"> active disturbance rejection control</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20tracking" title=" trajectory tracking"> trajectory tracking</a> </p> <a href="https://publications.waset.org/abstracts/126959/augmented-adrc-for-trajectory-tracking-of-a-novel-hydraulic-spherical-motion-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126959.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">105</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2116</span> Adaptive Online Object Tracking via Positive and Negative Models Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li">Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang"> Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To mitigate the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper.
Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both the positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20drift" title=" tracking drift"> tracking drift</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20squares%20analysis" title=" partial least squares analysis"> partial least squares analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20and%20negative%20models%20matching" title=" positive and negative models matching"> positive and negative models matching</a> </p> <a href="https://publications.waset.org/abstracts/19382/adaptive-online-object-tracking-via-positive-and-negative-models-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2115</span> Lateral Control of Electric Vehicle Based on Fuzzy Logic Control</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hartani%20Kada">Hartani Kada</a>, <a href="https://publications.waset.org/abstracts/search?q=Merah%20Abdelkader"> Merah Abdelkader</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aiming at the high nonlinearities and unmatched uncertainties of the dynamic system of intelligent electric vehicles, this paper presents a lateral motion control algorithm for intelligent electric vehicles with four in-wheel motors. A fuzzy logic procedure is presented and formulated to realize lateral control during lane changes. The vehicle dynamics model and a desired target tracking model are established in this paper. A fuzzy logic controller was designed for integrated active front steering (AFS) and direct yaw moment control (DYC) in order to improve vehicle handling performance and stability, together with a fuzzy controller for the automatic steering problem. The simulation results demonstrate the strong robustness and excellent tracking performance of the proposed control algorithm.
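A Mamdani-style fuzzy step like the one described above can be sketched in a few lines: triangular membership functions fuzzify the lateral error, a three-rule base maps it to a steering command, and a weighted average over singleton consequents defuzzifies. The universes, rules, and output angles are illustrative assumptions, not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(error):
    """Map lateral error (m) to a steering command (rad) with three rules:
    error Negative -> steer Left, Zero -> Straight, Positive -> Right."""
    mu = {
        "neg": tri(error, -2.0, -1.0, 0.0),
        "zero": tri(error, -1.0, 0.0, 1.0),
        "pos": tri(error, 0.0, 1.0, 2.0),
    }
    # singleton consequents (rad), combined by weighted-average defuzzification
    out = {"neg": 0.3, "zero": 0.0, "pos": -0.3}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0
```

An error of zero yields zero steering, and intermediate errors blend smoothly between neighbouring rules, which is the main practical appeal of fuzzy control over a lookup table.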
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic" title="fuzzy logic">fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=lateral%20control" title=" lateral control"> lateral control</a>, <a href="https://publications.waset.org/abstracts/search?q=AFS" title=" AFS"> AFS</a>, <a href="https://publications.waset.org/abstracts/search?q=DYC" title=" DYC"> DYC</a>, <a href="https://publications.waset.org/abstracts/search?q=electric%20car%20technology" title=" electric car technology"> electric car technology</a>, <a href="https://publications.waset.org/abstracts/search?q=longitudinal%20control" title=" longitudinal control"> longitudinal control</a>, <a href="https://publications.waset.org/abstracts/search?q=lateral%20motion" title=" lateral motion"> lateral motion</a> </p> <a href="https://publications.waset.org/abstracts/14474/lateral-control-of-electric-vehicle-based-on-fuzzy-logic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2114</span> Assessment of Kinetic Trajectory of the Median Nerve from Wrist Ultrasound Images Using Two Dimensional Baysian Speckle Tracking Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Li-Kai%20Kuo">Li-Kai Kuo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shyh-Hau%20Wang"> Shyh-Hau Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The kinetic trajectory of the median nerve (MN) in the wrist has been shown to be applicable to assessing carpal tunnel syndrome (CTS) and can be detected in high-frequency ultrasound images via motion tracking techniques. Yet, the previous study could not perform the measurement quickly because it used a single-element transducer for ultrasound image scanning; that system is therefore not appropriate for clinical application. In the present study, B-mode ultrasound images of the wrist corresponding to movements of the fingers from flexion to extension were acquired with a clinically applicable real-time scanner. The kinetic trajectories of the MN were estimated off-line utilizing a two-dimensional Bayesian speckle tracking (TDBST) technique. The experiments were carried out on ten volunteers with an ultrasound scanner at a 12 MHz frequency. Phantom experiments verified that the TDBST technique is able to detect the movement of the MN based on past and present signal information and thereby reduce the computational complications associated with image-quality effects such as resolution and contrast variations. Moreover, the TDBST technique tended to be more accurate than the normalized cross-correlation tracking (NCCT) technique used in the previous study at detecting movements of the MN in the wrist. In response to finger flexion, the kinetic trajectory of the MN moved in the ulnar-palmar direction, and then in the radial-dorsal direction during extension. The TDBST technique and the employed ultrasound scanner were verified to be feasible for sensitively detecting the kinetic trajectory and displacement of the MN. They could thus be further applied to diagnose CTS clinically and to improve measurements for assessing the 3D trajectory of the MN.
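The normalized cross-correlation tracking (NCCT) baseline that TDBST is compared against can be sketched as exhaustive block matching with zero-mean NCC; the Bayesian variant additionally weights candidates by priors from past frames, which is omitted here. Patch and search sizes are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_patch(prev, curr, top, left, size=16, search=8):
    """Find a patch's inter-frame displacement by exhaustive NCC search."""
    ref = prev[top:top + size, left:left + size]
    best, best_dy, best_dx = -2.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue  # candidate window falls outside the frame
            score = ncc(ref, curr[y:y + size, x:x + size])
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx
```

Repeating this per frame for a patch centered on the nerve yields the kind of frame-to-frame displacement sequence from which a kinetic trajectory is accumulated.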
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=baysian%20speckle%20tracking" title="baysian speckle tracking">baysian speckle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=carpal%20tunnel%20syndrome" title=" carpal tunnel syndrome"> carpal tunnel syndrome</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20nerve" title=" median nerve"> median nerve</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title=" motion tracking"> motion tracking</a> </p> <a href="https://publications.waset.org/abstracts/28816/assessment-of-kinetic-trajectory-of-the-median-nerve-from-wrist-ultrasound-images-using-two-dimensional-baysian-speckle-tracking-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">495</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2113</span> Deep Learning Based Fall Detection Using Simplified Human Posture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kripesh%20Adhikari">Kripesh Adhikari</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Bouchachia"> Hamid Bouchachia</a>, <a href="https://publications.waset.org/abstracts/search?q=Hammadi%20Nait-Charif"> Hammadi Nait-Charif</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Falls are one of the major causes of injury and death among elderly people aged 65 and above. A support system that identifies such abnormal activities has become extremely important with the increase in the ageing population.
Pose estimation is a challenging task, and it becomes even more challenging when it must be performed on the unusual poses that may occur during a fall. The location of the body provides a clue to where the person is at the time of the fall. This paper presents a vision-based tracking strategy in which the available joints are grouped into three feature points depending on the section of the body in which they are located. The three feature points derived from different joint combinations represent the upper or head region, the mid-region or torso, and the lower or leg region. Tracking is always challenging when motion is involved; hence, the idea is to locate these regions of the body in every frame and treat that as the tracking strategy. Grouping the joints in this way helps achieve a stable region for tracking. The location of the body parts provides crucial information for distinguishing normal activities from falls. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fall%20detection" title="fall detection">fall detection</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/104451/deep-learning-based-fall-detection-using-simplified-human-posture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge
badge-light">189</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2112</span> ISME: Integrated Style Motion Editor for 3D Humanoid Character</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ismahafezi%20Ismail">Ismahafezi Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Shahrizal%20Sunar"> Mohd Shahrizal Sunar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The motion of a realistic 3D humanoid character is very important, especially for industries developing computer animation and games. However, this type of motion involves very complex high-dimensional data describing body position, orientation, and joint rotation. Integrated Style Motion Editor (ISME), on the other hand, is a method used to alter the 3D humanoid motion capture data utilised in computer animation and games development. This study therefore demonstrates a method that can manipulate and deform different motion styles by integrating the Key Pose Deformation Technique and the Trajectory Control Technique. This motion editing method allows the user to generate new motions from the original motion capture data using a simple interface control. Unlike previous methods, our method produces a realistic humanoid motion style in real time.
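One way to picture key-pose deformation is as blending a per-joint offset into a motion clip around a chosen keyframe, with a smooth falloff so neighbouring frames deform consistently. The representation below (a flat array of joint angles and a Gaussian falloff) is a hypothetical simplification for illustration, not ISME's actual data model.

```python
import numpy as np

def edit_motion(frames, key_offset, key_frame, falloff=10.0):
    """Blend a per-joint angle offset (a 'key pose deformation') into a motion
    clip, with a Gaussian falloff so nearby frames deform smoothly."""
    frames = np.asarray(frames, float)          # shape (n_frames, n_joints)
    t = np.arange(len(frames))
    w = np.exp(-0.5 * ((t - key_frame) / falloff) ** 2)  # per-frame blend weight
    return frames + w[:, None] * np.asarray(key_offset, float)
```

The edit is strongest at the keyframe and fades toward the clip's ends, so the original capture data is preserved away from the user's change.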
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20animation" title="computer animation">computer animation</a>, <a href="https://publications.waset.org/abstracts/search?q=humanoid%20motion" title=" humanoid motion"> humanoid motion</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20editing" title=" motion editing"> motion editing</a> </p> <a href="https://publications.waset.org/abstracts/54401/isme-integrated-style-motion-editor-for-3d-humanoid-character" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54401.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2111</span> High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jeronimo%20Cox">Jeronimo Cox</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomonari%20Furukawa"> Tomonari Furukawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Magnetometers have become more popular in inertial measurement units (IMU) for their ability to correct estimations using the earth's magnetic field. Accelerometer and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. 
With high-speed motions, the accumulated error grows over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability. Visual obstruction of the motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as an offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to the distortion sources. Soft iron distortion is related more to the scaling of the axes of the magnetometer sensors. Hard iron distortion is the larger contributor to attitude estimation error with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate with areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps. Mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system.
Conventional calibration by collecting data while rotating at a static point, real-time estimation of the calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than being assumed constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to induce rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title="motion tracking">motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20fusion" title=" sensor fusion"> sensor fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetometer" title=" magnetometer"> magnetometer</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20estimation" title=" state estimation"> state estimation</a> </p> <a href="https://publications.waset.org/abstracts/161291/high-speed-motion-tracking-with-magnetometer-in-nonuniform-magnetic-field" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161291.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">2110</span> Visual Servoing for Quadrotor UAV Target Tracking: Effects of Target Information Sharing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jason%20R.%20King">Jason R. King</a>, <a href="https://publications.waset.org/abstracts/search?q=Hugh%20H.%20T.%20Liu"> Hugh H. T. Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research presents simulation and experimental work in the visual servoing of a quadrotor Unmanned Aerial Vehicle (UAV) to stabilize overtop of a moving target. Most previous work in the field assumes static or slow-moving, unpredictable targets. In this experiment, the target is assumed to be a friendly ground robot moving freely on a horizontal plane, which shares information with the UAV. This information includes the velocity and acceleration of the ground target to aid the quadrotor in its tracking task. The quadrotor is assumed to have a downward-facing camera which is fixed to the frame of the quadrotor. Only onboard sensing for the quadrotor is utilized for the experiment, with a VICON motion capture system in place used only to measure ground truth and evaluate the performance of the controller. The experimental platform consists of an ArDrone 2.0 and a Create Roomba, communicating using the Robot Operating System (ROS). The addition of the target’s information is demonstrated to help the quadrotor in its tracking task using simulations of the dynamic model of a quadrotor in MATLAB Simulink. A nested PID control loop is utilized for inner-loop control of the quadrotor, similar to previous works at the Flight Systems and Controls Laboratory (FSC) at the University of Toronto Institute for Aerospace Studies (UTIAS). Experiments are performed with ground truth provided by an indoor motion capture system, and the results are analyzed. 
It is demonstrated that a velocity controller which incorporates the additional information is able to perform better than the controllers which do not have access to the target’s information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quadrotor" title="quadrotor">quadrotor</a>, <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title=" target tracking"> target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV" title=" UAV"> UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=UAS" title=" UAS"> UAS</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20servoing" title=" visual servoing"> visual servoing</a> </p> <a href="https://publications.waset.org/abstracts/56269/visual-servoing-for-quadrotor-uav-target-tracking-effects-of-target-information-sharing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56269.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">341</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2109</span> Classification of Equations of Motion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amritpal%20Singh%20Nafria">Amritpal Singh Nafria</a>, <a href="https://publications.waset.org/abstracts/search?q=Rohit%20Sharma"> Rohit Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=Md.%20Shami%20Ansari"> Md. 
Shami Ansari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Up to now, only five different equations of motion can be derived from a velocity-time graph without needing to know the normal and frictional forces acting at the point of contact. In this paper, we obtain all the requisite conditions for considering an equation to be an equation of motion. We then classify the equations of motion, treating two of them as fundamental kinematical equations of motion and the other three as additional kinematical equations of motion. After deriving these five equations of motion, we examine the easiest way of solving a wide variety of useful numerical problems. At the end of the paper, we discuss the importance and educational benefits of this classification of the equations of motion. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=velocity-time%20graph" title="velocity-time graph">velocity-time graph</a>, <a href="https://publications.waset.org/abstracts/search?q=fundamental%20equations" title=" fundamental equations"> fundamental equations</a>, <a href="https://publications.waset.org/abstracts/search?q=additional%20equations" title=" additional equations"> additional equations</a>, <a href="https://publications.waset.org/abstracts/search?q=requisite%20conditions" title=" requisite conditions"> requisite conditions</a>, <a href="https://publications.waset.org/abstracts/search?q=importance%20and%20educational%20benefits" title=" importance and educational benefits"> importance and educational benefits</a> </p> <a href="https://publications.waset.org/abstracts/15102/classification-of-equations-of-motion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">787</span> </span> </div> </div> 
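For reference, the five velocity-time-graph equations that the abstract above classifies can be cross-checked numerically. The following is a small illustrative sketch, assuming constant acceleration and SI units; it is not taken from the paper itself:

```python
def kinematics(u, a, t):
    """Constant-acceleration kinematics from a velocity-time graph.

    Given initial velocity u, acceleration a, and time t, returns the
    final velocity v and displacement s, and checks that the remaining
    kinematical equations agree with the two used for the computation.
    """
    v = u + a * t                                        # v = u + at
    s = u * t + 0.5 * a * t ** 2                         # s = ut + at^2/2
    assert abs(v ** 2 - (u ** 2 + 2 * a * s)) < 1e-9     # v^2 = u^2 + 2as
    assert abs(s - 0.5 * (u + v) * t) < 1e-9             # s = (u + v)t/2
    return v, s
```

For example, u = 2 m/s, a = 3 m/s², and t = 4 s give v = 14 m/s and s = 32 m, and all four equations agree.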
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2108</span> Human Tracking across Heterogeneous Systems Based on Mobile Agent Technologies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tappei%20Yotsumoto">Tappei Yotsumoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Atsushi%20Nomura"> Atsushi Nomura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kozo%20Tanigawa"> Kozo Tanigawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenichi%20Takahashi"> Kenichi Takahashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Takao%20Kawamura"> Takao Kawamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazunori%20Sugahara"> Kazunori Sugahara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In a human tracking system, expanding the monitoring range of a single system complicates the management of devices and increases cost. Therefore, we propose a method to realize wide-range human tracking by connecting small systems. In this paper, we examine an agent deployment method and the information contents exchanged across the heterogeneous human tracking systems. By implementing the proposed method, we can construct a human tracking system that spans heterogeneous systems, and the system can track a target continuously between systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20tracking%20system" title="human tracking system">human tracking system</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agent" title=" mobile agent"> mobile agent</a>, <a href="https://publications.waset.org/abstracts/search?q=monitoring" title=" monitoring"> monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=heterogeneous%20systems" title=" heterogeneous systems"> heterogeneous systems</a> </p> <a href="https://publications.waset.org/abstracts/11702/human-tracking-across-heterogeneous-systems-based-on-mobile-agent-technologies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11702.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">536</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2107</span> Adaptive Motion Planning for 6-DOF Robots Based on Trigonometric Functions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jincan%20Li">Jincan Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Mingyu%20Gao"> Mingyu Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhiwei%20He"> Zhiwei He</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuxiang%20Yang"> Yuxiang Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhongfei%20Yu"> Zhongfei Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Liu"> Yuanyuan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Building an appropriate motion model is crucial for trajectory planning of robots and determines the operational 
quality directly. An adaptive acceleration and deceleration motion planning method based on trigonometric functions for the end-effector of 6-DOF robots in the Cartesian coordinate system is proposed in this paper. This method not only achieves smooth translational and rotational motion by constructing a continuous jerk model, but also automatically adjusts the parameters of the trigonometric functions according to the variable inputs and the kinematic constraints. Computer simulation results show that this method is correct and effective in achieving adaptive motion planning for linear trajectories. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=kinematic%20constraints" title="kinematic constraints">kinematic constraints</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20planning" title=" motion planning"> motion planning</a>, <a href="https://publications.waset.org/abstracts/search?q=trigonometric%20function" title=" trigonometric function"> trigonometric function</a>, <a href="https://publications.waset.org/abstracts/search?q=6-DOF%20robots" title=" 6-DOF robots"> 6-DOF robots</a> </p> <a href="https://publications.waset.org/abstracts/87082/adaptive-motion-planning-for-6-dof-robots-based-on-trigonometric-functions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87082.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">271</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2106</span> Hand Motion Tracking as a Human Computer Interaction for People with Cerebral Palsy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ana%20%20Teixeira">Ana 
Teixeira</a>, <a href="https://publications.waset.org/abstracts/search?q=Joao%20Orvalho"> Joao Orvalho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes experiments, carried out by students of the Master in Human Computer Interaction (HCI) at IPC Coimbra, that use Scratch games to check the feasibility of employing the gestures of users with cerebral palsy as an alternative means of interacting with a computer. The main focus of this work is to study the usability of a web camera as a motion tracking device for achieving virtual human-computer interaction by individuals with CP. An approach for human-computer interaction is presented in which individuals with cerebral palsy react to and interact with a Scratch game through the use of a webcam as an external interaction device. Motion tracking interaction is an emerging technology that is becoming more useful, effective and affordable. However, it raises new questions from the HCI viewpoint, for example, which environments are most suitable for interaction by users with disabilities. In our case, we put emphasis on the accessibility and usability aspects of such interaction devices to meet the special needs of people with disabilities, and specifically people with CP. Although our work has just started, preliminary results show that, in general, computer vision interaction systems are very useful; in some cases, these systems are the only way by which some people can interact with a computer. The purpose of the experiments was to verify two hypotheses: 1) people with cerebral palsy can interact with a computer using their natural gestures, 2) Scratch games can be a research tool in experiments with disabled young people. A Scratch game with three levels was created to be played through the use of a webcam. This device permits the detection of certain key points of the user’s body, allowing the head, arms and especially the hands to be treated as the most important features for recognition. 
Tests with five individuals of different ages and genders were conducted over three days, in 30-minute sessions with each participant. For a more extensive and reliable statistical analysis, the number of both participants and repetitions should be increased in further investigations. However, already at this stage of the research, it is possible to draw some conclusions. The first, and most important, is that simple Scratch games on the computer can be a research tool for investigating the interaction with a computer performed by young persons with CP using intentional gestures. Measurements performed with the assistance of games are attractive for young disabled users. The second important conclusion is that they are able to play Scratch games using their gestures; therefore, the proposed interaction method is promising for them as a human-computer interface. In the future, we plan to develop multimodal interfaces that combine various computer vision devices with other input devices, to improve the existing systems to better accommodate the special needs of individuals, and to perform experiments on a larger number of participants. 
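As a rough illustration of the kind of processing behind webcam-based gesture interaction, a minimal frame-differencing motion detector can be sketched as follows. This is a generic NumPy example with an assumed threshold parameter, not the system used in the paper:

```python
import numpy as np

def motion_region(prev_frame, frame, thresh=25):
    """Return the bounding box (x0, y0, x1, y1) of the pixels that changed
    between two greyscale frames, or None if no motion is detected.

    An interaction layer can map this box to on-screen gesture targets."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

In a live system, consecutive camera frames would be fed in; here the bounding box of the changed region stands in for the detected hand position.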
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title="motion tracking">motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=cerebral%20palsy" title=" cerebral palsy"> cerebral palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=rehabilitation" title=" rehabilitation"> rehabilitation</a>, <a href="https://publications.waset.org/abstracts/search?q=HCI" title=" HCI"> HCI</a> </p> <a href="https://publications.waset.org/abstracts/53050/hand-motion-tracking-as-a-human-computer-interation-for-people-with-cerebral-palsy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53050.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">235</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2105</span> Design of a Low Cost Motion Data Acquisition Setup for Mechatronic Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Baris%20Can%20Yalcin">Baris Can Yalcin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion sensors are commonly used as a valuable component in mechatronic systems; however, many mechatronic designs and applications that need motion sensors cost an enormous amount of money, especially high-tech systems. Designing software for the communication protocol between the data acquisition card and the motion sensor is another issue that has to be solved. This study presents how to design a low-cost motion data acquisition setup consisting of an MPU-6050 motion sensor (gyroscope and accelerometer in 3 axes) and an Arduino Mega2560 microcontroller. 
The design parameters are the calibration of the sensor, the identification of and communication between the sensor and the data acquisition card, and the interpretation of the data collected by the sensor. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=design" title="design">design</a>, <a href="https://publications.waset.org/abstracts/search?q=mechatronics" title=" mechatronics"> mechatronics</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20sensor" title=" motion sensor"> motion sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20acquisition" title=" data acquisition"> data acquisition</a> </p> <a href="https://publications.waset.org/abstracts/10243/design-of-a-low-cost-motion-data-acquisition-setup-for-mechatronic-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">588</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2104</span> Development of Application Architecture for RFID Based Indoor Tracking Using Passive RFID Tag</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sumaya%20Ismail">Sumaya Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Aijaz%20Ahmad%20Rehi"> Aijaz Ahmad Rehi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Location tracking and positioning systems have grown exponentially in the recent decade. In particular, the Global Positioning System (GPS) has become a universal norm, being a part, directly or indirectly, of almost every software application with location-based modules. 
However, a major drawback of GPS-based systems is their inability to work in indoor environments. Researchers have thus focused on alternative technologies that can be used in indoor environments for the vast range of application domains that require indoor location tracking. One of the most popular technologies used for indoor tracking is radio frequency identification (RFID). Due to its numerous advantages, including its cost effectiveness, it is considered a technology of choice for indoor location tracking systems. To contribute to this emerging research trend, this paper proposes an application architecture for a passive-RFID-tag-based indoor location tracking system. As a proof of concept, a test bed will be developed in this study. In addition, various indoor location tracking algorithms will be used to assess their appropriateness within the proposed application architecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RFID" title="RFID">RFID</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20location%20tracking" title=" indoor location tracking"> indoor location tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=application%20architecture" title=" application architecture"> application architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=passive%20RFID%20tag" title=" passive RFID tag"> passive RFID tag</a> </p> <a href="https://publications.waset.org/abstracts/164777/development-of-application-architecture-for-rfid-based-indoor-tracking-using-passive-rfid-tag" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> 
</span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2103</span> A Framework for Improving Trade Contractors’ Productivity Tracking Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sophia%20Hayes">Sophia Hayes</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenny%20L.%20Liang"> Kenny L. Liang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahil%20Sharma"> Sahil Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=Austin%20Shema"> Austin Shema</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmoud%20Bader"> Mahmoud Bader</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Elbarkouky"> Mohamed Elbarkouky</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Despite being one of the country&rsquo;s most significant economic contributors, Canada&rsquo;s construction industry is lagging behind other sectors when it comes to labor productivity improvements. The construction industry is very collaborative, as a general contractor will hire trade contractors to perform most of a project&rsquo;s work, meaning low productivity from one contractor can have a domino effect on the shared success of a project. To address this issue and encourage trade contractors to improve their productivity tracking methods, an investigative study was done on the productivity views and tracking methods of various trade contractors. Additionally, an in-depth review was done of four standard tracking methods used in the construction industry: cost codes, benchmarking, the job productivity measurement (JPM) standard, and WorkFace Planning (WFP). 
The four tracking methods were used as a baseline for comparing the trade contractors&rsquo; responses, determining gaps within their current tracking methods, and making improvement recommendations. Fifteen interviews were conducted with different trades to analyze how contractors value productivity. The results of these analyses indicated that there seem to be gaps within the construction industry when it comes to understanding the purpose and value of productivity tracking. The trade contractors also shared their current productivity tracking systems, which were then compared to the four standard tracking methods used in the construction industry. Gaps were identified in their various tracking methods, and, using a framework, recommendations were made, based on the type of trade, on how to improve how they track productivity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=labor%20productivity" title="labor productivity">labor productivity</a>, <a href="https://publications.waset.org/abstracts/search?q=productivity%20tracking%20methods" title=" productivity tracking methods"> productivity tracking methods</a>, <a href="https://publications.waset.org/abstracts/search?q=trade%20contractors" title=" trade contractors"> trade contractors</a>, <a href="https://publications.waset.org/abstracts/search?q=construction" title=" construction "> construction </a> </p> <a href="https://publications.waset.org/abstracts/111890/a-framework-for-improving-trade-contractors-productivity-tracking-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111890.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">192</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item 
active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=71">71</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=72">72</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=motion%20tracking&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div 
class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
