Search results for: hand tracking
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="hand tracking"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4584</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: hand tracking</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4584</span> Vision-Based Hand Segmentation Techniques for Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jebali">M. Jebali</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jemni"> M. Jemni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work is the part of vision based hand gesture recognition system for Natural Human Computer Interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system. The aim of this paper is to develop robust and efficient hand segmentation algorithm such as an input to another system which attempt to bring the HCI performance nearby the human-human interaction, by modeling an intelligent sign language recognition system based on prediction in the context of dialogue between the system (avatar) and the interlocutor. For the purpose of hand segmentation, an overcoming occlusion approach has been proposed for superior results for detection of hand from an image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HCI" title="HCI">HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20segmentation" title=" hand segmentation"> hand segmentation</a> </p> <a href="https://publications.waset.org/abstracts/26490/vision-based-hand-segmentation-techniques-for-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4583</span> Adaptive Online Object Tracking via Positive and Negative Models Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li">Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang"> Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To improve tracking drift which often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, tracking object frame by frame via particle filtering. Thirdly, validating the tracking reliability based on both positive and negative models matching. Finally, relocating the object based on SIFT features matching and voting when drift occurs. Object appearance model is updated at the same time. The algorithm cannot only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20drift" title=" tracking drift"> tracking drift</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20squares%20analysis" title=" partial least squares analysis"> partial least squares analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20and%20negative%20models%20matching" title=" positive and negative models matching"> positive and negative models matching</a> </p> <a href="https://publications.waset.org/abstracts/19382/adaptive-online-object-tracking-via-positive-and-negative-models-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4582</span> Specified Human Motion Recognition and Unknown Hand-Held Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinsiang%20Shaw">Jinsiang Shaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Pik-Hoe%20Chen"> Pik-Hoe Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model for motion recognition or the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine if a user hand is approaching or departing an object. It uses a background subtraction method to separate the human and object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object trajectory can be recorded and the object itself can be located. This particular technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automatic%20Tracking" title="Automatic Tracking">Automatic Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Back%20Projection" title=" Back Projection"> Back Projection</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Recognition" title=" Motion Recognition"> Motion Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoplifting" title=" Shoplifting"> Shoplifting</a> </p> <a href="https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4581</span> Object Tracking in Motion Blurred Images with Adaptive Mean Shift and Wavelet Feature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iman%20Iraei">Iman Iraei</a>, <a href="https://publications.waset.org/abstracts/search?q=Mina%20Sharifi"> Mina Sharifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A method for object tracking in motion blurred images is proposed in this article. This paper shows that object tracking could be improved with this approach. We use mean shift algorithm to track different objects as a main tracker. But, the problem is that mean shift could not track the selected object accurately in blurred scenes. So, for better tracking result, and increasing the accuracy of tracking, wavelet transform is used. We use a feature named as blur extent, which could help us to get better results in tracking. For calculating of this feature, we should use Harr wavelet. We can look at this matter from two different angles which lead to determine whether an image is blurred or not and to what extent an image is blur. In fact, this feature left an impact on the covariance matrix of mean shift algorithm and cause to better performance of tracking. This method has been concentrated mostly on motion blur parameter. transform. The results reveal the ability of our method in order to reach more accurately tracking. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mean%20shift" title="mean shift">mean shift</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=blur%20extent" title=" blur extent"> blur extent</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20transform" title=" wavelet transform"> wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20blur" title=" motion blur"> motion blur</a> </p> <a href="https://publications.waset.org/abstracts/81408/object-tracking-in-motion-blurred-images-with-adaptive-mean-shift-and-wavelet-feature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4580</span> Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zahra%20Alipour">Zahra Alipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Amirreza%20Moheb%20Afzali"> Amirreza Moheb Afzali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=YOLOv8" title="YOLOv8">YOLOv8</a>, <a href="https://publications.waset.org/abstracts/search?q=mediapipe" title=" mediapipe"> mediapipe</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20tracking" title=" finger tracking"> finger tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=joint%20estimation" title=" joint estimation"> joint estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=human-computer%20interaction%20%28HCI%29" title=" human-computer interaction (HCI)"> human-computer interaction (HCI)</a> </p> <a href="https://publications.waset.org/abstracts/194650/real-time-finger-tracking-evaluating-yolov8-and-mediapipe-for-enhanced-hci" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">5</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4579</span> Human Tracking across Heterogeneous Systems Based on Mobile Agent Technologies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tappei%20Yotsumoto">Tappei Yotsumoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Atsushi%20Nomura"> Atsushi Nomura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kozo%20Tanigawa"> Kozo Tanigawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenichi%20Takahashi"> Kenichi Takahashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Takao%20Kawamura"> Takao Kawamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazunori%20Sugahara"> Kazunori Sugahara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In a human tracking system, expanding a monitoring range of one system is complicating the management of devices and increasing its cost. Therefore, we propose a method to realize a wide-range human tracking by connecting small systems. In this paper, we examined an agent deploy method and information contents across the heterogeneous human tracking systems. By implementing the proposed method, we can construct a human tracking system across heterogeneous systems, and the system can track a target continuously between systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20tracking%20system" title="human tracking system">human tracking system</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agent" title=" mobile agent"> mobile agent</a>, <a href="https://publications.waset.org/abstracts/search?q=monitoring" title=" monitoring"> monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=heterogeneous%20systems" title=" heterogeneous systems"> heterogeneous systems</a> </p> <a href="https://publications.waset.org/abstracts/11702/human-tracking-across-heterogeneous-systems-based-on-mobile-agent-technologies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11702.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">536</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4578</span> Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daleesha%20M.%20Viswanathan">Daleesha M. Viswanathan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumam%20Mary%20Idicula"> Sumam Mary Idicula</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dynamic hand gestures are an intrinsic component in sign language communication. Extracting spatial temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on the orientation feature is the main concern of this paper. Kalman filter algorithm and Hidden Markov Models (HMM) models are incorporated with this recognition system for hand trajectory tracking and for spatial temporal classification, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=orientation%20features" title="orientation features">orientation features</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20feature%20vector" title=" discrete feature vector"> discrete feature vector</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM." 
title=" HMM."> HMM.</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a> </p> <a href="https://publications.waset.org/abstracts/35653/hand-motion-trajectory-analysis-for-dynamic-hand-gestures-used-in-indian-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4577</span> Development of Application Architecture for RFID Based Indoor Tracking Using Passive RFID Tag</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sumaya%20Ismail">Sumaya Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Aijaz%20Ahmad%20Rehi"> Aijaz Ahmad Rehi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Abstract The location tracking and positioning systems have technologically grown exponentially in recent decade. In particular, Global Position system (GPS) has become a universal norm to be a part of almost every software application directly or indirectly for the location based modules. However major drawback of GPS based system is their inability of working in indoor environments. Researchers are thus focused on the alternative technologies which can be used in indoor environments for a vast range of application domains which require indoor location tracking. One of the most popular technology used for indoor tracking is radio frequency identification (RFID). Due to its numerous advantages, including its cost effectiveness, it is considered as a technology of choice in indoor location tracking systems. To contribute to the emerging trend of the research, this paper proposes an application architecture of passive RFID tag based indoor location tracking system. For the proof of concept, a test bed will be developed to in this study. In addition, various indoor location tracking algorithms will be used to assess their appropriateness in the proposed application architecture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RFID" title="RFID">RFID</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20location%20tracking" title=" indoor location tracking"> indoor location tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=application%20architecture" title=" application architecture"> application architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=passive%20RFID%20tag" title=" passive RFID tag"> passive RFID tag</a> </p> <a href="https://publications.waset.org/abstracts/164777/development-of-application-architecture-for-rfid-based-indoor-tracking-using-passive-rfid-tag" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4576</span> A Framework for Improving Trade Contractors’ Productivity Tracking Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sophia%20Hayes">Sophia Hayes</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenny%20L.%20Liang"> Kenny L. Liang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahil%20Sharma"> Sahil Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=Austin%20Shema"> Austin Shema</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmoud%20Bader"> Mahmoud Bader</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Elbarkouky"> Mohamed Elbarkouky</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Despite being one of the most significant economic contributors of the country, Canada’s construction industry is lagging behind other sectors when it comes to labor productivity improvements. The construction industry is very collaborative as a general contractor, will hire trade contractors to perform most of a project’s work; meaning low productivity from one contractor can have a domino effect on the shared success of a project. To address this issue and encourage trade contractors to improve their productivity tracking methods, an investigative study was done on the productivity views and tracking methods of various trade contractors. Additionally, an in-depth review was done on four standard tracking methods used in the construction industry: cost codes, benchmarking, the job productivity measurement (JPM) standard, and WorkFace Planning (WFP). The four tracking methods were used as a baseline in comparing the trade contractors’ responses, determining gaps within their current tracking methods, and for making improvement recommendations. 15 interviews were conducted with different trades to analyze how contractors value productivity. The results of these analyses indicated that there seem to be gaps within the construction industry when it comes to an understanding of the purpose and value in productivity tracking. 
The trade contractors also shared their current productivity tracking systems, which were then compared to the four standard tracking methods used in the construction industry. Gaps were identified in their various tracking methods, and, using a framework, recommendations were made based on trade type on how to improve productivity tracking.
Keywords: labor productivity, productivity tracking methods, trade contractors, construction
Procedia: https://publications.waset.org/abstracts/111890/a-framework-for-improving-trade-contractors-productivity-tracking-methods | PDF: https://publications.waset.org/abstracts/111890.pdf | Downloads: 192

4575. Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network
Authors: Cheng Fang, Lingwei Quan, Cunyue Lu
Abstract: Human pose estimation and tracking aim to accurately identify and locate the positions of human joints in video. This is a computer vision task of great significance for human motion recognition, behavior understanding, and scene analysis. There has been remarkable progress on human pose estimation in recent years, but more research is needed on human pose tracking, especially online tracking. In this paper, a framework called PoseSRPN is proposed for online single-person pose estimation and tracking. We use a Siamese network with an attached pose estimation branch to incorporate single-person pose tracking (SPT) and visual object tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces complex upsampling and convolution structures with deconvolution. By augmenting the loss of the fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN relies only on a single bounding-box initialization to produce human joint locations. Experimental results show that while maintaining good pose estimation accuracy on the COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frames/s, which is superior to other pose tracking frameworks.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20tracking" title=" pose tracking"> pose tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/112839/online-pose-estimation-and-tracking-approach-with-siamese-region-proposal-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4574</span> The Tracking and Hedging Performances of Gold ETF Relative to Some Other Instruments in the UK</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abimbola%20Adedeji">Abimbola Adedeji</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Shauqi%20Zubir"> Ahmad Shauqi Zubir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the profitability and risk between investing in gold exchange traded funds (ETFs) and gold mutual funds compares to gold prices. The main focus in determining whether there are similarities or differences between those financial products is the tracking error. The importance of understanding the similarities or differences between the gold ETFs, gold mutual funds and gold prices is derived from the fact that gold ETFs and gold mutual funds are used as substitutions for investors who are looking to profit from gold prices although they are short in capital. 10 hypotheses were tested. There are 3 types of tracking error used. Tracking error 1 and 3 gives results that differentiate between types of ETFs and mutual funds, hence yielding the answers in answering the hypotheses that were developed. However, tracking error 2 failed to give the answer that could shed light on the questions raised in this study. All of the results in tracking error 2 technique only telling us that the difference between the ups and downs of the financial instruments are similar, statistically to the physical gold prices movement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gold%20etf" title="gold etf">gold etf</a>, <a href="https://publications.waset.org/abstracts/search?q=gold%20mutual%20funds" title=" gold mutual funds"> gold mutual funds</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20error" title=" tracking error"> tracking error</a> </p> <a href="https://publications.waset.org/abstracts/27595/the-tracking-and-hedging-performances-of-gold-etf-relative-to-some-other-instruments-in-the-uk" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27595.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">422</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4573</span> UAV Based Visual Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Dalmia">Vaibhav Dalmia</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Phirke"> Manoj Phirke</a>, <a href="https://publications.waset.org/abstracts/search?q=Renith%20G"> Renith G</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wide adoption of UAVs (unmanned aerial vehicles) in various industries by the government as well as private corporations for solving computer vision tasks it’s necessary that their potential is analyzed completely. Recent advances in Deep Learning have also left us with a plethora of algorithms to solve different computer vision tasks. This study provides a comprehensive survey on solving the Visual Object Tracking problem and explains the tradeoffs involved in building a real-time yet reasonably accurate object tracking system for UAVs by looking at existing methods and evaluating them on the aerial datasets. Finally, the best trackers suitable for UAV-based applications are provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=single%20object%20tracking" title=" single object tracking"> single object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20object%20tracking" title=" visual object tracking"> visual object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=UAVs" title=" UAVs"> UAVs</a> </p> <a href="https://publications.waset.org/abstracts/145331/uav-based-visual-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4572</span> Development of Intelligent Smart Multi Tracking Agent System to Support of Logistics Safety</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Umarov%20Jamshid">Umarov Jamshid</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju-Su%20Kim"> Ju-Su Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hak-Jun%20Lee"> Hak-Jun Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Man-Kyo%20Han"> Man-Kyo Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Ryum-Duck%20Oh"> Ryum-Duck Oh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, it becomes convenient to identify the location information of cargos by using GPS and wireless communication technologies. The development of IoT technologies and tracking system allows us to confirm site situation on an ad hoc basis in all the industries and social environments. Moreover, it allows us to apply IT technologies to a manageable extent. However, there have been many limitations for using the system due to the difficulty of identifying location information in real time and also due to the simple features. To globalize the logistics related tracking system, it is required to conduct a study to resolve the aforementioned problem. On that account, this paper designed and developed the IoT and RTLS based intelligent multi tracking agent system for more secure, accurate and reliable transportation in relation to logistics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GPS" title="GPS">GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20agent%20system" title=" tracking agent system"> tracking agent system</a>, <a href="https://publications.waset.org/abstracts/search?q=IoT" title=" IoT"> IoT</a>, <a href="https://publications.waset.org/abstracts/search?q=RTLS" title=" RTLS"> RTLS</a>, <a href="https://publications.waset.org/abstracts/search?q=Logistics" title=" Logistics"> Logistics</a> </p> <a href="https://publications.waset.org/abstracts/29324/development-of-intelligent-smart-multi-tracking-agent-system-to-support-of-logistics-safety" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29324.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">646</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4571</span> Using Eye-Tracking to Investigate TEM Validity and Design</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cao%20Xi">Cao Xi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper reports a study which used eye-tracking to examine the cognitive validity of TEM 8(Test for English Majors, Band 8). The study investigated test takers' reading patterns on four -item types using eye-tracking, and interviews. Thirty participants completed 22 items on a computer, with the Tobii X2 Eye Tracker recording their eye movements on screen. Eleven students further participated in a recall interview while viewing video footage of their gaze patterns on the test. The findings will indicate that first, different reading item types will employ different cognitive processes; then different reading patterns for stronger and weaker test takers’on each item types. The implication of this study is to provide recommendations for the use of eye tracking technology in language research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye%20tracking" title="eye tracking">eye tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=reading%20patterns" title=" reading patterns"> reading patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=test%20for%20english%20majors" title=" test for english majors"> test for english majors</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20validity" title=" cognitive validity"> cognitive validity</a> </p> <a href="https://publications.waset.org/abstracts/145848/using-eye-tracking-to-investigate-tem-validity-and-design" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145848.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4570</span> Integrated Target Tracking and Control for Automated Car-Following of Truck Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fadwa%20Alaskar">Fadwa Alaskar</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang-Chieh%20Chou"> Fang-Chieh Chou</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Flores"> Carlos Flores</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiao-Yun%20Lu"> Xiao-Yun Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandre%20M.%20Bayen"> Alexandre M. Bayen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article proposes a perception model for enhancing the accuracy and stability of car-following control of a longitudinally automated truck. We applied a fusion-based tracking algorithm on measurements of a single preceding vehicle needed for car-following control. This algorithm fuses two types of data, radar and LiDAR data, to obtain more accurate and robust longitudinal perception of the subject vehicle in various weather conditions. The filter’s resulting signals are fed to the gap control algorithm at every tracking loop composed by a high-level gap control and lower acceleration tracking system. Several highway tests have been performed with two trucks. The tests show accurate and fast tracking of the target, which impacts on the gap control loop positively. The experiments also show the fulfilment of control design requirements, such as fast speed variations tracking and robust time gap following. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20fusion" title=" sensor fusion"> sensor fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20cruise%20control" title=" adaptive cruise control"> adaptive cruise control</a>, <a href="https://publications.waset.org/abstracts/search?q=cooperative%20adaptive%20cruise%20control" title=" cooperative adaptive cruise control"> cooperative adaptive cruise control</a> </p> <a href="https://publications.waset.org/abstracts/140234/integrated-target-tracking-and-control-for-automated-car-following-of-truck-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">229</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4569</span> Gesture-Controlled Interface Using Computer Vision and Python</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vedant%20Vardhan%20Rathour">Vedant Vardhan Rathour</a>, <a href="https://publications.waset.org/abstracts/search?q=Anant%20Agrawal"> Anant Agrawal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the Speech Recognition library allows for seamless execution of tasks like web searches, location navigation and gesture control on the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/193844/gesture-controlled-interface-using-computer-vision-and-python" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">12</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4568</span> Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Zhang">Qiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojian%20Hu"> Xiaojian Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multi-vehicle tracking, based on feature selection, the tracking system efficiently tracks vehicles in a video with minimal error in combination with color attribution, which focuses on presenting a simple and fast, yet accurate and robust solution to the problem such as inaccurately and untimely responses of statistics-based adaptive traffic control system in the intersection scenario. In this study, a real-time tracking system is proposed for multi-vehicle tracking in the intersection scene. Considering the complexity and application feasibility of the algorithm, in the object detection step, the detection result provided by virtual loops were post-processed and then used as the input for the tracker. For the tracker, lightweight methods were designed to extract and select features and incorporate them into the adaptive color tracking (ACT) framework. And the approbatory online feature selection algorithms are integrated on the mature ACT system with good compatibility. The proposed feature selection methods and multi-vehicle tracking method are evaluated on KITTI datasets and show efficient vehicle tracking performance when compared to the other state-of-the-art approaches in the same category. And the system performs excellently on the video sequences recorded at the intersection. Furthermore, the presented vehicle tracking system is suitable for surveillance applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time" title="real-time">real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-vehicle%20tracking" title=" multi-vehicle tracking"> multi-vehicle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20attribution" title=" color attribution"> color attribution</a> </p> <a href="https://publications.waset.org/abstracts/136438/real-time-multi-vehicle-tracking-application-at-intersections-based-on-feature-selection-in-combination-with-color-attribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4567</span> Test and Evaluation of Patient Tracking Platform in an Earthquake Simulation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nahid%20Tavakoli">Nahid Tavakoli</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20H.%20Yarmohammadian"> Mohammad H. Yarmohammadian</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Samimi"> Ali Samimi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In earthquake situation, medical response communities such as field and referral hospitals are challenged with injured victims’ identification and tracking. In our project, it was developed a patient tracking platform (PTP) where first responders triage the patients with an electronic tag which report the location and some information of each patient during his/her movement. This platform includes: 1) near field communication (NFC) tags (ISO 14443), 2) smart mobile phones (Android-base version 4.2.2), 3) Base station laptops (Windows), 4) server software, 5) Android software to use by first responders, 5) disaster command software, and 6) system architecture. Our model has been completed through literature review, Delphi technique, focus group, design the platform, and implement in an earthquake exercise. This paper presents consideration for content, function, and technologies that must apply for patient tracking in medical emergencies situations. It is demonstrated the robustness of the patient tracking platform (PTP) in tracking 6 patients in a simulated earthquake situation in the yard of the relief and rescue department of Isfahan’s Red Crescent. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=test%20and%20evaluation" title="test and evaluation">test and evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=patient%20tracking%20platform" title=" patient tracking platform"> patient tracking platform</a>, <a href="https://publications.waset.org/abstracts/search?q=earthquake" title=" earthquake"> earthquake</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/112288/test-and-evaluation-of-patient-tracking-platform-in-an-earthquake-simulation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4566</span> Vision Based People Tracking System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boukerch%20Haroun">Boukerch Haroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Luo%20Qing%20Sheng"> Luo Qing Sheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Hua%20Shi"> Li Hua Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Boukraa%20Sebti"> Boukraa Sebti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present the design and the implementation of a target tracking system where the target is set to be a moving person in a video sequence. The system can be applied easily as a vision system for mobile robot. The system is composed of two major parts the first is the detection of the person in the video frame using the SVM learning machine based on the “HOG” descriptors. The second part is the tracking of a moving person it’s done by using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm by adding the target motion feature to the color feature, the experimental results had shown that the new algorithm had overcame the traditional Camshift algorithm in robustness and in case of occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camshift%20algorithm" title="camshift algorithm">camshift algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a> </p> <a href="https://publications.waset.org/abstracts/2264/vision-based-people-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4565</span> Fast and Robust Long-term Tracking with Effective Searching Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thang%20V.%20Kieu">Thang V. Kieu</a>, <a href="https://publications.waset.org/abstracts/search?q=Long%20P.%20Nguyen"> Long P. Nguyen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Kernelized Correlation Filter (KCF) based trackers have gained a lot of attention recently because of their accuracy and fast calculation speed. However, this algorithm is not robust in cases where the object is lost by a sudden change of direction, being obscured or going out of view. In order to improve KCF performance in long-term tracking, this paper proposes an anomaly detection method for target loss warning by analyzing the response map of each frame, and a classification algorithm for reliable target re-locating mechanism by using Random fern. Being tested with Visual Tracker Benchmark and Visual Object Tracking datasets, the experimental results indicated that the precision and success rate of the proposed algorithm were 2.92 and 2.61 times higher than that of the original KCF algorithm, respectively. Moreover, the proposed tracker handles occlusion better than many state-of-the-art long-term tracking methods while running at 60 frames per second. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=correlation%20filter" title="correlation filter">correlation filter</a>, <a href="https://publications.waset.org/abstracts/search?q=long-term%20tracking" title=" long-term tracking"> long-term tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20fern" title=" random fern"> random fern</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20tracking" title=" real-time tracking"> real-time tracking</a> </p> <a href="https://publications.waset.org/abstracts/130580/fast-and-robust-long-term-tracking-with-effective-searching-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4564</span> Characterization of Solar Panel Efficiency Using Sun Tracking Device and Cooling System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20B.%20G.%20Ibarra">J. B. G. Ibarra</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20M.%20A.%20Gagui"> J. M. A. Gagui</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20J.%20T.%20Jonson"> E. J. T. Jonson</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20A.%20V.%20Lim"> J. A. V. Lim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper focused on studying the performance of the solar panels that were equipped with water-spray cooling system, solar tracking system, and combination of both systems. The efficiencies were compared with the solar panels without any efficiency improvement technique. The efficiency of each setup was computed on an hourly basis every day for a month. The study compared the efficiencies and combined systems that significantly improved at a specific time of the day. The data showed that the solar tracking system had the highest efficiency during 6:00 AM to 7:45 AM. Then after 7:45 AM, the combination of both solar tracking and water-spray cooling system was the most efficient to use up to 12:00 NN. Meanwhile, from 12:00 NN to 12:45 PM, the water-spray cooling system had the significant contribution on efficiency. From 12:45 PM up to 4:30 PM, the combination of both systems was the most efficient, and lastly, from 4:30 PM to 6:00 PM, the solar tracking system was the best to use. The study intended to use solar tracking or water-spray cooling system or combined systems alternately to improve the solar panel efficiency on a specific time of the day. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=solar%20panel%20efficiency" title="solar panel efficiency">solar panel efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=solar%20panel%20efficiency%20technique" title=" solar panel efficiency technique"> solar panel efficiency technique</a>, <a href="https://publications.waset.org/abstracts/search?q=solar%20tracking%20system" title=" solar tracking system"> solar tracking system</a>, <a href="https://publications.waset.org/abstracts/search?q=water-spray%20cooling%20system" title=" water-spray cooling system"> water-spray cooling system</a> </p> <a href="https://publications.waset.org/abstracts/122446/characterization-of-solar-panel-efficiency-using-sun-tracking-device-and-cooling-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122446.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4563</span> Iterative Linear Quadratic Regulator (iLQR) vs LQR Controllers for Quadrotor Path Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wesam%20Jasim">Wesam Jasim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dongbing%20Gu"> Dongbing Gu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an iterative linear quadratic regulator optimal control technique to solve the problem of quadrotors path tracking. The dynamic motion equations are represented based on unit quaternion representation and include some modelled aerodynamical effects as a nonlinear part. Simulation results prove the ability and effectiveness of iLQR to stabilize the quadrotor and successfully track different paths. It also shows that iLQR controller outperforms LQR controller in terms of fast convergence and tracking errors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iLQR%20controller" title="iLQR controller">iLQR controller</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20control" title=" optimal control"> optimal control</a>, <a href="https://publications.waset.org/abstracts/search?q=path%20tracking" title=" path tracking"> path tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=quadrotor%20UAVs" title=" quadrotor UAVs"> quadrotor UAVs</a> </p> <a href="https://publications.waset.org/abstracts/51436/iterative-linear-quadratic-regulator-ilqr-vs-lqr-controllers-for-quadrotor-path-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4562</span> Fast and Scale-Adaptive Target Tracking via PCA-SIFT</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang">Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongchang%20Chen"> Hongchang Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li"> Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiangpeng%20Zhang"> Jiangpeng Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the main challenge for target tracking is accounting for target scale change and real-time, we combine Mean-Shift and PCA-SIFT algorithm together to solve the problem. We introduce similarity comparison method to determine how the target scale changes, and taking different strategies according to different situation. For target scale getting larger will cause location error, we employ backward tracking to reduce the error. Mean-Shift algorithm has poor performance when tracking scale-changing target due to the fixed bandwidth of its kernel function. In order to overcome this problem, we introduce PCA-SIFT matching. Through key point matching between target and template, that adjusting the scale of tracking window adaptively can be achieved. Because this algorithm is sensitive to wrong match, we introduce RANSAC to reduce mismatch as far as possible. Furthermore target relocating will trigger when number of match is too small. In addition we take comprehensive consideration about target deformation and error accumulation to put forward a new template update method. Experiments on five image sequences and comparison with 6 kinds of other algorithm demonstrate favorable performance of the proposed tracking algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title="target tracking">target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA-SIFT" title=" PCA-SIFT"> PCA-SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=mean-shift" title=" mean-shift"> mean-shift</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-adaptive" title=" scale-adaptive"> scale-adaptive</a> </p> <a href="https://publications.waset.org/abstracts/19009/fast-and-scale-adaptive-target-tracking-via-pca-sift" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4561</span> Integrated Gesture and Voice-Activated Mouse Control System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dev%20Pratap%20Singh">Dev Pratap Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshika%20Hasija"> Harshika Hasija</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashwini%20S."> Ashwini S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computers using hand gestures and voice commands. The system leverages advanced computer vision techniques using the Media Pipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the speech recognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control in the system through voice commands. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20assistant" title=" voice assistant"> voice assistant</a> </p> <a href="https://publications.waset.org/abstracts/193896/integrated-gesture-and-voice-activated-mouse-control-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">10</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4560</span> Information Retrieval from Internet Using Hand Gestures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aniket%20S.%20Joshi">Aniket S. Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Aditya%20R.%20Mane"> Aditya R. Mane</a>, <a href="https://publications.waset.org/abstracts/search?q=Arjun%20Tukaram"> Arjun Tukaram </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the 21st century, in the era of e-world, people are continuously getting updated by daily information such as weather conditions, news, stock exchange market updates, new projects, cricket updates, sports and other such applications. In the busy situation, they want this information on the little use of keyboard, time. Today in order to get such information user have to repeat same mouse and keyboard actions which includes time and inconvenience. In India due to rural background many people are not much familiar about the use of computer and internet also. Also in small clinics, small offices, and hotels and in the airport there should be a system which retrieves daily information with the minimum use of keyboard and mouse actions. We plan to design application based project that can easily retrieve information with minimum use of keyboard and mouse actions and make our task more convenient and easier. This can be possible with an image processing application which takes real time hand gestures which will get matched by system and retrieve information. Once selected the functions with hand gestures, the system will report action information to user. In this project we use real time hand gesture movements to select required option which is stored on the screen in the form of RSS Feeds. Gesture will select the required option and the information will be popped and we got the information. A real time hand gesture makes the application handier and easier to use. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title=" hand gesture recognition"> hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color%20model" title=" HSV color model"> HSV color model</a>, <a href="https://publications.waset.org/abstracts/search?q=Blob%20detection" title=" Blob detection"> Blob detection</a> </p> <a href="https://publications.waset.org/abstracts/29069/information-retrieval-from-internet-using-hand-gestures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29069.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4559</span> Real-Time Online Tracking Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Denis%20Obrul">Denis Obrul</a>, <a href="https://publications.waset.org/abstracts/search?q=Borut%20%C5%BDalik"> Borut Žalik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present an extendable online real-time tracking platform that can be used to track a wide variety of location-aware devices. These can range from GPS devices mounted inside a vehicle, closed and secure systems such as Teltonika and to mobile phones running multiple platforms. Special consideration is given to decentralized approach, security and flexibility. A number of different use cases are presented as a proof of concept. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time" title="real-time">real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=online" title=" online"> online</a>, <a href="https://publications.waset.org/abstracts/search?q=gps" title=" gps"> gps</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20application" title=" web application"> web application</a> </p> <a href="https://publications.waset.org/abstracts/17532/real-time-online-tracking-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17532.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4558</span> Eye Tracking: Biometric Evaluations of Instructional Materials for Improved Learning </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janet%20Holland">Janet Holland</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eye tracking is a great way to triangulate multiple data sources for deeper, more complete knowledge of how instructional materials are really being used and emotional connections made. Using sensor based biometrics provides a detailed local analysis in real time expanding our ability to collect science based data for a more comprehensive level of understanding, not previously possible, for teaching and learning. The knowledge gained will be used to make future improvements to instructional materials, tools, and interactions. The literature has been examined and a preliminary pilot test was implemented to develop a methodology for research in Instructional Design and Technology. Eye tracking now offers the addition of objective metrics obtained from eye tracking and other biometric data collection with analysis for a fresh perspective. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=area%20of%20interest" title="area of interest">area of interest</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20tracking" title=" eye tracking"> eye tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=biometrics" title=" biometrics"> biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=fixation" title=" fixation"> fixation</a>, <a href="https://publications.waset.org/abstracts/search?q=fixation%20count" title=" fixation count"> fixation count</a>, <a href="https://publications.waset.org/abstracts/search?q=fixation%20sequence" title=" fixation sequence"> fixation sequence</a>, <a href="https://publications.waset.org/abstracts/search?q=fixation%20time" title=" fixation time"> fixation time</a>, <a href="https://publications.waset.org/abstracts/search?q=gaze%20points" title=" gaze points"> gaze points</a>, <a href="https://publications.waset.org/abstracts/search?q=heat%20map" title=" heat map"> heat map</a>, <a href="https://publications.waset.org/abstracts/search?q=saccades" title=" saccades"> saccades</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20to%20first%20fixation" title=" time to first fixation"> time to first fixation</a> </p> <a href="https://publications.waset.org/abstracts/104914/eye-tracking-biometric-evaluations-of-instructional-materials-for-improved-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104914.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4557</span> Comparison of Stationary and Two-Axis Tracking System of 50MW Photovoltaic Power Plant in Al-Kufra, Libya: Landscape Impact and Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yasser%20Aldali">Yasser Aldali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The scope of this paper is to evaluate and compare the potential of LS-PV (Large Scale Photovoltaic Power Plant) power generation systems in the southern region of Libya at Al-Kufra for both stationary and tracking systems. A Microsoft Excel-VBA program has been developed to compute slope radiation, dew-point, sky temperature, and then cell temperature, maximum power output and module efficiency of the system for stationary system and for tracking system. The results for energy production show that the total energy output is 114GWh/year for stationary system and 148 GWh/year for tracking system. The average module efficiency for the stationary system is 16.6% and 16.2% for the tracking system. The values of electricity generation capacity factor (CF) and solar capacity factor (SCF) for stationary system were found to be 26% and 62.5% respectively and 34% and 82% for tracking system. The GCR (Ground Cover Ratio) for a stationary system is 0.7, which corresponds to a tilt angle of 24°. The GCR for tracking system was found to be 0.12. The estimated ground area needed to build a 50MW PV plant amounts to approx. 0.55 km2 for a stationary PV field constituted by HIT PV arrays and approx. 91 MW/km2. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=large%20scale%20photovoltaic%20power%20plant" title="large scale photovoltaic power plant">large scale photovoltaic power plant</a>, <a href="https://publications.waset.org/abstracts/search?q=two-axis%20tracking%20system" title=" two-axis tracking system"> two-axis tracking system</a>, <a href="https://publications.waset.org/abstracts/search?q=stationary%20system" title=" stationary system"> stationary system</a>, <a href="https://publications.waset.org/abstracts/search?q=landscape%20impact" title=" landscape impact"> landscape impact</a> </p> <a href="https://publications.waset.org/abstracts/11750/comparison-of-stationary-and-two-axis-tracking-system-of-50mw-photovoltaic-power-plant-in-al-kufra-libya-landscape-impact-and-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11750.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">451</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4556</span> Lyapunov-Based Tracking Control for Nonholonomic Wheeled Mobile Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raouf%20Fareh">Raouf Fareh</a>, <a href="https://publications.waset.org/abstracts/search?q=Maarouf%20Saad"> Maarouf Saad</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofiane%20Khadraoui"> Sofiane Khadraoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Tamer%20Rabie"> Tamer Rabie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a tracking control strategy based on the Lyapunov approach for a nonholonomic wheeled mobile robot. The strategy consists of two levels. First, a kinematic controller is developed to adjust the right and left wheel velocities; using this velocity control law, the stability of the tracking error is guaranteed via the Lyapunov approach. Because the velocities commanded by the kinematic controller cannot be generated directly by the motors, a second level, the dynamic controller, is designed. This dynamic control law is also developed based on Lyapunov theory in order to track the desired trajectories of the mobile robot. The stability of the tracking error is proved using the Lyapunov and Barbalat approaches. Simulation results on a nonholonomic wheeled mobile robot demonstrate the feasibility and effectiveness of the presented approach.
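<p class="card-text">The two-level structure can be illustrated with the classic Lyapunov-based kinematic law of Kanayama et al., shown below together with a wheel-speed mapping; the abstract does not give the authors' actual control law, so the form, the gains, and the robot dimensions here are stand-ins.</p> <pre><code class="language-python">
import numpy as np

def kinematic_control(pose, ref_pose, v_r, w_r, kx=1.0, ky=8.0, kth=3.0):
    """Kanayama-style Lyapunov kinematic law; gains here are illustrative."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    # Tracking error expressed in the robot's body frame
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = thr - th
    v = v_r * np.cos(eth) + kx * ex                 # linear velocity command
    w = w_r + v_r * (ky * ey + kth * np.sin(eth))   # angular velocity command
    return v, w

def wheel_speeds(v, w, wheel_radius=0.05, axle_len=0.3):
    """Map unicycle commands (v, w) to right/left wheel angular velocities."""
    wr = (v + 0.5 * axle_len * w) / wheel_radius
    wl = (v - 0.5 * axle_len * w) / wheel_radius
    return wr, wl
</code></pre>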
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobile%20robot" title="mobile robot">mobile robot</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20tracking" title=" trajectory tracking"> trajectory tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Lyapunov" title=" Lyapunov"> Lyapunov</a>, <a href="https://publications.waset.org/abstracts/search?q=stability" title=" stability"> stability</a> </p> <a href="https://publications.waset.org/abstracts/50751/lyapunov-based-tracking-control-for-nonholonomic-wheeled-mobile-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50751.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4555</span> Navigating Uncertainties in Project Control: A Predictive Tracking Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Byung%20Cheol%20Kim">Byung Cheol Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study explores a method for the signal-noise separation challenge in project control, focusing on the limitations of traditional deterministic approaches that use single-point performance metrics to predict project outcomes. We detail how traditional methods often overlook future uncertainties, resulting in tracking biases when reliance is placed solely on immediate data without adjustments for predictive accuracy. Our investigation led to the development of the Predictive Tracking Project Control (PTPC) framework, which incorporates network simulation and Bayesian control models to adapt more effectively to project dynamics. The PTPC introduces controlled disturbances to better identify and separate tracking biases from useful predictive signals. We will demonstrate the efficacy of the PTPC with examples, highlighting its potential to enhance real-time project monitoring and decision-making, marking a significant shift towards more accurate project management practices. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=predictive%20tracking" title="predictive tracking">predictive tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=project%20control" title=" project control"> project control</a>, <a href="https://publications.waset.org/abstracts/search?q=signal-noise%20separation" title=" signal-noise separation"> signal-noise separation</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20inference" title=" Bayesian inference"> Bayesian inference</a> </p> <a href="https://publications.waset.org/abstracts/192188/navigating-uncertainties-in-project-control-a-predictive-tracking-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192188.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">18</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=152">152</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=153">153</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20tracking&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My 
</body> </html>