
Search results for: dynamic object tracking

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="dynamic object tracking"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 5797</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: dynamic object tracking</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5797</span> Adaptive Online Object Tracking via Positive and Negative Models Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li">Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang"> Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To improve tracking drift which often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, tracking object frame by frame via particle filtering. Thirdly, validating the tracking reliability based on both positive and negative models matching. Finally, relocating the object based on SIFT features matching and voting when drift occurs. Object appearance model is updated at the same time. The algorithm cannot only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20drift" title=" tracking drift"> tracking drift</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20squares%20analysis" title=" partial least squares analysis"> partial least squares analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20and%20negative%20models%20matching" title=" positive and negative models matching"> positive and negative models matching</a> </p> <a href="https://publications.waset.org/abstracts/19382/adaptive-online-object-tracking-via-positive-and-negative-models-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5796</span> A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mulugeta%20K.%20Tefera">Mulugeta K. Tefera</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaolong%20Yang"> Xiaolong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Liu"> Jian Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection and tracking of moving objects are very important in many application contexts such as detection and recognition of people, visual surveillance and automatic generation of video effect and so on. However, the task of detecting a real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, presence of shadow, and illumination variations due to light switch. For such systems, once the moving object is detected, tracking is also a crucial step for those applications that used in military defense, video surveillance, human computer interaction, and medical diagnostics as well as in commercial fields such as video games. In this paper, an object presents in dynamic background is detected using adaptive mixture of Gaussian based analysis of the video sequences. Then the detected moving object is tracked using the region based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target by enhancing and post processing morphological operations. Secondly, the extracted object uses region based moving object tracking and inter-frame difference to improve the tracking speed of real-time moving objects in different video frames. Finally, the plotting method was applied to detect the moving objects effectively and describes the object’s motion being tracked. The experiment has been performed on image sequences acquired both indoor and outdoor environments and one stationary and web camera has been used. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20modeling" title="background modeling">background modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20model" title=" Gaussian mixture model"> Gaussian mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=inter-frame%20difference" title=" inter-frame difference"> inter-frame difference</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20and%20tracking" title=" object detection and tracking"> object detection and tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/78578/a-real-time-moving-object-detection-and-tracking-scheme-and-its-implementation-for-video-surveillance-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5795</span> UAV Based Visual Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaibhav%20Dalmia">Vaibhav Dalmia</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Phirke"> Manoj Phirke</a>, <a href="https://publications.waset.org/abstracts/search?q=Renith%20G"> Renith G</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wide adoption of UAVs (unmanned aerial vehicles) in various industries by the government as well as private corporations for solving computer vision tasks it’s necessary that their potential is analyzed completely. Recent advances in Deep Learning have also left us with a plethora of algorithms to solve different computer vision tasks. This study provides a comprehensive survey on solving the Visual Object Tracking problem and explains the tradeoffs involved in building a real-time yet reasonably accurate object tracking system for UAVs by looking at existing methods and evaluating them on the aerial datasets. Finally, the best trackers suitable for UAV-based applications are provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=drones" title=" drones"> drones</a>, <a href="https://publications.waset.org/abstracts/search?q=single%20object%20tracking" title=" single object tracking"> single object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20object%20tracking" title=" visual object tracking"> visual object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=UAVs" title=" UAVs"> UAVs</a> </p> <a href="https://publications.waset.org/abstracts/145331/uav-based-visual-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5794</span> Object Tracking in Motion Blurred Images with Adaptive Mean Shift and Wavelet Feature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iman%20Iraei">Iman Iraei</a>, <a href="https://publications.waset.org/abstracts/search?q=Mina%20Sharifi"> Mina Sharifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A method for object tracking in motion blurred images is proposed in this article. This paper shows that object tracking could be improved with this approach. We use mean shift algorithm to track different objects as a main tracker. But, the problem is that mean shift could not track the selected object accurately in blurred scenes. So, for better tracking result, and increasing the accuracy of tracking, wavelet transform is used. We use a feature named as blur extent, which could help us to get better results in tracking. For calculating of this feature, we should use Harr wavelet. We can look at this matter from two different angles which lead to determine whether an image is blurred or not and to what extent an image is blur. In fact, this feature left an impact on the covariance matrix of mean shift algorithm and cause to better performance of tracking. This method has been concentrated mostly on motion blur parameter. transform. The results reveal the ability of our method in order to reach more accurately tracking. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mean%20shift" title="mean shift">mean shift</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=blur%20extent" title=" blur extent"> blur extent</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20transform" title=" wavelet transform"> wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20blur" title=" motion blur"> motion blur</a> </p> <a href="https://publications.waset.org/abstracts/81408/object-tracking-in-motion-blurred-images-with-adaptive-mean-shift-and-wavelet-feature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5793</span> Multi Object Tracking for Predictive Collision Avoidance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bruk%20Gebregziabher">Bruk Gebregziabher</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The safe and efficient operation of Autonomous Mobile Robots (AMRs) in complex environments, such as manufacturing, logistics, and agriculture, necessitates accurate multiobject tracking and predictive collision avoidance. This paper presents algorithms and techniques for addressing these challenges using Lidar sensor data, emphasizing ensemble Kalman filter. The developed predictive collision avoidance algorithm employs the data provided by lidar sensors to track multiple objects and predict their velocities and future positions, enabling the AMR to navigate safely and effectively. A modification to the dynamic windowing approach is introduced to enhance the performance of the collision avoidance system. The overall system architecture encompasses object detection, multi-object tracking, and predictive collision avoidance control. The experimental results, obtained from both simulation and real-world data, demonstrate the effectiveness of the proposed methods in various scenarios, which lays the foundation for future research on global planners, other controllers, and the integration of additional sensors. This thesis contributes to the ongoing development of safe and efficient autonomous systems in complex and dynamic environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20mobile%20robots" title="autonomous mobile robots">autonomous mobile robots</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-object%20tracking" title=" multi-object tracking"> multi-object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=predictive%20collision%20avoidance" title=" predictive collision avoidance"> predictive collision avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20Kalman%20filter" title=" ensemble Kalman filter"> ensemble Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=lidar%20sensors" title=" lidar sensors"> lidar sensors</a> </p> <a href="https://publications.waset.org/abstracts/169056/multi-object-tracking-for-predictive-collision-avoidance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169056.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5792</span> Specified Human Motion Recognition and Unknown Hand-Held Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinsiang%20Shaw">Jinsiang Shaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Pik-Hoe%20Chen"> Pik-Hoe Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model for motion recognition or the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine if a user hand is approaching or departing an object. It uses a background subtraction method to separate the human and object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object trajectory can be recorded and the object itself can be located. This particular technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automatic%20Tracking" title="Automatic Tracking">Automatic Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Back%20Projection" title=" Back Projection"> Back Projection</a>, <a href="https://publications.waset.org/abstracts/search?q=Motion%20Recognition" title=" Motion Recognition"> Motion Recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Shoplifting" title=" Shoplifting"> Shoplifting</a> </p> <a href="https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5791</span> SiamMask++: More Accurate Object Tracking through Layer Wise Aggregation in Visual Object Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyunbin%20Choi">Hyunbin Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jihyeon%20Noh"> Jihyeon Noh</a>, <a href="https://publications.waset.org/abstracts/search?q=Changwon%20Lim"> Changwon Lim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose SiamMask++, an architecture that performs layer-wise aggregation and depth-wise cross-correlation and introduce multi-RPN module and multi-MASK module to improve EAO (Expected Average Overlap), a representative performance evaluation metric for Visual Object Tracking (VOT) challenge. The proposed architecture, SiamMask++, has two versions, namely, bi_SiamMask++, which satisfies the real time (56fps) on systems equipped with GPUs (Titan XP), and rf_SiamMask++, which combines mask refinement modules for EAO improvements. Tests are performed on VOT2016, VOT2018 and VOT2019, the representative datasets of Visual Object Tracking tasks labeled as rotated bounding boxes. SiamMask++ perform better than SiamMask on all the three datasets tested. SiamMask++ is achieved performance of 62.6% accuracy, 26.2% robustness and 39.8% EAO, especially on the VOT2018 dataset. Compared to SiamMask, this is an improvement of 4.18%, 37.17%, 23.99%, respectively. In addition, we do an experimental in-depth analysis of how much the introduction of features and multi modules extracted from the backbone affects the performance of our model in the VOT task. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20object%20tracking" title="visual object tracking">visual object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=video" title=" video"> video</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=layer%20wise%20aggregation" title=" layer wise aggregation"> layer wise aggregation</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/151563/siammask-more-accurate-object-tracking-through-layer-wise-aggregation-in-visual-object-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151563.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5790</span> Vehicular Speed Detection Camera System Using Video Stream</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20A.%20Anser%20Pasha">C. A. Anser Pasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new Vehicular Speed Detection Camera System that is applicable as an alternative to traditional radars with the same accuracy or even better is presented. The real-time measurement and analysis of various traffic parameters such as speed and number of vehicles are increasingly required in traffic control and management. Image processing techniques are now considered as an attractive and flexible method for automatic analysis and data collections in traffic engineering. Various algorithms based on image processing techniques have been applied to detect multiple vehicles and track them. The SDCS processes can be divided into three successive phases; the first phase is Objects detection phase, which uses a hybrid algorithm based on combining an adaptive background subtraction technique with a three-frame differencing algorithm which ratifies the major drawback of using only adaptive background subtraction. The second phase is Objects tracking, which consists of three successive operations - object segmentation, object labeling, and object center extraction. Objects tracking operation takes into consideration the different possible scenarios of the moving object like simple tracking, the object has left the scene, the object has entered the scene, object crossed by another object, and object leaves and another one enters the scene. The third phase is speed calculation phase, which is calculated from the number of frames consumed by the object to pass by the scene. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar" title="radar">radar</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/45316/vehicular-speed-detection-camera-system-using-video-stream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45316.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5789</span> Objects Tracking in Catadioptric Images Using Spherical Snake</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khald%20Anisse">Khald Anisse</a>, <a href="https://publications.waset.org/abstracts/search?q=Amina%20Radgui"> Amina Radgui</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Rziza"> Mohammed Rziza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tracking objects on video sequences is a very challenging task in many works in computer vision applications. However, there is no article that treats this topic in catadioptric vision. This paper is an attempt that tries to describe a new approach of omnidirectional images processing based on inverse stereographic projection in the half-sphere. We used the spherical model proposed by Gayer and al. For object tracking, our work is based on snake method, with optimization using the Greedy algorithm, by adapting its different operators. The algorithm will respect the deformed geometries of omnidirectional images such as spherical neighborhood, spherical gradient and reformulation of optimization algorithm on the spherical domain. This tracking method that we call "spherical snake" permitted to know the change of the shape and the size of object in different replacements in the spherical image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=spherical%20snake" title=" spherical snake"> spherical snake</a>, <a href="https://publications.waset.org/abstracts/search?q=omnidirectional%20image" title=" omnidirectional image"> omnidirectional image</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20stereographic%20projection" title=" inverse stereographic projection"> inverse stereographic projection</a> </p> <a href="https://publications.waset.org/abstracts/2285/objects-tracking-in-catadioptric-images-using-spherical-snake" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2285.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5788</span> Stereo Motion Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yudhajit%20Datta">Yudhajit Datta</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamsi%20Iyer"> Hamsi Iyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Bandi"> Jonathan Bandi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Sethia"> Ankit Sethia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion Tracking and Stereo Vision are complicated, albeit well-understood problems in computer vision. Existing softwares that combine the two approaches to perform stereo motion tracking typically employ complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study aims to explore a strategy to combine the two techniques of two-dimensional motion tracking using Kalman Filter; and depth detection of object using Stereo Vision. In conventional approaches objects in the scene of interest are observed using a single camera. However for Stereo Motion Tracking; the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras a calculation for the depth of the object from the plane containing the cameras is made. The approach attempts to capture the entire three-dimensional spatial information of each object at the scene and represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to plane containing cameras and updates the perpendicular distance value of the object from the plane containing the cameras as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application from high security surveillance scenes such as premises of bank vaults, prisons or other detention facilities; to low cost applications in supermarkets and car parking lots. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=kalman%20filter" title="kalman filter">kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20vision" title=" stereo vision"> stereo vision</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20tracking" title=" motion tracking"> motion tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=matlab" title=" matlab"> matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title=" camera calibration"> camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision%20system%20toolbox" title=" computer vision system toolbox "> computer vision system toolbox </a> </p> <a href="https://publications.waset.org/abstracts/18999/stereo-motion-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18999.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">327</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5787</span> Fast and Robust Long-term Tracking with Effective Searching Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thang%20V.%20Kieu">Thang V. Kieu</a>, <a href="https://publications.waset.org/abstracts/search?q=Long%20P.%20Nguyen"> Long P. Nguyen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Kernelized Correlation Filter (KCF) based trackers have gained a lot of attention recently because of their accuracy and fast calculation speed. However, this algorithm is not robust in cases where the object is lost by a sudden change of direction, being obscured or going out of view. In order to improve KCF performance in long-term tracking, this paper proposes an anomaly detection method for target loss warning by analyzing the response map of each frame, and a classification algorithm for reliable target re-locating mechanism by using Random fern. Being tested with Visual Tracker Benchmark and Visual Object Tracking datasets, the experimental results indicated that the precision and success rate of the proposed algorithm were 2.92 and 2.61 times higher than that of the original KCF algorithm, respectively. Moreover, the proposed tracker handles occlusion better than many state-of-the-art long-term tracking methods while running at 60 frames per second. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=correlation%20filter" title="correlation filter">correlation filter</a>, <a href="https://publications.waset.org/abstracts/search?q=long-term%20tracking" title=" long-term tracking"> long-term tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20fern" title=" random fern"> random fern</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20tracking" title=" real-time tracking"> real-time tracking</a> </p> <a href="https://publications.waset.org/abstracts/130580/fast-and-robust-long-term-tracking-with-effective-searching-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5786</span> Evaluating the Tracking Abilities of Microsoft HoloLens-1 for Small-Scale Industrial Processes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kuhelee%20Chandel">Kuhelee Chandel</a>, <a href="https://publications.waset.org/abstracts/search?q=Julia%20%C3%85hl%C3%A9n"> Julia Åhlén</a>, <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Seipel"> Stefan Seipel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study evaluates the accuracy of Microsoft HoloLens (Version 1) for small-scale industrial activities, comparing its measurements to ground truth data from a Kuka Robotics arm. Two experiments were conducted to assess its position-tracking capabilities, revealing that the HoloLens device is effective for measuring the position of dynamic objects with small dimensions. However, its precision is affected by the velocity of the trajectory and its position within the device's field of view. While the HoloLens device may be suitable for small-scale tasks, its limitations for more complex and demanding applications requiring high precision and accuracy must be considered. The findings can guide the use of HoloLens devices in industrial applications and contribute to the development of more effective and reliable position-tracking systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality%20%28AR%29" title="augmented reality (AR)">augmented reality (AR)</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20HoloLens" title=" Microsoft HoloLens"> Microsoft HoloLens</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20processes" title=" industrial processes"> industrial processes</a>, <a href="https://publications.waset.org/abstracts/search?q=manufacturing%20processes" title=" manufacturing processes"> manufacturing processes</a> </p> <a href="https://publications.waset.org/abstracts/166490/evaluating-the-tracking-abilities-of-microsoft-hololens-1-for-small-scale-industrial-processes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5785</span> Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marrone%20Silverio%20Melo%20Dantas%20Pedro%20Henrique%20Dreyer">Marrone Silverio Melo Dantas Pedro Henrique Dreyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabriel%20Fonseca%20Reis%20de%20Souza"> Gabriel Fonseca Reis de Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Bezerra"> Daniel Bezerra</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Souza"> Ricardo Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Silvia%20Lins"> Silvia Lins</a>, <a href="https://publications.waset.org/abstracts/search?q=Judith%20Kelner"> Judith Kelner</a>, <a href="https://publications.waset.org/abstracts/search?q=Djamel%20Fawzi%20Hadj%20Sadok"> Djamel Fawzi Hadj Sadok</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing these with a manually annotated dataset, as well as the efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained for the projection dataset an F1-Score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-Score of 0.861, an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures. We adopted metrics accuracy and F1-Score, for VGG, DenseNet, MobileNet, Inception, and ResNet. 
[5784] Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm
Authors: El Harraj Abdeslam, Raissouni Naoufal
Abstract: The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most common approach to moving object detection and tracking is background subtraction, and many algorithms have been suggested for it. However, they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach, focusing on the ability to detect moving objects in dynamic scenes, with possible applications in monitoring complex, restricted-access areas where moving and motionless persons must be reliably detected. It consists of three main phases: handling illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum, mitigating degradation due to changes in scene illumination and improving the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence; both are then separately enhanced by applying CLAHE. To form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask with morphological erosion and dilation to remove noise. For experimental evaluation, we used a standard dataset to test the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes
Procedia: https://publications.waset.org/abstracts/27499/toward-indoor-and-outdoor-surveillance-using-an-improved-fast-background-subtraction-algorithm | PDF: https://publications.waset.org/abstracts/27499.pdf | Downloads: 256
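The CLAHE step maps directly onto OpenCV. A minimal sketch of illumination normalization before background modeling (clip limit and tile size are illustrative; equalizing the luminance channel only is one common choice, not necessarily the authors'):

```python
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def normalize_illumination(frame_bgr):
    """Equalize the luminance channel only, leaving chroma untouched."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```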
[5783] Challenges in Video Based Object Detection in Maritime Scenario Using Computer Vision
Authors: Dilip K. Prasad, C. Krishna Prasath, Deepu Rajan, Lily Rachmawati, Eshan Rajabally, Chai Quek
Abstract: This paper discusses the technical challenges in maritime image processing and machine vision problems for video streams generated by cameras. Even the well-documented problems of horizon detection and frame registration are very challenging in maritime scenarios, and the more advanced problems of background subtraction and object detection in video streams are more challenging still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the challenges discussed here.
Keywords: autonomous maritime vehicle, object detection, situation awareness, tracking
Procedia: https://publications.waset.org/abstracts/54887/challenges-in-video-based-object-detection-in-maritime-scenario-using-computer-vision | PDF: https://publications.waset.org/abstracts/54887.pdf | Downloads: 458
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20maritime%20vehicle" title="autonomous maritime vehicle">autonomous maritime vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=situation%20awareness" title=" situation awareness"> situation awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/54887/challenges-in-video-based-object-detection-in-maritime-scenario-using-computer-vision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54887.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">458</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5782</span> Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao-Hong%20Tsai">Yao-Hong Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the sensor technology, video surveillance has become the main way for security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group or object, or the investigation of crime. Many surveillance systems based on computer vision technology have been developed in recent years. Moving target tracking is the most common task for Unmanned Aerial Vehicle (UAV) to find and track objects of interest in mobile aerial surveillance for civilian applications. The paper is focused on vision-based collision avoidance for UAVs by recurrent neural networks. First, images from cameras on UAV were fused based on deep convolutional neural network. Then, a recurrent neural network was constructed to obtain high-level image features for object tracking and extracting low-level image features for noise reducing. The system distributed the calculation of the whole system to local and cloud platform to efficiently perform object detection, tracking and collision avoidance based on multiple UAVs. The experiments on several challenging datasets showed that the proposed algorithm outperforms the state-of-the-art methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title="unmanned aerial vehicle">unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=collision%20avoidance" title=" collision avoidance"> collision avoidance</a> </p> <a href="https://publications.waset.org/abstracts/99181/vision-based-collision-avoidance-for-unmanned-aerial-vehicles-by-recurrent-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99181.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5781</span> Facility Detection from Image Using Mathematical Morphology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=In-Geun%20Lim">In-Geun Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Woong%20Ra"> Sung-Woong Ra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As high resolution satellite images can be used, lots of studies are carried out for exploiting these images in various fields. This paper proposes the method based on mathematical morphology for extracting the ‘horse's hoof shaped object’. This proposed method can make an automatic object detection system to track the meaningful object in a large satellite image rapidly. Mathematical morphology process can apply in binary image, so this method is very simple. Therefore this method can easily extract the ‘horse's hoof shaped object’ from any images which have indistinct edges of the tracking object and have different image qualities depending on filming location, filming time, and filming environment. Using the proposed method by which ‘horse's hoof shaped object’ can be rapidly extracted, the performance of the automatic object detection system can be improved dramatically. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facility%20detection" title="facility detection">facility detection</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title=" satellite image"> satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=object" title=" object"> object</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20morphology" title=" mathematical morphology"> mathematical morphology</a> </p> <a href="https://publications.waset.org/abstracts/67611/facility-detection-from-image-using-mathematical-morphology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67611.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">382</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5780</span> Lyapunov-Based Tracking Control for Nonholonomic Wheeled Mobile Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raouf%20Fareh">Raouf Fareh</a>, <a href="https://publications.waset.org/abstracts/search?q=Maarouf%20Saad"> Maarouf Saad</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofiane%20Khadraoui"> Sofiane Khadraoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Tamer%20Rabie"> Tamer Rabie </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a tracking control strategy based on Lyapunov approach for nonholonomic wheeled mobile robot. This control strategy consists of two levels. First, a kinematic controller is developed to adjust the right and left wheel velocities. Using this velocity control law, the stability of the tracking error is guaranteed using Lyapunov approach. This kinematic controller cannot be generated directly by the motors. To overcome this problem, the second level of the controllers, dynamic control, is designed. This dynamic control law is developed based on Lyapunov theory in order to track the desired trajectories of the mobile robot. The stability of the tracking error is proved using Lupunov and Barbalat approaches. Simulation results on a nonholonomic wheeled mobile robot are given to demonstrate the feasibility and effectiveness of the presented approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobile%20robot" title="mobile robot">mobile robot</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20tracking" title=" trajectory tracking"> trajectory tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Lyapunov" title=" Lyapunov"> Lyapunov</a>, <a href="https://publications.waset.org/abstracts/search?q=stability" title=" stability"> stability</a> </p> <a href="https://publications.waset.org/abstracts/50751/lyapunov-based-tracking-control-for-nonholonomic-wheeled-mobile-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50751.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5779</span> Object Trajectory Extraction by Using Mean of Motion Vectors Form Compressed Video Bitstream</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ching-Ting%20Hsu">Ching-Ting Hsu</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hua%20Ho"> Wei-Hua Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Yi-Chun%20Chang"> Yi-Chun Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video object tracking is one of the popular research topics in computer graphics area. The trajectory can be applied in security, traffic control, even the sports training. The trajectory for sports training can be utilized to analyze the athlete’s performance without traditional sensors. There are many relevant works which utilize mean shift algorithm with background subtraction. This kind of the schemes should select a kernel function which may affect the accuracy and performance. In this paper, we consider the motion information in the pre-coded bitstream. The proposed algorithm extracts the trajectory by composing the motion vectors from the pre-coded bitstream. We gather the motion vectors from the overlap area of the object and calculate mean of the overlapped motion vectors. We implement and simulate our proposed algorithm in H.264 video codec. The performance is better than relevant works and keeps the accuracy of the object trajectory. The experimental results show that the proposed trajectory extraction can extract trajectory form the pre-coded bitstream in high accuracy and achieve higher performance other relevant works. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.264" title="H.264">H.264</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20bitstream" title=" video bitstream"> video bitstream</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20object%20tracking" title=" video object tracking"> video object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=sports%20training" title=" sports training"> sports training</a> </p> <a href="https://publications.waset.org/abstracts/34740/object-trajectory-extraction-by-using-mean-of-motion-vectors-form-compressed-video-bitstream" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">428</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5778</span> Monocular 3D Person Tracking AIA Demographic Classification and Projective Image Processing </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=McClain%20Thiel">McClain Thiel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection and localization has historically required two or more sensors due to the loss of information from 3D to 2D space, however, most surveillance systems currently in use in the real world only have one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or more recent relevance, contract tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object detection convolutional-nets, facial landmark detection, and projective geometry. This approach involves classifying the target into a demographic category and then making assumptions about the relative locations of facial landmarks from the demographic information, and from there using simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although severely lacking, suggests reasonable success in 3D tracking under ideal conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20distancing" title="monocular distancing">monocular distancing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20analysis" title=" facial analysis"> facial analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20localization" title=" 3D localization "> 3D localization </a> </p> <a href="https://publications.waset.org/abstracts/129037/monocular-3d-person-tracking-aia-demographic-classification-and-projective-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5777</span> Vision Based People Tracking System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boukerch%20Haroun">Boukerch Haroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Luo%20Qing%20Sheng"> Luo Qing Sheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Hua%20Shi"> Li Hua Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Boukraa%20Sebti"> Boukraa Sebti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present the design and the implementation of a target tracking system where the target is set to be a moving person in a video sequence. The system can be applied easily as a vision system for mobile robot. The system is composed of two major parts the first is the detection of the person in the video frame using the SVM learning machine based on the &ldquo;HOG&rdquo; descriptors. The second part is the tracking of a moving person it&rsquo;s done by using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm by adding the target motion feature to the color feature, the experimental results had shown that the new algorithm had overcame the traditional Camshift algorithm in robustness and in case of occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camshift%20algorithm" title="camshift algorithm">camshift algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a> </p> <a href="https://publications.waset.org/abstracts/2264/vision-based-people-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5776</span> Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Fang">Cheng Fang</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingwei%20Quan"> Lingwei Quan</a>, <a href="https://publications.waset.org/abstracts/search?q=Cunyue%20Lu"> Cunyue Lu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human pose estimation and tracking are to accurately identify and locate the positions of human joints in the video. It is a computer vision task which is of great significance for human motion recognition, behavior understanding and scene analysis. There has been remarkable progress on human pose estimation in recent years. However, more researches are needed for human pose tracking especially for online tracking. In this paper, a framework, called PoseSRPN, is proposed for online single-person pose estimation and tracking. We use Siamese network attaching a pose estimation branch to incorporate Single-person Pose Tracking (SPT) and Visual Object Tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling and convolution network structure with deconvolution. By augmenting the loss of fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN only relies on a single bounding box initialization and producing human joints location. The experimental results show that while maintaining the good accuracy of pose estimation on COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frame/s, which is superior to other pose tracking frameworks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20tracking" title=" pose tracking"> pose tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/112839/online-pose-estimation-and-tracking-approach-with-siamese-region-proposal-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5775</span> Integrated Target Tracking and Control for Automated Car-Following of Truck Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fadwa%20Alaskar">Fadwa Alaskar</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang-Chieh%20Chou"> Fang-Chieh Chou</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Flores"> Carlos Flores</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiao-Yun%20Lu"> Xiao-Yun Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandre%20M.%20Bayen"> Alexandre M. Bayen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article proposes a perception model for enhancing the accuracy and stability of car-following control of a longitudinally automated truck. We applied a fusion-based tracking algorithm on measurements of a single preceding vehicle needed for car-following control. This algorithm fuses two types of data, radar and LiDAR data, to obtain more accurate and robust longitudinal perception of the subject vehicle in various weather conditions. The filter’s resulting signals are fed to the gap control algorithm at every tracking loop composed by a high-level gap control and lower acceleration tracking system. Several highway tests have been performed with two trucks. The tests show accurate and fast tracking of the target, which impacts on the gap control loop positively. The experiments also show the fulfilment of control design requirements, such as fast speed variations tracking and robust time gap following. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20fusion" title=" sensor fusion"> sensor fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20cruise%20control" title=" adaptive cruise control"> adaptive cruise control</a>, <a href="https://publications.waset.org/abstracts/search?q=cooperative%20adaptive%20cruise%20control" title=" cooperative adaptive cruise control"> cooperative adaptive cruise control</a> </p> <a href="https://publications.waset.org/abstracts/140234/integrated-target-tracking-and-control-for-automated-car-following-of-truck-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">229</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5774</span> Investigating Dynamic Transition Process of Issues Using Unstructured Text Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Myungsu%20Lim">Myungsu Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=William%20Xiu%20Shun%20Wong"> William Xiu Shun Wong</a>, <a href="https://publications.waset.org/abstracts/search?q=Yoonjin%20Hyun"> Yoonjin Hyun</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen%20Liu"> Chen Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Seongi%20Choi"> Seongi Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Dasom%20Kim"> Dasom Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Namgyu%20Kim"> Namgyu Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The amount of real-time data generated through various mass media has been increasing rapidly. In this study, we had performed topic analysis by using the unstructured text data that is distributed through news article. As one of the most prevalent applications of topic analysis, the issue tracking technique investigates the changes of the social issues that identified through topic analysis. Currently, traditional issue tracking is conducted by identifying the main topics of documents that cover an entire period at the same time and analyzing the occurrence of each topic by the period of occurrence. However, this traditional issue tracking approach has limitation that it cannot discover dynamic mutation process of complex social issues. The purpose of this study is to overcome the limitations of the existing issue tracking method. We first derived core issues of each period, and then discover the dynamic mutation process of various issues. In this study, we further analyze the mutation process from the perspective of the issues categories, in order to figure out the pattern of issue flow, including the frequency and reliability of the pattern. In other words, this study allows us to understand the components of the complex issues by tracking the dynamic history of issues. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Data%20Mining" title="Data Mining">Data Mining</a>, <a href="https://publications.waset.org/abstracts/search?q=Issue%20Tracking" title=" Issue Tracking"> Issue Tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Text%20Mining" title=" Text Mining"> Text Mining</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20Analysis" title=" topic Analysis"> topic Analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20Detection" title=" topic Detection"> topic Detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Trend%20Detection" title=" Trend Detection"> Trend Detection</a> </p> <a href="https://publications.waset.org/abstracts/29251/investigating-dynamic-transition-process-of-issues-using-unstructured-text-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29251.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5773</span> A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiang%20Zhang">Xiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Daohong%20Yang"> Daohong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ziyuan%20Wu"> Ziyuan Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Li"> Lei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Wanting%20Zhou"> Wanting Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments and realize simultaneous localization and environment map construction; it has a wide range of applications in autonomous driving, virtual reality, and other related fields. Current VSLAM research can maintain high accuracy in static environments, but in dynamic environments the movement of objects in the scene reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even outright failure. In this paper, a robust VSLAM method is proposed to deal effectively with dynamic environments: a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. First, a semantic segmentation network is used to extract the prior active motion regions, prior static regions, and prior passive motion regions in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static regions. A motion consistency detection module based on multi-view geometry and scene flow divides the environment into static and dynamic regions, and the dynamic object regions are thus eliminated.
Finally, only the static regions are used by the tracking thread. Our work builds on ORBSLAM3, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORBSLAM3 by 70%-98.5% in highly dynamic environments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20scene" title="dynamic scene">dynamic scene</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20visual%20SLAM" title=" dynamic visual SLAM"> dynamic visual SLAM</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=scene%20flow" title=" scene flow"> scene flow</a>, <a href="https://publications.waset.org/abstracts/search?q=VSLAM" title=" VSLAM"> VSLAM</a> </p> <a href="https://publications.waset.org/abstracts/164349/a-robust-visual-simultaneous-localization-and-mapping-for-indoor-dynamic-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">116</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5772</span> Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Zhang">Qiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojian%20Hu"> Xiaojian Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study presents a simple and fast, yet accurate and robust, solution to the inaccurate and untimely responses of statistics-based adaptive traffic control systems in the intersection scenario: a real-time tracking system, based on feature selection in combination with color attribution, that efficiently tracks multiple vehicles in a video with minimal error. Considering the complexity and application feasibility of the algorithm, in the object detection step the detection results provided by virtual loops are post-processed and then used as the input to the tracker. For the tracker, lightweight methods were designed to extract and select features and incorporate them into the adaptive color tracking (ACT) framework, and suitable online feature selection algorithms are integrated into the mature ACT system with good compatibility. The proposed feature selection and multi-vehicle tracking methods are evaluated on the KITTI datasets and show efficient vehicle tracking performance compared to other state-of-the-art approaches in the same category. The system also performs excellently on video sequences recorded at intersections, making it suitable for surveillance applications.
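<p class="card-text"><em>Illustrative sketch:</em> one standard online feature-selection criterion for color-based tracking, the variance ratio of the log-likelihood of object versus background histograms, shown here as a plausible stand-in for the selection step; whether the paper uses exactly this criterion is an assumption.</p> <pre><code class="language-python">
import numpy as np

def variance_ratio(fg_vals, bg_vals, bins=32, value_range=(0, 255)):
    """Score a candidate color feature by how well it separates the vehicle
    (fg) from its surroundings (bg): between-class variance of the
    log-likelihood ratio divided by the within-class variances.
    Higher score = more discriminative feature."""
    eps = 1e-6
    p, _ = np.histogram(fg_vals, bins=bins, range=value_range, density=True)
    q, _ = np.histogram(bg_vals, bins=bins, range=value_range, density=True)
    L = np.log((p + eps) / (q + eps))
    var = lambda w: np.average((L - np.average(L, weights=w)) ** 2, weights=w)
    return var((p + q) / 2) / (var(p) + var(q) + eps)

# Each frame, score every candidate channel (R, G, B, hue, ...) and hand the
# most discriminative one to the adaptive color tracker.
</code></pre>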
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real-time" title="real-time">real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-vehicle%20tracking" title=" multi-vehicle tracking"> multi-vehicle tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20attribution" title=" color attribution"> color attribution</a> </p> <a href="https://publications.waset.org/abstracts/136438/real-time-multi-vehicle-tracking-application-at-intersections-based-on-feature-selection-in-combination-with-color-attribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5771</span> Iterative Linear Quadratic Regulator (iLQR) vs LQR Controllers for Quadrotor Path Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wesam%20Jasim">Wesam Jasim</a>, <a href="https://publications.waset.org/abstracts/search?q=Dongbing%20Gu"> Dongbing Gu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an iterative linear quadratic regulator optimal control technique to solve the problem of quadrotors path tracking. The dynamic motion equations are represented based on unit quaternion representation and include some modelled aerodynamical effects as a nonlinear part. Simulation results prove the ability and effectiveness of iLQR to stabilize the quadrotor and successfully track different paths. It also shows that iLQR controller outperforms LQR controller in terms of fast convergence and tracking errors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iLQR%20controller" title="iLQR controller">iLQR controller</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20control" title=" optimal control"> optimal control</a>, <a href="https://publications.waset.org/abstracts/search?q=path%20tracking" title=" path tracking"> path tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=quadrotor%20UAVs" title=" quadrotor UAVs"> quadrotor UAVs</a> </p> <a href="https://publications.waset.org/abstracts/51436/iterative-linear-quadratic-regulator-ilqr-vs-lqr-controllers-for-quadrotor-path-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51436.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5770</span> Design and Implementation of a Bluetooth-Based Misplaced Object Finder Using DFRobot Arduino Interfaced with Led and Buzzer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bright%20Emeni">Bright Emeni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project is a system that allows users to locate their misplaced or lost devices by using Bluetooth technology. It utilizes the DFRobot Bettle BLE Arduino microcontroller as its main component for communication and control. By interfacing it with an LED and a buzzer, the system provides visual and auditory signals to assist in locating the target device. The search process can be initiated through an Android application, by which the system creates a Bluetooth connection between the microcontroller and the target device, permitting the exchange of signals for tracking purposes. When the device is within range, the LED indicator illuminates, and the buzzer produces audible alerts, guiding the user to the device's location. The application also provides an estimated distance of the object using Bluetooth signal strength. The project’s goal is to offer a practical and efficient solution for finding misplaced devices, leveraging the capabilities of Bluetooth technology and microcontroller-based control systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bluetooth%20finder" title="Bluetooth finder">Bluetooth finder</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20finder" title=" object finder"> object finder</a>, <a href="https://publications.waset.org/abstracts/search?q=Bluetooth%20tracking" title=" Bluetooth tracking"> Bluetooth tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracker" title=" tracker"> tracker</a> </p> <a href="https://publications.waset.org/abstracts/179777/design-and-implementation-of-a-bluetooth-based-misplaced-object-finder-using-dfrobot-arduino-interfaced-with-led-and-buzzer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">65</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5769</span> Dynamic Background Updating for Lightweight Moving Object Detection </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kelemewerk%20Destalem">Kelemewerk Destalem</a>, <a href="https://publications.waset.org/abstracts/search?q=Joongjae%20Cho"> Joongjae Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaeseong%20Lee"> Jaeseong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H. Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonhyuk%20Yoo"> Joonhyuk Yoo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background subtraction and temporal difference are often used for moving object detection in video. Both approaches are computationally simple and easy to be deployed in real-time image processing. However, while the background subtraction is highly sensitive to dynamic background and illumination changes, the temporal difference approach is poor at extracting relevant pixels of the moving object and at detecting the stopped or slowly moving objects in the scene. In this paper, we propose a moving object detection scheme based on adaptive background subtraction and temporal difference exploiting dynamic background updates. The proposed technique consists of a histogram equalization, a linear combination of background and temporal difference, followed by the novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm can solve the drawbacks of both background subtraction and temporal difference methods and can provide better performance than that of each method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=background%20subtraction" title="background subtraction">background subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20updating" title=" background updating"> background updating</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20time" title=" real time"> real time</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20weight%20algorithm" title=" light weight algorithm"> light weight algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20difference" title=" temporal difference"> temporal difference</a> </p> <a href="https://publications.waset.org/abstracts/31063/dynamic-background-updating-for-lightweight-moving-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31063.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5768</span> Trajectory Tracking of a 2-Link Mobile Manipulator Using Sliding Mode Control Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abolfazl%20Mohammadijoo">Abolfazl Mohammadijoo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we are investigating the sliding mode control approach for trajectory tracking of a two-link-manipulator with a wheeled mobile robot in its base. The main challenge of this work is the dynamic interaction between mobile base and manipulator, which makes trajectory tracking more difficult than n-link manipulators with a fixed base. Another challenging part of this work is to avoid from chattering phenomenon of sliding mode control that makes lots of damages for actuators in real industrial cases. The results show the effectiveness of the sliding mode control approach for the desired trajectory. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobile%20manipulator" title="mobile manipulator">mobile manipulator</a>, <a href="https://publications.waset.org/abstracts/search?q=sliding%20mode%20control" title=" sliding mode control"> sliding mode control</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20interaction" title=" dynamic interaction"> dynamic interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20robotics" title=" mobile robotics"> mobile robotics</a> </p> <a href="https://publications.waset.org/abstracts/128498/trajectory-tracking-of-a-2-link-mobile-manipulator-using-sliding-mode-control-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128498.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">189</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=193">193</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=194">194</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20object%20tracking&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
