
Search results for: tracking target identification

aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="tracking target identification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 6285</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: tracking target identification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6285</span> Multi-Sensor Target Tracking Using Ensemble Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhekisipho%20Twala">Bhekisipho Twala</a>, <a href="https://publications.waset.org/abstracts/search?q=Mantepu%20Masetshaba"> Mantepu Masetshaba</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramapulana%20Nkoana"> Ramapulana Nkoana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multiple classifier systems combine several individual classifiers to deliver a final classification decision. However, an increasingly controversial question is whether such systems can outperform the single best classifier, and if so, what form of multiple classifiers system yields the most significant benefit. Also, multi-target tracking detection using multiple sensors is an important research field in mobile techniques and military applications. In this paper, several multiple classifiers systems are evaluated in terms of their ability to predict a system’s failure or success for multi-sensor target tracking tasks. The Bristol Eden project dataset is utilised for this task. Experimental and simulation results show that the human activity identification system can fulfill requirements of target tracking due to improved sensors classification performances with multiple classifier systems constructed using boosting achieving higher accuracy rates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20classifier" title="single classifier">single classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-target%20tracking" title=" multi-target tracking"> multi-target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20classifiers" title=" multiple classifiers"> multiple classifiers</a> </p> <a href="https://publications.waset.org/abstracts/140816/multi-sensor-target-tracking-using-ensemble-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6284</span> Fast and Scale-Adaptive Target Tracking via PCA-SIFT</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang">Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongchang%20Chen"> Hongchang Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li"> Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiangpeng%20Zhang"> Jiangpeng Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the main challenge for target tracking is accounting for target scale change and real-time, we combine Mean-Shift and PCA-SIFT algorithm together to solve the problem. We introduce similarity comparison method to determine how the target scale changes, and taking different strategies according to different situation. For target scale getting larger will cause location error, we employ backward tracking to reduce the error. Mean-Shift algorithm has poor performance when tracking scale-changing target due to the fixed bandwidth of its kernel function. In order to overcome this problem, we introduce PCA-SIFT matching. Through key point matching between target and template, that adjusting the scale of tracking window adaptively can be achieved. Because this algorithm is sensitive to wrong match, we introduce RANSAC to reduce mismatch as far as possible. Furthermore target relocating will trigger when number of match is too small. In addition we take comprehensive consideration about target deformation and error accumulation to put forward a new template update method. Experiments on five image sequences and comparison with 6 kinds of other algorithm demonstrate favorable performance of the proposed tracking algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title="target tracking">target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA-SIFT" title=" PCA-SIFT"> PCA-SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=mean-shift" title=" mean-shift"> mean-shift</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-adaptive" title=" scale-adaptive"> scale-adaptive</a> </p> <a href="https://publications.waset.org/abstracts/19009/fast-and-scale-adaptive-target-tracking-via-pca-sift" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6283</span> OFDM Radar for High Accuracy Target Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahbube%20Eghtesad">Mahbube Eghtesad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For a number of years, the problem of simultaneous detection and tracking of a target has been one of the most relevant and challenging issues in a wide variety of military and civilian systems. We develop methods for detecting and tracking a target using an orthogonal frequency division multiplexing (OFDM) based radar. As a preliminary step we introduce the target trajectory and Gaussian noise model in discrete time form. Then resorting to match filter and Kalman filter we derive a detector and target tracker. After that we propose an OFDM radar in order to achieve further improvement in tracking performance. The motivation for employing multiple frequencies is that the different scattering centers of a target resonate differently at each frequency. Numerical examples illustrate our analytical results, demonstrating the achieved performance improvement due to the OFDM signaling method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=matched%20filter" title="matched filter">matched filter</a>, <a href="https://publications.waset.org/abstracts/search?q=target%20trashing" title=" target trashing"> target trashing</a>, <a href="https://publications.waset.org/abstracts/search?q=OFDM%20radar" title=" OFDM radar"> OFDM radar</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a> </p> <a href="https://publications.waset.org/abstracts/8926/ofdm-radar-for-high-accuracy-target-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6282</span> Obstacle Avoidance Using Image-Based Visual Servoing Based on Deep Reinforcement Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tong%20He">Tong He</a>, <a href="https://publications.waset.org/abstracts/search?q=Long%20Chen"> Long Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Irag%20Mantegh"> Irag Mantegh</a>, <a href="https://publications.waset.org/abstracts/search?q=Wen-Fang%20Xie">Wen-Fang Xie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes an image-based obstacle avoidance and tracking target identification strategy in GPS-degraded or GPS-denied environment for an Unmanned Aerial Vehicle (UAV). The traditional force algorithm for obstacle avoidance could produce local minima area, in which UAV cannot get away obstacle effectively. In order to eliminate it, an artificial potential approach based on harmonic potential is proposed to guide the UAV to avoid the obstacle by using the vision system. And image-based visual servoing scheme (IBVS) has been adopted to implement the proposed obstacle avoidance approach. In IBVS, the pixel accuracy is a key factor to realize the obstacle avoidance. In this paper, the deep reinforcement learning framework has been applied by reducing pixel errors through constant interaction between the environment and the agent. In addition, the combination of OpenTLD and Tensorflow based on neural network is used to identify the type of tracking target. Numerical simulation in Matlab and ROS GAZEBO show the satisfactory result in target identification and obstacle avoidance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image-based%20visual%20servoing" title="image-based visual servoing">image-based visual servoing</a>, <a href="https://publications.waset.org/abstracts/search?q=obstacle%20avoidance" title=" obstacle avoidance"> obstacle avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification" title=" tracking target identification"> tracking target identification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20reinforcement%20learning" title=" deep reinforcement learning"> deep reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20potential%20approach" title=" artificial potential approach"> artificial potential approach</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/110259/obstacle-avoidance-using-image-based-visual-servoing-based-on-deep-reinforcement-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110259.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6281</span> Scheduling Nodes Activity and Data Communication for Target Tracking in Wireless Sensor Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=AmirHossein%20Mohajerzadeh">AmirHossein Mohajerzadeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Alishahi"> Mohammad Alishahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Aslishahi"> Saeed Aslishahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohsen%20Zabihi"> Mohsen Zabihi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we consider sensor nodes with the capability of measuring the bearings (relative angle to the target). We use geometric methods to select a set of observer nodes which are responsible for collecting data from the target. Considering the characteristics of target tracking applications, it is clear that significant numbers of sensor nodes are usually inactive. Therefore, in order to minimize the total network energy consumption, a set of sensor nodes, called sentinel, is periodically selected for monitoring, controlling the environment and transmitting data through the network. The other nodes are inactive. Furthermore, the proposed algorithm provides a joint scheduling and routing algorithm to transmit data between network nodes and the fusion center (FC) in which not only provides an efficient way to estimate the target position but also provides an efficient target tracking. Performance evaluation confirms the superiority of the proposed algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coverage" title="coverage">coverage</a>, <a href="https://publications.waset.org/abstracts/search?q=routing" title=" routing"> routing</a>, <a href="https://publications.waset.org/abstracts/search?q=scheduling" title=" scheduling"> scheduling</a>, <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title=" target tracking"> target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=wireless%20sensor%20networks" title=" wireless sensor networks"> wireless sensor networks</a> </p> <a href="https://publications.waset.org/abstracts/46939/scheduling-nodes-activity-and-data-communication-for-target-tracking-in-wireless-sensor-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6280</span> Vision Based People Tracking System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boukerch%20Haroun">Boukerch Haroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Luo%20Qing%20Sheng"> Luo Qing Sheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Hua%20Shi"> Li Hua Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Boukraa%20Sebti"> Boukraa Sebti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present the design and the implementation of a target tracking system where the target is set to be a moving person in a video sequence. The system can be applied easily as a vision system for mobile robot. The system is composed of two major parts the first is the detection of the person in the video frame using the SVM learning machine based on the &ldquo;HOG&rdquo; descriptors. The second part is the tracking of a moving person it&rsquo;s done by using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm by adding the target motion feature to the color feature, the experimental results had shown that the new algorithm had overcame the traditional Camshift algorithm in robustness and in case of occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camshift%20algorithm" title="camshift algorithm">camshift algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a> </p> <a href="https://publications.waset.org/abstracts/2264/vision-based-people-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6279</span> Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Izadkhah">Mohammad Izadkhah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mojtaba%20Hoseini"> Mojtaba Hoseini</a>, <a href="https://publications.waset.org/abstracts/search?q=Alireza%20Khalili%20Tehrani"> Alireza Khalili Tehrani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we presented a new method for tracking flying targets in color video sequences based on contour and kernel. The aim of this work is to overcome the problem of losing target in changing light, large displacement, changing speed, and occlusion. The proposed method is made in three steps, estimate the target location by particle filter, segmentation target region using neural network and find the exact contours by greedy snake algorithm. In the proposed method we have used both region and contour information to create target candidate model and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, target region given to a perceptron neural network to separate the target from background. Then its output used for exact calculation of size and center of the target. Also it is used as the initial contour for the greedy snake algorithm to find the exact target&#39;s edge. The proposed algorithm has been tested on a database which contains a lot of challenges such as high speed and agility of aircrafts, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of neural network increases the accuracy of tracking and segmentation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20tracking" title="video tracking">video tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a>, <a href="https://publications.waset.org/abstracts/search?q=greedy%20snake" title=" greedy snake"> greedy snake</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/11913/particle-filter-supported-with-the-neural-network-for-aircraft-tracking-based-on-kernel-and-active-contour" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">343</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6278</span> An Improved Tracking Approach Using Particle Filter and Background Subtraction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amir%20Mukhtar">Amir Mukhtar</a>, <a href="https://publications.waset.org/abstracts/search?q=Dr.%20Likun%20Xia"> Dr. Likun Xia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An improved, robust and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has been proven very successful in estimating non-Gaussian and non-linear problems. In this paper, the particle filter is used with color feature to estimate the target state with time. Color distributions are applied as this feature is scale and rotational invariant, shows robustness to partial occlusion and computationally efficient. The performance is made more robust by choosing the different (YIQ) color scheme. Tracking is performed by comparison of chrominance histograms of target and candidate positions (particles). Color based particle filter tracking often leads to inaccurate results when light intensity changes during a video stream. Furthermore, background subtraction technique is used for size estimation of the target. The qualitative evaluation of proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track the moving objects very well under illumination changes, occlusion and moving background. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tracking" title="tracking">tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=corner%20points" title=" corner points"> corner points</a>, <a href="https://publications.waset.org/abstracts/search?q=occlusion" title=" occlusion"> occlusion</a>, <a href="https://publications.waset.org/abstracts/search?q=illumination" title=" illumination"> illumination</a> </p> <a href="https://publications.waset.org/abstracts/3223/an-improved-tracking-approach-using-particle-filter-and-background-subtraction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3223.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">381</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6277</span> Tracking Filtering Algorithm Based on ConvLSTM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ailing%20Yang">Ailing Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Penghan%20Song"> Penghan Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Aihua%20Cai"> Aihua Cai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The nonlinear maneuvering target tracking problem is mainly a state estimation problem when the target motion model is uncertain. Traditional solutions include Kalman filtering based on Bayesian filtering framework and extended Kalman filtering. However, these methods need prior knowledge such as kinematics model and state system distribution, and their performance is poor in state estimation of nonprior complex dynamic systems. Therefore, in view of the problems existing in traditional algorithms, a convolution LSTM target state estimation (SAConvLSTM-SE) algorithm based on Self-Attention memory (SAM) is proposed to learn the historical motion state of the target and the error distribution information measured at the current time. The measured track point data of airborne radar are processed into data sets. After supervised training, the data-driven deep neural network based on SAConvLSTM can directly obtain the target state at the next moment. Through experiments on two different maneuvering targets, we find that the network has stronger robustness and better tracking accuracy than the existing tracking methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=maneuvering%20target" title="maneuvering target">maneuvering target</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20estimation" title=" state estimation"> state estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a> </p> <a href="https://publications.waset.org/abstracts/164893/tracking-filtering-algorithm-based-on-convlstm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164893.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6276</span> Fast and Robust Long-term Tracking with Effective Searching Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thang%20V.%20Kieu">Thang V. Kieu</a>, <a href="https://publications.waset.org/abstracts/search?q=Long%20P.%20Nguyen"> Long P. Nguyen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Kernelized Correlation Filter (KCF) based trackers have gained a lot of attention recently because of their accuracy and fast calculation speed. However, this algorithm is not robust in cases where the object is lost by a sudden change of direction, being obscured or going out of view. In order to improve KCF performance in long-term tracking, this paper proposes an anomaly detection method for target loss warning by analyzing the response map of each frame, and a classification algorithm for reliable target re-locating mechanism by using Random fern. Being tested with Visual Tracker Benchmark and Visual Object Tracking datasets, the experimental results indicated that the precision and success rate of the proposed algorithm were 2.92 and 2.61 times higher than that of the original KCF algorithm, respectively. Moreover, the proposed tracker handles occlusion better than many state-of-the-art long-term tracking methods while running at 60 frames per second. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=correlation%20filter" title="correlation filter">correlation filter</a>, <a href="https://publications.waset.org/abstracts/search?q=long-term%20tracking" title=" long-term tracking"> long-term tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20fern" title=" random fern"> random fern</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20tracking" title=" real-time tracking"> real-time tracking</a> </p> <a href="https://publications.waset.org/abstracts/130580/fast-and-robust-long-term-tracking-with-effective-searching-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6275</span> Integrated Target Tracking and Control for Automated Car-Following of Truck Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fadwa%20Alaskar">Fadwa Alaskar</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang-Chieh%20Chou"> Fang-Chieh Chou</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Flores"> Carlos Flores</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiao-Yun%20Lu"> Xiao-Yun Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandre%20M.%20Bayen"> Alexandre M. Bayen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article proposes a perception model for enhancing the accuracy and stability of car-following control of a longitudinally automated truck. We applied a fusion-based tracking algorithm on measurements of a single preceding vehicle needed for car-following control. This algorithm fuses two types of data, radar and LiDAR data, to obtain more accurate and robust longitudinal perception of the subject vehicle in various weather conditions. The filter’s resulting signals are fed to the gap control algorithm at every tracking loop composed by a high-level gap control and lower acceleration tracking system. Several highway tests have been performed with two trucks. The tests show accurate and fast tracking of the target, which impacts on the gap control loop positively. The experiments also show the fulfilment of control design requirements, such as fast speed variations tracking and robust time gap following. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20fusion" title=" sensor fusion"> sensor fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20cruise%20control" title=" adaptive cruise control"> adaptive cruise control</a>, <a href="https://publications.waset.org/abstracts/search?q=cooperative%20adaptive%20cruise%20control" title=" cooperative adaptive cruise control"> cooperative adaptive cruise control</a> </p> <a href="https://publications.waset.org/abstracts/140234/integrated-target-tracking-and-control-for-automated-car-following-of-truck-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">229</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6274</span> Gaussian Particle Flow Bernoulli Filter for Single Target Tracking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyeongbok%20Kim">Hyeongbok Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingling%20Zhao"> Lingling Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaohong%20Su"> Xiaohong Su</a>, <a href="https://publications.waset.org/abstracts/search?q=Junjie%20Wang"> Junjie Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Bernoulli filter is a precise Bayesian filter for single target tracking based on the random finite set theory. The standard Bernoulli filter often underestimates the number of targets. This study proposes a Gaussian particle flow (GPF) Bernoulli filter employing particle flow to migrate particles from prior to posterior positions to improve the performance of the standard Bernoulli filter. By employing the particle flow filter, the computational speed of the Bernoulli filters is significantly improved. In addition, the GPF Bernoulli filter provides a more accurate estimation compared with that of the standard Bernoulli filter. Simulation results confirm the improved tracking performance and computational speed in two- and three-dimensional scenarios compared with other algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bernoulli%20filter" title="Bernoulli filter">Bernoulli filter</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20flow%20filter" title=" particle flow filter"> particle flow filter</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20finite%20sets" title=" random finite sets"> random finite sets</a>, <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title=" target tracking"> target tracking</a> </p> <a href="https://publications.waset.org/abstracts/162210/gaussian-particle-flow-bernoulli-filter-for-single-target-tracking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162210.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6273</span> Person Re-Identification using Siamese Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sello%20Mokwena">Sello Mokwena</a>, <a href="https://publications.waset.org/abstracts/search?q=Monyepao%20Thabang"> Monyepao Thabang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis on benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20network" title="camera network">camera network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network%20topology" title=" convolutional neural network topology"> convolutional neural network topology</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20tracking" title=" person tracking"> person tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=siamese" title=" siamese"> siamese</a> </p> <a href="https://publications.waset.org/abstracts/171989/person-re-identification-using-siamese-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6272</span> Human Tracking across Heterogeneous Systems Based on Mobile Agent Technologies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tappei%20Yotsumoto">Tappei Yotsumoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Atsushi%20Nomura"> Atsushi Nomura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kozo%20Tanigawa"> Kozo Tanigawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenichi%20Takahashi"> Kenichi Takahashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Takao%20Kawamura"> Takao Kawamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazunori%20Sugahara"> Kazunori Sugahara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In a human tracking system, expanding a monitoring range of one system is complicating the management of devices and increasing its cost. Therefore, we propose a method to realize a wide-range human tracking by connecting small systems. In this paper, we examined an agent deploy method and information contents across the heterogeneous human tracking systems. By implementing the proposed method, we can construct a human tracking system across heterogeneous systems, and the system can track a target continuously between systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20tracking%20system" title="human tracking system">human tracking system</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agent" title=" mobile agent"> mobile agent</a>, <a href="https://publications.waset.org/abstracts/search?q=monitoring" title=" monitoring"> monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=heterogeneous%20systems" title=" heterogeneous systems"> heterogeneous systems</a> </p> <a href="https://publications.waset.org/abstracts/11702/human-tracking-across-heterogeneous-systems-based-on-mobile-agent-technologies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11702.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">536</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6271</span> Visual Servoing for Quadrotor UAV Target Tracking: Effects of Target Information Sharing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jason%20R.%20King">Jason R. King</a>, <a href="https://publications.waset.org/abstracts/search?q=Hugh%20H.%20T.%20Liu"> Hugh H. T. Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research presents simulation and experimental work in the visual servoing of a quadrotor Unmanned Aerial Vehicle (UAV) to stabilize overtop of a moving target. Most previous work in the field assumes static or slow-moving, unpredictable targets. In this experiment, the target is assumed to be a friendly ground robot moving freely on a horizontal plane, which shares information with the UAV. This information includes velocity and acceleration information of the ground target to aid the quadrotor in its tracking task. The quadrotor is assumed to have a downward-facing camera which is fixed to the frame of the quadrotor. Only onboard sensing for the quadrotor is utilized for the experiment, with a VICON motion capture system in place used only to measure ground truth and evaluate the performance of the controller. The experimental platform consists of an ArDrone 2.0 and a Create Roomba, communicating using Robot Operating System (ROS). The addition of the target’s information is demonstrated to help the quadrotor in its tracking task using simulations of the dynamic model of a quadrotor in Matlab Simulink. A nested PID control loop is utilized for inner-loop control the quadrotor, similar to previous works at the Flight Systems and Controls Laboratory (FSC) at the University of Toronto Institute for Aerospace Studies (UTIAS). Experiments are performed with ground truth provided by an indoor motion capture system, and the results are analyzed. It is demonstrated that a velocity controller which incorporates the additional information is able to perform better than the controllers which do not have access to the target’s information. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quadrotor" title="quadrotor">quadrotor</a>, <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title=" target tracking"> target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV" title=" UAV"> UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=UAS" title=" UAS"> UAS</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20servoing" title=" visual servoing"> visual servoing</a> </p> <a href="https://publications.waset.org/abstracts/56269/visual-servoing-for-quadrotor-uav-target-tracking-effects-of-target-information-sharing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56269.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">341</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6270</span> Development of Application Architecture for RFID Based Indoor Tracking Using Passive RFID Tag</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sumaya%20Ismail">Sumaya Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Aijaz%20Ahmad%20Rehi"> Aijaz Ahmad Rehi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Abstract The location tracking and positioning systems have technologically grown exponentially in recent decade. In particular, Global Position system (GPS) has become a universal norm to be a part of almost every software application directly or indirectly for the location based modules. However major drawback of GPS based system is their inability of working in indoor environments. Researchers are thus focused on the alternative technologies which can be used in indoor environments for a vast range of application domains which require indoor location tracking. One of the most popular technology used for indoor tracking is radio frequency identification (RFID). Due to its numerous advantages, including its cost effectiveness, it is considered as a technology of choice in indoor location tracking systems. To contribute to the emerging trend of the research, this paper proposes an application architecture of passive RFID tag based indoor location tracking system. For the proof of concept, a test bed will be developed to in this study. In addition, various indoor location tracking algorithms will be used to assess their appropriateness in the proposed application architecture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RFID" title="RFID">RFID</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=indoor%20location%20tracking" title=" indoor location tracking"> indoor location tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=application%20architecture" title=" application architecture"> application architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=passive%20RFID%20tag" title=" passive RFID tag"> passive RFID tag</a> </p> <a href="https://publications.waset.org/abstracts/164777/development-of-application-architecture-for-rfid-based-indoor-tracking-using-passive-rfid-tag" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6269</span> Analyze and Visualize Eye-Tracking Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aymen%20Sekhri">Aymen Sekhri</a>, <a href="https://publications.waset.org/abstracts/search?q=Emmanuel%20Kwabena%20Frimpong"> Emmanuel Kwabena Frimpong</a>, <a href="https://publications.waset.org/abstracts/search?q=Bolaji%20Mubarak%20Ayeyemi"> Bolaji Mubarak Ayeyemi</a>, <a href="https://publications.waset.org/abstracts/search?q=Aleksi%20Hirvonen"> Aleksi Hirvonen</a>, <a href="https://publications.waset.org/abstracts/search?q=Matias%20Hirvonen"> Matias Hirvonen</a>, <a href="https://publications.waset.org/abstracts/search?q=Tedros%20Tesfay%20Andemichael"> Tedros Tesfay Andemichael</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fixation identification, which involves isolating and identifying fixations and saccades in eye-tracking protocols, is an important aspect of eye-movement data processing that can have a big impact on higher-level analyses. However, fixation identification techniques are frequently discussed informally and rarely compared in any meaningful way. With two state-of-the-art algorithms, we will implement fixation detection and analysis in this work. The velocity threshold fixation algorithm is the first algorithm, and it identifies fixation based on a threshold value. For eye movement detection, the second approach is U'n' Eye, a deep neural network algorithm. The goal of this project is to analyze and visualize eye-tracking data from an eye gaze dataset that has been provided. The data was collected in a scenario in which individuals were shown photos and asked whether or not they recognized them. The results of the two-fixation detection approach are contrasted and visualized in this paper. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human-computer%20interaction" title="human-computer interaction">human-computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=eye-tracking" title=" eye-tracking"> eye-tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=fixations" title=" fixations"> fixations</a>, <a href="https://publications.waset.org/abstracts/search?q=saccades" title=" saccades"> saccades</a> </p> <a href="https://publications.waset.org/abstracts/149628/analyze-and-visualize-eye-tracking-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">135</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6268</span> Classification of Random Doppler-Radar Targets during the Surveillance Operations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G.%20C.%20Tikkiwal">G. C. Tikkiwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Mukesh%20Upadhyay"> Mukesh Upadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the surveillance operations at war or peace time, the Radar operator gets a scatter of targets over the screen. This may be a tracked vehicle like tank vis-à-vis T72, BMP etc, or it may be a wheeled vehicle like ALS, TATRA, 2.5Tonne, Shaktiman or moving the army, moving convoys etc. The radar operator selects one of the promising targets into single target tracking (STT) mode. Once the target is locked, the operator gets a typical audible signal into his headphones. With reference to the gained experience and training over the time, the operator then identifies the random target. But this process is cumbersome and is solely dependent on the skills of the operator, thus may lead to misclassification of the object. In this paper, we present a technique using mathematical and statistical methods like fast fourier transformation (FFT) and principal component analysis (PCA) to identify the random objects. The process of classification is based on transforming the audible signature of target into music octave-notes. The whole methodology is then automated by developing suitable software. This automation increases the efficiency of identification of the random target by reducing the chances of misclassification. This whole study is based on live data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar%20target" title="radar target">radar target</a>, <a href="https://publications.waset.org/abstracts/search?q=FFT" title=" FFT"> FFT</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis" title=" principal component analysis"> principal component analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=eigenvector" title=" eigenvector"> eigenvector</a>, <a href="https://publications.waset.org/abstracts/search?q=octave-notes" title=" octave-notes"> octave-notes</a>, <a href="https://publications.waset.org/abstracts/search?q=DSP" title=" DSP"> DSP</a> </p> <a href="https://publications.waset.org/abstracts/37430/classification-of-random-doppler-radar-targets-during-the-surveillance-operations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37430.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">394</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6267</span> Application of Principle Component Analysis for Classification of Random Doppler-Radar Targets during the Surveillance Operations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G.%20C.%20Tikkiwal">G. C. Tikkiwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Mukesh%20Upadhyay"> Mukesh Upadhyay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the surveillance operations at war or peace time, the Radar operator gets a scatter of targets over the screen. This may be a tracked vehicle like tank vis-à-vis T72, BMP etc, or it may be a wheeled vehicle like ALS, TATRA, 2.5Tonne, Shaktiman or moving army, moving convoys etc. The Radar operator selects one of the promising targets into Single Target Tracking (STT) mode. Once the target is locked, the operator gets a typical audible signal into his headphones. With reference to the gained experience and training over the time, the operator then identifies the random target. But this process is cumbersome and is solely dependent on the skills of the operator, thus may lead to misclassification of the object. In this paper we present a technique using mathematical and statistical methods like Fast Fourier Transformation (FFT) and Principal Component Analysis (PCA) to identify the random objects. The process of classification is based on transforming the audible signature of target into music octave-notes. The whole methodology is then automated by developing suitable software. This automation increases the efficiency of identification of the random target by reducing the chances of misclassification. This whole study is based on live data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar%20target" title="radar target">radar target</a>, <a href="https://publications.waset.org/abstracts/search?q=fft" title=" fft"> fft</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis" title=" principal component analysis"> principal component analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=eigenvector" title=" eigenvector"> eigenvector</a>, <a href="https://publications.waset.org/abstracts/search?q=octave-notes" title=" octave-notes"> octave-notes</a>, <a href="https://publications.waset.org/abstracts/search?q=dsp" title=" dsp"> dsp</a> </p> <a href="https://publications.waset.org/abstracts/39492/application-of-principle-component-analysis-for-classification-of-random-doppler-radar-targets-during-the-surveillance-operations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39492.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6266</span> Monocular 3D Person Tracking AIA Demographic Classification and Projective Image Processing </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=McClain%20Thiel">McClain Thiel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection and localization has historically required two or more sensors due to the loss of information from 3D to 2D space, however, most surveillance systems currently in use in the real world only have one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or more recent relevance, contract tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object detection convolutional-nets, facial landmark detection, and projective geometry. This approach involves classifying the target into a demographic category and then making assumptions about the relative locations of facial landmarks from the demographic information, and from there using simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although severely lacking, suggests reasonable success in 3D tracking under ideal conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20distancing" title="monocular distancing">monocular distancing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20analysis" title=" facial analysis"> facial analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20localization" title=" 3D localization "> 3D localization </a> </p> <a href="https://publications.waset.org/abstracts/129037/monocular-3d-person-tracking-aia-demographic-classification-and-projective-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6265</span> Identification and Selection of a Supply Chain Target Process for Re-Design</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaime%20A.%20Palma-Mendoza">Jaime A. Palma-Mendoza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A supply chain consists of different processes and when conducting supply chain re-design is necessary to identify the relevant processes and select a target for re-design. A solution was developed which consists to identify first the relevant processes using the Supply Chain Operations Reference (SCOR) model, then to use Analytical Hierarchy Process (AHP) for target process selection. An application was conducted in an Airline MRO supply chain re-design project which shows this combination can clearly aid the identification of relevant supply chain processes and the selection of a target process for re-design. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decision%20support%20systems" title="decision support systems">decision support systems</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20criteria%20analysis" title=" multiple criteria analysis"> multiple criteria analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=supply%20chain%20management" title=" supply chain management "> supply chain management </a> </p> <a href="https://publications.waset.org/abstracts/27912/identification-and-selection-of-a-supply-chain-target-process-for-re-design" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">492</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6264</span> Test and Evaluation of Patient Tracking Platform in an Earthquake Simulation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nahid%20Tavakoli">Nahid Tavakoli</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20H.%20Yarmohammadian"> Mohammad H. 
Yarmohammadian</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Samimi"> Ali Samimi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In an earthquake situation, medical response communities such as field and referral hospitals are challenged with the identification and tracking of injured victims. In our project, we developed a patient tracking platform (PTP) in which first responders triage patients with an electronic tag that reports the location and key information of each patient during his/her movement. This platform includes: 1) near field communication (NFC) tags (ISO 14443), 2) smart mobile phones (Android-based, version 4.2.2), 3) base station laptops (Windows), 4) server software, 5) Android software for use by first responders, 6) disaster command software, and 7) the system architecture. Our model was completed through a literature review, the Delphi technique and a focus group, followed by the design of the platform and its implementation in an earthquake exercise. This paper presents considerations for the content, functions, and technologies that must be applied to patient tracking in medical emergency situations. The robustness of the patient tracking platform (PTP) was demonstrated by tracking 6 patients in a simulated earthquake situation in the yard of the relief and rescue department of Isfahan’s Red Crescent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=test%20and%20evaluation" title="test and evaluation">test and evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=patient%20tracking%20platform" title=" patient tracking platform"> patient tracking platform</a>, <a href="https://publications.waset.org/abstracts/search?q=earthquake" title=" earthquake"> earthquake</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/112288/test-and-evaluation-of-patient-tracking-platform-in-an-earthquake-simulation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6263</span> Pupil Size: A Measure of Identification Memory in Target Present Lineups</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Camilla%20Elphick">Camilla Elphick</a>, <a href="https://publications.waset.org/abstracts/search?q=Graham%20Hole"> Graham Hole</a>, <a href="https://publications.waset.org/abstracts/search?q=Samuel%20Hutton"> Samuel Hutton</a>, <a href="https://publications.waset.org/abstracts/search?q=Graham%20Pike"> Graham Pike</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pupil size has been found to change irrespective of luminosity, suggesting that it can be used to make inferences about cognitive processes, such as cognitive load. To see whether identifying a target requires a different cognitive load to rejecting distractors, the effect of viewing a target (compared with viewing distractors) on pupil size was investigated using a sequential video lineup procedure with two lineup sessions. Forty-one participants were recruited at random through the university. 
Pupil sizes were recorded when viewing pre target distractors and post target distractors and compared to pupil size when viewing the target. Overall, pupil size was significantly larger when viewing the target compared with viewing distractors. In the first session, pupil size changes were significantly different between participants who identified the target (Hits) and those who did not. Specifically, the pupil size of Hits reduced significantly after viewing the target (by 26%), suggesting that cognitive load reduced following identification. The pupil sizes of Misses (who made no identification) and False Alarms (who misidentified a distractor) did not reduce, suggesting that the cognitive load remained high in participants who failed to make the correct identification. In the second session, pupil sizes were smaller overall, suggesting that cognitive load was smaller in this session, and there was no significant difference between Hits, Misses and False Alarms. Furthermore, while the frequency of Hits increased, so did False Alarms. These two findings suggest that the benefits of including a second session remain uncertain, as the second session neither provided greater accuracy nor a reliable way to measure it. It is concluded that pupil size is a measure of face recognition strength in the first session of a target present lineup procedure. However, it is still not known whether cognitive load is an adequate explanation for this, or whether cognitive engagement might describe the effect more appropriately. If cognitive load and cognitive engagement can be teased apart with further investigation, this would have positive implications for understanding eyewitness identification. Nevertheless, this research has the potential to provide a tool for improving the reliability of lineup procedures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20load" title="cognitive load">cognitive load</a>, <a href="https://publications.waset.org/abstracts/search?q=eyewitness%20identification" title=" eyewitness identification"> eyewitness identification</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pupillometry" title=" pupillometry"> pupillometry</a> </p> <a href="https://publications.waset.org/abstracts/65435/pupil-size-a-measure-of-identification-memory-in-target-present-lineups" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65435.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">404</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6262</span> Chipless RFID Capacity Enhancement Using the E-pulse Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haythem%20H.%20Abdullah">Haythem H. 
Abdullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesham%20Elkady"> Hesham Elkady</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the fast increase in radio frequency identification (RFID) applications such as medical recording, library management, etc., the limitation of active tags stems from its need to external batteries as well as passive or active chips. The chipless RFID tag reduces the cost to a large extent but at the expense of utilizing the spectrum. The reduction of the cost of chipless RFID is due to the absence of the chip itself. The identification is done by utilizing the spectrum in such a way that the frequency response of the tags consists of some resonance frequencies that represent the bits. The system capacity is decided by the number of resonators within the pre-specified band. It is important to find a solution to enhance the spectrum utilization when using chipless RFID. Target identification is a process that results in a decision that a specific target is present or not. Several target identification schemes are present, but one of the most successful techniques in radar target identification in the oscillatory region is the extinction pulse technique (E-Pulse). The E-Pulse technique is used to identify targets via its characteristics (natural) modes. By introducing an innovative solution for chipless RFID reader and tag designs, the spectrum utilization goes to the optimum case. In this paper, a novel capacity enhancement scheme based on the E-pulse technique is introduced to improve the performance of the chipless RFID system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chipless%20RFID" title="chipless RFID">chipless RFID</a>, <a href="https://publications.waset.org/abstracts/search?q=E-pulse" title=" E-pulse"> E-pulse</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20modes" title=" natural modes"> natural modes</a>, <a href="https://publications.waset.org/abstracts/search?q=resonators" title=" resonators"> resonators</a> </p> <a href="https://publications.waset.org/abstracts/172234/chipless-rfid-capacity-enhancement-using-the-e-pulse-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6261</span> Determination of Neighbor Node in Consideration of the Imaging Range of Cameras in Automatic Human Tracking System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kozo%20Tanigawa">Kozo Tanigawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Tappei%20Yotsumoto"> Tappei Yotsumoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenichi%20Takahashi"> Kenichi Takahashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Takao%20Kawamura"> Takao Kawamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazunori%20Sugahara"> Kazunori Sugahara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An automatic human tracking system using mobile agent technology is realized because a mobile agent moves in accordance with a 
migration of a target person. In this paper, we propose a method for determining the neighbor node in consideration of the imaging range of cameras. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20tracking" title="human tracking">human tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agent" title=" mobile agent"> mobile agent</a>, <a href="https://publications.waset.org/abstracts/search?q=Pan%2FTilt%2FZoom" title=" Pan/Tilt/Zoom"> Pan/Tilt/Zoom</a>, <a href="https://publications.waset.org/abstracts/search?q=neighbor%20relation" title=" neighbor relation"> neighbor relation</a> </p> <a href="https://publications.waset.org/abstracts/11821/determination-of-neighbor-node-in-consideration-of-the-imaging-range-of-cameras-in-automatic-human-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11821.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6260</span> Tropical Squall Lines in Brazil: A Methodology for Identification and Analysis Based on ISCCP Tracking Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=W.%20A.%20Gon%C3%A7alves">W. A. Gonçalves</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20P.%20Souza"> E. P. Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20R.%20Alc%C3%A2ntara"> C. R. Alcântara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The ISCCP-Tracking database offers an opportunity to study physical and morphological characteristics of Convective Systems based on geostationary meteorological satellites. This database contains 26 years of tracking of Convective Systems for the entire globe. Then, Tropical Squall Lines which occur in Brazil are certainly within the database. In this study, we propose a methodology for identification of these systems based on the ISCCP-Tracking database. A physical and morphological characterization of these systems is also shown. The proposed methodology is firstly based on the year of 2007. The Squall Lines were subjectively identified by visually analyzing infrared images from GOES-12. Based on this identification, the same systems were identified within the ISCCP-Tracking database. It is known, and it was also observed that the Squall Lines which occur on the north coast of Brazil develop parallel to the coast, influenced by the sea breeze. In addition, it was also observed that the eccentricity of the identified systems was greater than 0.7. Then, a methodology based on the inclination (based on the coast) and eccentricity (greater than 0.7) of the Convective Systems was applied in order to identify and characterize Tropical Squall Lines in Brazil. These thresholds were applied back in the ISCCP-Tracking database for the year of 2007. It was observed that other systems, which were not Squall Lines, were also identified. Then, we decided to call all systems identified by the inclination and eccentricity thresholds as Linear Convective Systems, instead of Squall Lines. 
After this step, the Linear Convective Systems were identified and characterized for the entire database, from 1983 to 2008. The physical and morphological characteristics of these systems were compared to those systems which did not have the required inclination and eccentricity to be called Linear Convective Systems. The results showed that the convection associated with the Linear Convective Systems seems to be more intense and organized than in the other systems. This affirmation is based on all ISCCP-Tracking variables analyzed. This type of methodology, which explores 26 years of satellite data by an objective analysis, was not previously explored in the literature. The physical and morphological characterization of the Linear Convective Systems based on 26 years of data is of a great importance and should be used in many branches of atmospheric sciences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=squall%20lines" title="squall lines">squall lines</a>, <a href="https://publications.waset.org/abstracts/search?q=convective%20systems" title=" convective systems"> convective systems</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20convective%20systems" title=" linear convective systems"> linear convective systems</a>, <a href="https://publications.waset.org/abstracts/search?q=ISCCP-Tracking" title=" ISCCP-Tracking"> ISCCP-Tracking</a> </p> <a href="https://publications.waset.org/abstracts/68608/tropical-squall-lines-in-brazil-a-methodology-for-identification-and-analysis-based-on-isccp-tracking-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68608.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">301</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6259</span> Adaptive Online Object Tracking via Positive and Negative Models Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li">Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang"> Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To improve tracking drift which often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, tracking object frame by frame via particle filtering. Thirdly, validating the tracking reliability based on both positive and negative models matching. Finally, relocating the object based on SIFT features matching and voting when drift occurs. Object appearance model is updated at the same time. The algorithm cannot only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences. 
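<p class="card-text">The drift test at the heart of the abstract above, accepting a tracked patch only while it matches the positive model better than the negative one, can be sketched as follows. Normalised cross-correlation stands in here for the paper's partial least squares models, so this is an assumption-laden illustration rather than the proposed algorithm.</p> <pre><code class="language-python">
# Minimal sketch of the positive/negative model check used to sense drift.
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def tracking_reliable(patch, positive_model, negative_model, margin=0.1):
    """Flag drift when the background model explains the patch better."""
    return ncc(patch, positive_model) - ncc(patch, negative_model) > margin

# Hypothetical 32x32 grayscale patches: object template vs. background clutter
rng = np.random.default_rng(1)
obj = rng.random((32, 32))
bg = rng.random((32, 32))
print(tracking_reliable(obj + 0.05 * rng.random((32, 32)), obj, bg))  # True
</code></pre>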
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title="object tracking">object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking%20drift" title=" tracking drift"> tracking drift</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20least%20squares%20analysis" title=" partial least squares analysis"> partial least squares analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20and%20negative%20models%20matching" title=" positive and negative models matching"> positive and negative models matching</a> </p> <a href="https://publications.waset.org/abstracts/19382/adaptive-online-object-tracking-via-positive-and-negative-models-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">530</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6258</span> Model-Free Distributed Control of Dynamical Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javad%20Khazaei">Javad Khazaei</a>, <a href="https://publications.waset.org/abstracts/search?q=Rick%20Blum"> Rick Blum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Distributed control is an efficient and flexible approach for coordination of multi-agent systems. One of the main challenges in designing a distributed controller is identifying the governing dynamics of the dynamical systems. Data-driven system identification is currently undergoing a revolution. With the availability of high-fidelity measurements and historical data, model-free identification of dynamical systems can facilitate the control design without tedious modeling of high-dimensional and/or nonlinear systems. This paper develops a distributed control design using consensus theory for linear and nonlinear dynamical systems using sparse identification of system dynamics. Compared with existing consensus designs that heavily rely on knowing the detailed system dynamics, the proposed model-free design can accurately capture the dynamics of the system with available measurements and input data and provide guaranteed performance in consensus and tracking problems. Heterogeneous damped oscillators are chosen as examples of dynamical system for validation purposes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=consensus%20tracking" title="consensus tracking">consensus tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20control" title=" distributed control"> distributed control</a>, <a href="https://publications.waset.org/abstracts/search?q=model-free%20control" title=" model-free control"> model-free control</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20identification%20of%20dynamical%20systems" title=" sparse identification of dynamical systems"> sparse identification of dynamical systems</a> </p> <a href="https://publications.waset.org/abstracts/144452/model-free-distributed-control-of-dynamical-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144452.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6257</span> Translation Directionality: An Eye Tracking Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elahe%20Kamari">Elahe Kamari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Research on translation process has been conducted for more than 20 years, investigating various issues and using different research methodologies. Most recently, researchers have started to use eye tracking to study translation processes. They believed that the observable, measurable data that can be gained from eye tracking are indicators of unobservable cognitive processes happening in the translators’ mind during translation tasks. The aim of this study was to investigate directionality in translation processes through using eye tracking. The following hypotheses were tested: 1) processing the target text requires more cognitive effort than processing the source text, in both directions of translation; 2) L2 translation tasks on the whole require more cognitive effort than L1 tasks; 3) cognitive resources allocated to the processing of the source text is higher in L1 translation than in L2 translation; 4) cognitive resources allocated to the processing of the target text is higher in L2 translation than in L1 translation; and 5) in both directions non-professional translators invest more cognitive effort in translation tasks than do professional translators. The performance of a group of 30 male professional translators was compared with that of a group of 30 male non-professional translators. All the participants translated two comparable texts one into their L1 (Persian) and the other into their L2 (English). The eye tracker measured gaze time, average fixation duration, total task length and pupil dilation. These variables are assumed to measure the cognitive effort allocated to the translation task. The data derived from eye tracking only confirmed the first hypothesis. This hypothesis was confirmed by all the relevant indicators: gaze time, average fixation duration and pupil dilation. The second hypothesis that L2 translation tasks requires allocation of more cognitive resources than L1 translation tasks has not been confirmed by all four indicators. 
The third hypothesis that source text processing requires more cognitive resources in L1 translation than in L2 translation and the fourth hypothesis that target text processing requires more cognitive effort in L2 translation than L1 translation were not confirmed. It seems that source text processing in L2 translation can be just as demanding as in L1 translation. The final hypothesis that non-professional translators allocate more cognitive resources for the same translation tasks than do the professionals was partially confirmed. One of the indicators, average fixation duration, indicated higher cognitive effort-related values for professionals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=translation%20processes" title="translation processes">translation processes</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20tracking" title=" eye tracking"> eye tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20resources" title=" cognitive resources"> cognitive resources</a>, <a href="https://publications.waset.org/abstracts/search?q=directionality" title=" directionality"> directionality</a> </p> <a href="https://publications.waset.org/abstracts/36599/translation-directionality-an-eye-tracking-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36599.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6256</span> Iot-Based Interactive Patient Identification and Safety Management System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jonghoon%20Chun">Jonghoon Chun</a>, <a href="https://publications.waset.org/abstracts/search?q=Insung%20Kim"> Insung Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonghyun%20Lim"> Jonghyun Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Gun%20Ro"> Gun Ro</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We believe that it is possible to provide a solution to reduce patient safety accidents by displaying correct medical records and prescription information through interactive patient identification. Our system is based on the use of smart bands worn by patients and these bands communicate with the hybrid gateways which understand both BLE and Wifi communication protocols. Through the convergence of low-power Bluetooth (BLE) and hybrid gateway technology, which is one of short-range wireless communication technologies, we implement ‘Intelligent Patient Identification and Location Tracking System’ to prevent medical malfunction frequently occurring in medical institutions. Based on big data and IOT technology using MongoDB, smart band (BLE, NFC function) and hybrid gateway, we develop a system to enable two-way communication between medical staff and hospitalized patients as well as to store locational information of the patients in minutes. 
Based on the precise information provided using big data systems, such as location tracking and movement of in-hospital patients wearing smart bands, our findings include the fact that a patient-specific location tracking algorithm can more efficiently operate HIS (Hospital Information System) and other related systems. Through the system, we can always correctly identify patients using identification tags. In addition, the system automatically determines whether the patient is a scheduled for medical service by the system in use at the medical institution, and displays the appropriateness of the medical treatment and the medical information (medical record and prescription information) on the screen and voice. This work was supported in part by the Korea Technology and Information Promotion Agency for SMEs (TIPA) grant funded by the Korean Small and Medium Business Administration (No. S2410390). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BLE" title="BLE">BLE</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20gateway" title=" hybrid gateway"> hybrid gateway</a>, <a href="https://publications.waset.org/abstracts/search?q=patient%20identification" title=" patient identification"> patient identification</a>, <a href="https://publications.waset.org/abstracts/search?q=IoT" title=" IoT"> IoT</a>, <a href="https://publications.waset.org/abstracts/search?q=safety%20management" title=" safety management"> safety management</a>, <a href="https://publications.waset.org/abstracts/search?q=smart%20band" title=" smart band"> smart band</a> </p> <a href="https://publications.waset.org/abstracts/68840/iot-based-interactive-patient-identification-and-safety-management-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">311</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=209">209</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=210">210</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=tracking%20target%20identification&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and 
Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
