Search results for: RANSAC

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="RANSAC"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 9</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: RANSAC</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Lane Detection Using Labeling Based RANSAC Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yeongyu%20Choi">Yeongyu Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H. Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose labeling based RANSAC algorithm for lane detection. Advanced driver assistance systems (ADAS) have been widely researched to avoid unexpected accidents. Lane detection is a necessary system to assist keeping lane and lane departure prevention. The proposed vision based lane detection method applies Canny edge detection, inverse perspective mapping (IPM), K-means algorithm, mathematical morphology operations and 8 connected-component labeling. Next, random samples are selected from each labeling region for RANSAC. The sampling method selects the points of lane with a high probability. Finally, lane parameters of straight line or curve equations are estimated. Through the simulations tested on video recorded at daytime and nighttime, we show that the proposed method has better performance than the existing RANSAC algorithm in various environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Canny%20edge%20detection" title="Canny edge detection">Canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means%20algorithm" title=" k-means algorithm"> k-means algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSAC" title=" RANSAC"> RANSAC</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20perspective%20mapping" title=" inverse perspective mapping"> inverse perspective mapping</a> </p> <a href="https://publications.waset.org/abstracts/92894/lane-detection-using-labeling-based-ransac-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92894.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">243</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Concentric Circle Detection based on Edge Pre-Classification and Extended RANSAC</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhongjie%20Yu">Zhongjie Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hancheng%20Yu"> Hancheng Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an effective method to detect concentric circles with imperfect edges. First, the gradient of edge pixel is coded and a 2-D lookup table is built to speed up normal generation. Then we take an accumulator to estimate the rough center and collect plausible edges of concentric circles through gradient and distance. Later, we take the contour-based method, which takes the contour and edge intersection, to pre-classify the edges. Finally, we use the extended RANSAC method to find all the candidate circles. The center of concentric circles is determined by the two circles with the highest concentricity. Experimental results demonstrate that the proposed method has both good performance and accuracy for the detection of concentric circles. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=concentric%20circle%20detection" title="concentric circle detection">concentric circle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient" title=" gradient"> gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=contour" title=" contour"> contour</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20pre-classification" title=" edge pre-classification"> edge pre-classification</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSAC" title=" RANSAC"> RANSAC</a> </p> <a href="https://publications.waset.org/abstracts/144332/concentric-circle-detection-based-on-edge-pre-classification-and-extended-ransac" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> An Efficient Fundamental Matrix Estimation for Moving Object Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yeongyu%20Choi">Yeongyu Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ju%20H.%20Park"> Ju H. Park</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Lee"> S. M. Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Ho-Youl%20Jung"> Ho-Youl Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an improved method for estimating fundamental matrix is proposed. The method is applied effectively to monocular camera based moving object detection. The method consists of corner points detection, moving object&rsquo;s motion estimation and fundamental matrix calculation. The corner points are obtained by using Harris corner detector, motions of moving objects is calculated from pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In this method, we have improved the performances of moving object detection by using two threshold values that determine inlier or outlier. Through the simulations, we compare the performances with varying the two threshold values. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=corner%20detection" title="corner detection">corner detection</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=epipolar%20geometry" title=" epipolar geometry"> epipolar geometry</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSAC" title=" RANSAC"> RANSAC</a> </p> <a href="https://publications.waset.org/abstracts/79103/an-efficient-fundamental-matrix-estimation-for-moving-object-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79103.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongyeob%20Han">Dongyeob Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Jungwon%20Huh"> Jungwon Huh</a>, <a href="https://publications.waset.org/abstracts/search?q=Quang%20Huy%20Tran"> Quang Huy Tran</a>, <a href="https://publications.waset.org/abstracts/search?q=Choonghyun%20Kang"> Choonghyun Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for automatic registration of UAV images for monitoring facilities such as building, green house, and civil structures. The two-step process is applied; 1) an image matching technique based on SURF (Speeded up Robust Feature) and RANSAC (Random Sample Consensus), 2) bundle adjustment of multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find a quick and effective matching points. RANSAC algorithm was used in the process of finding matching points between images and in the bundle adjustment process. Experimental results from UAV images showed that our approach has a good accuracy to be applied to the change detection of facility. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building" title="building">building</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a> </p> <a href="https://publications.waset.org/abstracts/85064/registration-of-multi-temporal-unmanned-aerial-vehicle-images-for-facility-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Curvature Based-Methods for Automatic Coarse and Fine Registration in Dimensional Metrology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rindra%20Rantoson">Rindra Rantoson</a>, <a href="https://publications.waset.org/abstracts/search?q=Hichem%20Nouira"> Hichem Nouira</a>, <a href="https://publications.waset.org/abstracts/search?q=Nabil%20Anwer"> Nabil Anwer</a>, <a href="https://publications.waset.org/abstracts/search?q=Charyar%20Mehdi-Souzani"> Charyar Mehdi-Souzani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability and holisticity. The obtained data are aligned and fused into a common coordinate system within a registration technique involving coarse and fine registrations. Standardized iterative methods have been established for fine registration such as Iterative Closest Points (ICP) and its variants. For coarse registration, no conventional method has been adopted yet despite a significant number of techniques which have been developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: the coarse registration and the fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved Ransac Transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance considering the curvature similarity has been combined with Euclidean distance to define the distance criterion used for correspondences searching. Additionally, the objective function has been improved by combining the point-to-point (P-P) minimization and the point-to-plane (P-Pl) minimization with automatic weights. These ones are determined from the preliminary calculated curvature features at each point of the workpiece surface. The algorithms are applied on simulated and real data performed by a computer tomography (CT) system. 

4. K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors
Authors: Shao-Tzu Huang, Chen-Chien Hsu, Wei-Yen Wang
Abstract: Matching high-dimensional features between images is computationally expensive for exhaustive-search approaches in computer vision. Although feature dimensionality can be reduced by simplifying prior knowledge of the homography, matching accuracy may degrade as a trade-off. In this paper, we present a feature matching method based on the k-means algorithm that reduces the matching cost and matches features between images without relying on such a simplified geometric assumption. Experimental results show that the proposed method outperforms previous linear exhaustive-search approaches in terms of the inlier ratio of matched pairs.
Keywords: feature matching, k-means clustering, SIFT, RANSAC
Procedia: https://publications.waset.org/abstracts/73493/k-means-based-matching-algorithm-for-multi-resolution-feature-descriptors | PDF: https://publications.waset.org/abstracts/73493.pdf | Downloads: 357
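
A sketch of the idea: cluster the reference descriptors with k-means once, then match each query descriptor only against the members of its nearest cluster instead of scanning all descriptors. The cluster count and single-cluster probe are assumptions, not the paper's tuned settings:

import cv2
import numpy as np

def kmeans_match(query_desc, ref_desc, k=8):
    """Return (query_index, ref_index) pairs via cluster-restricted search."""
    ref = np.float32(ref_desc)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, labels, centers = cv2.kmeans(ref, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    matches = []
    for qi, q in enumerate(np.float32(query_desc)):
        c = np.argmin(np.linalg.norm(centers - q, axis=1))  # nearest cluster
        members = np.where(labels == c)[0]
        if members.size == 0:
            continue
        # Linear scan only inside the chosen cluster.
        best = members[np.argmin(np.linalg.norm(ref[members] - q, axis=1))]
        matches.append((qi, int(best)))
    return matches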
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20matching" title="feature matching">feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means%20clustering" title=" k-means clustering"> k-means clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=RANSAC" title=" RANSAC"> RANSAC</a> </p> <a href="https://publications.waset.org/abstracts/73493/k-means-based-matching-algorithm-for-multi-resolution-feature-descriptors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Monocular Visual Odometry for Three Different View Angles by Intel Realsense T265 with the Measurement of Remote</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heru%20Syah%20Putra">Heru Syah Putra</a>, <a href="https://publications.waset.org/abstracts/search?q=Aji%20Tri%20Pamungkas%20Nurcahyo"> Aji Tri Pamungkas Nurcahyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Chuang-Jan%20Chang"> Chuang-Jan Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> MOIL-SDK method refers to the spatial angle that forms a view with a different perspective from the Fisheye image. Visual Odometry forms a trusted application for extending projects by tracking using image sequences. A real-time, precise, and persistent approach that is able to contribute to the work when taking datasets and generate ground truth as a reference for the estimates of each image using the FAST Algorithm method in finding Keypoints that are evaluated during the tracking process with the 5-point Algorithm with RANSAC, as well as produce accurate estimates the camera trajectory for each rotational, translational movement on the X, Y, and Z axes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MOIL-SDK" title="MOIL-SDK">MOIL-SDK</a>, <a href="https://publications.waset.org/abstracts/search?q=intel%20realsense%20T265" title=" intel realsense T265"> intel realsense T265</a>, <a href="https://publications.waset.org/abstracts/search?q=Fisheye%20image" title=" Fisheye image"> Fisheye image</a>, <a href="https://publications.waset.org/abstracts/search?q=monocular%20visual%20odometry" title=" monocular visual odometry"> monocular visual odometry</a> </p> <a href="https://publications.waset.org/abstracts/147340/monocular-visual-odometry-for-three-different-view-angles-by-intel-realsense-t265-with-the-measurement-of-remote" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147340.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> Evaluation of Fusion Sonar and Stereo Camera System for 3D Reconstruction of Underwater Archaeological Object</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yadpiroon%20Onmek">Yadpiroon Onmek</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean%20Triboulet"> Jean Triboulet</a>, <a href="https://publications.waset.org/abstracts/search?q=Sebastien%20Druon"> Sebastien Druon</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Jouvencel"> Bruno Jouvencel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this paper is to develop the 3D underwater reconstruction of archaeology object, which is based on the fusion between a sonar system and stereo camera system. The underwater images are obtained from a calibrated camera system. The multiples image pairs are input, and we first solve the problem of image processing by applying the well-known filter, therefore to improve the quality of underwater images. The features of interest between image pairs are selected by well-known methods: a FAST detector and FLANN descriptor. Subsequently, the RANSAC method is applied to reject outlier points. The putative inliers are matched by triangulation to produce the local sparse point clouds in 3D space, using a pinhole camera model and Euclidean distance estimation. The SFM technique is used to carry out the global sparse point clouds. Finally, the ICP method is used to fusion the sonar information with the stereo model. The final 3D models have a pr茅cised by measurement comparing with the real object. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20reconstruction" title="3D reconstruction">3D reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=archaeology" title=" archaeology"> archaeology</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20system" title=" stereo system"> stereo system</a>, <a href="https://publications.waset.org/abstracts/search?q=sonar%20system" title=" sonar system"> sonar system</a>, <a href="https://publications.waset.org/abstracts/search?q=underwater" title=" underwater"> underwater</a> </p> <a href="https://publications.waset.org/abstracts/73700/evaluation-of-fusion-sonar-and-stereo-camera-system-for-3d-reconstruction-of-underwater-archaeological-object" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">299</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Fast and Scale-Adaptive Target Tracking via PCA-SIFT</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yawen%20Wang">Yawen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongchang%20Chen"> Hongchang Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaomei%20Li"> Shaomei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Chao%20Gao"> Chao Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiangpeng%20Zhang"> Jiangpeng Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the main challenge for target tracking is accounting for target scale change and real-time, we combine Mean-Shift and PCA-SIFT algorithm together to solve the problem. We introduce similarity comparison method to determine how the target scale changes, and taking different strategies according to different situation. For target scale getting larger will cause location error, we employ backward tracking to reduce the error. Mean-Shift algorithm has poor performance when tracking scale-changing target due to the fixed bandwidth of its kernel function. In order to overcome this problem, we introduce PCA-SIFT matching. Through key point matching between target and template, that adjusting the scale of tracking window adaptively can be achieved. Because this algorithm is sensitive to wrong match, we introduce RANSAC to reduce mismatch as far as possible. Furthermore target relocating will trigger when number of match is too small. In addition we take comprehensive consideration about target deformation and error accumulation to put forward a new template update method. Experiments on five image sequences and comparison with 6 kinds of other algorithm demonstrate favorable performance of the proposed tracking algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=target%20tracking" title="target tracking">target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA-SIFT" title=" PCA-SIFT"> PCA-SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=mean-shift" title=" mean-shift"> mean-shift</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-adaptive" title=" scale-adaptive"> scale-adaptive</a> </p> <a href="https://publications.waset.org/abstracts/19009/fast-and-scale-adaptive-target-tracking-via-pca-sift" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a 
href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
