<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: image matching technique</title> <meta name="description" content="Search results for: image matching technique"> <meta name="keywords" content="image matching technique"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="image matching technique" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image matching technique"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 9281</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image matching technique</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9281</span> Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. 
We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20images" title="big images">big images</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20images" title=" binary images"> binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20similarity" title=" image similarity"> image similarity</a> </p> <a href="https://publications.waset.org/abstracts/89963/quick-similarity-measurement-of-binary-images-via-probabilistic-pixel-mapping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9280</span> Least Support Orthogonal Matching Pursuit (LS-OMP) Recovery Method for Invisible Watermarking Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Israa%20Sh.%20Tawfic">Israa Sh. 
Tawfic</a>, <a href="https://publications.waset.org/abstracts/search?q=Sema%20Koc%20Kayhan"> Sema Koc Kayhan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we first propose the least support orthogonal matching pursuit (LS-OMP) algorithm to improve the performance of the orthogonal matching pursuit (OMP) algorithm. LS-OMP adaptively chooses the optimum L (the least part of the support) at each iteration. This modification reduces the computational complexity significantly, and the algorithm performs better than OMP. Second, we give a procedure for invisible image watermarking in the presence of compressive sampling. The image reconstruction from a set of watermarked measurements is performed using LS-OMP. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compressed%20sensing" title="compressed sensing">compressed sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20matching%20pursuit" title=" orthogonal matching pursuit"> orthogonal matching pursuit</a>, <a href="https://publications.waset.org/abstracts/search?q=restricted%20isometry%20property" title=" restricted isometry property"> restricted isometry property</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20reconstruction" title=" signal reconstruction"> signal reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20support%20orthogonal%20matching%20pursuit" title=" least support orthogonal matching pursuit"> least support orthogonal matching pursuit</a>, <a href="https://publications.waset.org/abstracts/search?q=watermark" title=" watermark"> watermark</a> </p> <a href="https://publications.waset.org/abstracts/15820/least-support-orthogonal-matching-pursuit-ls-omp-recovery-method-for-invisible-watermarking-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15820.pdf" target="_blank"
class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9279</span> Generation of Photo-Mosaic Images through Block Matching and Color Adjustment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hae-Yeoun%20Lee">Hae-Yeoun Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mosaic refers to a technique that creates an image by assembling many small materials in various colours. This paper presents an automatic algorithm that generates a photomosaic image from a collection of photos. The algorithm is composed of four steps: partition and feature extraction, block matching, redundancy removal, and colour adjustment. The input image is partitioned into small blocks, from which features are extracted. Each block is matched against the photo database to find a similar photo by comparing the Euclidean distance between blocks. The intensity of each block is adjusted to enhance the similarity by replacing its light and dark values with those of the matched block. Further, image quality is improved by minimizing the redundancy of tiles across adjacent blocks. Experimental results show that the proposed algorithm performs well in both quantitative and qualitative analysis.
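The four-step pipeline this abstract describes can be sketched roughly as follows; this is a minimal illustration, not the authors' implementation, and the `tiles` list, block size, and intensity-shift rule are assumptions for the sketch:

```python
import numpy as np

def photomosaic(image, tiles, block=16):
    """Replace each block of a grayscale `image` with the nearest tile
    by Euclidean distance, then shift the tile's mean intensity toward
    the block's mean (a simple intensity adjustment)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=float)
    # Pre-flatten tiles for fast vectorized distance computation
    flat_tiles = np.stack([t.astype(float).ravel() for t in tiles])
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            blk = image[y:y+block, x:x+block].astype(float)
            # Block matching: nearest tile by Euclidean difference
            d = np.linalg.norm(flat_tiles - blk.ravel(), axis=1)
            best = tiles[int(np.argmin(d))].astype(float)
            # Intensity adjustment: match the tile mean to the block mean
            best = best + (blk.mean() - best.mean())
            out[y:y+block, x:x+block] = best
    return np.clip(out, 0, 255).astype(np.uint8)
```

The sketch omits the paper's redundancy-removal step, which would additionally penalize reusing the same tile in adjacent blocks.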
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=photomosaic" title="photomosaic">photomosaic</a>, <a href="https://publications.waset.org/abstracts/search?q=Euclidean%20distance" title=" Euclidean distance"> Euclidean distance</a>, <a href="https://publications.waset.org/abstracts/search?q=block%20matching" title=" block matching"> block matching</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20adjustment" title=" intensity adjustment"> intensity adjustment</a> </p> <a href="https://publications.waset.org/abstracts/7022/generation-of-photo-mosaic-images-through-block-matching-and-color-adjustment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7022.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9278</span> Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images to a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage, by simply removing dissimilar images quickly. 
The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance that recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as being the same but rather dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI based on the gamma binary similarity distance and a modified PMMBI model based on a similarity distance that does distinguish between an image and its inverse as being dissimilar. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binary%20image" title="binary image">binary image</a>, <a href="https://publications.waset.org/abstracts/search?q=dissimilarity%20detection" title=" dissimilarity detection"> dissimilarity detection</a>, <a href="https://publications.waset.org/abstracts/search?q=probabilistic%20matching%20model%20for%20binary%20images" title=" probabilistic matching model for binary images"> probabilistic matching model for binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20mapping" title=" image mapping"> image mapping</a> </p> <a href="https://publications.waset.org/abstracts/113778/comparative-analysis-of-dissimilarity-detection-between-binary-images-based-on-equivalency-and-non-equivalency-of-image-inversion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/113778.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">9277</span> Wavelet Coefficients Based on Orthogonal Matching Pursuit (OMP) Based Filtering for Remotely Sensed Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramandeep%20Kaur">Ramandeep Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamaljit%20Kaur"> Kamaljit Kaur</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, remote sensing technology has grown rapidly. Image enhancement is one of the most commonly used image processing operations, and noise reduction plays a very important role in digital image processing; various techniques have been put forward to reduce the noise in remote sensing images. Noise reduction using wavelet coefficients based on orthogonal matching pursuit (OMP) affects the edges less than the available methods, but it is not as well established as dedicated edge preservation techniques. In this paper, we therefore provide a new technique, minimum-patch-based noise reduction OMP, which reduces the noise in an image and uses an edge preservation patch to preserve the edges, giving superior results to the existing OMP technique. Experimental results show that the proposed minimum patch approach outperforms existing techniques.
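Several abstracts on this page build on orthogonal matching pursuit. As background, a bare-bones OMP for sparse recovery (standard textbook form, not the minimum-patch variant or LS-OMP, whose details are not given here) might look like:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select up to k columns of
    A that best explain y, re-fitting coefficients by least squares at
    each step so the residual stays orthogonal to the chosen support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

The LS-OMP variant above differs in choosing an optimum subset of support atoms per iteration rather than a single column.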
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20denoising" title="image denoising">image denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=minimum%20patch" title=" minimum patch"> minimum patch</a>, <a href="https://publications.waset.org/abstracts/search?q=OMP" title=" OMP"> OMP</a>, <a href="https://publications.waset.org/abstracts/search?q=WCOMP" title=" WCOMP"> WCOMP</a> </p> <a href="https://publications.waset.org/abstracts/59831/wavelet-coefficients-based-on-orthogonal-matching-pursuit-omp-based-filtering-for-remotely-sensed-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59831.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">389</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9276</span> Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongyeob%20Han">Dongyeob Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Jungwon%20Huh"> Jungwon Huh</a>, <a href="https://publications.waset.org/abstracts/search?q=Quang%20Huy%20Tran"> Quang Huy Tran</a>, <a href="https://publications.waset.org/abstracts/search?q=Choonghyun%20Kang"> Choonghyun Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures.
A two-step process is applied: 1) an image matching technique based on SURF (Speeded-Up Robust Features) and RANSAC (Random Sample Consensus), and 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in finding matching points between images and in the bundle adjustment process. Experimental results on UAV images showed that our approach is sufficiently accurate to be applied to facility change detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building" title="building">building</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a> </p> <a href="https://publications.waset.org/abstracts/85064/registration-of-multi-temporal-unmanned-aerial-vehicle-images-for-facility-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9275</span> Design and Implementation of Partial Denoising Boundary Image Matching Using Indexing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Bum-Soo%20Kim">Bum-Soo Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Jin-Uk%20Kim"> Jin-Uk Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we design and implement a partial denoising boundary image matching system using indexing techniques. Converting boundary images to time-series makes it feasible to perform fast searches using indexes, even on a very large image database. Using this conversion method, we develop a client-server system, based on the previous partial denoising research, in a GUI (graphical user interface) environment. The client first converts a query image given by a user to a time-series and sends the denoising parameters and the tolerance, together with this time-series, to the server. The server identifies similar images from the index by evaluating a range query, which is constructed from the inputs given by the client, and sends the resulting images back to the client. Experimental results show that our system provides intuitive and accurate matching results.
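The boundary-to-time-series conversion such systems rely on is commonly done with a centroid contour distance; a rough sketch under that assumption (the sampling length and function name are illustrative, not the paper's exact procedure):

```python
import numpy as np

def boundary_to_time_series(points, n_samples=128):
    """Convert a closed boundary (sequence of (x, y) points) into a
    time-series of centroid-to-boundary distances, resampled to a fixed
    length so that boundaries of different sizes are comparable."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Distance from the shape centroid to each boundary point
    dist = np.linalg.norm(pts - centroid, axis=1)
    # Resample to a fixed length by linear interpolation
    old_t = np.linspace(0.0, 1.0, len(dist))
    new_t = np.linspace(0.0, 1.0, n_samples)
    return np.interp(new_t, old_t, dist)
```

The Euclidean distance between two such series, compared against the client's tolerance, would then serve as the predicate of the server's range query.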
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=boundary%20image%20matching" title="boundary image matching">boundary image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=indexing" title=" indexing"> indexing</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20denoising" title=" partial denoising"> partial denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=time-series%20matching" title=" time-series matching"> time-series matching</a> </p> <a href="https://publications.waset.org/abstracts/97170/design-and-implementation-of-partial-denoising-boundary-image-matching-using-indexing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97170.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9274</span> Speeding-up Gray-Scale FIC by Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20A.%20Al-Hilo">Eman A. Al-Hilo</a>, <a href="https://publications.waset.org/abstracts/search?q=Hawraa%20H.%20Al-Waelly"> Hawraa H. Al-Waelly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, a fractal image compression (FIC) technique is introduced, based on using moment features to index the zero-mean range-domain blocks. The moment features are used to speed up the IFS-matching stage: a moments-ratio descriptor filters the domain blocks, keeping only those suitable for IFS matching with the tested range block.
Tests conducted on the Lena and Cat images (256 pixels, 24 bits/pixel) showed a minimum encoding time (0.89 s for the Lena image and 0.78 s for the Cat image) with acceptable PSNR (30.01 dB for Lena and 29.8 dB for Cat). The reduction in encoding time (ET) is about 12% for the Lena image and 67% for the Cat image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractal%20gray%20level%20image" title="fractal gray level image">fractal gray level image</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20compression%20technique" title=" fractal compression technique"> fractal compression technique</a>, <a href="https://publications.waset.org/abstracts/search?q=iterated%20function%20system" title=" iterated function system"> iterated function system</a>, <a href="https://publications.waset.org/abstracts/search?q=moments%20feature" title=" moments feature"> moments feature</a>, <a href="https://publications.waset.org/abstracts/search?q=zero-mean%20range-domain%20block" title=" zero-mean range-domain block"> zero-mean range-domain block</a> </p> <a href="https://publications.waset.org/abstracts/19903/speeding-up-gray-scale-fic-by-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">492</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9273</span> Biimodal Biometrics System Using Fusion of Iris and Fingerprint</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Attallah%20Bilal">Attallah Bilal</a>, <a href="https://publications.waset.org/abstracts/search?q=Hendel%20Fatiha"> Hendel
Fatiha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a bimodal biometric system for identity verification using iris and fingerprint, fused at the matching score level using a weighted sum of scores technique. The features are extracted from the preprocessed images of the iris and fingerprint. The features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps, i.e., normalization, generation of similarity scores, and fusion of the weighted scores. The final score is then used to declare the person genuine or an impostor. The system is tested on the CASIA database and gives an overall accuracy of 91.04%, with an FAR of 2.58% and an FRR of 8.34%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris" title="iris">iris</a>, <a href="https://publications.waset.org/abstracts/search?q=fingerprint" title=" fingerprint"> fingerprint</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20rule" title=" sum rule"> sum rule</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/18556/biimodal-biometrics-system-using-fusion-of-iris-and-fingerprint" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18556.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9272</span> Size Reduction of Images Using Constraint Optimization Approach for Machine Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Chee%20Sun%20Won">Chee Sun Won</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the size reduction of images for machine-to-machine communications. Here, the salient image regions to be preserved include the image patches of the key-points such as corners and blobs. Based on a saliency image map from the key-points and their image patches, an axis-aligned grid-size optimization is proposed for the reduction of image size. To increase the size-reduction efficiency the aspect ratio constraint is relaxed in the constraint optimization framework. The proposed method yields higher matching accuracy after the size reduction than the conventional content-aware image size-reduction methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20compression" title="image compression">image compression</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description" title=" key-point detection and description"> key-point detection and description</a>, <a href="https://publications.waset.org/abstracts/search?q=machine-to-machine%20communication" title=" machine-to-machine communication"> machine-to-machine communication</a> </p> <a href="https://publications.waset.org/abstracts/67605/size-reduction-of-images-using-constraint-optimization-approach-for-machine-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67605.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">9271</span> A Comparison of Image Data Representations for Local Stereo Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andr%C3%A9%20Smith">André Smith</a>, <a href="https://publications.waset.org/abstracts/search?q=Amr%20Abdel-Dayem"> Amr Abdel-Dayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene, relative to the observer. Advancements in this field have led to experiments with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effectively these representations reduce the cost for the correct correspondence relative to other possible matches.
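As an illustration of the kind of cost function such an analysis is built around, a minimal sum-of-absolute-differences (SAD) window cost for a candidate disparity might be sketched as follows; the window size and image layout are assumptions, not the paper's setup:

```python
import numpy as np

def sad_cost(left, right, x, y, d, win=3):
    """Sum of absolute differences between a window centred at (x, y)
    in the left image and the window shifted left by disparity d in the
    right image. A lower cost suggests a more likely correspondence."""
    half = win // 2
    lw = left[y-half:y+half+1, x-half:x+half+1].astype(float)
    rw = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(float)
    return float(np.abs(lw - rw).sum())
```

Evaluating this cost over a range of candidate disparities and taking the argmin per pixel yields a basic local disparity map; the paper's comparison concerns which pixel representation (e.g. which colour space) feeds this cost.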
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colour%20data" title="colour data">colour data</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20stereo%20matching" title=" local stereo matching"> local stereo matching</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20correspondence" title=" stereo correspondence"> stereo correspondence</a>, <a href="https://publications.waset.org/abstracts/search?q=disparity%20map" title=" disparity map"> disparity map</a> </p> <a href="https://publications.waset.org/abstracts/68197/a-comparison-of-image-data-representations-for-local-stereo-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68197.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9270</span> Blind Data Hiding Technique Using Interpolation of Subsampled Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Singara%20Singh%20Kasana">Singara Singh Kasana</a>, <a href="https://publications.waset.org/abstracts/search?q=Pankaj%20Garg"> Pankaj Garg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a blind data hiding technique based on interpolation of subsampled versions of a cover image is proposed. A subsampled image is taken as the reference image, and an interpolated image is generated from this reference image. Then the difference between the original cover image and the interpolated image is used to embed secret data.
Comparisons with existing interpolation-based techniques show that the proposed technique provides higher embedding capacity and better visual quality of marked images. Moreover, the performance of the proposed technique is more stable across different images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interpolation" title="interpolation">interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subsampling" title=" image subsampling"> image subsampling</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=SIM" title=" SIM"> SIM</a> </p> <a href="https://publications.waset.org/abstracts/18926/blind-data-hiding-technique-using-interpolation-of-subsampled-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">578</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9269</span> Impedance Matching of Axial Mode Helical Antennas</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Mardani">Hossein Mardani</a>, <a href="https://publications.waset.org/abstracts/search?q=Neil%20Buchanan"> Neil Buchanan</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20Cahill"> Robert Cahill</a>, <a href="https://publications.waset.org/abstracts/search?q=Vincent%20Fusco"> Vincent Fusco</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we study the input impedance characteristics of axial mode helical antennas to find an effective way of matching them to 50 Ω. 
The study considers important matching parameters such as the wire diameter and the helix-to-ground-plane gap. It is intended that these parameters control the matching without detrimentally affecting the radiation pattern. Using transmission line theory, a simple broadband technique is proposed, which is applicable for perfect matching of antennas with similar design parameters. We provide design curves to help choose the proper dimensions of the matching section based on the antenna’s unmatched input impedance. Finally, using the proposed technique, a 4-turn axial mode helix is designed at a 2.5 GHz center frequency, and the measurement results of the manufactured antenna are included. This parametric study gives a good insight into the input impedance characteristics of axial mode helical antennas, and the proposed impedance matching approach provides a simple, useful method for matching these types of antennas. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=antenna" title="antenna">antenna</a>, <a href="https://publications.waset.org/abstracts/search?q=helix" title=" helix"> helix</a>, <a href="https://publications.waset.org/abstracts/search?q=helical" title=" helical"> helical</a>, <a href="https://publications.waset.org/abstracts/search?q=axial%20mode" title=" axial mode"> axial mode</a>, <a href="https://publications.waset.org/abstracts/search?q=wireless%20power%20transfer" title=" wireless power transfer"> wireless power transfer</a>, <a href="https://publications.waset.org/abstracts/search?q=impedance%20matching" title=" impedance matching"> impedance matching</a> </p> <a href="https://publications.waset.org/abstracts/134308/impedance-matching-of-axial-mode-helical-antennas" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134308.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right 
rounded"> Downloads <span class="badge badge-light">312</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9268</span> An Approach for Reducing Morphological Operator Dataset and Recognize Optical Character Based on Significant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashis%20Pradhan">Ashis Pradhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohan%20P.%20Pradhan"> Mohan P. Pradhan </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pattern Matching is useful for recognizing character in a digital image. OCR is one such technique which reads character from a digital image and recognizes them. Line segmentation is initially used for identifying character in an image and later refined by morphological operations like binarization, erosion, thinning, etc. The work discusses a recognition technique that defines a set of morphological operators based on its orientation in a character. These operators are further categorized into groups having similar shape but different orientation for efficient utilization of memory. Finally the characters are recognized in accordance with the occurrence of frequency in hierarchy of significant pattern of those morphological operators and by comparing them with the existing database of each character. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=binary%20image" title="binary image">binary image</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20patterns" title=" morphological patterns"> morphological patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20count" title=" frequency count"> frequency count</a>, <a href="https://publications.waset.org/abstracts/search?q=priority" title=" priority"> priority</a>, <a href="https://publications.waset.org/abstracts/search?q=reduction%20data%20set%20and%20recognition" title=" reduction data set and recognition"> reduction data set and recognition</a> </p> <a href="https://publications.waset.org/abstracts/30867/an-approach-for-reducing-morphological-operator-dataset-and-recognize-optical-character-based-on-significant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30867.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9267</span> A Deep Learning Based Approach for Dynamically Selecting Pre-processing Technique for Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Revoti%20Prasad%20Bora">Revoti Prasad Bora</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikita%20Katyal"> Nikita Katyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabh%20Yadav"> Saurabh Yadav</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pre-processing plays an important role in various image processing applications. 
Most of the time, due to the similar nature of images, a particular pre-processing step or a set of pre-processing steps is sufficient to produce the desired results. However, in the education domain, there is a wide variety of images in various aspects, like images with line-based diagrams, chemical formulas, mathematical equations, etc. Hence, a single pre-processing step or a set of pre-processing steps may not yield good results. Therefore, a Deep Learning based approach for dynamically selecting a relevant pre-processing technique for each image is proposed. The proposed method works as a classifier to detect hidden patterns in the images and predicts the relevant pre-processing technique needed for the image. This approach was evaluated on an image similarity matching problem, but it can be adapted to other use cases too. Experimental results showed significant improvement in average similarity ranking with the proposed method as opposed to static pre-processing techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title="deep-learning">deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-processing" title=" pre-processing"> pre-processing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=educational%20data%20mining" title=" educational data mining"> educational data mining</a> </p> <a href="https://publications.waset.org/abstracts/148397/a-deep-learning-based-approach-for-dynamically-selecting-pre-processing-technique-for-images" class="btn btn-primary btn-sm">Procedia</a> <a 
href="https://publications.waset.org/abstracts/148397.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9266</span> BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gianna%20Zou">Gianna Zou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique which tries to estimate the treatment effects by taking into account covariates that could impact the efficacy of study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed. In some cases, this assumption may not be held. In this paper, a machine learning method - Bayesian Additive Regression Tree (BART), is used as a more robust method of matching. BART can work well when models are misspecified since it can be used to model heterogeneous treatment effects. Moreover, it has the capability to handle non-linear main effects and multiway interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method over PSM. By comparing the analysis results from PSM and BMM, BMM can perform well and has better prediction capability when the response values are not normally distributed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BART" title="BART">BART</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian" title=" Bayesian"> Bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=matching" title=" matching"> matching</a>, <a href="https://publications.waset.org/abstracts/search?q=regression" title=" regression"> regression</a> </p> <a href="https://publications.waset.org/abstracts/149989/bart-matching-method-using-bayesian-additive-regression-tree-for-data-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9265</span> Counting People Utilizing Space-Time Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Elmarhomy">Ahmed Elmarhomy</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An automated method for counting passerby has been proposed using virtual-vertical measurement lines. Space-time image is representing the human regions which are treated using the segmentation process. Different color space has been used to perform the template matching. A proper template matching has been achieved to determine direction and speed of passing people. Distinguish one or two passersby has been investigated using a correlation between passerby speed and the human-pixel area. Finally, the effectiveness of the presented method has been experimentally verified. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=counting%20people" title="counting people">counting people</a>, <a href="https://publications.waset.org/abstracts/search?q=measurement%20line" title=" measurement line"> measurement line</a>, <a href="https://publications.waset.org/abstracts/search?q=space-time%20image" title=" space-time image"> space-time image</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a> </p> <a href="https://publications.waset.org/abstracts/46877/counting-people-utilizing-space-time-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46877.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9264</span> Chinese Event Detection Technique Based on Dependency Parsing and Rule Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Weitao%20Lin">Weitao Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To quickly extract adequate information from large-scale unstructured text data, this paper studies the representation of events in Chinese scenarios and performs the regularized abstraction. It proposes a Chinese event detection technique based on dependency parsing and rule matching. 
The method first performs dependency parsing on the original utterance, then performs pattern matching at the word or phrase granularity based on the results of dependent syntactic analysis, filters out the utterances with prominent non-event characteristics, and obtains the final results. The experimental results show the effectiveness of the method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title="natural language processing">natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=Chinese%20event%20detection" title=" Chinese event detection"> Chinese event detection</a>, <a href="https://publications.waset.org/abstracts/search?q=rules%20matching" title=" rules matching"> rules matching</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20parsing" title=" dependency parsing"> dependency parsing</a> </p> <a href="https://publications.waset.org/abstracts/158129/chinese-event-detection-technique-based-on-dependency-parsing-and-rule-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158129.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9263</span> Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hu%20Zhenxing">Hu Zhenxing</a>, <a href="https://publications.waset.org/abstracts/search?q=Gao%20Jianxin"> Gao Jianxin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Stereo-based digital image 
correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industries. The accuracy of the reconstructed coordinate depends on many factors such as configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled in a limited range. And the distortion is non-linear particularly in a complex imaging acquisition system. Thus, the distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate in a complex imaging acquisition system using conventional models in such cases where microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion models. It is used as a prior process to convert the distorted coordinate to an ideal position, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of both the conventional method and the proposed approach. 
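As a sketch of the B-spline machinery such mapping functions rely on, the following evaluates a single B-spline basis function via the Cox-de Boor recursion; a 2D distortion mapping would take tensor products of such bases in x and y, weighted by control points fitted to calibration data. The knot values and names here are illustrative, not the paper's calibration.

```python
# Cox-de Boor recursion for the i-th B-spline basis of degree k at
# parameter t, over the given non-decreasing knot vector.

def bspline_basis(i, k, t, knots):
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right
```

Because the bases sum to one inside the valid knot span, the fitted mapping interpolates smoothly between control points, which is what lets it absorb distortion shapes that parametric lens models cannot.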
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=distortion" title="distortion">distortion</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo-based%20digital%20image%20correlation" title=" stereo-based digital image correlation"> stereo-based digital image correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=b-spline" title=" b-spline"> b-spline</a>, <a href="https://publications.waset.org/abstracts/search?q=3D" title=" 3D"> 3D</a>, <a href="https://publications.waset.org/abstracts/search?q=2D" title=" 2D "> 2D </a> </p> <a href="https://publications.waset.org/abstracts/20547/application-of-a-universal-distortion-correction-method-in-stereo-based-digital-image-correlation-measurement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20547.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">498</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9262</span> Optimizing Machine Learning Through Python Based Image Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Srinidhi.%20A">Srinidhi. A</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Twinkle%20Hareendran"> Twinkle Hareendran</a>, <a href="https://publications.waset.org/abstracts/search?q=Vriksha%20Prakash"> Vriksha Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work reviews some of the advanced image processing techniques for deep learning applications. 
Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper examines these in detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure robust model performance. Further, we discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, which essentially includes the preprocessing techniques interrelated with model performance. Conclusions from this study pinpoint the best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20applications" title=" machine learning applications"> machine learning applications</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a> </p> <a href="https://publications.waset.org/abstracts/193107/optimizing-machine-learning-through-python-based-image-processing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">15</span> </span> 
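One widely used sharpness score of the kind alluded to above is the variance of the image's Laplacian response: blurred images produce weaker second derivatives and hence lower variance. The pure-Python sketch below (list-of-rows grayscale, illustrative names) shows the idea; it is not claimed to be the paper's exact metric.

```python
# Sharpness assessment sketch: apply the 4-neighbour Laplacian at each
# interior pixel and return the variance of the responses.

def laplacian_variance(image):
    rows, cols = len(image), len(image[0])
    responses = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            lap = (image[r - 1][c] + image[r + 1][c]
                   + image[r][c - 1] + image[r][c + 1]
                   - 4 * image[r][c])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((x - mean) ** 2 for x in responses) / len(responses)
```

A dataset-preparation pipeline can threshold this score to drop frames too blurry to help training.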
</div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9261</span> Design and Implementation of Image Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images, avoiding artifacts or blurring, and deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is a process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled down images in the image domain, its effects on the Fourier-based technique remains unknown. Super-resolution substantially improved the spatial resolution of the patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single image super-resolution on Fourier-based and image based methods of scale-up. In this paper, first, generate a training phase of the low-resolution image and high-resolution image to obtain dictionary. In the test phase, first, generate a patch and then difference of high-resolution image and interpolation image from the low-resolution image. Next simulation of the image is obtained by applying convolution method to the dictionary creation image and patch extracted the image. Finally, super-resolution image is obtained by combining the fused image and difference of high-resolution and interpolated image. 
Super-resolution reduces image errors and improves the image quality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9260</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capturing by non-professionals often leads to unanticipated effects, such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the effect of jitter caused by the shaking of a handheld camera during video recording. 
Firstly, salient points are identified in each frame of the input video and processed, followed by optimization to stabilize the video. The optimization step considers the quality of the video stabilization. This method has shown good results in terms of stabilization, and it removed distortion from the output videos recorded in different circumstances. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9259</span> High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalid%20A.%20Al-Afandy">Khalid A. 
Al-Afandy</a>, <a href="https://publications.waset.org/abstracts/search?q=El-Sayyed%20El-Rabaie"> El-Sayyed El-Rabaie</a>, <a href="https://publications.waset.org/abstracts/search?q=Osama%20Salah"> Osama Salah</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20El-Mhalaway"> Ahmed El-Mhalaway</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, the number of which equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique. The embedding is done using the cover image color channels. The stego image is obtained by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by any unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the embedding algorithm CPU time. Experimental results show that the proposed technique is more secure compared with the other traditional techniques. 
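The LSB primitive applied to each crop can be sketched as follows: message bits replace the least significant bit of successive channel values, changing each value by at most 1, which is why the PSNR of the stego image stays high. The crop selection, secret coordinates, and reassembly logic are omitted; function names and data are illustrative.

```python
# LSB steganography sketch: hide message bits in the least significant
# bit of pixel/channel values, then read them back.

def embed_lsb(pixels, message):
    """Hide `message` (bytes) in the LSBs of `pixels` (list of 0-255 ints)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover too small for message"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_lsb(pixels, n_bytes):
    """Recover `n_bytes` bytes from the LSBs of `pixels`."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)
```

Splitting the message across crops visited in a secret order, as the paper proposes, adds a keyed permutation on top of this primitive.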
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=steganography" title="steganography">steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=stego" title=" stego"> stego</a>, <a href="https://publications.waset.org/abstracts/search?q=LSB" title=" LSB"> LSB</a>, <a href="https://publications.waset.org/abstracts/search?q=crop" title=" crop"> crop</a> </p> <a href="https://publications.waset.org/abstracts/44747/high-secure-data-hiding-using-cropping-image-and-least-significant-bit-steganography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44747.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9258</span> Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siraa%20Ben%20Ftima">Siraa Ben Ftima</a>, <a href="https://publications.waset.org/abstracts/search?q=Mourad%20Talbi"> Mourad Talbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tahar%20Ezzedine"> Tahar Ezzedine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a technique of secure watermarking of grayscale and color images. This technique consists in applying the Singular Value Decomposition (SVD) in LWT (Lifting Wavelet Transform) domain in order to insert the watermark image (grayscale) in the host image (grayscale or color image). It also uses signature in the embedding and extraction steps. The technique is applied on a number of grayscale and color images. 
The performance of this technique is demonstrated by PSNR (Peak Signal-to-Noise Ratio), MSE (Mean Square Error), and SSIM (structural similarity) computations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lifting%20wavelet%20transform%20%28LWT%29" title="lifting wavelet transform (LWT)">lifting wavelet transform (LWT)</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-space%20vectorial%20decomposition" title=" sub-space vectorial decomposition"> sub-space vectorial decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=secure" title=" secure"> secure</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20watermarking" title=" image watermarking"> image watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=watermark" title=" watermark"> watermark</a> </p> <a href="https://publications.waset.org/abstracts/70998/lifting-wavelet-transform-and-singular-values-decomposition-for-secure-image-watermarking" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">276</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9257</span> Automated Feature Detection and Matching Algorithms for Breast IR Sequence Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chia-Yen%20Lee">Chia-Yen Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao-Jen%20Wang"> Hao-Jen Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jhih-Hao%20Lai"> Jhih-Hao Lai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent 
years, infrared (IR) imaging has been considered a potential tool for assessing the efficacy of chemotherapy and for early detection of breast cancer. Regions of tumor growth exhibit high metabolic rates and angiogenesis, which lead to elevated temperatures. Observing differences between heat maps over the long term helps assess the growth of breast cancer cells and detect breast cancer earlier, for which multi-temporal infrared image alignment is a necessary step. Detecting and matching representative feature points are essential steps toward good performance in image registration and quantitative analysis. However, there are no clear boundaries in infrared images, and the subject's posture differs between shots. Adhesive markers cannot remain on the body surface for very long periods, and anatomic fiducial markers are hard to find on the body surface. In other words, it is difficult to detect and match features in IR sequence images. In this study, automated feature detection and matching algorithms with two types of automatic feature points (i.e., vascular branch points and modified Harris corners) are developed. The preliminary results show that the proposed method identifies representative feature points on IR breast images with 98% accuracy and achieves 93% matching accuracy. 
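As background for the second feature type, the standard (unmodified) Harris corner response can be sketched in plain NumPy; this is a textbook illustration only, not the authors' modified detector or the vascular branch point method, and the window size and k value are illustrative defaults.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Textbook Harris corner response map (not the authors' modified
    detector): R > 0 at corners, R < 0 on edges, R ~ 0 in flat areas."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)  # row-wise and column-wise gradients

    def window_sum(a, r=1):
        # sum over a (2r+1)x(2r+1) box around each pixel
        p = np.pad(a, r)
        n = 2 * r + 1
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(n) for j in range(n))

    # entries of the structure tensor, accumulated over a local window
    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
```

Thresholding this response map and applying non-maximum suppression yields the corner points that are subsequently matched between IR frames.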
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Harris%20corner" title="Harris corner">Harris corner</a>, <a href="https://publications.waset.org/abstracts/search?q=infrared%20image" title=" infrared image"> infrared image</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20detection" title=" feature detection"> feature detection</a>, <a href="https://publications.waset.org/abstracts/search?q=registration" title=" registration"> registration</a>, <a href="https://publications.waset.org/abstracts/search?q=matching" title=" matching"> matching</a> </p> <a href="https://publications.waset.org/abstracts/16915/automated-feature-detection-and-matching-algorithms-for-breast-ir-sequence-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9256</span> On Phase Based Stereo Matching and Its Related Issues</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andr%C3%A1s%20R%C3%B6vid">András Rövid</a>, <a href="https://publications.waset.org/abstracts/search?q=Takeshi%20Hashimoto"> Takeshi Hashimoto</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper focuses on the problem of point correspondence matching in stereo images. The proposed matching algorithm combines a simpler method, the normalized sum of squared differences (NSSD), with a more complex phase correlation based approach, taking noise and other factors into account as well. 
The speed of NSSD and the precision of phase correlation together yield an efficient approach to finding the best candidate point with sub-pixel accuracy in stereo image pairs. The task of the NSSD in this case is to locate the candidate pixel roughly. Afterwards, the location of the candidate is refined by an enhanced phase correlation based method which, in contrast to the NSSD, has to run only once for each selected pixel. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=stereo%20matching" title="stereo matching">stereo matching</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-pixel%20accuracy" title=" sub-pixel accuracy"> sub-pixel accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=phase%20correlation" title=" phase correlation"> phase correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=SVD" title=" SVD"> SVD</a>, <a href="https://publications.waset.org/abstracts/search?q=NSSD" title=" NSSD"> NSSD</a> </p> <a href="https://publications.waset.org/abstracts/8549/on-phase-based-stereo-matching-and-its-related-issues" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">468</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9255</span> Image Steganography Using Least Significant Bit Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Preeti%20Kumari">Preeti Kumari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ridhi%20Kapoor"> Ridhi Kapoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In any communication, 
security is the most important issue in today’s world. In this paper, steganography, the process of hiding important data within other data such as text, audio, video, and images, is considered. The aim is to provide availability, confidentiality, integrity, and authenticity of data. A steganographic technique embeds hidden content within an unremarkable cover medium so as not to arouse the suspicion of eavesdroppers, third parties, or hackers. Many methods of compression, encryption, decryption, and embedding are used in digital image steganography. Compression introduces noise into the image; to withstand this noise, the LSB insertion technique is used. The performance of the proposed embedding system with respect to the security of the secret message and its robustness is discussed. We also demonstrate the maximum steganography capacity and the visual distortion. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=steganography" title="steganography">steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=LSB" title=" LSB"> LSB</a>, <a href="https://publications.waset.org/abstracts/search?q=encoding" title=" encoding"> encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20hiding" title=" information hiding"> information hiding</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20image" title=" color image"> color image</a> </p> <a href="https://publications.waset.org/abstracts/35755/image-steganography-using-least-significant-bit-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35755.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">474</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">9254</span> Analysis of Various Copy Move Image Forgery Techniques for Better Detection Accuracy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Grishma%20D.%20Solanki">Grishma D. Solanki</a>, <a href="https://publications.waset.org/abstracts/search?q=Karshan%20Kandoriya"> Karshan Kandoriya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the modern information age, digitization has revolutionized the world like never before. Powerful computers, advanced photo editing software packages and high resolution capturing devices have made manipulation of digital images incredibly easy. As far as image forensics is concerned, one of the most actively researched areas is the detection of copy-move forgeries. High computational complexity is one of the major drawbacks of existing techniques for detecting such tampering. Copy-move forgery is usually performed in three steps: first, a region of an image is copied; then, the copy is pasted elsewhere in the same image; finally, some post-processing such as rotation, scaling, shifting, or noise addition is applied. Consequently, pseudo-Zernike moments are used as a feature extraction method for matching image blocks and are a primary factor on which the performance of detection algorithms depends. 
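As a minimal illustration of the block-matching stage, the sketch below hashes raw pixel blocks to find exactly duplicated regions; in a real detector, the raw blocks would be replaced by robust features such as pseudo-Zernike moments so that matches survive rotation, scaling, and noise. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def find_duplicate_blocks(img, b=4, min_dist=8):
    """Report pairs of identical b-by-b blocks at least min_dist pixels
    apart (a crude copy-move indicator). Raw pixel bytes stand in for
    robust block features, so only exact copies are detected."""
    h, w = img.shape
    seen = {}      # block bytes -> position of its first occurrence
    matches = []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            key = img[y:y + b, x:x + b].tobytes()
            if key in seen:
                py, px = seen[key]
                # ignore near-overlapping blocks, which trivially match
                if abs(y - py) + abs(x - px) >= min_dist:
                    matches.append(((py, px), (y, x)))
            else:
                seen[key] = (y, x)
    return matches
```

On a textured image in which one region has been copy-pasted, every block inside the pasted region is reported together with its source location; clustering these pairs by a common shift vector is the usual next step.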
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=copy-move%20image%20forgery" title="copy-move image forgery">copy-move image forgery</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20forensics" title=" digital forensics"> digital forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title=" image forensics"> image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20forgery" title=" image forgery"> image forgery</a> </p> <a href="https://publications.waset.org/abstracts/49539/analysis-of-various-copy-move-image-forgery-techniques-for-better-detection-accuracy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49539.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9253</span> A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Zheng">Yuan Zheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 3D model-based vehicle matching provides a new way for vehicle recognition, localization and tracking. Its key is to construct an evaluation function, also called a fitness function, to measure the degree of vehicle matching. The existing fitness functions often perform poorly when clutter and occlusion are present in traffic scenes. In this paper, we present a practical and efficient fitness function. 
Unlike the existing evaluation functions, the proposed fitness function studies the vehicle matching problem from both local and global perspectives, exploiting the pixel gradient information as well as the silhouette information. In view of the discrepancy between the 3D vehicle model and the real vehicle, a weighting strategy is introduced to treat the fitting of the model’s wireframes differently. Additionally, a normalization operation for the model’s projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to cluttered backgrounds and partial occlusion. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D-2D%20matching" title="3D-2D matching">3D-2D matching</a>, <a href="https://publications.waset.org/abstracts/search?q=fitness%20function" title=" fitness function"> fitness function</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20vehicle%20model" title=" 3D vehicle model"> 3D vehicle model</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20image%20gradient" title=" local image gradient"> local image gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouette%20information" title=" silhouette information"> silhouette information</a> </p> <a href="https://publications.waset.org/abstracts/45357/a-practical-and-efficient-evaluation-function-for-3d-model-based-vehicle-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45357.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9252</span> Imp_hist-Si: Improved 
Hybrid Image Segmentation Technique for Satellite Imagery to Decrease the Segmentation Error Rate</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neetu%20Manocha">Neetu Manocha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is a technique in which an image is partitioned into distinct regions whose pixels share similar features and belong to the same objects. Various segmentation strategies have recently been proposed by prominent researchers, but thorough analysis shows that the older methods generally do not decrease the segmentation error rate. The author therefore employs the HIST-SI technique, in which cluster-based and threshold-based segmentation techniques are merged, to decrease segmentation error rates. To further improve the results of HIST-SI, filtering and linking steps are added, yielding the Imp_HIST-SI technique, which decreases segmentation error rates further. The goal of this research is to find a new technique that decreases segmentation error rates and produces much better results than the HIST-SI technique. For testing the proposed technique, a dataset from Bhuvan, a national geoportal developed and hosted by ISRO (Indian Space Research Organisation), is used. Experiments are conducted using the scikit-image and OpenCV tools in Python, and performance is evaluated and compared against various existing image segmentation techniques on several metrics, i.e., Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). 
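The two evaluation metrics named above have simple closed forms; a minimal NumPy sketch (illustrative only, not the paper's evaluation harness) is:

```python
import numpy as np

def mse(a, b):
    """Mean Square Error between two images of equal shape."""
    a = a.astype(float)
    b = b.astype(float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images (peak=255);
    higher values mean the output is closer to the reference."""
    e = mse(a, b)
    if e == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / e)
```

A lower MSE and a higher PSNR indicate a segmentation result closer to the reference image.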
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=satellite%20image" title="satellite image">satellite image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=error%20rate" title=" error rate"> error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=MSE" title=" MSE"> MSE</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=HIST-SI" title=" HIST-SI"> HIST-SI</a>, <a href="https://publications.waset.org/abstracts/search?q=linking" title=" linking"> linking</a>, <a href="https://publications.waset.org/abstracts/search?q=filtering" title=" filtering"> filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=imp_HIST-SI" title=" imp_HIST-SI"> imp_HIST-SI</a> </p> <a href="https://publications.waset.org/abstracts/149905/imp-hist-si-improved-hybrid-image-segmentation-technique-for-satellite-imagery-to-decrease-the-segmentation-error-rate" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149905.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=2">2</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=309">309</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=310">310</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20matching%20technique&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a 
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> 
Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>