CINXE.COM
Search results for: image fusion
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: image fusion</title> <meta name="description" content="Search results for: image fusion"> <meta name="keywords" content="image fusion"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="image fusion" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image fusion"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1678</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image fusion</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1678</span> Region-Based Image Fusion with Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Shuo-Li%20Hsu">Shuo-Li Hsu</a>, <a href="https://publications.waset.org/search?q=Peng-Wei%20Gau"> Peng-Wei Gau</a>, <a href="https://publications.waset.org/search?q=I-Lin%20Wu"> I-Lin Wu</a>, <a href="https://publications.waset.org/search?q=Jyh-Horng%20Jeng"> Jyh-Horng Jeng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For most image fusion algorithms separate relationship by pixels in the image and treat them more or less independently. In addition, they have to be adjusted different parameters in different time or weather. In this paper, we propose a region–based image fusion which combines aspects of feature and pixel-level fusion method to replace only by pixel. The basic idea is to segment far infrared image only and to add information of each region from segmented image to visual image respectively. Then we determine different fused parameters according different region. At last, we adopt artificial neural network to deal with the problems of different time or weather, because the relationship between fused parameters and image features are nonlinear. It render the fused parameters can be produce automatically according different states. The experimental results present the method we proposed indeed have good adaptive capacity with automatic determined fused parameters. And the architecture can be used for lots of applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20fusion" title="Image fusion">Image fusion</a>, <a href="https://publications.waset.org/search?q=Region-based%20fusion" title=" Region-based fusion"> Region-based fusion</a>, <a href="https://publications.waset.org/search?q=Segmentation" title=" Segmentation"> Segmentation</a>, <a href="https://publications.waset.org/search?q=Neural%20network" title=" Neural network"> Neural network</a>, <a href="https://publications.waset.org/search?q=Multi-sensor." title=" Multi-sensor."> Multi-sensor.</a> </p> <a href="https://publications.waset.org/7557/region-based-image-fusion-with-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7557/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7557/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7557/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7557/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7557/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7557/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7557/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7557/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7557/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7557/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7557.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2258</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1677</span> On the EM Algorithm and Bootstrap Approach Combination for Improving Satellite Image Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tijani%20Delleji">Tijani Delleji</a>, <a href="https://publications.waset.org/search?q=Mourad%20Zribi"> Mourad Zribi</a>, <a href="https://publications.waset.org/search?q=Ahmed%20Ben%20Hamida"> Ahmed Ben Hamida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper discusses EM algorithm and Bootstrap approach combination applied for the improvement of the satellite image fusion process. This novel satellite image fusion method based on estimation theory EM algorithm and reinforced by Bootstrap approach was successfully implemented and tested. The sensor images are firstly split by a Bayesian segmentation method to determine a joint region map for the fused image. Then, we use the EM algorithm in conjunction with the Bootstrap approach to develop the bootstrap EM fusion algorithm, hence producing the fused targeted image. We proposed in this research to estimate the statistical parameters from some iterative equations of the EM algorithm relying on a reference of representative Bootstrap samples of images. Sizes of those samples are determined from a new criterion called 'hybrid criterion'. Consequently, the obtained results of our work show that using the Bootstrap EM (BEM) in image fusion improve performances of estimated parameters which involve amelioration of the fused image quality; and reduce the computing time during the fusion process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Satellite%20image%20fusion" title="Satellite image fusion">Satellite image fusion</a>, <a href="https://publications.waset.org/search?q=Bayesian%20segmentation" title=" Bayesian segmentation"> Bayesian segmentation</a>, <a href="https://publications.waset.org/search?q=Bootstrap%20approach" title=" Bootstrap approach"> Bootstrap approach</a>, <a href="https://publications.waset.org/search?q=EM%20algorithm." title=" EM algorithm."> EM algorithm.</a> </p> <a href="https://publications.waset.org/3474/on-the-em-algorithm-and-bootstrap-approach-combination-for-improving-satellite-image-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3474/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3474/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3474/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3474/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3474/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3474/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3474/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3474/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3474/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3474/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2260</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1676</span> Performance Analysis of Brain Tumor Detection Based On Image Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Anbumozhi">S. Anbumozhi</a>, <a href="https://publications.waset.org/search?q=P.%20S.%20Manoharan"> P. S. Manoharan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Medical Image fusion plays a vital role in medical field to diagnose the brain tumors which can be classified as benign or malignant. It is the process of integrating multiple images of the same scene into a single fused image to reduce uncertainty and minimizing redundancy while extracting all the useful information from the source images. Fuzzy logic is used to fuse two brain MRI images with different vision. The fused image will be more informative than the source images. The texture and wavelet features are extracted from the fused image. The multilevel Adaptive Neuro Fuzzy Classifier classifies the brain tumors based on trained and tested features. The proposed method achieved 80.48% sensitivity, 99.9% specificity and 99.69% accuracy. Experimental results obtained from fusion process prove that the use of the proposed image fusion approach shows better performance while compared with conventional fusion methodologies.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20fusion" title="Image fusion">Image fusion</a>, <a href="https://publications.waset.org/search?q=Fuzzy%20rules" title=" Fuzzy rules"> Fuzzy rules</a>, <a href="https://publications.waset.org/search?q=Neuro-fuzzy%20classifier." title=" Neuro-fuzzy classifier."> Neuro-fuzzy classifier.</a> </p> <a href="https://publications.waset.org/9998655/performance-analysis-of-brain-tumor-detection-based-on-image-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998655/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998655/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998655/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998655/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998655/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998655/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998655/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998655/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998655/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998655/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998655.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3058</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1675</span> Efficient Feature Fusion for Noise Iris in Unconstrained Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Yao-Hong%20Tsai">Yao-Hong Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents an efficient fusion algorithm for iris images to generate stable feature for recognition in unconstrained environment. Recently, iris recognition systems are focused on real scenarios in our daily life without the subject’s cooperation. Under large variation in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noise iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris image is based on Adaboost algorithm and then local binary pattern (LBP) histogram is then applied to texture classification with the weighting scheme. Experiment showed that the generated features from the proposed fusion algorithm can improve the performance for verification system through iris recognition.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20fusion" title="Image fusion">Image fusion</a>, <a href="https://publications.waset.org/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a>, <a href="https://publications.waset.org/search?q=wavelet." title=" wavelet."> wavelet.</a> </p> <a href="https://publications.waset.org/10000671/efficient-feature-fusion-for-noise-iris-in-unconstrained-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000671/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000671/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000671/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000671/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000671/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000671/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000671/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000671/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000671/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000671/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000671.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2217</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1674</span> A Novel Metric for Performance Evaluation of Image Fusion Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nedeljko%20Cvejic">Nedeljko Cvejic</a>, <a href="https://publications.waset.org/search?q=Artur%20%C5%81oza"> Artur Łoza</a>, <a href="https://publications.waset.org/search?q=David%20Bull"> David Bull</a>, <a href="https://publications.waset.org/search?q=Nishan%20Canagarajah"> Nishan Canagarajah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we present a novel objective nonreference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factors for the metrics. Experimental results confirm that the values of the proposed metrics correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Fusion%20performance%20measures" title="Fusion performance measures">Fusion performance measures</a>, <a href="https://publications.waset.org/search?q=image%20fusion" title=" image fusion"> image fusion</a>, <a href="https://publications.waset.org/search?q=non-reference%20quality%20measures" title=" non-reference quality measures"> non-reference quality measures</a>, <a href="https://publications.waset.org/search?q=objective%20quality%20measures." title=" objective quality measures."> objective quality measures.</a> </p> <a href="https://publications.waset.org/8392/a-novel-metric-for-performance-evaluation-of-image-fusion-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8392/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8392/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8392/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8392/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8392/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8392/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8392/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8392/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8392/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8392/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8392.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2843</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1673</span> A Novel Architecture for Wavelet based Image Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Susmitha%20Vekkot">Susmitha Vekkot</a>, <a href="https://publications.waset.org/search?q=Pancham%20Shukla"> Pancham Shukla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we focus on the fusion of images from different sources using multiresolution wavelet transforms. Based on reviews of popular image fusion techniques used in data analysis, different pixel and energy based methods are experimented. A novel architecture with a hybrid algorithm is proposed which applies pixel based maximum selection rule to low frequency approximations and filter mask based fusion to high frequency details of wavelet decomposition. The key feature of hybrid architecture is the combination of advantages of pixel and region based fusion in a single image which can help the development of sophisticated algorithms enhancing the edges and structural details. A Graphical User Interface is developed for image fusion to make the research outcomes available to the end user. To utilize GUI capabilities for medical, industrial and commercial activities without MATLAB installation, a standalone executable application is also developed using Matlab Compiler Runtime. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Filter%20mask" title="Filter mask">Filter mask</a>, <a href="https://publications.waset.org/search?q=GUI" title=" GUI"> GUI</a>, <a href="https://publications.waset.org/search?q=hybrid%20architecture" title=" hybrid architecture"> hybrid architecture</a>, <a href="https://publications.waset.org/search?q=image%20fusion" title=" image fusion"> image fusion</a>, <a href="https://publications.waset.org/search?q=Matlab%20Compiler%20Runtime" title="Matlab Compiler Runtime">Matlab Compiler Runtime</a>, <a href="https://publications.waset.org/search?q=wavelet%20transform." title=" wavelet transform."> wavelet transform.</a> </p> <a href="https://publications.waset.org/6304/a-novel-architecture-for-wavelet-based-image-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6304/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6304/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6304/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6304/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6304/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6304/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6304/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6304/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6304/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6304/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6304.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2389</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1672</span> The Use of Complex Contourlet Transform on Fusion Scheme</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Dipeng%20Chen">Dipeng Chen</a>, <a href="https://publications.waset.org/search?q=Qi%20Li"> Qi Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image fusion aims to enhance the perception of a scene by combining important information captured by different sensors. Dual-Tree Complex Wavelet (DT-CWT) has been thouroughly investigated for image fusion, since it takes advantages of approximate shift invariance and direction selectivity. But it can only handle limited direction information. To allow a more flexible directional expansion for images, we propose a novel fusion scheme, referred to as complex contourlet transform (CCT). It successfully incorporates directional filter banks (DFB) into DT-CWT. As a result it efficiently deal with images containing contours and textures, whereas it retains the property of shift invariance. Experimental results demonstrated that the method features high quality fusion performance and can facilitate many image processing applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Complex%20contourlet%20transform" title="Complex contourlet transform">Complex contourlet transform</a>, <a href="https://publications.waset.org/search?q=Complex%20wavelettransform" title=" Complex wavelettransform"> Complex wavelettransform</a>, <a href="https://publications.waset.org/search?q=Fusion." title=" Fusion."> Fusion.</a> </p> <a href="https://publications.waset.org/10595/the-use-of-complex-contourlet-transform-on-fusion-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10595/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10595/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10595/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10595/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10595/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10595/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10595/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10595/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10595/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10595/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10595.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1594</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1671</span> A Similarity Metric for Assessment of Image Fusion Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nedeljko%20Cvejic">Nedeljko Cvejic</a>, <a href="https://publications.waset.org/search?q=Artur%20%C5%81oza"> Artur Łoza</a>, <a href="https://publications.waset.org/search?q=David%20Bull"> David Bull</a>, <a href="https://publications.waset.org/search?q=Nishan%20Canagarajah"> Nishan Canagarajah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a novel objective nonreference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factors for the metrics. Experimental results confirm that the values of the proposed metrics correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Fusion%20performance%20measures" title="Fusion performance measures">Fusion performance measures</a>, <a href="https://publications.waset.org/search?q=image%20fusion" title=" image fusion"> image fusion</a>, <a href="https://publications.waset.org/search?q=nonreferencequality%20measures" title=" nonreferencequality measures"> nonreferencequality measures</a>, <a href="https://publications.waset.org/search?q=objective%20quality%20measures." title=" objective quality measures."> objective quality measures.</a> </p> <a href="https://publications.waset.org/1046/a-similarity-metric-for-assessment-of-image-fusion-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1046/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1046/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1046/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1046/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1046/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1046/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1046/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1046/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1046/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1046/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1046.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2492</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1670</span> Multi-Sensor Image Fusion for Visible and Infrared Thermal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Amit%20Kr.%20Happy">Amit Kr. Happy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper is motivated by the importance of multi-sensor image fusion with specific focus on Infrared (IR) and Visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can be from different modalities like Visible camera & IR Thermal Imager. While visible images are captured by reflected radiations in the visible spectrum, the thermal images are formed from thermal radiation (IR) that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, some image fusion algorithms based upon Multi-Scale Transform (MST) and region-based selection rule with consistency verification have been proposed and presented. This research includes implementation of the proposed image fusion algorithm in MATLAB along with a comparative analysis to decide the optimum number of levels for MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion algorithm approaches, we observe several challenges from the popular image fusion methods. While high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, but they also make it hard to become deployed in system and applications that require real-time operation, high flexibility and low computation ability. So, the methods presented in this paper offer good results with minimum time complexity. </p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20fusion" title="Image fusion">Image fusion</a>, <a href="https://publications.waset.org/search?q=IR%20thermal%20imager" title=" IR thermal imager"> IR thermal imager</a>, <a href="https://publications.waset.org/search?q=multi-sensor" title=" multi-sensor"> multi-sensor</a>, <a href="https://publications.waset.org/search?q=Multi-Scale%20Transform." title=" Multi-Scale Transform."> Multi-Scale Transform.</a> </p> <a href="https://publications.waset.org/10012628/multi-sensor-image-fusion-for-visible-and-infrared-thermal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012628/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012628/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012628/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012628/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012628/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012628/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012628/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012628/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012628/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012628/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">430</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1669</span> Multi-Focus Image Fusion Using SFM and Wavelet Packet </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Somkait%20Udomhunsakul">Somkait Udomhunsakul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and Wavelet Packet was proposed. The proposed fusion approach, firstly, the two fused images were transformed and decomposed into sixteen subbands using Wavelet packet. Next, each subband was partitioned into sub-blocks and each block was identified the clearer regions by using the Spatial Frequency Measurement (SFM). Finally, the recovered fused image was reconstructed by performing the Inverse Wavelet Transform. From the experimental results, it was found that the proposed method outperformed the traditional SFM based methods in terms of objective and subjective assessments.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Multi-focus%20image%20fusion" title="Multi-focus image fusion">Multi-focus image fusion</a>, <a href="https://publications.waset.org/search?q=Wavelet%20Packet" title=" Wavelet Packet"> Wavelet Packet</a>, <a href="https://publications.waset.org/search?q=Spatial%20Frequency%20Measurement." title=" Spatial Frequency Measurement."> Spatial Frequency Measurement.</a> </p> <a href="https://publications.waset.org/9998287/multi-focus-image-fusion-using-sfm-and-wavelet-packet" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998287/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998287/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998287/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998287/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998287/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998287/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998287/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998287/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998287/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998287/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998287.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1614</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1668</span> Medical Imaging Fusion: A Teaching-Learning Simulation Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Cristina%20M.%20R.%20Caridade">Cristina M. R. Caridade</a>, <a href="https://publications.waset.org/search?q=Ana%20Rita%20F.%20Morais"> Ana Rita F. Morais</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with health care facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB using Graphical User Interface, for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application uses different algorithms and medical fusion techniques in real time, allowing to view original images and fusion images, compare processed and original images, adjust parameters and save images. The tool proposed in an innovative teaching and learning environment, consists of a dynamic and motivating teaching simulation for biomedical engineering students to acquire knowledge about medical image fusion techniques, necessary skills for the training of biomedical engineers. In conclusion, the developed simulation tool provides a real-time visualization of the original and fusion images and the possibility to test, evaluate and progress the student’s knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20fusion" title="Image fusion">Image fusion</a>, <a href="https://publications.waset.org/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/search?q=teaching-learning%20simulation%20tool" title=" teaching-learning simulation tool"> teaching-learning simulation tool</a>, <a href="https://publications.waset.org/search?q=biomedical%20engineering%20education." title=" biomedical engineering education."> biomedical engineering education.</a> </p> <a href="https://publications.waset.org/10013903/medical-imaging-fusion-a-teaching-learning-simulation-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10013903/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10013903/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10013903/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10013903/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10013903/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10013903/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10013903/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10013903/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10013903/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10013903/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10013903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">21</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1667</span> Medical Image Fusion Based On Redundant Wavelet Transform and Morphological Processing </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=P.%20S.%20Gomathi">P. S. Gomathi</a>, <a href="https://publications.waset.org/search?q=B.%20Kalaavathi"> B. Kalaavathi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The process in which the complementary information from multiple images is integrated to provide composite image that contains more information than the original input images is called image fusion. Medical image fusion provides useful information from multimodality medical images that provides additional information to the doctor for diagnosis of diseases in a better way. This paper represents the wavelet based medical image fusion algorithm on different multimodality medical images. In order to fuse the medical images, images are decomposed using Redundant Wavelet Transform (RWT). The high frequency coefficients are convolved with morphological operator followed by the maximum-selection (MS) rule. The low frequency coefficients are processed by MS rule. The reconstructed image is obtained by inverse RWT. The quantitative measures which includes Mean, Standard Deviation, Average Gradient, Spatial frequency, Edge based Similarity Measures are considered for evaluating the fused images. The performance of this proposed method is compared with Pixel averaging, PCA, and DWT fusion methods. When compared with conventional methods, the proposed framework provides better performance for analysis of multimodality medical images.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Discrete%20Wavelet%20Transform%20%28DWT%29" title="Discrete Wavelet Transform (DWT)">Discrete Wavelet Transform (DWT)</a>, <a href="https://publications.waset.org/search?q=Image%20Fusion" title=" Image Fusion"> Image Fusion</a>, <a href="https://publications.waset.org/search?q=Morphological%20Processing" title=" Morphological Processing"> Morphological Processing</a>, <a href="https://publications.waset.org/search?q=Redundant%20Wavelet%20Transform%20%28RWT%29." title=" Redundant Wavelet Transform (RWT). "> Redundant Wavelet Transform (RWT). </a> </p> <a href="https://publications.waset.org/9998892/medical-image-fusion-based-on-redundant-wavelet-transform-and-morphological-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998892/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998892/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998892/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998892/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998892/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998892/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998892/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998892/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998892/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998892/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998892.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2158</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1666</span> Integral Image-Based Differential Filters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kohei%20Inoue">Kohei Inoue</a>, <a href="https://publications.waset.org/search?q=Kenji%20Hara"> Kenji Hara</a>, <a href="https://publications.waset.org/search?q=Kiichi%20Urahama"> Kiichi Urahama</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>We describe a relationship between integral images and differential images. First, we derive a simple difference filter from conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are the same, and therefore, we can execute the integration and differentiation by solving the simultaneous equations. We applied the relationship to an image fusion problem, and experimentally verified the effectiveness of the proposed method.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Integral%20images" title="Integral images">Integral images</a>, <a href="https://publications.waset.org/search?q=differential%20images" title=" differential images"> differential images</a>, <a href="https://publications.waset.org/search?q=differential%20filters" title=" differential filters"> differential filters</a>, <a href="https://publications.waset.org/search?q=image%20fusion." title=" image fusion."> image fusion.</a> </p> <a href="https://publications.waset.org/9998323/integral-image-based-differential-filters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998323/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998323/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998323/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998323/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998323/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998323/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998323/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998323/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998323/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998323/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2099</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1665</span> Dempster-Shafer Evidence Theory for Image Segmentation: Application in Cells Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Ben%20Chaabane">S. Ben Chaabane</a>, <a href="https://publications.waset.org/search?q=M.%20Sayadi"> M. Sayadi</a>, <a href="https://publications.waset.org/search?q=F.%20Fnaiech"> F. Fnaiech</a>, <a href="https://publications.waset.org/search?q=E.%20Brassart"> E. Brassart</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we propose a new knowledge model using the Dempster-Shafer-s evidence theory for image segmentation and fusion. The proposed method is composed essentially of two steps. First, mass distributions in Dempster-Shafer theory are obtained from the membership degrees of each pixel covering the three image components (R, G and B). Each membership-s degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the three images. Second, the fusion process consists in defining three discernment frames which are associated with the three images to be fused, and then combining them to form a new frame of discernment. The strategy used to define mass distributions in the combined framework is discussed in detail. The proposed fusion method is illustrated in the context of image segmentation. Experimental investigations and comparative studies with the other previous methods are carried out showing thus the robustness and superiority of the proposed method in terms of image segmentation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Fuzzy%20C-means" title="Fuzzy C-means">Fuzzy C-means</a>, <a href="https://publications.waset.org/search?q=Color%20image" title=" Color image"> Color image</a>, <a href="https://publications.waset.org/search?q=data%20fusion" title=" data fusion"> data fusion</a>, <a href="https://publications.waset.org/search?q=Dempster-Shafer%27s%20evidence%20theory" title="Dempster-Shafer's evidence theory">Dempster-Shafer's evidence theory</a> </p> <a href="https://publications.waset.org/8313/dempster-shafer-evidence-theory-for-image-segmentation-application-in-cells-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8313/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8313/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8313/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8313/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8313/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8313/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8313/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8313/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8313/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8313/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8313.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2200</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1664</span> Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=N.%20W.%20U.%20D.%20Chathurani">N. W. U. D. Chathurani</a>, <a href="https://publications.waset.org/search?q=Shlomo%20Geva"> Shlomo Geva</a>, <a href="https://publications.waset.org/search?q=Vinod%20Chandran"> Vinod Chandran</a>, <a href="https://publications.waset.org/search?q=Proboda%20Rajapaksha"> Proboda Rajapaksha </a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effect of features' dimensions, ranges, internal feature normalization and the distance measure. This approach can easily be adopted in any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and Oliva and Torralba) and compared with existing approaches. The performance of the proposed approach is confirmed with the significantly improved performance in comparison with the independently evaluated baseline of the previously proposed feature fusion approaches.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Feature%20fusion" title="Feature fusion">Feature fusion</a>, <a href="https://publications.waset.org/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a>, <a href="https://publications.waset.org/search?q=membership%20function" title=" membership function"> membership function</a>, <a href="https://publications.waset.org/search?q=normalization." title=" normalization. "> normalization. </a> </p> <a href="https://publications.waset.org/10005560/image-retrieval-based-on-multi-feature-fusion-for-heterogeneous-image-databases" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10005560/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10005560/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10005560/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10005560/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10005560/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10005560/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10005560/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10005560/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10005560/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10005560/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10005560.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1345</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1663</span> A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Yongquan%20Zhao">Yongquan Zhao</a>, <a href="https://publications.waset.org/search?q=Bo%20Huang"> Bo Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Spatial, Temporal, and Spectral Resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, any single satellite sensor cannot provide Earth observations with high STSR simultaneously because of the hardware technology limitations of satellite sensors. On the other hand, a conflicting circumstance is that the demand for high STSR has been growing with the remote sensing application development. Although image fusion technology provides a feasible means to overcome the limitations of the current Earth observation data, the current fusion technologies cannot enhance all STSR simultaneously and provide high enough resolution improvement level. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR simultaneously, which blends the high spatial resolution from the panchromatic image of Landsat-8 Operational Land Imager (OLI), the high temporal resolution from the multi-spectral image of Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution from the hyper-spectral image of Hyperion to produce high STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in Beijing suburb area, China is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM can produce fused image that has good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support the studies that require high STSR satellite imagery. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Hybrid%20spatial-temporal-spectral%20fusion" title="Hybrid spatial-temporal-spectral fusion">Hybrid spatial-temporal-spectral fusion</a>, <a href="https://publications.waset.org/search?q=high%20resolution%20synthetic%20imagery" title=" high resolution synthetic imagery"> high resolution synthetic imagery</a>, <a href="https://publications.waset.org/search?q=least%20square%20regression" title=" least square regression"> least square regression</a>, <a href="https://publications.waset.org/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/search?q=spectral%20transformation." title=" spectral transformation."> spectral transformation.</a> </p> <a href="https://publications.waset.org/10008092/a-hybrid-image-fusion-model-for-generating-high-spatial-temporal-spectral-resolution-data-using-oli-modis-hyperion-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10008092/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10008092/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10008092/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10008092/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10008092/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10008092/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10008092/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10008092/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10008092/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10008092/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10008092.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1235</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1662</span> Super Resolution Blind Reconstruction of Low Resolution Images using Wavelets based Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Liyakathunisa">Liyakathunisa</a>, <a href="https://publications.waset.org/search?q=V.%20K.%20Ananthashayana"> V. K. Ananthashayana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Crucial information barely visible to the human eye is often embedded in a series of low resolution images taken of the same scene. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. The ideal algorithm should be fast, and should add sharpness and details, both at edges and in regions without adding artifacts. In this paper we propose a super resolution blind reconstruction technique for linearly degraded images. In our proposed technique the algorithm is divided into three parts an image registration, wavelets based fusion and an image restoration. In this paper three low resolution images are considered which may sub pixels shifted, rotated, blurred or noisy, the sub pixel shifted images are registered using affine transformation model; A wavelet based fusion is performed and the noise is removed using soft thresolding. Our proposed technique reduces blocking artifacts and also smoothens the edges and it is also able to restore high frequency details in an image. Our technique is efficient and computationally fast having clear perspective of real time implementation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Affine%20Transforms" title="Affine Transforms">Affine Transforms</a>, <a href="https://publications.waset.org/search?q=Denoiseing" title=" Denoiseing"> Denoiseing</a>, <a href="https://publications.waset.org/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/search?q=Fusion" title=" Fusion"> Fusion</a>, <a href="https://publications.waset.org/search?q=Image%20registration." title="Image registration.">Image registration.</a> </p> <a href="https://publications.waset.org/7195/super-resolution-blind-reconstruction-of-low-resolution-images-using-wavelets-based-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7195/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7195/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7195/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7195/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7195/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7195/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7195/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7195/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7195/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7195/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7195.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2670</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1661</span> Color Image Enhancement Using Multiscale Retinex and Image Fusion Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Chang-Hsing%20Lee">Chang-Hsing Lee</a>, <a href="https://publications.waset.org/search?q=Cheng-Chang%20Lien"> Cheng-Chang Lien</a>, <a href="https://publications.waset.org/search?q=Chin-Chuan%20Han"> Chin-Chuan Han</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, an edge-strength guided multiscale retinex (EGMSR) approach will be proposed for color image contrast enhancement. In EGMSR, the pixel-dependent weight associated with each pixel in the single scale retinex output image is computed according to the edge strength around this pixel in order to prevent from over-enhancing the noises contained in the smooth dark/bright regions. Further, by fusing together the enhanced results of EGMSR and adaptive multiscale retinex (AMSR), we can get a natural fused image having high contrast and proper tonal rendition. Experimental results on several low-contrast images have shown that our proposed approach can produce natural and appealing enhanced images.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Enhancement" title="Image Enhancement">Image Enhancement</a>, <a href="https://publications.waset.org/search?q=Multiscale%20Retinex" title=" Multiscale Retinex"> Multiscale Retinex</a>, <a href="https://publications.waset.org/search?q=Image%0D%0AFusion." title=" Image Fusion."> Image Fusion.</a> </p> <a href="https://publications.waset.org/9999495/color-image-enhancement-using-multiscale-retinex-and-image-fusion-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9999495/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9999495/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9999495/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9999495/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9999495/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9999495/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9999495/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9999495/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9999495/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9999495/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9999495.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2738</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1660</span> Video Data Mining based on Information Fusion for Tamper Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Girija%20Chetty">Girija Chetty</a>, <a href="https://publications.waset.org/search?q=Renuka%20Biswas"> Renuka Biswas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose novel algorithmic models based on information fusion and feature transformation in crossmodal subspace for different types of residue features extracted from several intra-frame and inter-frame pixel sub-blocks in video sequences for detecting digital video tampering or forgery. An evaluation of proposed residue features – the noise residue features and the quantization features, their transformation in cross-modal subspace, and their multimodal fusion, for emulated copy-move tamper scenario shows a significant improvement in tamper detection accuracy as compared to single mode features without transformation in cross-modal subspace. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=image%20tamper%20detection" title="image tamper detection">image tamper detection</a>, <a href="https://publications.waset.org/search?q=digital%20forensics" title=" digital forensics"> digital forensics</a>, <a href="https://publications.waset.org/search?q=correlation%0Afeatures%20image%20fusion" title=" correlation features image fusion"> correlation features image fusion</a> </p> <a href="https://publications.waset.org/7183/video-data-mining-based-on-information-fusion-for-tamper-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7183/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7183/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7183/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7183/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7183/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7183/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7183/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7183/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7183/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7183/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7183.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1900</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1659</span> A Novel Cytokine Derived Fusion Tag for Over- Expression of Heterologous Proteins in E. coli</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Banerjee">S. Banerjee</a>, <a href="https://publications.waset.org/search?q=A.%20Apte%20Deshpande"> A. Apte Deshpande</a>, <a href="https://publications.waset.org/search?q=N.%20Mandi"> N. Mandi</a>, <a href="https://publications.waset.org/search?q=S.%20Padmanabhan"> S. Padmanabhan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We report a novel fusion tag for expressing recombinant proteins in E. coli. The fusion tag is the C-terminus part of the human GMCSF gene comprising 45 amino acids, which aid in over expression of otherwise non expressible genes. Expression of hIFN a2b with this fusion tag also escapes the requirement of rare codons for expression. This is also a first report of a small fusion tag of human origin having affinity to heparin sepharose column facilitating the purification of fusion protein. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=fusion%20tag" title="fusion tag">fusion tag</a>, <a href="https://publications.waset.org/search?q=bacterial%20expression" title=" bacterial expression"> bacterial expression</a>, <a href="https://publications.waset.org/search?q=rare%20codons" title=" rare codons"> rare codons</a>, <a href="https://publications.waset.org/search?q=human%20GMCSF" title=" human GMCSF"> human GMCSF</a> </p> <a href="https://publications.waset.org/11943/a-novel-cytokine-derived-fusion-tag-for-over-expression-of-heterologous-proteins-in-e-coli" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11943/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11943/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11943/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11943/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11943/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11943/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11943/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11943/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11943/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11943/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1898</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1658</span> Hand Written Digit Recognition by Multiple Classifier Fusion based on Decision Templates Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Reza%20Ebrahimpour">Reza Ebrahimpour</a>, <a href="https://publications.waset.org/search?q=Samaneh%20Hamedi"> Samaneh Hamedi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Classifier fusion may generate more accurate classification than each of the basic classifiers. Fusion is often based on fixed combination rules like the product, average etc. This paper presents decision templates as classifier fusion method for the recognition of the handwritten English and Farsi numerals (1-9). The process involves extracting a feature vector on well-known image databases. The extracted feature vector is fed to multiple classifier fusion. A set of experiments were conducted to compare decision templates (DTs) with some combination rules. Results from decision templates conclude 97.99% and 97.28% for Farsi and English handwritten digits. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Decision%20templates" title="Decision templates">Decision templates</a>, <a href="https://publications.waset.org/search?q=multi-layer%20perceptron" title=" multi-layer perceptron"> multi-layer perceptron</a>, <a href="https://publications.waset.org/search?q=characteristics%20Loci" title=" characteristics Loci"> characteristics Loci</a>, <a href="https://publications.waset.org/search?q=principle%20component%20analysis%20%28PCA%29." title=" principle component analysis (PCA)."> principle component analysis (PCA).</a> </p> <a href="https://publications.waset.org/7930/hand-written-digit-recognition-by-multiple-classifier-fusion-based-on-decision-templates-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7930/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7930/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7930/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7930/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7930/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7930/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7930/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7930/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7930/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7930/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1956</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1657</span> Fusion Filters Weighted by Scalars and Matrices for Linear Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Seok%20Hyoung%20Lee">Seok Hyoung Lee</a>, <a href="https://publications.waset.org/search?q=Vladimir%20Shin"> Vladimir Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An optimal mean-square fusion formulas with scalar and matrix weights are presented. The relationship between them is established. The fusion formulas are compared on the continuous-time filtering problem. The basic differential equation for cross-covariance of the local errors being the key quantity for distributed fusion is derived. It is shown that the fusion filters are effective for multi-sensor systems containing different types of sensors. An example demonstrating the reasonable good accuracy of the proposed filters is given. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Kalman%20filtering" title="Kalman filtering">Kalman filtering</a>, <a href="https://publications.waset.org/search?q=fusion%20formula" title=" fusion formula"> fusion formula</a>, <a href="https://publications.waset.org/search?q=multi-sensor" title=" multi-sensor"> multi-sensor</a>, <a href="https://publications.waset.org/search?q=mean-square%20error." title="mean-square error.">mean-square error.</a> </p> <a href="https://publications.waset.org/12729/fusion-filters-weighted-by-scalars-and-matrices-for-linear-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12729/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12729/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12729/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12729/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12729/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12729/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12729/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12729/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12729/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12729/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12729.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1395</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1656</span> Improving Spatiotemporal Change Detection: A High Level Fusion Approach for Discovering Uncertain Knowledge from Satellite Image Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Wadii%20Boulila">Wadii Boulila</a>, <a href="https://publications.waset.org/search?q=Imed%20Riadh%20Farah"> Imed Riadh Farah</a>, <a href="https://publications.waset.org/search?q=Karim%20Saheb%20Ettabaa"> Karim Saheb Ettabaa</a>, <a href="https://publications.waset.org/search?q=Basel%20Solaiman"> Basel Solaiman</a>, <a href="https://publications.waset.org/search?q=Henda%20Ben%20Ghezala"> Henda Ben Ghezala</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper investigates the problem of tracking spa¬tiotemporal changes of a satellite image through the use of Knowledge Discovery in Database (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfections. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view of spatiotemporal evolution of a query model (which represents an extracted object from a satellite image). In order to combine these methods, we use the evidence fusion theory which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results of satellite images representing the region of Auckland in New Zealand depict the improvement in the overall change detection as compared to using classical methods.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Knowledge%20discovery%20in%20satellite%20databases" title="Knowledge discovery in satellite databases">Knowledge discovery in satellite databases</a>, <a href="https://publications.waset.org/search?q=knowledge%20fusion" title=" knowledge fusion"> knowledge fusion</a>, <a href="https://publications.waset.org/search?q=data%20imperfection" title=" data imperfection"> data imperfection</a>, <a href="https://publications.waset.org/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/search?q=spatiotemporal%20change%20detection." title=" spatiotemporal change detection."> spatiotemporal change detection.</a> </p> <a href="https://publications.waset.org/14440/improving-spatiotemporal-change-detection-a-high-level-fusion-approach-for-discovering-uncertain-knowledge-from-satellite-image-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14440/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14440/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14440/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14440/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14440/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14440/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14440/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14440/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14440/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14440/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14440.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1547</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1655</span> Characterization of Inertial Confinement Fusion Targets Based on Transmission Holographic Mach-Zehnder Interferometer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=B.%20Zare-Farsani">B. Zare-Farsani</a>, <a href="https://publications.waset.org/search?q=M.%20Valieghbal"> M. Valieghbal</a>, <a href="https://publications.waset.org/search?q=M.%20Tarkashvand"> M. Tarkashvand</a>, <a href="https://publications.waset.org/search?q=A.%20H.%20Farahbod"> A. H. Farahbod</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To provide the conditions for nuclear fusion by high energy and powerful laser beams, it is required to have a high degree of symmetry and surface uniformity of the spherical capsules to reduce the Rayleigh-Taylor hydrodynamic instabilities. In this paper, we have used the digital microscopic holography based on Mach-Zehnder interferometer to study the quality of targets for inertial fusion. The interferometric pattern of the target has been registered by a CCD camera and analyzed by Holovision software. The uniformity of the surface and shell thickness are investigated and measured in reconstructed image. We measured shell thickness in different zone where obtained non uniformity 22.82 percent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Inertial%20confinement%20fusion" title="Inertial confinement fusion">Inertial confinement fusion</a>, <a href="https://publications.waset.org/search?q=Mach-Zehnder%20interferometer" title=" Mach-Zehnder interferometer"> Mach-Zehnder interferometer</a>, <a href="https://publications.waset.org/search?q=Digital%20holographic%20microscopy." title=" Digital holographic microscopy."> Digital holographic microscopy.</a> </p> <a href="https://publications.waset.org/10004811/characterization-of-inertial-confinement-fusion-targets-based-on-transmission-holographic-mach-zehnder-interferometer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004811/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004811/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004811/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004811/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004811/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004811/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004811/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004811/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004811/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004811/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004811.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1314</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1654</span> Feature Level Fusion of Multimodal Images Using Haar Lifting Wavelet Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sudipta%20Majumdar">Sudipta Majumdar</a>, <a href="https://publications.waset.org/search?q=Jayant%20Bharadwaj"> Jayant Bharadwaj</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents feature level image fusion using Haar lifting wavelet transform. Feature fused is edge and boundary information, which is obtained using wavelet transform modulus maxima criteria. Simulation results show the superiority of the result as entropy, gradient, standard deviation are increased for fused image as compared to input images. The proposed methods have the advantages of simplicity of implementation, fast algorithm, perfect reconstruction, and reduced computational complexity. (Computational cost of Haar wavelet is very small as compared to other lifting wavelets.)</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Lifting%20wavelet%20transform" title="Lifting wavelet transform">Lifting wavelet transform</a>, <a href="https://publications.waset.org/search?q=wavelet%20transform%20modulus%20maxima." title=" wavelet transform modulus maxima. "> wavelet transform modulus maxima. </a> </p> <a href="https://publications.waset.org/9998893/feature-level-fusion-of-multimodal-images-using-haar-lifting-wavelet-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998893/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998893/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998893/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998893/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998893/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998893/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998893/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998893/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998893/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998893/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998893.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2423</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1653</span> Alternative to M-Estimates in Multisensor Data Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nga-Viet%20Nguyen">Nga-Viet Nguyen</a>, <a href="https://publications.waset.org/search?q=Georgy%20Shevlyakov"> Georgy Shevlyakov</a>, <a href="https://publications.waset.org/search?q=Vladimir%20Shin"> Vladimir Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To solve the problem of multisensor data fusion under non-Gaussian channel noise. The advanced M-estimates are known to be robust solution while trading off some accuracy. In order to improve the estimation accuracy while still maintaining the equivalent robustness, a two-stage robust fusion algorithm is proposed using preliminary rejection of outliers then an optimal linear fusion. The numerical experiments show that the proposed algorithm is equivalent to the M-estimates in the case of uncorrelated local estimates and significantly outperforms the M-estimates when local estimates are correlated. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Data%20fusion" title="Data fusion">Data fusion</a>, <a href="https://publications.waset.org/search?q=estimation" title=" estimation"> estimation</a>, <a href="https://publications.waset.org/search?q=robustness" title=" robustness"> robustness</a>, <a href="https://publications.waset.org/search?q=M-estimates." title=" M-estimates."> M-estimates.</a> </p> <a href="https://publications.waset.org/7839/alternative-to-m-estimates-in-multisensor-data-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7839/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7839/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7839/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7839/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7839/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7839/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7839/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7839/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7839/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7839/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1832</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1652</span> The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a>, <a href="https://publications.waset.org/search?q=Dmitry%20V.%20Egorov"> Dmitry V. Egorov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper introduces an original method of parametric optimization of the structure for multimodal decisionlevel fusion scheme which combines the results of the partial solution of the classification task obtained from assembly of the mono-modal classifiers. As a result, a multimodal fusion classifier which has the minimum value of the total error rate has been obtained.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=%D0%A1lassification%20accuracy" title="Сlassification accuracy">Сlassification accuracy</a>, <a href="https://publications.waset.org/search?q=fusion%20solution" title=" fusion solution"> fusion solution</a>, <a href="https://publications.waset.org/search?q=total%20error%0D%0Arate." title=" total error rate."> total error rate.</a> </p> <a href="https://publications.waset.org/10000710/the-optimization-of-decision-rules-in-multimodal-decision-level-fusion-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000710/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000710/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000710/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000710/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000710/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000710/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000710/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000710/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000710/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000710/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000710.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1975</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1651</span> Liver Tumor Detection by Classification through FD Enhancement of CT Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=N.%20Ghatwary">N. Ghatwary</a>, <a href="https://publications.waset.org/search?q=A.%20Ahmed"> A. Ahmed</a>, <a href="https://publications.waset.org/search?q=H.%20Jalab"> H. Jalab</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an approach for the liver tumor detection in computed tomography (CT) images is represented. The detection process is based on classifying the features of target liver cell to either tumor or non-tumor. Fractional differential (FD) is applied for enhancement of Liver CT images, with the aim of enhancing texture and edge features. Later on, a fusion method is applied to merge between the various enhanced images and produce a variety of feature improvement, which will increase the accuracy of classification. Each image is divided into NxN non-overlapping blocks, to extract the desired features. Support vector machines (SVM) classifier is trained later on a supplied dataset different from the tested one. Finally, the block cells are identified whether they are classified as tumor or not. Our approach is validated on a group of patients’ CT liver tumor datasets. The experiment results demonstrated the efficiency of detection in the proposed technique. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Fractional%20differential%20%28FD%29" title="Fractional differential (FD)">Fractional differential (FD)</a>, <a href="https://publications.waset.org/search?q=Computed%20Tomography%0D%0A%28CT%29" title=" Computed Tomography (CT)"> Computed Tomography (CT)</a>, <a href="https://publications.waset.org/search?q=fusion." title=" fusion."> fusion.</a> </p> <a href="https://publications.waset.org/10003283/liver-tumor-detection-by-classification-through-fd-enhancement-of-ct-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10003283/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10003283/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10003283/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10003283/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10003283/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10003283/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10003283/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10003283/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10003283/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10003283/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10003283.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1682</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1650</span> Unified Fusion Approach with Application to SLAM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Xinde%20Li">Xinde Li</a>, <a href="https://publications.waset.org/search?q=Xinhan%20Huang"> Xinhan Huang</a>, <a href="https://publications.waset.org/search?q=Min%20Wang"> Min Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose the pre-processor based on the Evidence Supporting Measure of Similarity (ESMS) filter and also propose the unified fusion approach (UFA) based on the general fusion machine coupled with ESMS filter, which improve the correctness and precision of information fusion in any fields of application. Here we mainly apply the new approach to Simultaneous Localization And Mapping (SLAM) of Pioneer II mobile robots. A simulation experiment was performed, where an autonomous virtual mobile robot with sonar sensors evolves in a virtual world map with obstacles. By comparing the result of building map according to the general fusion machine (here DSmT-based fusing machine and PCR5-based conflict redistributor considereded) coupling with ESMS filter and without ESMS filter, it shows the benefit of the selection of the sources as a prerequisite for improvement of the information fusion, and also testifies the superiority of the UFA in dealing with SLAM. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=DSmT" title="DSmT">DSmT</a>, <a href="https://publications.waset.org/search?q=ESMS%20filter" title=" ESMS filter"> ESMS filter</a>, <a href="https://publications.waset.org/search?q=SLAM" title=" SLAM"> SLAM</a>, <a href="https://publications.waset.org/search?q=UFA" title=" UFA"> UFA</a> </p> <a href="https://publications.waset.org/5429/unified-fusion-approach-with-application-to-slam" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5429/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5429/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5429/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5429/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5429/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5429/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5429/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5429/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5429/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5429/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5429.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1350</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1649</span> Service-Oriented Architecture for Object- Centric Information Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Jeffrey%20A.%20Dunne">Jeffrey A. Dunne</a>, <a href="https://publications.waset.org/search?q=Kevin%20Ligozio"> Kevin Ligozio</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In many applications there is a broad variety of information relevant to a focal “object" of interest, and the fusion of such heterogeneous data types is desirable for classification and categorization. While these various data types can sometimes be treated as orthogonal (such as the hull number, superstructure color, and speed of an oil tanker), there are instances where the inference and the correlation between quantities can provide improved fusion capabilities (such as the height, weight, and gender of a person). A service-oriented architecture has been designed and prototyped to support the fusion of information for such “object-centric" situations. It is modular, scalable, and flexible, and designed to support new data sources, fusion algorithms, and computational resources without affecting existing services. The architecture is designed to simplify the incorporation of legacy systems, support exact and probabilistic entity disambiguation, recognize and utilize multiple types of uncertainties, and minimize network bandwidth requirements. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Data%20fusion" title="Data fusion">Data fusion</a>, <a href="https://publications.waset.org/search?q=distributed%20computing" title=" distributed computing"> distributed computing</a>, <a href="https://publications.waset.org/search?q=service-oriented%20architecture" title=" service-oriented architecture"> service-oriented architecture</a>, <a href="https://publications.waset.org/search?q=SOA" title=" SOA"> SOA</a> </p> <a href="https://publications.waset.org/12815/service-oriented-architecture-for-object-centric-information-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12815/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12815/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12815/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12815/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12815/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12815/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12815/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12815/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12815/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12815/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12815.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1469</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=55">55</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=56">56</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=image%20fusion&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>