CINXE.COM

Search results for: fusion method

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: fusion method</title> <meta name="description" content="Search results for: fusion method"> <meta name="keywords" content="fusion method"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="fusion method" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="fusion method"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 19279</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: fusion method</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19279</span> Sampling Two-Channel Nonseparable Wavelets and Its Applications in Multispectral Image Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bin%20Liu">Bin Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Weijie%20Liu"> Weijie Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bin%20Sun"> Bin Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Yihui%20Luo"> Yihui Luo </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to solve the problem of lower spatial resolution and block effect in the fusion method based on separable wavelet transform in the resulting fusion image, a new sampling mode based on multi-resolution analysis of two-channel non separable wavelet transform, whose dilation matrix is [1,1;1,-1], is presented and a multispectral image fusion method based on this kind of sampling mode is proposed. Filter banks related to this kind of wavelet are constructed, and multiresolution decomposition of the intensity of the MS and panchromatic image are performed in the sampled mode using the constructed filter bank. The low- and high-frequency coefficients are fused by different fusion rules. The experiment results show that this method has good visual effect. The fusion performance has been noted to outperform the IHS fusion method, as well as, the fusion methods based on DWT, IHS-DWT, IHS-Contourlet transform, and IHS-Curvelet transform in preserving both spectral quality and high spatial resolution information. Furthermore, when compared with the fusion method based on nonsubsampled two-channel non separable wavelet, the proposed method has been observed to have higher spatial resolution and good global spectral information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=two-channel%20sampled%20nonseparable%20wavelets" title=" two-channel sampled nonseparable wavelets"> two-channel sampled nonseparable wavelets</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral%20image" title=" multispectral image"> multispectral image</a>, <a href="https://publications.waset.org/abstracts/search?q=panchromatic%20image" title=" panchromatic image"> panchromatic image</a> </p> <a href="https://publications.waset.org/abstracts/15357/sampling-two-channel-nonseparable-wavelets-and-its-applications-in-multispectral-image-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15357.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19278</span> The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a>, <a href="https://publications.waset.org/abstracts/search?q=Dmitry%20V.%20Egorov"> Dmitry V. Egorov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an original method of parametric optimization of the structure for multimodal decision-level fusion scheme which combines the results of the partial solution of the classification task obtained from assembly of the mono-modal classifiers. As a result, a multimodal fusion classifier which has the minimum value of the total error rate has been obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification%20accuracy" title="classification accuracy">classification accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion%20solution" title=" fusion solution"> fusion solution</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20error%20rate" title=" total error rate"> total error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion%20classifier" title=" multimodal fusion classifier"> multimodal fusion classifier</a> </p> <a href="https://publications.waset.org/abstracts/26088/the-optimization-of-decision-rules-in-multimodal-decision-level-fusion-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26088.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">466</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19277</span> Track Initiation Method Based on Multi-Algorithm Fusion Learning of 1DCNN And Bi-LSTM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhe%20Li">Zhe Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Aihua%20Cai"> Aihua Cai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aiming at the problem of high-density clutter and interference affecting radar detection target track initiation in ECM and complex radar mission, the traditional radar target track initiation method has been difficult to adapt. To this end, we propose a multi-algorithm fusion learning track initiation algorithm, which transforms the track initiation problem into a true-false track discrimination problem, and designs an algorithm based on 1DCNN(One-Dimensional CNN)combined with Bi-LSTM (Bi-Directional Long Short-Term Memory )for fusion classification. The experimental dataset consists of real trajectories obtained from a certain type of three-coordinate radar measurements, and the experiments are compared with traditional trajectory initiation methods such as rule-based method, logical-based method and Hough-transform-based method. The simulation results show that the overall performance of the multi-algorithm fusion learning track initiation algorithm is significantly better than that of the traditional method, and the real track initiation rate can be effectively improved under high clutter density with the average initiation time similar to the logical method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=track%20initiation" title="track initiation">track initiation</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-algorithm%20fusion" title=" multi-algorithm fusion"> multi-algorithm fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=1DCNN" title=" 1DCNN"> 1DCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=Bi-LSTM" title=" Bi-LSTM"> Bi-LSTM</a> </p> <a href="https://publications.waset.org/abstracts/173764/track-initiation-method-based-on-multi-algorithm-fusion-learning-of-1dcnn-and-bi-lstm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173764.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19276</span> Multi-Focus Image Fusion Using SFM and Wavelet Packet</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somkait%20Udomhunsakul">Somkait Udomhunsakul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and Wavelet Packet was proposed. The proposed fusion approach, firstly, the two fused images were transformed and decomposed into sixteen subbands using Wavelet packet. Next, each subband was partitioned into sub-blocks and each block was identified the clearer regions by using the Spatial Frequency Measurement (SFM). Finally, the recovered fused image was reconstructed by performing the Inverse Wavelet Transform. From the experimental results, it was found that the proposed method outperformed the traditional SFM based methods in terms of objective and subjective assessments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-focus%20image%20fusion" title="multi-focus image fusion">multi-focus image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20packet" title=" wavelet packet"> wavelet packet</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20frequency%20measurement" title=" spatial frequency measurement"> spatial frequency measurement</a> </p> <a href="https://publications.waset.org/abstracts/4886/multi-focus-image-fusion-using-sfm-and-wavelet-packet" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4886.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">474</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19275</span> Adaptive Dehazing Using Fusion Strategy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Ramesh%20Kanthan">M. Ramesh Kanthan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Naga%20Nandini%20Sujatha"> S. Naga Nandini Sujatha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of haze removal algorithms is to enhance and recover details of scene from foggy image. In enhancement the proposed method focus into two main categories: (i) image enhancement based on Adaptive contrast Histogram equalization, and (ii) image edge strengthened Gradient model. Many circumstances accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog destiny of the scene, then analyses the obscured image before applying contrast and sharpness adjustments to the video in real-time to produce image the fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. Then the output haze free image has reconstructed using fusion methodology. In order to increase the accuracy, interpolation method has used in the output reconstruction. A promising retrieval performance is achieved especially in particular examples. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20image" title="single image">single image</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=dehazing" title=" dehazing"> dehazing</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20fusion" title=" multi-scale fusion"> multi-scale fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=per-pixel" title=" per-pixel"> per-pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20map" title=" weight map"> weight map</a> </p> <a href="https://publications.waset.org/abstracts/32544/adaptive-dehazing-using-fusion-strategy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19274</span> Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samar%20M.%20Alqhtani">Samar M. Alqhtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Suhuai%20Luo"> Suhuai Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Brian%20Regan"> Brian Regan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data fusion technology can be the best way to extract useful information from multiple sources of data. It has been widely applied in various applications. This paper presents a data fusion approach in multimedia data for event detection in twitter by using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text by using the bag-ofwords method which is calculated using the term frequency-inverse document frequency (TF-IDF). The second is the visual features extracted by applying scale-invariant feature transform (SIFT). The Dempster - Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that comparing to the approaches using individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental result showed that the proposed method achieved a high accuracy of 0.97, comparing with 0.93 with texts only, and 0.86 with images only. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title="data fusion">data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=Dempster-Shafer%20theory" title=" Dempster-Shafer theory"> Dempster-Shafer theory</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=event%20detection" title=" event detection"> event detection</a> </p> <a href="https://publications.waset.org/abstracts/34741/multimedia-data-fusion-for-event-detection-in-twitter-by-using-dempster-shafer-evidence-theory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34741.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">410</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19273</span> Age Determination from Epiphyseal Union of Bones at Shoulder Joint in Girls of Central India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Tirpude">B. Tirpude</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Surwade"> V. Surwade</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Murkey"> P. Murkey</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Wankhade"> P. Wankhade</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Meena"> S. Meena </a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is no statistical data to establish variation in epiphyseal fusion in girls in central India population. This significant oversight can lead to exclusion of persons of interest in a forensic investigation. Epiphyseal fusion of proximal end of humerus in eighty females were analyzed on radiological basis to assess the range of variation of epiphyseal fusion at each age. In the study, the X ray films of the subjects were divided into three groups on the basis of degree of fusion. Firstly, those which were showing No Epiphyseal Fusion (N), secondly those showing Partial Union (PC), and thirdly those showing Complete Fusion (C). Observations made were compared with the previous studies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=epiphyseal%20union" title="epiphyseal union">epiphyseal union</a>, <a href="https://publications.waset.org/abstracts/search?q=shoulder%20joint" title=" shoulder joint"> shoulder joint</a>, <a href="https://publications.waset.org/abstracts/search?q=proximal%20end%20of%20humerus" title=" proximal end of humerus"> proximal end of humerus</a> </p> <a href="https://publications.waset.org/abstracts/19684/age-determination-from-epiphyseal-union-of-bones-at-shoulder-joint-in-girls-of-central-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19684.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">496</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19272</span> Keypoint Detection Method Based on Multi-Scale Feature Fusion of Attention Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoxiao%20Li">Xiaoxiao Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuangcheng%20Jia"> Shuangcheng Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Qian%20Li"> Qian Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Keypoint detection has always been a challenge in the field of image recognition. This paper proposes a novelty keypoint detection method which is called Multi-Scale Feature Fusion Convolutional Network with Attention (MFFCNA). We verified that the multi-scale features with the attention mechanism module have better feature expression capability. The feature fusion between different scales makes the information that the network model can express more abundant, and the network is easier to converge. On our self-made street sign corner dataset, we validate the MFFCNA model with an accuracy of 97.8% and a recall of 81%, which are 5 and 8 percentage points higher than the HRNet network, respectively. On the COCO dataset, the AP is 71.9%, and the AR is 75.3%, which are 3 points and 2 points higher than HRNet, respectively. Extensive experiments show that our method has a remarkable improvement in the keypoint recognition tasks, and the recognition effect is better than the existing methods. Moreover, our method can be applied not only to keypoint detection but also to image classification and semantic segmentation with good generality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keypoint%20detection" title="keypoint detection">keypoint detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title=" feature fusion"> feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a> </p> <a href="https://publications.waset.org/abstracts/147796/keypoint-detection-method-based-on-multi-scale-feature-fusion-of-attention-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147796.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19271</span> Performance of Hybrid Image Fusion: Implementation of Dual-Tree Complex Wavelet Transform Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Gupta">Manoj Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nirmendra%20Singh%20Bhadauria"> Nirmendra Singh Bhadauria</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most of the applications in image processing require high spatial and high spectral resolution in a single image. For example satellite image system, the traffic monitoring system, and long range sensor fusion system all use image processing. However, most of the available equipment is not capable of providing this type of data. The sensor in the surveillance system can only cover the view of a small area for a particular focus, yet the demanding application of this system requires a view with a high coverage of the field. Image fusion provides the possibility of combining different sources of information. In this paper, we have decomposed the image using DTCWT and then fused using average and hybrid of (maxima and average) pixel level techniques and then compared quality of both the images using PSNR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/abstracts/search?q=DT-CWT" title=" DT-CWT"> DT-CWT</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20image%20fusion" title=" average image fusion"> average image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20image%20fusion" title=" hybrid image fusion"> hybrid image fusion</a> </p> <a href="https://publications.waset.org/abstracts/19207/performance-of-hybrid-image-fusion-implementation-of-dual-tree-complex-wavelet-transform-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19270</span> Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xudong%20Guan">Xudong Guan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ainong%20Li"> Ainong Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaohuan%20Liu"> Gaohuan Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chong%20Huang"> Chong Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Zhao"> Wei Zhao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the low spatial resolution of MODIS data, the accuracy of small-area plaque extraction with a high degree of landscape fragmentation is greatly limited. To this end, the study combines Landsat data with higher spatial resolution and MODIS data with higher temporal resolution for decision-level fusion. Considering the importance of the land heterogeneity factor in the fusion process, it is superimposed with the weighting factor, which is to linearly weight the Landsat classification result and the MOIDS classification result. Three levels were used to complete the process of data fusion, that is the pixel of MODIS data, the pixel of Landsat data, and objects level that connect between these two levels. The multilevel decision fusion scheme was tested in two sites of the lower Mekong basin. We put forth a comparison test, and it was proved that the classification accuracy was improved compared with the single data source classification results in terms of the overall accuracy. The method was also compared with the two-level combination results and a weighted sum decision rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20fusion" title=" decision fusion"> decision fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-temporal" title=" multi-temporal"> multi-temporal</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a> </p> <a href="https://publications.waset.org/abstracts/112195/integrating-time-series-and-high-spatial-remote-sensing-data-based-on-multilevel-decision-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112195.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19269</span> Changes in the Median Sacral Crest Associated with Sacrocaudal Fusion in the Greyhound</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Ismail">S. M. Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=H-H%20Yen"> H-H Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20M.%20Murray"> C. M. Murray</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20M.%20S.%20Davies"> H. M. S. Davies</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A recent study reported a 33% incidence of complete sacrocaudal fusion in greyhounds compared to a 3% incidence in other dogs. In the dog, the median sacral crest is formed by the fusion of sacral spinous processes. Separation of the 1st spinous process from the median crest of the sacrum in the dog has been reported as a diagnostic tool of type one lumbosacral transitional vertebra (LTV). LTV is a congenital spinal anomaly, which includes either sacralization of the caudal lumbar part or lumbarization of the most cranial sacral segment of the spine. In this study, the absence or reduction of fusion (presence of separation) between the 1st and 2ndspinous processes of the median sacral crest has been identified in association with sacrocaudal fusion in the greyhound, without any feature of LTV. In order to provide quantitative data on the absence or reduction of fusion in the median sacral crest between the 1st and 2nd sacral spinous processes, in association with sacrocaudal fusion. 204 dog sacrums free of any pathological changes (192 greyhound, 9 beagles and 3 labradors) were grouped based on the occurrence and types of fusion and the presence, absence, or reduction in the median sacral crest between the 1st and 2nd sacral spinous processes., Sacrums were described and classified as follows: F: Complete fusion (crest is present), N: Absence (fusion is absent), and R: Short crest (fusion reduced but not absent (reduction). The incidence of sacrocaudal fusion in the 204 sacrums: 57% of the sacrums were standard (3 vertebrae) and 43% were fused (4 vertebrae). Type of sacrum had a significant (p < .05) association with the absence and reduction of fusion between the 1st and 2nd sacral spinous processes of the median sacral crest. In the 108 greyhounds with standard sacrums (3 vertebrae) the percentages of F, N and R were 45% 23% and 23% respectively, while in the 84 fused (4 vertebrae) sacrums, the percentages of F, N and R were 3%, 87% and 10% respectively and these percentages were significantly different between standard (3 vertebrae) and fused (4 vertebrae) sacrums (p < .05). This indicates that absence of spinous process fusion in the median sacral crest was found in a large percentage of the greyhounds in this study and was found to be particularly prevalent in those with sacrocaudal fusion – therefore in this breed, at least, absence of sacral spinous process fusion may be unlikely to be associated with LTV. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=greyhound" title="greyhound">greyhound</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20sacral%20crest" title=" median sacral crest"> median sacral crest</a>, <a href="https://publications.waset.org/abstracts/search?q=sacrocaudal%20fusion" title=" sacrocaudal fusion"> sacrocaudal fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=sacral%20spinous%20process" title=" sacral spinous process"> sacral spinous process</a> </p> <a href="https://publications.waset.org/abstracts/47980/changes-in-the-median-sacral-crest-associated-with-sacrocaudal-fusion-in-the-greyhound" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19268</span> Implementation of Sensor Fusion Structure of 9-Axis Sensors on the Multipoint Control Unit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jun%20Gil%20Ahn">Jun Gil Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Jong%20Tae%20Kim"> Jong Tae Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we study the sensor fusion structure on the multipoint control unit (MCU). Sensor fusion using Kalman filter for 9-axis sensors is considered. The 9-axis inertial sensor is the combination of 3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer. We implement the sensor fusion structure among the sensor hubs in MCU and measure the execution time, power consumptions, and total energy. Experiments with real data from 9-axis sensor in 20Mhz show that the average power consumptions are 44mW and 48mW on Cortx-M0 and Cortex-M3 MCU, respectively. Execution times are 613.03 us and 305.6 us respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=9-axis%20sensor" title="9-axis sensor">9-axis sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=MCU" title=" MCU"> MCU</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20fusion" title=" sensor fusion"> sensor fusion</a> </p> <a href="https://publications.waset.org/abstracts/84323/implementation-of-sensor-fusion-structure-of-9-axis-sensors-on-the-multipoint-control-unit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">504</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19267</span> Efficient Feature Fusion for Noise Iris in Unconstrained Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao-Hong%20Tsai">Yao-Hong Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an efficient fusion algorithm for iris images to generate stable feature for recognition in unconstrained environment. Recently, iris recognition systems are focused on real scenarios in our daily life without the subject’s cooperation. Under large variation in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noise iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris image is based on Adaboost algorithm and then local binary pattern (LBP) histogram is then applied to texture classification with the weighting scheme. Experiment showed that the generated features from the proposed fusion algorithm can improve the performance for verification system through iris recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet" title=" wavelet"> wavelet</a> </p> <a href="https://publications.waset.org/abstracts/17027/efficient-feature-fusion-for-noise-iris-in-unconstrained-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17027.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19266</span> Implementation and Comparative Analysis of PET and CT Image Fusion Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Guruprasad">S. Guruprasad</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20N.%20Suma"> H. N. Suma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical imaging modalities are becoming life saving components. These modalities are very much essential to doctors for proper diagnosis, treatment planning and follow up. Some modalities provide anatomical information such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), X-rays and some provides only functional information such as Positron Emission Tomography (PET). Therefore, single modality image does not give complete information. This paper presents the fusion of structural information in CT and functional information present in PET image. This fused image is very much essential in detecting the stages and location of abnormalities and in particular very much needed in oncology for improved diagnosis and treatment. We have implemented and compared image fusion techniques like pyramid, wavelet, and principal components fusion methods along with hybrid method of DWT and PCA. The performances of the algorithms are evaluated quantitatively and qualitatively. The system is implemented and tested by using MATLAB software. Based on the MSE, PSNR and ENTROPY analysis, PCA and DWT-PCA methods showed best results over all experiments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=pyramid" title=" pyramid"> pyramid</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelets" title=" wavelets"> wavelets</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis" title=" principal component analysis"> principal component analysis</a> </p> <a href="https://publications.waset.org/abstracts/60736/implementation-and-comparative-analysis-of-pet-and-ct-image-fusion-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60736.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19265</span> The Optimum Operating Conditions for the Synthesis of Zeolite from Waste Incineration Fly Ash by Alkali Fusion and Hydrothermal Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yi-Jie%20Lin">Yi-Jie Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Jyh-Cherng%20Chen"> Jyh-Cherng Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The fly ash of waste incineration processes is usually hazardous and the disposal or reuse of waste incineration fly ash is difficult. In this study, the waste incineration fly ash was converted to useful zeolites by the alkali fusion and hydrothermal synthesis method. The influence of different operating conditions (the ratio of Si/Al, the ratio of hydrolysis liquid to solid, and hydrothermal time) was investigated to seek the optimum operating conditions for the synthesis of zeolite from waste incineration fly ash. The results showed that concentrations of heavy metals in the leachate of Toxicity Characteristic Leaching Procedure (TCLP) were all lower than the regulatory limits except lead. The optimum operating conditions for the synthesis of zeolite from waste incineration fly ash by the alkali fusion and hydrothermal synthesis method were Si/Al=40, NaOH/ash=1.5, alkali fusion at 400 <sup>o</sup>C for 40 min, hydrolysis with Liquid to Solid ratio (L/S)= 200 at 105 <sup>o</sup>C for 24 h, and hydrothermal synthesis at 105 <sup>o</sup>C for 24 h. The specific surface area of fly ash could be significantly increased from 8.59 m<sup>2</sup>/g to 651.51 m<sup>2</sup>/g (synthesized zeolite). The influence of different operating conditions on the synthesis of zeolite from waste incineration fly ash followed the sequence of Si/Al ratio &gt; hydrothermal time &gt; hydrolysis L/S ratio. The synthesized zeolites can be reused as good adsorbents to control the air or wastewater pollutants. The purpose of fly ash detoxification, reduction and waste recycling/reuse is achieved successfully. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alkali%20fusion" title="alkali fusion">alkali fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=hydrothermal" title=" hydrothermal"> hydrothermal</a>, <a href="https://publications.waset.org/abstracts/search?q=fly%20ash" title=" fly ash"> fly ash</a>, <a href="https://publications.waset.org/abstracts/search?q=zeolite" title=" zeolite"> zeolite</a> </p> <a href="https://publications.waset.org/abstracts/94021/the-optimum-operating-conditions-for-the-synthesis-of-zeolite-from-waste-incineration-fly-ash-by-alkali-fusion-and-hydrothermal-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94021.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">240</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19264</span> Method of Successive Approximations for Modeling of Distributed Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Torokhti">A. Torokhti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A new method of mathematical modeling of the distributed nonlinear system is developed. The system is represented by a combination of the set of spatially distributed sensors and the fusion center. Its mathematical model is obtained from the iterative procedure that converges to the model which is optimal in the sense of minimizing an associated cost function. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mathematical%20modeling" title="mathematical modeling">mathematical modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=non-linear%20system" title=" non-linear system"> non-linear system</a>, <a href="https://publications.waset.org/abstracts/search?q=spatially%20distributed%20sensors" title=" spatially distributed sensors"> spatially distributed sensors</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion%20center" title=" fusion center"> fusion center</a> </p> <a href="https://publications.waset.org/abstracts/6226/method-of-successive-approximations-for-modeling-of-distributed-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6226.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">381</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19263</span> An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yu-ding%20Du">Yu-ding Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Qi-lian%20Bao"> Qi-lian Bao</a>, <a href="https://publications.waset.org/abstracts/search?q=Nassim%20Bessaad"> Nassim Bessaad</a>, <a href="https://publications.waset.org/abstracts/search?q=Lin%20Liu"> Lin Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Global Navigation Satellite System (GNSS) is regarded as an effective approach for the purpose of replacing the large amount used track-side balises in modern train localization systems. This paper describes a method based on the data fusion of a GNSS receiver sensor and an odometer sensor that can significantly improve the positioning accuracy. A digital track map is needed as another sensor to project two-dimensional GNSS position to one-dimensional along-track distance due to the fact that the train’s position can only be constrained on the track. A model trained by BP neural network is used to estimate the trend positioning error which is related to the specific location and proximate processing of the digital track map. Considering that in some conditions the satellite signal failure will lead to the increase of GNSS positioning error, a detection step for GNSS signal is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of train speed measurement. Finally an Extended Kalman Filter (EKF) is used for the fusion of the projected 1-D GNSS positioning data and the 1-D train speed data to get the estimate position. Experimental results suggest that the proposed method performs well, which can reduce positioning error notably. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-sensor%20data%20fusion" title="multi-sensor data fusion">multi-sensor data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=train%20positioning" title=" train positioning"> train positioning</a>, <a href="https://publications.waset.org/abstracts/search?q=GNSS" title=" GNSS"> GNSS</a>, <a href="https://publications.waset.org/abstracts/search?q=odometer" title=" odometer"> odometer</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20track%20map" title=" digital track map"> digital track map</a>, <a href="https://publications.waset.org/abstracts/search?q=map%20matching" title=" map matching"> map matching</a>, <a href="https://publications.waset.org/abstracts/search?q=BP%20neural%20network" title=" BP neural network"> BP neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20weighted%20fusion" title=" adaptive weighted fusion"> adaptive weighted fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a> </p> <a href="https://publications.waset.org/abstracts/98264/an-adaptive-back-propagation-network-and-kalman-filter-based-multi-sensor-fusion-method-for-train-location-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">252</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19262</span> Multi-Sensor Image Fusion for Visible and Infrared Thermal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amit%20Kumar%20Happy">Amit Kumar Happy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper is motivated by the importance of multi-sensor image fusion with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can be from different modalities like visible camera & IR thermal imager. While visible images are captured by reflected radiations in the visible spectrum, the thermal images are formed from thermal radiation (infrared) that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, some image fusion algorithms based upon multi-scale transform (MST) and region-based selection rule with consistency verification have been proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB along with a comparative analysis to decide the optimum number of levels for MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion algorithm approaches, we observe several challenges from the popular image fusion methods. While high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make it hard to become deployed in systems and applications that require a real-time operation, high flexibility, and low computation ability. So, the methods presented in this paper offer good results with minimum time complexity. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=IR%20thermal%20imager" title=" IR thermal imager"> IR thermal imager</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-sensor" title=" multi-sensor"> multi-sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20transform" title=" multi-scale transform"> multi-scale transform</a> </p> <a href="https://publications.waset.org/abstracts/138086/multi-sensor-image-fusion-for-visible-and-infrared-thermal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138086.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">115</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19261</span> Preprocessing and Fusion of Multiple Representation of Finger Vein patterns using Conventional and Machine Learning techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tomas%20Trainys">Tomas Trainys</a>, <a href="https://publications.waset.org/abstracts/search?q=Algimantas%20Venckauskas"> Algimantas Venckauskas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Application of biometric features to the cryptography for human identification and authentication is widely studied and promising area of the development of high-reliability cryptosystems. Biometric cryptosystems typically are designed for patterns recognition, which allows biometric data acquisition from an individual, extracts feature sets, compares the feature set against the set stored in the vault and gives a result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and prevents from possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, Convolutional Neural Network (SVM) method for image segmentation and feature extraction. An article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. Extracted features sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bio-cryptography" title="bio-cryptography">bio-cryptography</a>, <a href="https://publications.waset.org/abstracts/search?q=biometrics" title=" biometrics"> biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=cryptographic%20key%20generation" title=" cryptographic key generation"> cryptographic key generation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title=" data fusion"> data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20security" title=" information security"> information security</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20vein%20method." title=" finger vein method."> finger vein method.</a> </p> <a href="https://publications.waset.org/abstracts/97366/preprocessing-and-fusion-of-multiple-representation-of-finger-vein-patterns-using-conventional-and-machine-learning-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97366.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19260</span> Entry Inhibitors Are Less Effective at Preventing Cell-Associated HIV-2 Infection than HIV-1</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Diniz">A. R. Diniz</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Borrego"> P. Borrego</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20B%C3%A1rtolo"> I. Bártolo</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Taveira"> N. Taveira</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cell-to-cell transmission plays a critical role in the spread of HIV-1 infection in vitro and in vivo. Inhibition of HIV-1 cell-associated infection by antiretroviral drugs and neutralizing antibodies (NAbs) is more difficult compared to cell-free infection. Limited data exists on cell-associated infection by HIV-2 and its inhibition. In this work, we determined the ability of entry inhibitors to inhibit HIV-1 and HIV-2 cell-to cell fusion as a proxy to cell-associated infection. We developed a method in which Hela-CD4-cells are first transfected with a Tat expressing plasmid (pcDNA3.1+/Tat101) and infected with recombinant vaccinia viruses expressing either the HIV-1 (vPE16: from isolate HTLV-IIIB, clone BH8, X4 tropism) or HIV-2 (vSC50: from HIV-2SBL/ISY, R5 and X4 tropism) envelope glycoproteins (M.O.I.=1 PFU/cell).These cells are added to TZM-bl cells. When cell-to-cell fusion (syncytia) occurs the Tat protein diffuses to the TZM-bl cells activating the expression of a reporter gene (luciferase). We tested several entry inhibitors including the fusion inhibitors T1249, T20 and P3, the CCR5 antagonists MVC and TAK-779, the CXCR4 antagonist AMD3100 and several HIV-2 neutralizing antibodies (Nabs). All compounds inhibited HIV-1 and HIV-2 cell fusion albeit to different levels. Maximum percentage of HIV-2 inhibition (MPI) was higher for fusion inhibitors (T1249- 99.8%; P3- 95%, T20-90%) followed by co-receptor antagonists (MVC- 63%; TAK-779- 55%; AMD3100- 45%). NAbs from HIV-2 infected patients did not prevent cell fusion up to the tested concentration of 4μg/ml. As for HIV-1, MPI reached 100% with TAK-779 and T1249. For the other antivirals, MPIs were: P3-79%; T20-75%; AMD3100-61%; MVC-65%.These results are consistent with published data. Maraviroc had the lowest IC50 both for HIV-2 and HIV-1 (IC50 HIV-2= 0.06 μM; HIV-1=0.0076μM). Highest IC50 were observed with T20 for HIV-2 (3.86μM) and with TAK-779 for HIV-1 (12.64μM). Overall, our results show that entry inhibitors in clinical use are less effective at preventing Env mediated cell-to-cell-fusion in HIV-2 than in HIV-1 which suggests that cell-associated HIV-2 infection will be more difficult to inhibit compared to HIV-1. The method described here will be useful to screen for new HIV entry inhibitors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cell-to-cell%20fusion" title="cell-to-cell fusion">cell-to-cell fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=entry%20inhibitors" title=" entry inhibitors"> entry inhibitors</a>, <a href="https://publications.waset.org/abstracts/search?q=HIV" title=" HIV"> HIV</a>, <a href="https://publications.waset.org/abstracts/search?q=NAbs" title=" NAbs"> NAbs</a>, <a href="https://publications.waset.org/abstracts/search?q=vaccinia%20virus" title=" vaccinia virus"> vaccinia virus</a> </p> <a href="https://publications.waset.org/abstracts/42899/entry-inhibitors-are-less-effective-at-preventing-cell-associated-hiv-2-infection-than-hiv-1" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42899.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">309</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19259</span> Construction of a Fusion Gene Carrying E10A and K5 with 2A Peptide-Linked by Using Overlap Extension PCR</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tiancheng%20Lan">Tiancheng Lan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> E10A is a kind of replication-defective adenovirus which carries the human endostatin gene to inhibit the growth of tumors. Kringle 5(K5) has almost the same function as angiostatin to also inhibit the growth of tumors since they are all the byproduct of the proteolytic cleavage of plasminogen. Tumor size increasing can be suppressed because both of the endostatin and K5 can restrain the angiogenesis process. Therefore, in order to improve the treatment effect on tumor, 2A peptide is used to construct a fusion gene carrying both E10A and K5. Using 2A peptide is an ideal strategy when a fusion gene is expressed because it can avoid many problems during the expression of more than one kind of protein. The overlap extension PCR is also used to connect 2A peptide with E10A and K5. The final construction of fusion gene E10A-2A-K5 can provide a possible new method of the anti-angiogenesis treatment with a better expression performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=E10A" title="E10A">E10A</a>, <a href="https://publications.waset.org/abstracts/search?q=Kringle%205" title=" Kringle 5"> Kringle 5</a>, <a href="https://publications.waset.org/abstracts/search?q=2A%20peptide" title=" 2A peptide"> 2A peptide</a>, <a href="https://publications.waset.org/abstracts/search?q=overlap%20extension%20PCR" title=" overlap extension PCR"> overlap extension PCR</a> </p> <a href="https://publications.waset.org/abstracts/132643/construction-of-a-fusion-gene-carrying-e10a-and-k5-with-2a-peptide-linked-by-using-overlap-extension-pcr" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132643.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19258</span> Simulation for the Magnetized Plasma Compression Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victor%20V.%20Kuzenov">Victor V. Kuzenov</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergei%20V.%20Ryzhkov"> Sergei V. Ryzhkov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ongoing experimental and theoretical studies on magneto-inertial confinement fusion (Angara, C-2, CJS-100, General Fusion, MagLIF, MAGPIE, MC-1, YG-1, Omega) and new constructing facilities (Baikal, C-2W, Z300 and Z800) require adequate modeling and description of the physical processes occurring in high-temperature dense plasma in a strong magnetic field. This paper presents a mathematical model, numerical method, and results of the computer analysis of the compression process and the energy transfer in the target plasma, used in magneto-inertial fusion (MIF). The computer simulation of the compression process of the magnetized target by the high-power laser pulse and the high-speed plasma jets is presented. The characteristic patterns of the two methods of the target compression are being analysed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=magnetized%20target" title="magnetized target">magnetized target</a>, <a href="https://publications.waset.org/abstracts/search?q=magneto-inertial%20fusion" title=" magneto-inertial fusion"> magneto-inertial fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20model" title=" mathematical model"> mathematical model</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20and%20laser%20beams" title=" plasma and laser beams"> plasma and laser beams</a> </p> <a href="https://publications.waset.org/abstracts/66035/simulation-for-the-magnetized-plasma-compression-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66035.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19257</span> Evaluation of Fusion Sonar and Stereo Camera System for 3D Reconstruction of Underwater Archaeological Object</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yadpiroon%20Onmek">Yadpiroon Onmek</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean%20Triboulet"> Jean Triboulet</a>, <a href="https://publications.waset.org/abstracts/search?q=Sebastien%20Druon"> Sebastien Druon</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Jouvencel"> Bruno Jouvencel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this paper is to develop the 3D underwater reconstruction of archaeology object, which is based on the fusion between a sonar system and stereo camera system. The underwater images are obtained from a calibrated camera system. The multiples image pairs are input, and we first solve the problem of image processing by applying the well-known filter, therefore to improve the quality of underwater images. The features of interest between image pairs are selected by well-known methods: a FAST detector and FLANN descriptor. Subsequently, the RANSAC method is applied to reject outlier points. The putative inliers are matched by triangulation to produce the local sparse point clouds in 3D space, using a pinhole camera model and Euclidean distance estimation. The SFM technique is used to carry out the global sparse point clouds. Finally, the ICP method is used to fusion the sonar information with the stereo model. The final 3D models have a précised by measurement comparing with the real object. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20reconstruction" title="3D reconstruction">3D reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=archaeology" title=" archaeology"> archaeology</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20system" title=" stereo system"> stereo system</a>, <a href="https://publications.waset.org/abstracts/search?q=sonar%20system" title=" sonar system"> sonar system</a>, <a href="https://publications.waset.org/abstracts/search?q=underwater" title=" underwater"> underwater</a> </p> <a href="https://publications.waset.org/abstracts/73700/evaluation-of-fusion-sonar-and-stereo-camera-system-for-3d-reconstruction-of-underwater-archaeological-object" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73700.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">299</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19256</span> Sensor Registration in Multi-Static Sonar Fusion Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Longxiang%20Guo">Longxiang Guo</a>, <a href="https://publications.waset.org/abstracts/search?q=Haoyan%20Hao"> Haoyan Hao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xueli%20Sheng"> Xueli Sheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Hanjun%20Yu"> Hanjun Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingwei%20Yin"> Jingwei Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in multi-static sonar fusion detection system. To eliminate the inherent system errors including distance error and angle error of each sonar in detection, this paper uses offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target. The target position detected by each sonar is based on each sonar’s own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses real-time quality control (RTQC) method and least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar’s data as the observation value and the LS method makes the least square processing of each sonar’s data to get the observation value. In the underwater acoustic environment, matlab simulation is carried out and the simulation results show that both algorithms can estimate the distance and angle error of sonar system. The performance of the two algorithms is also compared through the root mean square error and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of RTQC method is rapid, but the distribution of targets has a serious impact on its performance. LS method can not be affected by target distribution, but the increase of random noise will slow down the convergence rate. LS method is an improvement of RTQC method, which is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title="data fusion">data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-static%20sonar%20detection" title=" multi-static sonar detection"> multi-static sonar detection</a>, <a href="https://publications.waset.org/abstracts/search?q=offline%20estimation" title=" offline estimation"> offline estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20registration%20problem" title=" sensor registration problem"> sensor registration problem</a> </p> <a href="https://publications.waset.org/abstracts/103631/sensor-registration-in-multi-static-sonar-fusion-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/103631.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19255</span> Variations in the Angulation of the First Sacral Spinous Process Angle Associated with Sacrocaudal Fusion in Greyhounds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sa%27ad%20M.%20Ismail">Sa&#039;ad M. Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Hung-Hsun%20Yen"> Hung-Hsun Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=Christina%20M.%20Murray"> Christina M. Murray</a>, <a href="https://publications.waset.org/abstracts/search?q=Helen%20M.%20S.%20Davies"> Helen M. S. Davies</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the dog, the median sacral crest is formed by the fusion of three sacral spinous processes. In greyhounds with standard sacrums, this fusion in the median sacral crest consists of the fusion of three sacral spinous processes while it consists of four in greyhounds with sacrocaudal fusion. In the present study, variations in the angulation of the first sacral spinous process in association with different types of sacrocaudal fusion in the greyhound were investigated. Sacrums were collected from 207 greyhounds (102 sacrums; type A (unfused) and 105 with different types of sacrocaudal fusion; types: B, C and D). Sacrums were cleaned by boiling and dried and then were placed on their ventral surface on a flat surface and photographed from the left side using a digital camera at a fixed distance. The first sacral spinous process angle (1st SPA) was defined as the angle formed between the cranial border of the cranial ridge of the first sacral spinous process and the line extending across the most dorsal surface points of the spinous processes of the S1, S2, and S3. Image-Pro Express Version 5.0 imaging software was used to draw and measure the angles. Two photographs were taken for each sacrum and two repeat measurements were also taken of each angle. The mean value of the 1st SPA in greyhounds with sacrocaudal fusion was less (98.99°, SD ± 11, n = 105) than those in greyhounds with standard sacrums (99.77°, SD ± 9.18, n = 102) but was not significantly different (P < 0.05). Among greyhounds with different types of sacrocaudal fusion the mean value of the 1st SPA was as follows: type B; 97.73°, SD ± 10.94, n = 39, type C: 101.42°, SD ± 10.51, n = 52, and type D: 94.22°, SD ± 11.30, n = 12. For all types of fusion these angles were significantly different from each other (P < 0.05). Comparing the mean value of the1st SPA in standard sacrums (Type A) with that for each type of fusion separately showed that the only significantly different angulation (P < 0.05) was between standard sacrums and sacrums with sacrocaudal fusion sacrum type D (only body fusion between the S1 and Ca1). Different types of sacrocaudal fusion were associated with variations in the angle of the first sacral spinous process. These variations may affect the alignment and biomechanics of the sacral area and the pattern of movement and/or the force produced by both hind limbs to the cranial parts of the body and may alter the loading of other parts of the body. We concluded that any variations in the sacrum anatomical features might change the function of the sacrum or surrounding anatomical structures during movement. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=angulation%20of%20first%20sacral%20spinous%20process" title="angulation of first sacral spinous process">angulation of first sacral spinous process</a>, <a href="https://publications.waset.org/abstracts/search?q=biomechanics" title=" biomechanics"> biomechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=greyhound" title=" greyhound"> greyhound</a>, <a href="https://publications.waset.org/abstracts/search?q=locomotion" title=" locomotion"> locomotion</a>, <a href="https://publications.waset.org/abstracts/search?q=sacrocaudal%20fusion" title=" sacrocaudal fusion"> sacrocaudal fusion</a> </p> <a href="https://publications.waset.org/abstracts/74942/variations-in-the-angulation-of-the-first-sacral-spinous-process-angle-associated-with-sacrocaudal-fusion-in-greyhounds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74942.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">311</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19254</span> Morphological Features Fusion for Identifying INBREAST-Database Masses Using Neural Networks and Support Vector Machines </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nadia%20el%20Atlas">Nadia el Atlas</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20el%20Aroussi"> Mohammed el Aroussi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Wahbi"> Mohammed Wahbi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper a novel technique of mass characterization based on robust features-fusion is presented. The proposed method consists of mainly four stages: (a) the first phase involves segmenting the masses using edge information’s. (b) The second phase is to calculate and fuse the most relevant morphological features. (c) The last phase is the classification step which allows us to classify the images into benign and malignant masses. In this step we have implemented Support Vectors Machines (SVM) and Artificial Neural Networks (ANN), which were evaluated with the following performance criteria: confusion matrix, accuracy, sensitivity, specificity, receiver operating characteristic ROC, and error histogram. The effectiveness of this new approach was evaluated by a recently developed database: INBREAST database. The fusion of the most appropriate morphological features provided very good results. The SVM gives accuracy to within 64.3%. Whereas the ANN classifier gives better results with an accuracy of 97.5%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title="breast cancer">breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a>, <a href="https://publications.waset.org/abstracts/search?q=CAD%20system" title=" CAD system"> CAD system</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/22407/morphological-features-fusion-for-identifying-inbreast-database-masses-using-neural-networks-and-support-vector-machines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22407.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">599</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19253</span> Multi Biomertric Personal Identification System Based On Hybird Intellegence Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laheeb%20M.%20Ibrahim">Laheeb M. Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20A.%20Salih"> Ibrahim A. Salih</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics is a technology that has been widely used in many official and commercial identification applications. The increased concerns in security during recent years (especially during the last decades) have essentially resulted in more attention being given to biometric-based verification techniques. Here, a novel fusion approach of palmprint, dental traits has been suggested. These traits which are authentication techniques have been employed in a range of biometric applications that can identify any postmortem PM person and antemortem AM. Besides improving the accuracy, the fusion of biometrics has several advantages such as increasing, deterring spoofing activities and reducing enrolment failure. In this paper, a first unimodel biometric system has been made by using (palmprint and dental) traits, for each one classification applying an artificial neural network and a hybrid technique that combines swarm intelligence and neural network together, then attempt has been made to combine palmprint and dental biometrics. Principally, the fusion of palmprint and dental biometrics and their potential application has been explored as biometric identifiers. To address this issue, investigations have been carried out about the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics. Also the results of the multimodal approach have been compared with each one of these two traits authentication approaches. This paper studies the features and decision fusion levels in multimodal biometrics. To determine the accuracy of GAR to parallel system decision-fusion including (AND, OR, Majority fating) has been used. The backpropagation method has been used for classification and has come out with result (92%, 99%, 97%) respectively for GAR, while the GAR) for this algorithm using hybrid technique for classification (95%, 99%, 98%) respectively. To determine the accuracy of the multibiometric system for feature level fusion has been used, while the same preceding methods have been used for classification. The results have been (98%, 99%) respectively while to determine the GAR of feature level different methods have been used and have come out with (98%). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=back%20propagation%20neural%20network%20BP%20ANN" title="back propagation neural network BP ANN">back propagation neural network BP ANN</a>, <a href="https://publications.waset.org/abstracts/search?q=multibiometric%20system" title=" multibiometric system"> multibiometric system</a>, <a href="https://publications.waset.org/abstracts/search?q=parallel%20system%20decision-fusion" title=" parallel system decision-fusion"> parallel system decision-fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=practical%20swarm%20intelligent%20PSO" title=" practical swarm intelligent PSO"> practical swarm intelligent PSO</a> </p> <a href="https://publications.waset.org/abstracts/23856/multi-biomertric-personal-identification-system-based-on-hybird-intellegence-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23856.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">533</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19252</span> Multi-Channel Information Fusion in C-OTDR Monitoring Systems: Various Approaches to Classify of Targeted Events</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents new results concerning selection of optimal information fusion formula for ensembles of C-OTDR channels. The goal of information fusion is to create an integral classificator designed for effective classification of seismoacoustic target events. The LPBoost (LP-β and LP-B variants), the Multiple Kernel Learning, and Weighing of Inversely as Lipschitz Constants (WILC) approaches were compared. The WILC is a brand new approach to optimal fusion of Lipschitz Classifiers Ensembles. Results of practical usage are presented. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lipschitz%20Classifier" title="Lipschitz Classifier">Lipschitz Classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=classifiers%20ensembles" title=" classifiers ensembles"> classifiers ensembles</a>, <a href="https://publications.waset.org/abstracts/search?q=LPBoost" title=" LPBoost"> LPBoost</a>, <a href="https://publications.waset.org/abstracts/search?q=C-OTDR%20systems" title=" C-OTDR systems"> C-OTDR systems</a> </p> <a href="https://publications.waset.org/abstracts/21072/multi-channel-information-fusion-in-c-otdr-monitoring-systems-various-approaches-to-classify-of-targeted-events" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21072.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">461</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19251</span> Variations in the 7th Lumbar (L7) Vertebra Length Associated with Sacrocaudal Fusion in Greyhounds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sa%60ad%20M.%20Ismail">Sa`ad M. Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Hung-Hsun%20Yen"> Hung-Hsun Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=Christina%20M.%20Murray"> Christina M. Murray</a>, <a href="https://publications.waset.org/abstracts/search?q=Helen%20M.%20S.%20Davies"> Helen M. S. Davies</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The lumbosacral junction (where the 7th lumbar vertebra (L7) articulates with the sacrum) is a clinically important area in the dog. The 7th lumbar vertebra (L7) is normally shorter than other lumbar vertebrae, and it has been reported that variations in the L7 length may be associated with other abnormal anatomical findings. These variations included the reduction or absence of the portion of the median sacral crest. In this study, 53 greyhound cadavers were placed in right lateral recumbency, and two lateral radiographs were taken of the lumbosacral region for each greyhound. The length of the 6th lumbar (L6) vertebra and L7 were measured using radiographic measurement software and was defined to be the mean of three lines drawn from the caudal to the cranial edge of the L6 and L7 vertebrae (a dorsal, middle, and ventral line) between specific landmarks. Sacrocaudal fusion was found in 41.5% of the greyhounds. The mean values of the length of L6, L7, and the ratio of the L6/L7 length of the greyhounds with sacrocaudal fusion were all greater than those with standard sacrums (three sacral vertebrae). There was a significant difference (P < 0.05) in the mean values of the length of L7 between the greyhounds without sacrocaudal fusion (mean = 29.64, SD ± 2.07) and those with sacrocaudal fusion (mean = 30.86, SD ± 1.80), but, there was no significant difference in the mean value of the length of the L6 measurement. Among different types of sacrocaudal fusion, the longest L7 was found in greyhounds with sacrum type D, intermediate length in those with sacrum type B, and the shortest was found in those with sacrums type C, and the mean values of the ratio of the L6/L7 were 1.11 (SD ± 0.043), 1.15, (SD ± 0.025), and 1.15 (SD ± 0.011) for the types B, C, and D respectively. No significant differences in the mean values of the length of L6 or L7 were found among the different types of sacrocaudal fusion. The occurrence of sacrocaudal fusion might affect direct anatomically connected structures such as the L7. The variation in the length of L7 between greyhounds with sacrocaudal fusion and those without may reflect the possible sequences of the process of fusion. Variations in the length of the L7 vertebra in greyhounds may be associated with the occurrence of sacrocaudal fusion. The variation in the vertebral length may affect the alignment and biomechanical properties of the sacrum and may alter the loading. We concluded that any variations in the sacrum anatomical features might change the function of the sacrum or the surrounding anatomical structures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biomechanics" title="biomechanics">biomechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=Greyhound" title=" Greyhound"> Greyhound</a>, <a href="https://publications.waset.org/abstracts/search?q=sacrocaudal%20fusion" title=" sacrocaudal fusion"> sacrocaudal fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=locomotion" title=" locomotion"> locomotion</a>, <a href="https://publications.waset.org/abstracts/search?q=6th%20Lumbar%20%28L6%29%20Vertebra" title=" 6th Lumbar (L6) Vertebra"> 6th Lumbar (L6) Vertebra</a>, <a href="https://publications.waset.org/abstracts/search?q=7th%20Lumbar%20%28L7%29%20Vertebra" title=" 7th Lumbar (L7) Vertebra"> 7th Lumbar (L7) Vertebra</a>, <a href="https://publications.waset.org/abstracts/search?q=ratio%20of%20the%20L6%2FL7%20length" title=" ratio of the L6/L7 length"> ratio of the L6/L7 length</a> </p> <a href="https://publications.waset.org/abstracts/74939/variations-in-the-7th-lumbar-l7-vertebra-length-associated-with-sacrocaudal-fusion-in-greyhounds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19250</span> Clinical Relevance of TMPRSS2-ERG Fusion Marker for Prostate Cancer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shalu%20Jain">Shalu Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=Anju%20Bansal"> Anju Bansal</a>, <a href="https://publications.waset.org/abstracts/search?q=Anup%20Kumar"> Anup Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunita%20Saxena"> Sunita Saxena</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objectives: The novel TMPRSS2:ERG gene fusion is a common somatic event in prostate cancer that in some studies is linked with a more aggressive disease phenotype. Thus, this study aims to determine whether clinical variables are associated with the presence of TMPRSS2:ERG-fusion gene transcript in Indian patients of prostate cancer. Methods: We evaluated the clinical variables with presence and absence of TMPRSS2:ERG gene fusion in prostate cancer and BPH association of clinical patients. Patients referred for prostate biopsy because of abnormal DRE or/and elevated sPSA were enrolled for this prospective clinical study. TMPRSS2:ERG mRNA copies in samples were quantified using a Taqman chemistry by real time PCR assay in prostate biopsy samples (N=42). The T2:ERG assay detects the gene fusion mRNA isoform TMPRSS2 exon1 to ERG exon4. Results: Histopathology report has confirmed 25 cases as prostate cancer adenocarcinoma (PCa) and 17 patients as benign prostate hyperplasia (BPH). Out of 25 PCa cases, 16 (64%) were T2: ERG fusion positive. All 17 BPH controls were fusion negative. The T2:ERG fusion transcript was exclusively specific for prostate cancer as no case of BPH was detected having T2:ERG fusion, showing 100% specificity. The positive predictive value of fusion marker for prostate cancer is thus 100% and the negative predictive value is 65.3%. The T2:ERG fusion marker is significantly associated with clinical variables like no. of positive cores in prostate biopsy, Gleason score, serum PSA, perineural invasion, perivascular invasion and periprostatic fat involvement. Conclusions: Prostate cancer is a heterogeneous disease that may be defined by molecular subtypes such as the TMPRSS2:ERG fusion. In the present prospective study, the T2:ERG quantitative assay demonstrated high specificity for predicting biopsy outcome; sensitivity was similar to the prevalence of T2:ERG gene fusions in prostate tumors. These data suggest that further improvement in diagnostic accuracy could be achieved using a nomogram that combines T2:ERG with other markers and risk factors for prostate cancer. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prostate%20cancer" title="prostate cancer">prostate cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20rearrangement" title=" genetic rearrangement"> genetic rearrangement</a>, <a href="https://publications.waset.org/abstracts/search?q=TMPRSS2%3AERG%20fusion" title=" TMPRSS2:ERG fusion"> TMPRSS2:ERG fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20variables" title=" clinical variables"> clinical variables</a> </p> <a href="https://publications.waset.org/abstracts/8830/clinical-relevance-of-tmprss2-erg-fusion-marker-for-prostate-cancer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8830.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">444</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=642">642</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=643">643</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion%20method&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10