<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: fusion</title> <meta name="description" content="Search results for: fusion"> <meta name="keywords" content="fusion"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research 
Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="fusion" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> 
<form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="fusion"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 480</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: fusion</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">480</span> The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a>, <a href="https://publications.waset.org/abstracts/search?q=Dmitry%20V.%20Egorov"> Dmitry V. Egorov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an original method of parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the results of the partial solutions of the classification task obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier with the minimum total error rate has been obtained.
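The decision-level scheme this abstract describes can be illustrated with a minimal sketch (illustrative names and toy data, not the authors' method): mono-modal classifier scores are combined by a weighted sum, and the fusion weight is grid-searched to minimize the total error rate on validation data.

```python
# Hypothetical sketch of decision-level fusion: two mono-modal classifiers'
# scores are combined with a weight chosen to minimize the total error rate.

def fuse_scores(score_a, score_b, w):
    """Weighted sum of two classifiers' scores for one sample."""
    return w * score_a + (1.0 - w) * score_b

def total_error_rate(scores_a, scores_b, labels, w, threshold=0.5):
    """Fraction of samples misclassified by the fused decision rule."""
    errors = 0
    for sa, sb, y in zip(scores_a, scores_b, labels):
        pred = 1 if fuse_scores(sa, sb, w) >= threshold else 0
        errors += (pred != y)
    return errors / len(labels)

def best_weight(scores_a, scores_b, labels, steps=101):
    """Grid-search the fusion weight that minimizes the total error rate."""
    candidates = [i / (steps - 1) for i in range(steps)]
    return min(candidates,
               key=lambda w: total_error_rate(scores_a, scores_b, labels, w))
```

In practice the weight would be fit on held-out validation scores, one weight per modality; the grid search stands in for whatever parametric optimizer the paper actually uses.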
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification%20accuracy" title="classification accuracy">classification accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion%20solution" title=" fusion solution"> fusion solution</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20error%20rate" title=" total error rate"> total error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion%20classifier" title=" multimodal fusion classifier"> multimodal fusion classifier</a> </p> <a href="https://publications.waset.org/abstracts/26088/the-optimization-of-decision-rules-in-multimodal-decision-level-fusion-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26088.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">466</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">479</span> Age Determination from Epiphyseal Union of Bones at Shoulder Joint in Girls of Central India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Tirpude">B. Tirpude</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Surwade"> V. Surwade</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Murkey"> P. Murkey</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Wankhade"> P. Wankhade</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Meena"> S. Meena </a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are no statistical data establishing the variation in epiphyseal fusion in girls of the central Indian population.
This significant oversight can lead to the exclusion of persons of interest in a forensic investigation. Epiphyseal fusion of the proximal end of the humerus in eighty females was analyzed on a radiological basis to assess the range of variation of epiphyseal fusion at each age. In the study, the X-ray films of the subjects were divided into three groups on the basis of the degree of fusion: first, those showing No Epiphyseal Fusion (N); second, those showing Partial Union (PC); and third, those showing Complete Fusion (C). The observations were compared with those of previous studies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=epiphyseal%20union" title="epiphyseal union">epiphyseal union</a>, <a href="https://publications.waset.org/abstracts/search?q=shoulder%20joint" title=" shoulder joint"> shoulder joint</a>, <a href="https://publications.waset.org/abstracts/search?q=proximal%20end%20of%20humerus" title=" proximal end of humerus"> proximal end of humerus</a> </p> <a href="https://publications.waset.org/abstracts/19684/age-determination-from-epiphyseal-union-of-bones-at-shoulder-joint-in-girls-of-central-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19684.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">495</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">478</span> Performance of Hybrid Image Fusion: Implementation of Dual-Tree Complex Wavelet Transform Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Gupta">Manoj Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nirmendra%20Singh%20Bhadauria"> Nirmendra Singh
Bhadauria</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most applications in image processing require high spatial and high spectral resolution in a single image. For example, satellite imaging systems, traffic monitoring systems, and long-range sensor fusion systems all use image processing. However, most of the available equipment is not capable of providing this type of data. The sensor in a surveillance system can only cover the view of a small area for a particular focus, yet the demanding applications of such systems require a view with high coverage of the field. Image fusion provides the possibility of combining different sources of information. In this paper, we decompose the images using the DT-CWT, fuse them using average and hybrid (maxima and average) pixel-level techniques, and then compare the quality of the fused images using PSNR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/abstracts/search?q=DT-CWT" title=" DT-CWT"> DT-CWT</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20image%20fusion" title=" average image fusion"> average image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20image%20fusion" title=" hybrid image fusion"> hybrid image fusion</a> </p> <a href="https://publications.waset.org/abstracts/19207/performance-of-hybrid-image-fusion-implementation-of-dual-tree-complex-wavelet-transform-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads
<span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">477</span> Changes in the Median Sacral Crest Associated with Sacrocaudal Fusion in the Greyhound</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Ismail">S. M. Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=H-H%20Yen"> H-H Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20M.%20Murray"> C. M. Murray</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20M.%20S.%20Davies"> H. M. S. Davies</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A recent study reported a 33% incidence of complete sacrocaudal fusion in greyhounds compared to a 3% incidence in other dogs. In the dog, the median sacral crest is formed by the fusion of sacral spinous processes. Separation of the 1st spinous process from the median crest of the sacrum in the dog has been reported as a diagnostic tool of type one lumbosacral transitional vertebra (LTV). LTV is a congenital spinal anomaly, which includes either sacralization of the caudal lumbar part or lumbarization of the most cranial sacral segment of the spine. In this study, the absence or reduction of fusion (presence of separation) between the 1st and 2nd spinous processes of the median sacral crest has been identified in association with sacrocaudal fusion in the greyhound, without any feature of LTV. In order to provide quantitative data on the absence or reduction of fusion in the median sacral crest between the 1st and 2nd sacral spinous processes in association with sacrocaudal fusion, 204 dog sacrums free of any pathological changes (192 greyhounds, 9 beagles, and 3 labradors) were grouped based on the occurrence and type of fusion and the presence, absence, or reduction of the crest between the 1st and 2nd sacral spinous processes. Sacrums were described and classified as follows: F: Complete fusion (crest present), N: Absence (fusion absent), and R: Short crest (fusion reduced but not absent). Among the 204 sacrums, 57% were standard (3 vertebrae) and 43% were fused (4 vertebrae). Type of sacrum had a significant (p < .05) association with the absence and reduction of fusion between the 1st and 2nd sacral spinous processes of the median sacral crest. In the 108 greyhounds with standard sacrums (3 vertebrae), the percentages of F, N, and R were 45%, 23%, and 23%, respectively, while in the 84 fused (4 vertebrae) sacrums, the percentages of F, N, and R were 3%, 87%, and 10%, respectively; these percentages were significantly different between standard (3 vertebrae) and fused (4 vertebrae) sacrums (p < .05). This indicates that absence of spinous process fusion in the median sacral crest was found in a large percentage of the greyhounds in this study and was particularly prevalent in those with sacrocaudal fusion; therefore, in this breed at least, absence of sacral spinous process fusion may be unlikely to be associated with LTV.
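The association reported above (sacrum type vs. fusion category, p < .05) is the kind of result given by a chi-square test of independence on a contingency table of counts. A minimal stdlib sketch of the statistic follows; the function is generic, and any counts fed to it would be illustrative rather than the study's raw data.

```python
# Pearson chi-square statistic for a contingency table given as a list of
# rows of observed counts. Larger values indicate stronger departure from
# independence between the row and column variables.

def chi_square_statistic(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat
```

The statistic would then be compared against the chi-square distribution with (rows-1)(cols-1) degrees of freedom to obtain the p-value.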
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=greyhound" title="greyhound">greyhound</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20sacral%20crest" title=" median sacral crest"> median sacral crest</a>, <a href="https://publications.waset.org/abstracts/search?q=sacrocaudal%20fusion" title=" sacrocaudal fusion"> sacrocaudal fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=sacral%20spinous%20process" title=" sacral spinous process"> sacral spinous process</a> </p> <a href="https://publications.waset.org/abstracts/47980/changes-in-the-median-sacral-crest-associated-with-sacrocaudal-fusion-in-the-greyhound" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">476</span> Implementation of Sensor Fusion Structure of 9-Axis Sensors on the Multipoint Control Unit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jun%20Gil%20Ahn">Jun Gil Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Jong%20Tae%20Kim"> Jong Tae Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we study the sensor fusion structure on the multipoint control unit (MCU). Sensor fusion using a Kalman filter for 9-axis sensors is considered. The 9-axis inertial sensor combines a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. We implement the sensor fusion structure among the sensor hubs in the MCU and measure the execution time, power consumption, and total energy.
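As an illustration of the kind of Kalman-filter fusion this abstract describes, here is a generic 1-D sketch (not the authors' MCU implementation; the noise constants are assumptions): the gyroscope rate drives the prediction step, and the accelerometer-derived tilt angle serves as the measurement.

```python
# Minimal 1-D Kalman filter for tilt estimation: predict by integrating the
# gyro rate, correct with the accelerometer angle. All constants are
# illustrative assumptions, not values from the paper.

class TiltKalman:
    def __init__(self, q=0.01, r=0.1):
        self.angle = 0.0   # state estimate (degrees)
        self.p = 1.0       # estimate variance
        self.q = q         # process noise (gyro drift)
        self.r = r         # measurement noise (accelerometer)

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the gyro rate over the time step.
        self.angle += gyro_rate * dt
        self.p += self.q
        # Correct: blend in the accelerometer-derived angle.
        k = self.p / (self.p + self.r)            # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle
```

A full 9-axis filter extends the same predict/correct cycle to a quaternion or Euler-angle state with the magnetometer correcting heading.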
Experiments with real data from a 9-axis sensor at 20 MHz show that the average power consumption is 44 mW and 48 mW on the Cortex-M0 and Cortex-M3 MCUs, respectively; the execution times are 613.03 µs and 305.6 µs, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=9-axis%20sensor" title="9-axis sensor">9-axis sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=MCU" title=" MCU"> MCU</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20fusion" title=" sensor fusion"> sensor fusion</a> </p> <a href="https://publications.waset.org/abstracts/84323/implementation-of-sensor-fusion-structure-of-9-axis-sensors-on-the-multipoint-control-unit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84323.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">504</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">475</span> Efficient Feature Fusion for Noise Iris in Unconstrained Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao-Hong%20Tsai">Yao-Hong Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an efficient fusion algorithm for iris images to generate stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in daily life without the subject’s cooperation. Under large variation in the environment, the objective of this paper is to combine information from multiple images of the same iris.
The result of image fusion is a new image that is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. Iris detection is based on the AdaBoost algorithm, and local binary pattern (LBP) histograms are then applied to texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of an iris recognition verification system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet" title=" wavelet"> wavelet</a> </p> <a href="https://publications.waset.org/abstracts/17027/efficient-feature-fusion-for-noise-iris-in-unconstrained-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17027.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">474</span> Sampling Two-Channel Nonseparable Wavelets and Its Applications in Multispectral Image Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bin%20Liu">Bin Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Weijie%20Liu"> Weijie Liu</a>, <a
href="https://publications.waset.org/abstracts/search?q=Bin%20Sun"> Bin Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Yihui%20Luo"> Yihui Luo </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to solve the problems of lower spatial resolution and block effects in fused images produced by fusion methods based on the separable wavelet transform, a new sampling mode based on the multi-resolution analysis of the two-channel nonseparable wavelet transform, whose dilation matrix is [1,1;1,-1], is presented, and a multispectral image fusion method based on this sampling mode is proposed. Filter banks related to this kind of wavelet are constructed, and multiresolution decompositions of the intensity of the MS image and of the panchromatic image are performed in the sampled mode using the constructed filter banks. The low- and high-frequency coefficients are fused by different fusion rules. The experimental results show that this method has a good visual effect. The fusion performance has been noted to outperform the IHS fusion method, as well as the fusion methods based on the DWT, IHS-DWT, IHS-Contourlet transform, and IHS-Curvelet transform, in preserving both spectral quality and high spatial resolution information. Furthermore, when compared with the fusion method based on the nonsubsampled two-channel nonseparable wavelet, the proposed method has been observed to have higher spatial resolution and good global spectral information.
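The general wavelet-fusion pipeline described here (decompose, fuse low- and high-frequency coefficients by different rules, reconstruct) can be sketched with an ordinary one-level Haar transform on 1-D signals. This is an illustration only, not the paper's two-channel nonseparable wavelet: low-frequency coefficients are averaged and the larger-magnitude high-frequency coefficient is kept.

```python
# One-level Haar decomposition/reconstruction and a simple fusion of two
# signals: average rule for approximation coefficients, max-absolute rule
# for detail coefficients. Signals must have even length.
import math

def haar_forward(x):
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def fuse_signals(x, y):
    ax, dx = haar_forward(x)
    ay, dy = haar_forward(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]                    # average rule
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]   # max-abs rule
    return haar_inverse(a, d)
```

For images the same two rules are applied to 2-D subbands; the paper's contribution lies in the nonseparable transform whose subbands replace the Haar ones here.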
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=two-channel%20sampled%20nonseparable%20wavelets" title=" two-channel sampled nonseparable wavelets"> two-channel sampled nonseparable wavelets</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral%20image" title=" multispectral image"> multispectral image</a>, <a href="https://publications.waset.org/abstracts/search?q=panchromatic%20image" title=" panchromatic image"> panchromatic image</a> </p> <a href="https://publications.waset.org/abstracts/15357/sampling-two-channel-nonseparable-wavelets-and-its-applications-in-multispectral-image-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15357.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">473</span> Variations in the Angulation of the First Sacral Spinous Process Angle Associated with Sacrocaudal Fusion in Greyhounds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sa%27ad%20M.%20Ismail">Sa&#039;ad M. Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Hung-Hsun%20Yen"> Hung-Hsun Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=Christina%20M.%20Murray"> Christina M. Murray</a>, <a href="https://publications.waset.org/abstracts/search?q=Helen%20M.%20S.%20Davies"> Helen M. S. Davies</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the dog, the median sacral crest is formed by the fusion of three sacral spinous processes. 
In greyhounds with standard sacrums, the median sacral crest consists of the fusion of three sacral spinous processes, while in greyhounds with sacrocaudal fusion it consists of four. In the present study, variations in the angulation of the first sacral spinous process in association with different types of sacrocaudal fusion in the greyhound were investigated. Sacrums were collected from 207 greyhounds (102 type A (unfused) sacrums and 105 with different types of sacrocaudal fusion: types B, C, and D). Sacrums were cleaned by boiling, dried, placed on their ventral surface on a flat surface, and photographed from the left side using a digital camera at a fixed distance. The first sacral spinous process angle (1st SPA) was defined as the angle formed between the cranial border of the cranial ridge of the first sacral spinous process and the line extending across the most dorsal surface points of the spinous processes of the S1, S2, and S3. Image-Pro Express Version 5.0 imaging software was used to draw and measure the angles. Two photographs were taken for each sacrum, and two repeat measurements were taken of each angle. The mean value of the 1st SPA in greyhounds with sacrocaudal fusion was lower (98.99°, SD ± 11, n = 105) than that in greyhounds with standard sacrums (99.77°, SD ± 9.18, n = 102), but the difference was not significant (P > 0.05). Among greyhounds with different types of sacrocaudal fusion, the mean values of the 1st SPA were as follows: type B: 97.73°, SD ± 10.94, n = 39; type C: 101.42°, SD ± 10.51, n = 52; and type D: 94.22°, SD ± 11.30, n = 12. These angles were significantly different from each other across fusion types (P < 0.05).
Comparing the mean value of the 1st SPA in standard sacrums (type A) with that for each type of fusion separately showed that the only significantly different angulation (P < 0.05) was between standard sacrums and sacrums with sacrocaudal fusion type D (body fusion only, between the S1 and Ca1). Different types of sacrocaudal fusion were associated with variations in the angle of the first sacral spinous process. These variations may affect the alignment and biomechanics of the sacral area and the pattern of movement and/or the force transmitted by both hind limbs to the cranial parts of the body, and may alter the loading of other parts of the body. We concluded that any variation in the sacrum's anatomical features might change the function of the sacrum or surrounding anatomical structures during movement. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=angulation%20of%20first%20sacral%20spinous%20process" title="angulation of first sacral spinous process">angulation of first sacral spinous process</a>, <a href="https://publications.waset.org/abstracts/search?q=biomechanics" title=" biomechanics"> biomechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=greyhound" title=" greyhound"> greyhound</a>, <a href="https://publications.waset.org/abstracts/search?q=locomotion" title=" locomotion"> locomotion</a>, <a href="https://publications.waset.org/abstracts/search?q=sacrocaudal%20fusion" title=" sacrocaudal fusion"> sacrocaudal fusion</a> </p> <a href="https://publications.waset.org/abstracts/74942/variations-in-the-angulation-of-the-first-sacral-spinous-process-angle-associated-with-sacrocaudal-fusion-in-greyhounds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74942.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">311</span>
</span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">472</span> Multi-Channel Information Fusion in C-OTDR Monitoring Systems: Various Approaches to Classify of Targeted Events</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper presents new results concerning the selection of an optimal information fusion formula for ensembles of C-OTDR channels. The goal of information fusion is to create an integral classifier designed for effective classification of seismoacoustic target events. The LPBoost (LP-β and LP-B variants), Multiple Kernel Learning, and Weighing Inversely as Lipschitz Constants (WILC) approaches were compared. WILC is a brand-new approach to the optimal fusion of Lipschitz classifier ensembles. Results of practical usage are presented.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lipschitz%20Classifier" title="Lipschitz Classifier">Lipschitz Classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=classifiers%20ensembles" title=" classifiers ensembles"> classifiers ensembles</a>, <a href="https://publications.waset.org/abstracts/search?q=LPBoost" title=" LPBoost"> LPBoost</a>, <a href="https://publications.waset.org/abstracts/search?q=C-OTDR%20systems" title=" C-OTDR systems"> C-OTDR systems</a> </p> <a href="https://publications.waset.org/abstracts/21072/multi-channel-information-fusion-in-c-otdr-monitoring-systems-various-approaches-to-classify-of-targeted-events" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21072.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">461</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">471</span> Variations in the 7th Lumbar (L7) Vertebra Length Associated with Sacrocaudal Fusion in Greyhounds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sa%60ad%20M.%20Ismail">Sa`ad M. Ismail</a>, <a href="https://publications.waset.org/abstracts/search?q=Hung-Hsun%20Yen"> Hung-Hsun Yen</a>, <a href="https://publications.waset.org/abstracts/search?q=Christina%20M.%20Murray"> Christina M. Murray</a>, <a href="https://publications.waset.org/abstracts/search?q=Helen%20M.%20S.%20Davies"> Helen M. S. Davies</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The lumbosacral junction (where the 7th lumbar vertebra (L7) articulates with the sacrum) is a clinically important area in the dog. 
The 7th lumbar vertebra (L7) is normally shorter than the other lumbar vertebrae, and it has been reported that variations in L7 length may be associated with other abnormal anatomical findings. These variations include the reduction or absence of a portion of the median sacral crest. In this study, 53 greyhound cadavers were placed in right lateral recumbency, and two lateral radiographs were taken of the lumbosacral region of each greyhound. The lengths of the 6th lumbar (L6) vertebra and L7 were measured using radiographic measurement software; each length was defined as the mean of three lines (a dorsal, a middle, and a ventral line) drawn from the caudal to the cranial edge of the vertebra between specific landmarks. Sacrocaudal fusion was found in 41.5% of the greyhounds. The mean values of the length of L6, the length of L7, and the L6/L7 length ratio of the greyhounds with sacrocaudal fusion were all greater than those of greyhounds with standard sacrums (three sacral vertebrae). There was a significant difference (P < 0.05) in the mean length of L7 between the greyhounds without sacrocaudal fusion (mean = 29.64, SD ± 2.07) and those with sacrocaudal fusion (mean = 30.86, SD ± 1.80), but there was no significant difference in the mean length of L6. Among the different types of sacrocaudal fusion, the longest L7 was found in greyhounds with sacrum type D, intermediate length in those with sacrum type B, and the shortest in those with sacrum type C; the mean values of the L6/L7 ratio were 1.11 (SD ± 0.043), 1.15 (SD ± 0.025), and 1.15 (SD ± 0.011) for types B, C, and D, respectively. No significant differences in the mean lengths of L6 or L7 were found among the different types of sacrocaudal fusion. The occurrence of sacrocaudal fusion might affect directly connected anatomical structures such as the L7.
The difference in L7 length between greyhounds with sacrocaudal fusion and those without may reflect the sequence of the fusion process, and variations in L7 length may be associated with the occurrence of sacrocaudal fusion. Variation in vertebral length may affect the alignment and biomechanical properties of the sacrum and may alter its loading. We concluded that any variation in sacral anatomical features might change the function of the sacrum or of the surrounding anatomical structures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biomechanics" title="biomechanics">biomechanics</a>, <a href="https://publications.waset.org/abstracts/search?q=Greyhound" title=" Greyhound"> Greyhound</a>, <a href="https://publications.waset.org/abstracts/search?q=sacrocaudal%20fusion" title=" sacrocaudal fusion"> sacrocaudal fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=locomotion" title=" locomotion"> locomotion</a>, <a href="https://publications.waset.org/abstracts/search?q=6th%20Lumbar%20%28L6%29%20Vertebra" title=" 6th Lumbar (L6) Vertebra"> 6th Lumbar (L6) Vertebra</a>, <a href="https://publications.waset.org/abstracts/search?q=7th%20Lumbar%20%28L7%29%20Vertebra" title=" 7th Lumbar (L7) Vertebra"> 7th Lumbar (L7) Vertebra</a>, <a href="https://publications.waset.org/abstracts/search?q=ratio%20of%20the%20L6%2FL7%20length" title=" ratio of the L6/L7 length"> ratio of the L6/L7 length</a> </p> <a href="https://publications.waset.org/abstracts/74939/variations-in-the-7th-lumbar-l7-vertebra-length-associated-with-sacrocaudal-fusion-in-greyhounds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div>
</div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">470</span> Clinical Relevance of TMPRSS2-ERG Fusion Marker for Prostate Cancer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shalu%20Jain">Shalu Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=Anju%20Bansal"> Anju Bansal</a>, <a href="https://publications.waset.org/abstracts/search?q=Anup%20Kumar"> Anup Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunita%20Saxena"> Sunita Saxena</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objectives: The novel TMPRSS2:ERG gene fusion is a common somatic event in prostate cancer that in some studies is linked with a more aggressive disease phenotype. This study thus aims to determine whether clinical variables are associated with the presence of the TMPRSS2:ERG fusion gene transcript in Indian prostate cancer patients. Methods: We evaluated the association of clinical variables with the presence or absence of the TMPRSS2:ERG gene fusion in prostate cancer and BPH patients. Patients referred for prostate biopsy because of an abnormal DRE and/or elevated sPSA were enrolled in this prospective clinical study. TMPRSS2:ERG mRNA copies were quantified in prostate biopsy samples (N=42) using a TaqMan real-time PCR assay. The T2:ERG assay detects the gene fusion mRNA isoform joining TMPRSS2 exon 1 to ERG exon 4. Results: Histopathology confirmed 25 cases as prostate adenocarcinoma (PCa) and 17 as benign prostatic hyperplasia (BPH). Of the 25 PCa cases, 16 (64%) were T2:ERG fusion positive. All 17 BPH controls were fusion negative. The T2:ERG fusion transcript was thus exclusively specific for prostate cancer, as no BPH case carried the fusion, giving 100% specificity.
The positive predictive value of the fusion marker for prostate cancer is thus 100%, and the negative predictive value is 65.3%. The T2:ERG fusion marker is significantly associated with clinical variables such as the number of positive cores in the prostate biopsy, Gleason score, serum PSA, perineural invasion, perivascular invasion, and periprostatic fat involvement. Conclusions: Prostate cancer is a heterogeneous disease that may be defined by molecular subtypes such as the TMPRSS2:ERG fusion. In the present prospective study, the T2:ERG quantitative assay demonstrated high specificity for predicting biopsy outcome; sensitivity was similar to the prevalence of T2:ERG gene fusions in prostate tumors. These data suggest that further improvement in diagnostic accuracy could be achieved using a nomogram that combines T2:ERG with other markers and risk factors for prostate cancer. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prostate%20cancer" title="prostate cancer">prostate cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20rearrangement" title=" genetic rearrangement"> genetic rearrangement</a>, <a href="https://publications.waset.org/abstracts/search?q=TMPRSS2%3AERG%20fusion" title=" TMPRSS2:ERG fusion"> TMPRSS2:ERG fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical%20variables" title=" clinical variables"> clinical variables</a> </p> <a href="https://publications.waset.org/abstracts/8830/clinical-relevance-of-tmprss2-erg-fusion-marker-for-prostate-cancer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8830.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">444</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">469</span> Multimodal Data Fusion Techniques in Audiovisual Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hadeer%20M.%20Sayed">Hadeer M. Sayed</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesham%20E.%20El%20Deeb"> Hesham E. El Deeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Shereen%20A.%20Taie"> Shereen A. Taie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the big data era, we face a diversity of datasets from different sources in different domains that describe a single life event. These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the integration of information from multiple modalities into a joint representation, with the goal of predicting an outcome through a classification or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. The paper provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each. Furthermore, audiovisual speech recognition is presented as a case study of multimodal data fusion approaches, and open issues arising from the limitations of current studies are discussed. This paper can serve as a guide for researchers interested in multimodal data fusion, particularly for audiovisual speech recognition.
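The two families surveyed above can be illustrated with a minimal sketch; the feature values and the 0.7 weight are illustrative assumptions, not taken from the paper. Model-agnostic early fusion joins per-modality features into one representation, while late fusion combines per-modality decision scores:

```python
def early_fusion(audio_feats, visual_feats):
    # Early (feature-level) fusion: concatenate features into a joint vector.
    return audio_feats + visual_feats

def late_fusion(audio_score, visual_score, w_audio=0.5):
    # Late (decision-level) fusion: weighted average of per-modality scores.
    return w_audio * audio_score + (1 - w_audio) * visual_score

joint = early_fusion([0.1, 0.4], [0.7, 0.2, 0.9])  # 5-dimensional joint vector
score = late_fusion(0.8, 0.6, w_audio=0.7)         # leans on the audio model
```

Model-based approaches would instead fuse inside the model itself (for example, through shared hidden layers), which is why the paper treats them as a separate class.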
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20data" title="multimodal data">multimodal data</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title=" data fusion"> data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition" title=" audio-visual speech recognition"> audio-visual speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/157362/multimodal-data-fusion-techniques-in-audiovisual-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">111</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">468</span> Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xudong%20Guan">Xudong Guan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ainong%20Li"> Ainong Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaohuan%20Liu"> Gaohuan Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chong%20Huang"> Chong Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Zhao"> Wei Zhao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the low spatial resolution of MODIS data, the accuracy of extracting small patches in areas with a high degree of landscape fragmentation is greatly limited.
To this end, the study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, through decision-level fusion. Because land heterogeneity is important in the fusion process, it is incorporated as a weighting factor used to linearly weight the Landsat classification result and the MODIS classification result. Three levels were used to complete the fusion: the MODIS pixel level, the Landsat pixel level, and an object level that connects the two. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test showed that the classification accuracy improved over the single-data-source classification results in terms of overall accuracy. The method was also compared with the two-level combination results and with a weighted-sum decision-rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.
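The linear weighting step can be sketched per pixel as follows; the probability maps and the heterogeneity-derived weights are illustrative assumptions, not the paper's data. A larger weight lets the finer-resolution Landsat result dominate in fragmented landscapes:

```python
def fuse_decisions(p_landsat, p_modis, w):
    """Pixel-wise linear decision fusion of two class-probability maps.

    w: heterogeneity-derived weight map in [0, 1]; larger w favours Landsat.
    """
    return [
        [w[i][j] * p_landsat[i][j] + (1 - w[i][j]) * p_modis[i][j]
         for j in range(len(p_landsat[0]))]
        for i in range(len(p_landsat))
    ]

p_landsat = [[0.9, 0.2], [0.4, 0.8]]
p_modis   = [[0.5, 0.6], [0.5, 0.5]]
w         = [[0.8, 0.8], [0.2, 0.2]]  # top row: more fragmented landscape
fused = fuse_decisions(p_landsat, p_modis, w)
```

A per-pixel weight in [0, 1] keeps each fused value inside the range spanned by the two source probabilities.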
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20fusion" title=" decision fusion"> decision fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-temporal" title=" multi-temporal"> multi-temporal</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a> </p> <a href="https://publications.waset.org/abstracts/112195/integrating-time-series-and-high-spatial-remote-sensing-data-based-on-multilevel-decision-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112195.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">467</span> Medical Imaging Fusion: A Teaching-Learning Simulation Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cristina%20Maria%20Ribeiro%20Martins%20Pereira%20Caridade">Cristina Maria Ribeiro Martins Pereira Caridade</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Rita%20Ferreira%20Morais"> Ana Rita Ferreira Morais</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with healthcare facilities and hospitals. 
The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB with a graphical user interface, for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application applies different algorithms and medical fusion techniques in real time, allowing users to view the original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool offers an innovative teaching and learning environment: a dynamic and motivating simulation through which biomedical engineering students acquire knowledge about medical image fusion techniques and the skills needed in the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate, and extend the student's knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching-learning%20simulation%20tool" title=" teaching-learning simulation tool"> teaching-learning simulation tool</a>, <a href="https://publications.waset.org/abstracts/search?q=biomedical%20engineering%20education" title=" biomedical engineering education"> biomedical engineering education</a> </p> <a href="https://publications.waset.org/abstracts/164987/medical-imaging-fusion-a-teaching-learning-simulation-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164987.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">466</span> A Multi Sensor Monochrome Video Fusion Using Image Quality Assessment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Prema%20Kumar">M. Prema Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Rajesh%20Kumar"> P. Rajesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The increasing interest in image fusion (combining images of two or more modalities such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. 
This paper gives a novel approach for merging the information content of several videos taken of the same scene in order to build a combined video that contains the finest information coming from the different source videos. This process, known as video fusion, helps provide an image of superior quality (quality here being measured with respect to the particular application) compared with the source images. In this technique, different sensors are used for the various cameras needed to capture the required images, and the fusion step also helps reduce their redundant information. An image fusion technique based on multi-resolution singular value decomposition (MSVD) has been used. Image fusion by MSVD is very similar to fusion by wavelets: the idea behind MSVD is to replace the FIR filters of the wavelet transform with singular value decomposition (SVD). It is computationally very simple and is well suited to real-time applications such as remote sensing and astronomy.
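A toy one-level MSVD fusion can be sketched as below. One simplification is an assumption made purely for illustration: both images are projected onto a single U basis computed from their average, rather than the paper's exact scheme. 2x2 blocks are stacked into a 4xN matrix, decomposed with SVD, and the coefficients fused (average for the approximation row, max-absolute for the detail rows):

```python
import numpy as np

def to_blocks(img):
    """Stack non-overlapping 2x2 blocks of img into the columns of a 4xN matrix."""
    h, w = img.shape
    X = img.reshape(h // 2, 2, w // 2, 2).transpose(1, 3, 0, 2).reshape(4, -1)
    return X, (h, w)

def from_blocks(X, shape):
    """Inverse of to_blocks: rebuild the image from the 4xN block matrix."""
    h, w = shape
    return X.reshape(2, 2, h // 2, w // 2).transpose(2, 0, 3, 1).reshape(h, w)

def msvd_fuse(a, b):
    Xa, shape = to_blocks(a.astype(float))
    Xb, _ = to_blocks(b.astype(float))
    U, _, _ = np.linalg.svd((Xa + Xb) / 2, full_matrices=False)  # shared basis (assumption)
    Ya, Yb = U.T @ Xa, U.T @ Xb
    Yf = np.where(np.abs(Ya) >= np.abs(Yb), Ya, Yb)  # max-abs rule for detail rows
    Yf[0] = (Ya[0] + Yb[0]) / 2                      # average rule for approximation row
    return from_blocks(U @ Yf, shape)
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check on the decomposition and reconstruction.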
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi%20sensor%20image%20fusion" title="multi sensor image fusion">multi sensor image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=MSVD" title=" MSVD"> MSVD</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20video" title=" monochrome video"> monochrome video</a> </p> <a href="https://publications.waset.org/abstracts/14866/a-multi-sensor-monochrome-video-fusion-using-image-quality-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">572</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">465</span> Multi-Focus Image Fusion Using SFM and Wavelet Packet</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somkait%20Udomhunsakul">Somkait Udomhunsakul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the Wavelet Packet transform is proposed. In the proposed approach, the two source images are first transformed and decomposed into sixteen subbands using the Wavelet Packet transform. Next, each subband is partitioned into sub-blocks, and the clearer region of each block pair is identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the Inverse Wavelet Transform.
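The clarity test at the heart of the block-selection step can be sketched as follows. This is a simplified version that works on plain image blocks; in the paper, the rule is applied to wavelet-packet subband blocks:

```python
import math

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2) from row- and column-wise first differences."""
    rows, cols = len(block), len(block[0])
    rf = sum((block[i][j] - block[i][j - 1]) ** 2
             for i in range(rows) for j in range(1, cols))
    cf = sum((block[i][j] - block[i - 1][j]) ** 2
             for i in range(1, rows) for j in range(cols))
    n = rows * cols
    return math.sqrt(rf / n + cf / n)

def select_clearer(block_a, block_b):
    """Keep whichever source block has the higher spatial frequency."""
    return block_a if spatial_frequency(block_a) >= spatial_frequency(block_b) else block_b

sharp = [[0, 9], [9, 0]]    # strong local variation: in focus
blurred = [[4, 5], [5, 4]]  # weak local variation: out of focus
```

The block with the larger SF has stronger local intensity variation and is therefore taken as the better-focused one.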
The experimental results showed that the proposed method outperformed the traditional SFM-based methods in terms of both objective and subjective assessments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-focus%20image%20fusion" title="multi-focus image fusion">multi-focus image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20packet" title=" wavelet packet"> wavelet packet</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20frequency%20measurement" title=" spatial frequency measurement"> spatial frequency measurement</a> </p> <a href="https://publications.waset.org/abstracts/4886/multi-focus-image-fusion-using-sfm-and-wavelet-packet" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4886.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">474</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">464</span> Biimodal Biometrics System Using Fusion of Iris and Fingerprint</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Attallah%20Bilal">Attallah Bilal</a>, <a href="https://publications.waset.org/abstracts/search?q=Hendel%20Fatiha"> Hendel Fatiha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a bimodal biometric system for identity verification using iris and fingerprint, fused at the matching-score level with a weighted sum of scores technique. Features are extracted from the pre-processed iris and fingerprint images. The features of a query image are compared with those of a database image to obtain matching scores.
The individual scores generated after matching are passed to the fusion module. This module consists of three major steps, i.e., normalization, generation of similarity scores, and fusion of weighted scores. The final score is then used to declare the person genuine or an impostor. The system is tested on the CASIA database and gives an overall accuracy of 91.04%, with an FAR of 2.58% and an FRR of 8.34%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris" title="iris">iris</a>, <a href="https://publications.waset.org/abstracts/search?q=fingerprint" title=" fingerprint"> fingerprint</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20rule" title=" sum rule"> sum rule</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/18556/biimodal-biometrics-system-using-fusion-of-iris-and-fingerprint" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18556.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">463</span> Quantom Magnetic Effects of P-B Fusion in Plasma Focus Devices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Habibi">M. Habibi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The feasibility of proton-boron fusion in plasmoids caused by magnetohydrodynamic instabilities in plasma focus devices is studied analytically. In plasmoids, fusion power for 76 keV < Ti < 1500 keV exceeds the bremsstrahlung loss (W/Pb = 5.39).
In such a situation, the gain factor and the ratio of Ti to Te for a typical 150 kJ plasma focus device will be 7.8 and 4.8, respectively. When the ion viscous heating effect is also considered, W/Pb and Ti/Te will be 2.7 and 6, respectively. A strong magnetic field reduces the ion-electron collision rate because the electron orbits are quantized. Since the electron-ion collision rate is approximately unchanged, the quantum magnetic field keeps the ions much hotter than the electrons, which enhances the ratio of fusion power to bremsstrahlung loss. Self-sustained p-11B fusion reactions would therefore be possible, and a p-11B-fuelled plasma focus device could be a clean and efficient source of energy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=plasmoids" title="plasmoids">plasmoids</a>, <a href="https://publications.waset.org/abstracts/search?q=p11B%20fuel" title=" p11B fuel"> p11B fuel</a>, <a href="https://publications.waset.org/abstracts/search?q=ion%20viscous%20heating" title=" ion viscous heating"> ion viscous heating</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20magnetic%20field" title=" quantum magnetic field"> quantum magnetic field</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20focus%20device" title=" plasma focus device"> plasma focus device</a> </p> <a href="https://publications.waset.org/abstracts/26776/quantom-magnetic-effects-of-p-b-fusion-in-plasma-focus-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26776.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">463</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">462</span> Comparative Analysis of Hybrid Dynamic
Stabilization and Fusion for Degenerative Disease of the Lumbosacral Spine: Finite Element Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Bendoukha">Mohamed Bendoukha</a>, <a href="https://publications.waset.org/abstracts/search?q=Mustapha%20Mosbah"> Mustapha Mosbah </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Radiographically apparent but asymptomatic adjacent segment disease (ASD) is assumed to be common after lumbar fusion, although it does not necessarily correlate with functional outcomes; compensatory increases in motion and stress at the levels adjacent to a fusion, however, are well known to be associated with ASD. Newly developed hybrid stabilization constructs are intended to replace mostly the superior level of a fusion, in an attempt to reduce the number of fused levels and the likelihood of degeneration at the adjacent levels after fusion with pedicle screws. Nevertheless, their biomechanical efficiency remains unknown, and complications associated with construct failure, such as screw loosening and toggling, should be elucidated. In the current study, a finite element (FE) analysis was performed using a validated L2/S1 model subjected to a moment of 7.5 Nm and a follower load of 400 N to assess the biomechanical behavior of hybrid constructs based on dynamic topping-off and semi-rigid fusion. The residual range of motion (ROM), the stress distribution at the fused and adjacent levels, and the stress distribution at the disc and the cage-endplate interface with respect to changes in bone quality were investigated. The hybrid instrumentation was associated with a reduction in compressive stresses in the adjacent-level disc compared to the fusion construct and showed a substantially higher axial force in the implant, while the fusion instrumentation increased the motion in both flexion and extension.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intervertebral%20disc" title="intervertebral disc">intervertebral disc</a>, <a href="https://publications.waset.org/abstracts/search?q=lumbar%20spine" title=" lumbar spine"> lumbar spine</a>, <a href="https://publications.waset.org/abstracts/search?q=degenerative%20nuclesion" title=" degenerative nuclesion"> degenerative nuclesion</a>, <a href="https://publications.waset.org/abstracts/search?q=L4-L5" title=" L4-L5"> L4-L5</a>, <a href="https://publications.waset.org/abstracts/search?q=range%20of%20motion%20finite%20element%20model" title=" range of motion finite element model"> range of motion finite element model</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperelasticy" title=" hyperelasticy"> hyperelasticy</a> </p> <a href="https://publications.waset.org/abstracts/89019/comparative-analysis-of-hybrid-dynamic-stabilization-and-fusion-for-degenerative-disease-of-the-lumbosacral-spine-finite-element-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89019.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">185</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">461</span> Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sam%20Khozama">Sam Khozama</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20M.%20Mayya"> Ali M. Mayya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Breast cancer is one of the most common types in women. 
Early prediction of breast cancer helps physicians detect the disease in its early stages. Large cancer datasets need powerful tools for analysis and prediction. Machine learning and deep learning are two of the most efficient approaches for predicting cancer from textual data. In this study, we developed a fusion of a machine learning model and a deep learning model: a Long Short-Term Memory (LSTM) network and ensemble learning with hyperparameter optimization are combined through score-level fusion to obtain the final prediction. Experiments are done on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved performance by 3.3% compared to the individual models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer%20prediction" title=" cancer prediction"> cancer prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title=" breast cancer"> breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/155602/breast-cancer-prediction-using-score-level-fusion-of-machine-learning-and-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span>
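Score-level fusion of the kind described above can be sketched as follows; the scores, the 0.6 weight, and the 0.5 decision threshold are hypothetical values for illustration, not taken from the study:

```python
def min_max_normalize(scores):
    """Map raw scores to [0, 1] so models on different scales are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def score_level_fusion(lstm_scores, ensemble_scores, w_lstm=0.5):
    """Fuse two models' per-sample scores by a weighted sum after normalization."""
    a = min_max_normalize(lstm_scores)
    b = min_max_normalize(ensemble_scores)
    return [w_lstm * x + (1 - w_lstm) * y for x, y in zip(a, b)]

fused = score_level_fusion([0.2, 1.8, 0.9, 2.6], [10, 80, 55, 90], w_lstm=0.6)
predictions = [int(s >= 0.5) for s in fused]  # hypothetical decision threshold
```

Because fusion happens on the output scores only, the two underlying models never need to share an architecture or feature space.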
</span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">460</span> Adaptive Dehazing Using Fusion Strategy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Ramesh%20Kanthan">M. Ramesh Kanthan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Naga%20Nandini%20Sujatha"> S. Naga Nandini Sujatha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of haze removal algorithms is to enhance and recover details of the scene from a foggy image. For enhancement, the proposed method focuses on two main components: (i) image enhancement based on adaptive-contrast histogram equalization, and (ii) an image-edge-strengthened gradient model. Accurate haze removal algorithms are needed in many circumstances. The de-fog feature works through an algorithm that first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The haze-free output image is then reconstructed using the fusion methodology. To increase accuracy, an interpolation method is used in the output reconstruction. A promising retrieval performance is achieved, especially in particular examples.
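The weight-driven blending step can be sketched per pixel as follows; the two inputs stand for the contrast-enhanced and edge-strengthened versions of the frame, and the weight maps are assumed to be already derived from the image's intrinsic properties:

```python
def fuse_with_weight_maps(inputs, weight_maps):
    """Per-pixel weighted blend: normalize weights to sum to 1, then mix inputs."""
    h, w = len(inputs[0]), len(inputs[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total = sum(wm[i][j] for wm in weight_maps)
            for img, wm in zip(inputs, weight_maps):
                fused[i][j] += (wm[i][j] / total) * img[i][j]
    return fused

contrast_enhanced = [[120.0, 130.0]]
edge_strengthened = [[100.0, 150.0]]
weights_contrast  = [[3.0, 1.0]]   # favour the contrast input on the left pixel
weights_edges     = [[1.0, 1.0]]
fused = fuse_with_weight_maps([contrast_enhanced, edge_strengthened],
                              [weights_contrast, weights_edges])
```

Normalizing the weights at each pixel keeps the fused intensities within the range spanned by the inputs.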
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20image" title="single image">single image</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=dehazing" title=" dehazing"> dehazing</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20fusion" title=" multi-scale fusion"> multi-scale fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=per-pixel" title=" per-pixel"> per-pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20map" title=" weight map"> weight map</a> </p> <a href="https://publications.waset.org/abstracts/32544/adaptive-dehazing-using-fusion-strategy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">459</span> Dual Biometrics Fusion Based Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prakash">Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=Vikash%20Kumar"> Vikash Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Bansal"> Vinay Bansal</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20N.%20Das"> L. N. Das</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dual biometrics is a subpart of multimodal biometrics, which refers to the use of a variety of modalities to identify and authenticate persons rather than just one. 
Combining several modalities reduces the risk of misidentification and leaves an attacker little opportunity to harvest the information needed to spoof the system. Our goal is to extract precise characteristics of the iris and palmprint, produce a fusion of both methodologies, and ensure that authentication succeeds only when the biometrics match a particular user. After combining the two modalities, the proposed biometric system achieved a mean decidability index (DI) of 2.41 and an equal error rate (EER) of 5.21. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=palmprint" title=" palmprint"> palmprint</a>, <a href="https://publications.waset.org/abstracts/search?q=Iris" title=" Iris"> Iris</a>, <a href="https://publications.waset.org/abstracts/search?q=EER" title=" EER"> EER</a>, <a href="https://publications.waset.org/abstracts/search?q=DI" title=" DI"> DI</a> </p> <a href="https://publications.waset.org/abstracts/149996/dual-biometrics-fusion-based-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149996.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">458</span> Multi-Sensor Image Fusion for Visible and Infrared Thermal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amit%20Kumar%20Happy">Amit Kumar Happy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper is motivated by the importance of 
multi-sensor image fusion with a specific focus on infrared (IR) and visible image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are formed by reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on the multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our approach, we observed several limitations of popular image fusion methods: although their high computational cost and complex processing steps yield accurate fused results, they are difficult to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore offer good results with minimum time complexity. 
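The multi-scale transform fusion described in this abstract can be sketched with a simple Laplacian-pyramid fusion in Python. This is a generic illustration, not the authors' MATLAB implementation: it substitutes a box-filter pyramid for a Gaussian one and a max-absolute coefficient selection rule for the region-based rule with consistency verification, and it assumes grey-scale images whose dimensions are divisible by 2^(levels-1).

```python
import numpy as np

def downsample(img):
    # Simple 2x2 box-filter downsample (stand-in for a Gaussian pyramid step).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    # Nearest-neighbour upsample back to the target resolution.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Decompose into band-pass detail layers plus a low-pass residual.
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))  # detail layer
        cur = small
    pyr.append(cur)  # low-pass residual
    return pyr

def fuse(img_a, img_b, levels=3):
    # At every level, keep the coefficient with the larger magnitude,
    # i.e. the source with the stronger local detail.
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa, pb)]
    # Reconstruct by successively upsampling and adding detail layers.
    out = fused[-1]
    for detail in reversed(fused[:-1]):
        out = upsample(out, detail.shape) + detail
    return out
```

Applied to a registered IR/visible pair, this keeps, at each scale, whichever source carries the stronger local structure; the deeper MST decompositions and consistency checks in the paper refine the same idea.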
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=IR%20thermal%20imager" title=" IR thermal imager"> IR thermal imager</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-sensor" title=" multi-sensor"> multi-sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20transform" title=" multi-scale transform"> multi-scale transform</a> </p> <a href="https://publications.waset.org/abstracts/138086/multi-sensor-image-fusion-for-visible-and-infrared-thermal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138086.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">115</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">457</span> Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samar%20M.%20Alqhtani">Samar M. Alqhtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Suhuai%20Luo"> Suhuai Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Brian%20Regan"> Brian Regan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data fusion technology can be the best way to extract useful information from multiple sources of data. It has been widely applied in various applications. This paper presents a data fusion approach in multimedia data for event detection in twitter by using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. 
Two types of data are fused. The first is textual features extracted with the bag-of-words method and weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied to fuse the information from these two sources. Our experiments indicate that, compared with approaches using an individual data source, the proposed data fusion approach increases the prediction accuracy for event detection: the proposed method achieved an accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title="data fusion">data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=Dempster-Shafer%20theory" title=" Dempster-Shafer theory"> Dempster-Shafer theory</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=event%20detection" title=" event detection"> event detection</a> </p> <a href="https://publications.waset.org/abstracts/34741/multimedia-data-fusion-for-event-detection-in-twitter-by-using-dempster-shafer-evidence-theory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34741.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">410</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">456</span> Theoretical Investigation of Proton-Boron Fusion in Hot Spots </h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Morteza%20Habibi">Morteza Habibi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As an alternative to D-T fuel, one can consider advanced fuels such as D-3He and p-11B, which have potential advantages concerning availability and/or environmental impact. Hot spots are micron-sized, magnetically self-contained sources observed in pinched plasma devices. In hot spots, the fusion power exceeds the bremsstrahlung loss for 120 keV < Ti < 800 keV and 32 keV < Te < 129 keV, and the ratio of fusion power to bremsstrahlung loss reaches 1.9. In this case, the gain factor for a typical 150 kJ pulsed generator used as a hot-spot source is 7.8, which is considerable for a commercial pinched plasma device. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=P-B%20fuel" title="P-B fuel">P-B fuel</a>, <a href="https://publications.waset.org/abstracts/search?q=hot%20spot" title=" hot spot"> hot spot</a>, <a href="https://publications.waset.org/abstracts/search?q=bremmsstrahlung%20loss" title=" bremsstrahlung loss"> bremsstrahlung loss</a>, <a href="https://publications.waset.org/abstracts/search?q=ion%20temperature" title=" ion temperature "> ion temperature </a> </p> <a href="https://publications.waset.org/abstracts/30369/theoretical-investigation-of-proton-bore-fusion-in-hot-spots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30369.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">526</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">455</span> Implementation and Comparative Analysis of PET and CT Image Fusion Algorithms</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Guruprasad">S. Guruprasad</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20N.%20Suma"> H. N. Suma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical imaging modalities have become life-saving components of modern healthcare. They are essential to doctors for proper diagnosis, treatment planning, and follow-up. Some modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, provide anatomical information, while others, such as positron emission tomography (PET), provide only functional information. A single-modality image therefore does not give complete information. This paper presents the fusion of the structural information in CT with the functional information in PET images. The fused image is essential for detecting the stage and location of abnormalities and is particularly needed in oncology for improved diagnosis and treatment. We have implemented and compared image fusion techniques including pyramid, wavelet, and principal-components fusion methods, along with a hybrid method combining the discrete wavelet transform (DWT) and principal component analysis (PCA). The performance of the algorithms is evaluated quantitatively and qualitatively. The system is implemented and tested using MATLAB. Based on MSE, PSNR, and entropy analysis, the PCA and DWT-PCA methods showed the best results across all experiments. 
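The quantitative evaluation this abstract relies on (MSE, PSNR, entropy) can be reproduced with a few lines of NumPy. This is a sketch of the standard metric definitions, not the authors' MATLAB code; note that MSE and PSNR require a reference image, whereas entropy is a no-reference measure computed on the fused image alone.

```python
import numpy as np

def mse(reference, fused):
    # Mean squared error between reference and fused images.
    return np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)

def psnr(reference, fused, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means the fused image
    # is closer to the reference. peak is the maximum pixel value.
    err = mse(reference, fused)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def entropy(img, bins=256):
    # Shannon entropy (bits/pixel) of the grey-level histogram; a higher
    # value suggests the fused image carries more information content.
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))
```

For fusion evaluation, PSNR is typically computed against each source image in turn, while a rise in entropy relative to both sources indicates that the fused result combines information from the two modalities.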
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=pyramid" title=" pyramid"> pyramid</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelets" title=" wavelets"> wavelets</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis" title=" principal component analysis"> principal component analysis</a> </p> <a href="https://publications.waset.org/abstracts/60736/implementation-and-comparative-analysis-of-pet-and-ct-image-fusion-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60736.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">454</span> Construction of a Fusion Gene Carrying E10A and K5 with 2A Peptide-Linked by Using Overlap Extension PCR</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tiancheng%20Lan">Tiancheng Lan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> E10A is a replication-defective adenovirus that carries the human endostatin gene to inhibit the growth of tumors. Kringle 5 (K5) has almost the same tumor-inhibiting function as angiostatin, since both are byproducts of the proteolytic cleavage of plasminogen. Tumor growth can be suppressed because both endostatin and K5 restrain angiogenesis. Therefore, to improve the therapeutic effect, a 2A peptide is used to construct a fusion gene carrying both E10A and K5. 
Using a 2A peptide is an ideal strategy for expressing a fusion gene because it avoids many of the problems that arise when more than one protein is expressed. Overlap extension PCR is used to join the 2A peptide to E10A and K5. The final construct, E10A-2A-K5, offers a possible new anti-angiogenesis treatment with better expression performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=E10A" title="E10A">E10A</a>, <a href="https://publications.waset.org/abstracts/search?q=Kringle%205" title=" Kringle 5"> Kringle 5</a>, <a href="https://publications.waset.org/abstracts/search?q=2A%20peptide" title=" 2A peptide"> 2A peptide</a>, <a href="https://publications.waset.org/abstracts/search?q=overlap%20extension%20PCR" title=" overlap extension PCR"> overlap extension PCR</a> </p> <a href="https://publications.waset.org/abstracts/132643/construction-of-a-fusion-gene-carrying-e10a-and-k5-with-2a-peptide-linked-by-using-overlap-extension-pcr" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132643.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">453</span> Simulation for the Magnetized Plasma Compression Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victor%20V.%20Kuzenov">Victor V. Kuzenov</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergei%20V.%20Ryzhkov"> Sergei V. 
Ryzhkov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ongoing experimental and theoretical studies of magneto-inertial confinement fusion (Angara, C-2, CJS-100, General Fusion, MagLIF, MAGPIE, MC-1, YG-1, Omega) and facilities under construction (Baikal, C-2W, Z300 and Z800) require adequate modeling and description of the physical processes occurring in high-temperature dense plasma in a strong magnetic field. This paper presents a mathematical model, a numerical method, and the results of a computer analysis of the compression process and the energy transfer in the target plasma used in magneto-inertial fusion (MIF). The computer simulation of the compression of the magnetized target by a high-power laser pulse and by high-speed plasma jets is presented, and the characteristic patterns of the two compression methods are analysed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=magnetized%20target" title="magnetized target">magnetized target</a>, <a href="https://publications.waset.org/abstracts/search?q=magneto-inertial%20fusion" title=" magneto-inertial fusion"> magneto-inertial fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20model" title=" mathematical model"> mathematical model</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20and%20laser%20beams" title=" plasma and laser beams"> plasma and laser beams</a> </p> <a href="https://publications.waset.org/abstracts/66035/simulation-for-the-magnetized-plasma-compression-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66035.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">452</span> Characterization of Inertial Confinement Fusion Targets Based on Transmission Holographic Mach-Zehnder Interferometer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Zare-Farsani">B. Zare-Farsani</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Valieghbal"> M. Valieghbal</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Tarkashvand"> M. Tarkashvand</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20H.%20Farahbod"> A. H. Farahbod</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To achieve the conditions for nuclear fusion driven by high-energy, high-power laser beams, the spherical capsules must have a high degree of symmetry and surface uniformity to reduce Rayleigh-Taylor hydrodynamic instabilities. In this paper, we have used digital holographic microscopy based on a Mach-Zehnder interferometer to study the quality of targets for inertial fusion. The interferometric pattern of the target was registered by a CCD camera and analyzed with the Holovision software. The surface uniformity and shell thickness are investigated and measured in the reconstructed image. We measured the shell thickness in different zones and obtained a non-uniformity of 22.82 percent. 
&nbsp; <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=inertial%20confinement%20fusion" title="inertial confinement fusion">inertial confinement fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=mach-zehnder%20interferometer" title=" mach-zehnder interferometer"> mach-zehnder interferometer</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20holographic%20microscopy" title=" digital holographic microscopy"> digital holographic microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=holovision" title=" holovision"> holovision</a> </p> <a href="https://publications.waset.org/abstracts/45440/characterization-of-inertial-confinement-fusion-targets-based-on-transmission-holographic-mach-zehnder-interferometer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45440.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">304</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">451</span> Clinical Outcomes and Surgical Complications in Patients with Cervical Disk Degeneration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mirzashahi%20Babak">Mirzashahi Babak</a>, <a href="https://publications.waset.org/abstracts/search?q=Mansouri%20Pejman"> Mansouri Pejman</a>, <a href="https://publications.waset.org/abstracts/search?q=Najafi%20Arvin"> Najafi Arvin</a>, <a href="https://publications.waset.org/abstracts/search?q=Farzan%20Mahmoud"> Farzan Mahmoud</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Introduction: There are several surgical treatment choices for cervical spondylotic myelopathy (CSM). The aim of this study is to evaluate clinical outcomes and surgical complications in patients with cervical disk degeneration (CDD) undergoing either anterior cervical discectomy with or without fusion, or cervical laminectomy and fusion. Methods: This prospective case series included 45 consecutive patients with cervical spondylotic myelopathy treated between January 2010 and November 2014. There were 28 males and 17 females, with a mean age of 47 (range 37-68) years. The mean clinical follow-up was 14 months (range 3-24 months). The Neck Disability Index (NDI), visual analog scale (VAS) for neck and arm pain, and Short Form-36 (SF-36) were used as the functional outcome measurements. All complications in our patients were recorded. Results: In our study group, 26 patients underwent one- or two-level anterior cervical discectomy alone, 10 patients underwent anterior cervical discectomy and fusion (ACDF), and nine underwent posterior laminectomy and fusion. We found a statistically significant improvement between the mean preoperative (29, range 19-43) and postoperative (7, range 0-12) NDI scores following surgery (P < 0.05). There was also a statistically significant difference between pre- and postoperative VAS and SF-36 scores (p < 0.05). There was a 7% overall complication rate (n = 3). The only complication in our patients was surgical-site cellulitis, which was managed with oral antibiotic therapy. Conclusion: Both anterior cervical discectomy with or without fusion and posterior laminectomy and fusion are safe and efficacious treatment options for the management of CSM. The clinical outcomes appear to be fairly reproducible. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cervical" title="cervical">cervical</a>, <a href="https://publications.waset.org/abstracts/search?q=myelopathy" title=" myelopathy"> myelopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=discectomy" title=" discectomy"> discectomy</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=laminectomy" title=" laminectomy"> laminectomy</a> </p> <a href="https://publications.waset.org/abstracts/37427/clinical-outcomes-and-surgical-complications-in-patients-with-cervical-disk-degeneration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37427.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">350</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=7">7</a></li> <li class="page-item"><a 
class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=15">15</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=16">16</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=fusion&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> 
Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" 
class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
