<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: deformable multimodal image registration</title> <meta name="description" content="Search results for: deformable multimodal image registration"> <meta name="keywords" content="deformable multimodal image registration"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" 
href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="deformable multimodal image registration" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" 
title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="deformable multimodal image registration"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3208</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: deformable multimodal image registration</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3208</span> NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mirjana%20Ruppel">Mirjana Ruppel</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajendra%20Persad"> Rajendra Persad</a>, <a href="https://publications.waset.org/abstracts/search?q=Amit%20Bahl"> Amit Bahl</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanja%20Dogramadzi"> Sanja Dogramadzi</a>, <a href="https://publications.waset.org/abstracts/search?q=Chris%20Melhuish"> Chris 
Melhuish</a>, <a href="https://publications.waset.org/abstracts/search?q=Lyndon%20Smith"> Lyndon Smith</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: firstly, the lack of ground-truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities and judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks GAB, GBA and two discrimination networks DA, DB connected by spatial transformation layers. GAB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator DB, which learns to judge the quality of alignment of the registered image B. GBA and DA learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images &circ;A, &circ;B, which were registered to &tilde;B, &tilde;A, which in turn were registered to the initial image pair A, B. Thus, the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI scans was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 &plusmn; 0.01 and is therefore comparable to, and slightly more successful than, algorithms such as SimpleElastix and VoxelMorph. 
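The cycle-consistency constraint described above can be illustrated outside of any network training: warping an image with a forward deformation field and then with the reverse field should approximately reproduce the original. The NumPy sketch below is a simplified, hypothetical stand-in (nearest-neighbour warping of dense displacement fields) for the spatial transformation layers and learned registration networks of the abstract; the function names are illustrative, not from the paper.

```python
import numpy as np

def warp(image, flow):
    """Warp a 2D image with a dense displacement field (nearest neighbour).
    flow[y, x] = (dy, dx), as a spatial transformation layer would apply the
    deformation field predicted by a registration network."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def cycle_consistency_loss(img_a, flow_ab, flow_ba):
    """L1 cycle loss: A warped to B's space and back should reproduce A."""
    a_cycled = warp(warp(img_a, flow_ab), flow_ba)
    return float(np.abs(img_a - a_cycled).mean())

img = np.random.default_rng(0).random((8, 8))
zero_flow = np.zeros((8, 8, 2))
loss = cycle_consistency_loss(img, zero_flow, zero_flow)  # identity fields give 0
```

In the paper's setting the two flows come from the registration networks GAB and GBA; here they are supplied by hand only to show what the loss measures.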
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cycle%20consistency" title="cycle consistency">cycle consistency</a>, <a href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration" title=" deformable multimodal image registration"> deformable multimodal image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=GAN" title=" GAN"> GAN</a> </p> <a href="https://publications.waset.org/abstracts/126911/nancy-combining-adversarial-networks-with-cycle-consistency-for-robust-multi-modal-image-registration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126911.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3207</span> Robust Image Registration Based on an Adaptive Normalized Mutual Information Metric</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huda%20Algharib">Huda Algharib</a>, <a href="https://publications.waset.org/abstracts/search?q=Amal%20Algharib"> Amal Algharib</a>, <a href="https://publications.waset.org/abstracts/search?q=Hanan%20Algharib"> Hanan Algharib</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Mohammad%20Alqudah"> Ali Mohammad Alqudah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image registration is an important topic for many imaging systems and computer vision applications. 
Standard image registration techniques, such as mutual information (MI)- and normalized mutual information (NMI)-based methods, have limited performance because they do not consider the spatial information or the relationships between neighbouring pixels or voxels. In addition, the amount of image noise may significantly affect the registration accuracy. Therefore, this paper proposes an efficient method that explicitly considers the relationships between adjacent pixels: the gradient information of the reference and scene images is extracted first, and then the cosine similarity of the extracted gradient information is computed and used to improve the accuracy of the standard normalized mutual information measure. Our experimental results on different data types (i.e., CT, MRI, and thermal images) show that the proposed method outperforms a number of image registration techniques in terms of accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title="image registration">image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=mutual%20information" title=" mutual information"> mutual information</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20gradients" title=" image gradients"> image gradients</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20transformations" title=" image transformations"> image transformations</a> </p> <a href="https://publications.waset.org/abstracts/82815/robust-image-registration-based-on-an-adaptive-normalized-mutual-information-metric" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82815.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">3206</span> A Hybrid Normalized Gradient Correlation Based Thermal Image Registration for Morphoea</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=L.%20I.%20Izhar">L. I. Izhar</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Stathaki"> T. Stathaki</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Howell"> K. Howell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The analysis and interpretation of thermograms has been increasingly employed in the diagnosis and monitoring of diseases, thanks to thermography's non-invasive, harmless nature and low cost. In this paper, a novel system is proposed to improve diagnosis and monitoring of the morphoea skin disorder based on integration with the published lines of Blaschko. In the proposed system, image registration based on both global and local registration methods is found to be indispensable. For the global registration approach, this paper proposes a modified normalized gradient cross-correlation (NGC) method to reduce large geometrical differences between two multimodal images, which are represented by smooth gray edge maps. This method is further improved by incorporating an iterative normalized cross-correlation coefficient (NCC) method. It is found that by replacing the final registration part of the NGC method, where translational differences are solved in the spatial Fourier domain, with the NCC method performed in the spatial domain, the performance and robustness of the NGC method can be greatly improved. It is shown in this paper that the hybrid NGC method not only outperforms the phase correlation (PC) method but also reduces the misregistration due to translation suffered by the modified NGC method alone for thermograms with an ill-defined jawline. 
This also demonstrates that by using the gradients of the gray edge maps and a hybrid technique, the performance of the PC based image registration method can be greatly improved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Blaschko%E2%80%99s%20lines" title="Blaschko’s lines">Blaschko’s lines</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title=" image registration"> image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=morphoea" title=" morphoea"> morphoea</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20imaging" title=" thermal imaging"> thermal imaging</a> </p> <a href="https://publications.waset.org/abstracts/44663/a-hybrid-normalized-gradient-correlation-based-thermal-image-registration-for-morphoea" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44663.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3205</span> Mutual Information Based Image Registration of Satellite Images Using PSO-GA Hybrid Algorithm </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dipti%20Patra">Dipti Patra</a>, <a href="https://publications.waset.org/abstracts/search?q=Guguloth%20Uma"> Guguloth Uma</a>, <a href="https://publications.waset.org/abstracts/search?q=Smita%20Pradhan"> Smita Pradhan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Registration is a fundamental task in image processing. 
It is used to transform different sets of data into one coordinate system, where the data are acquired at different times, from different viewing angles, and/or by different sensors. Registration geometrically aligns two images (the reference and target images). Registration techniques are important for satellite images in order to be able to compare or integrate the data obtained from these different measurements. In this work, mutual information is considered as a similarity metric for the registration of satellite images. The transformation is assumed to be a rigid transformation. An attempt has been made here to optimize the transformation function. The proposed hybrid PSO-GA image registration technique incorporates the notions of Particle Swarm Optimization and the Genetic Algorithm and is used to find the best optimum values of the transformation parameters. The performance comparison obtained from experiments on satellite images shows that the proposed hybrid PSO-GA algorithm outperforms the other algorithms in terms of mutual information and registration accuracy. 
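As a rough illustration of the approach, the sketch below combines a mutual-information similarity metric (computed from the joint grey-level histogram) with a toy particle swarm optimizer that adds a GA-style crossover/mutation step, searching over translations only. All function names, hyperparameters, and the circular-shift transformation model are illustrative assumptions; the abstract does not specify the paper's actual PSO-GA configuration.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information from the joint grey-level histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def shift_image(img, tx, ty):
    """Rigid integer translation (circular shift, for simplicity)."""
    return np.roll(np.roll(img, int(round(ty)), axis=0), int(round(tx)), axis=1)

def pso_ga_register(ref, target, iters=30, n=12, seed=0):
    """Toy PSO with a GA-style crossover/mutation step on the worst particle,
    searching the translation (tx, ty) that maximises mutual information."""
    rng = np.random.default_rng(seed)

    def fitness(p):
        return mutual_information(ref, shift_image(target, p[0], p[1]))

    pos = rng.uniform(-5, 5, (n, 2))
    pos[0] = 0.0   # seed the identity transform: the result never degrades alignment
    vel = np.zeros((n, 2))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -8, 8)
        # GA step: crossover of the worst particle with gbest, then mutation.
        worst = int(pbest_f.argmin())
        pos[worst] = np.where(rng.random(2) < 0.5, gbest, pos[worst])
        pos[worst] += rng.normal(0.0, 0.5, 2)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return np.round(gbest).astype(int)

# Misalign a synthetic scene by a known shift, then search for the registration.
rng = np.random.default_rng(1)
ref_img = rng.random((48, 48))
target_img = shift_image(ref_img, -3, 2)
tx, ty = pso_ga_register(ref_img, target_img)
```

Because the identity transform is included in the initial swarm and personal bests only ever improve, the returned translation is guaranteed to score at least as high in mutual information as leaving the images unregistered.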
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title="image registration">image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20swarm%20optimization" title=" particle swarm optimization"> particle swarm optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20PSO-GA%20algorithm%20and%20mutual%20information" title=" hybrid PSO-GA algorithm and mutual information"> hybrid PSO-GA algorithm and mutual information</a> </p> <a href="https://publications.waset.org/abstracts/9683/mutual-information-based-image-registration-of-satellite-images-using-pso-ga-hybrid-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">408</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3204</span> A Review on Medical Image Registration Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shadrack%20Mambo">Shadrack Mambo</a>, <a href="https://publications.waset.org/abstracts/search?q=Karim%20Djouani"> Karim Djouani</a>, <a href="https://publications.waset.org/abstracts/search?q=Yskandar%20Hamam"> Yskandar Hamam</a>, <a href="https://publications.waset.org/abstracts/search?q=Barend%20van%20Wyk"> Barend van Wyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Patrick%20Siarry"> Patrick Siarry</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper discusses the current trends in 
medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours. A methodological analysis and synthesis of quality literature was carried out, providing a platform for a sound foundation for research in this field, which is crucial for understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. Out of these considerations, the aim of this paper is to enhance the scientific community&rsquo;s understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20registration%20techniques" title="image registration techniques">image registration techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=optimisaztion" title=" optimisation"> optimisation</a>, <a href="https://publications.waset.org/abstracts/search?q=transformation" title=" transformation"> transformation</a> </p> <a href="https://publications.waset.org/abstracts/80442/a-review-on-medical-image-registration-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80442.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3203</span> Retina Registration for Biometrics Based on Characterization of Retinal Feature Points</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nougrara%20Zineb">Nougrara Zineb</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The unique structure of the blood vessels in the retina has been used for biometric identification. The retinal blood vessel pattern is unique to each individual, and it is almost impossible to forge that pattern in another individual. The retina biometrics&rsquo; advantages include high distinctiveness, universality, and stability over time of the blood vessel pattern. 
Once the creases have been extracted from the images, a registration stage is necessary, since the position of the retinal vessel structure can change between acquisitions due to movements of the eye. Image registration consists of the following steps: feature detection, feature matching, transform model estimation, and image resampling and transformation. In this paper, we present a registration algorithm based on the characterization of retinal feature points. For the experiments, retinal images from the DRIVE database have been tested. The proposed methodology achieves good results for registration in general. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fovea" title="fovea">fovea</a>, <a href="https://publications.waset.org/abstracts/search?q=optic%20disc" title=" optic disc"> optic disc</a>, <a href="https://publications.waset.org/abstracts/search?q=registration" title=" registration"> registration</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title=" retinal images"> retinal images</a> </p> <a href="https://publications.waset.org/abstracts/72438/retina-registration-for-biometrics-based-on-characterization-of-retinal-feature-points" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3202</span> Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Philipp%20Ruf">Philipp Ruf</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Massiwa%20Chabbi"> Massiwa Chabbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Christoph%20Reich"> Christoph Reich</a>, <a href="https://publications.waset.org/abstracts/search?q=Djaffar%20Ould-Abdeslam"> Djaffar Ould-Abdeslam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, convolutional neural networks (CNNs) have demonstrated high performance in image analysis, but oftentimes only structured data are available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. By applying a single neural network to analyze multimodal data, e.g., both structured and unstructured information, significant advantages in terms of time complexity and energy efficiency can be achieved. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNNs to multimodal datasets, as they often occur in a medical context. By employing suitable preprocessing techniques, structured data is transformed into image representations, where the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. The fused image is then analyzed using a CNN. 
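A minimal sketch of this preprocessing idea, under the assumption of a deliberately simple encoding (one grey-level stripe per feature; real tabular-to-image encodings such as DeepInsight are far more elaborate): the structured record is rendered as an image and stacked with the visual image as an extra channel, producing the multi-channel input a CNN would consume. The record values are hypothetical.

```python
import numpy as np

def tabular_to_image(features, size=32):
    """Render a normalised feature vector as vertical grey-level stripes,
    one stripe per feature (a toy stand-in for richer encodings)."""
    f = np.asarray(features, dtype=float)
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)   # scale to [0, 1]
    stripe_w = size // len(f)
    img = np.zeros((size, size))
    for i, v in enumerate(f):
        img[:, i * stripe_w:(i + 1) * stripe_w] = v
    return img

def fuse(visual, tabular_img):
    """Stack the visual image and the rendered tabular image as channels,
    giving the multi-channel array a CNN would take as input."""
    return np.stack([visual, tabular_img], axis=-1)

visual = np.random.default_rng(0).random((32, 32))  # e.g. a medical image patch
record = [37.2, 120.0, 80.0, 5.4]                   # hypothetical structured record
fused = fuse(visual, tabular_to_image(record))
```

The fused array has shape (32, 32, 2); a standard 2D CNN would treat the rendered tabular stripes simply as an additional input channel alongside the visual data.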
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=tabular%20data" title=" tabular data"> tabular data</a>, <a href="https://publications.waset.org/abstracts/search?q=mixed%20dataset" title=" mixed dataset"> mixed dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20transformation" title=" data transformation"> data transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion" title=" multimodal fusion"> multimodal fusion</a> </p> <a href="https://publications.waset.org/abstracts/171840/analysing-techniques-for-fusing-multimodal-data-in-predictive-scenarios-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3201</span> Interactive Image Search for Mobile Devices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Komal%20V.%20Aher">Komal V. Aher</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanjay%20B.%20Waykar"> Sanjay B. Waykar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, almost every individual carries a mobile device. Image search is currently a hot topic in both computer vision and information retrieval, with many applications. 
The proposed intelligent image search system fully utilizes the multimodal and multi-touch functionalities of smartphones, which allows searching by image, voice, and text on mobile phones. The system is especially useful for users who already have pictures in their minds but have no proper descriptions or names to address them. The paper presents a system with the ability to form a composite visual query that expresses the user’s intention more clearly, which helps to give more precise and appropriate results to the user. The system also uses a context-based image retrieval scheme to produce significant outcomes, and it is thereby able to achieve gains in terms of search performance, accuracy, and user satisfaction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20device" title=" mobile device"> mobile device</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20visual%20search" title=" mobile visual search"> mobile visual search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20search" title=" multimodal search "> multimodal search </a> </p> <a href="https://publications.waset.org/abstracts/33265/interactive-image-search-for-mobile-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3200</span> Enhanced Face Recognition with Daisy 
Descriptors Using 1BT Based Registration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sevil%20Igit">Sevil Igit</a>, <a href="https://publications.waset.org/abstracts/search?q=Merve%20Meric"> Merve Meric</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarp%20Erturk"> Sarp Erturk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, it is proposed to improve Daisy descriptor based face recognition using a novel One-Bit Transform (1BT) based pre-registration approach. The 1BT based pre-registration procedure is fast and has low computational complexity. It is shown that the face recognition accuracy is improved with the proposed approach. The proposed approach can facilitate highly accurate face recognition using DAISY descriptor with simple matching and thereby facilitate a low-complexity approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisy%20descriptor" title=" Daisy descriptor"> Daisy descriptor</a>, <a href="https://publications.waset.org/abstracts/search?q=One-Bit%20Transform" title=" One-Bit Transform"> One-Bit Transform</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title=" image registration"> image registration</a> </p> <a href="https://publications.waset.org/abstracts/12593/enhanced-face-recognition-with-daisy-descriptors-using-1bt-based-registration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12593.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">3199</span> New Approach for Constructing a Secure Biometric Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Kebbeb">A. Kebbeb</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Mostefai"> M. Mostefai</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Benmerzoug"> F. Benmerzoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Chahir"> Y. Chahir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal biometric identification is the combination of several biometric systems. The challenge of this combination is to reduce the limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and protection of a multimodal biometric database dedicated to an identification system. We use topological watermarking to hide the relation between the face image and the registered descriptors extracted from other modalities of the same person, for more secure user identification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20databases" title="biometric databases">biometric databases</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20biometrics" title=" multimodal biometrics"> multimodal biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20authentication" title=" security authentication"> security authentication</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20watermarking" title=" digital watermarking"> digital watermarking</a> </p> <a href="https://publications.waset.org/abstracts/3126/new-approach-for-constructing-a-secure-biometric-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3126.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">391</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3198</span> Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chhabi%20Nigam">Chhabi Nigam</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ramakrishnan"> S. Ramakrishnan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is inevitable due to platform motion.
Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed in this paper using multiple sets of data collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, as they influence the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch, and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode (range compression, Doppler centroid estimation, range cell migration correction, and azimuth compression) are analyzed to find the performance limits and the dependence of the final image on the imaging geometry. The ability of Doppler centroid estimation to enhance the registration accuracy is also illustrated. The paper also discusses the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect of various terrain types, including land, water, and bright scatterers, is also presented.
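The attitude-induced centroid shift discussed above follows the standard airborne SAR relation f_dc = 2V·sin(θ)/λ. A small sketch with illustrative platform numbers (not values from the paper):

```python
import math

def doppler_centroid(velocity_mps, wavelength_m, squint_deg):
    """Standard airborne SAR relation: f_dc = 2 * V * sin(squint) / wavelength."""
    return 2.0 * velocity_mps * math.sin(math.radians(squint_deg)) / wavelength_m

velocity = 100.0    # hypothetical platform speed, m/s
wavelength = 0.03   # hypothetical ~X-band wavelength, m

print(doppler_centroid(velocity, wavelength, 0.0))            # 0.0 (ideal broadside)
print(round(doppler_centroid(velocity, wavelength, 1.0), 1))  # 116.3 (1 deg of yaw-induced squint)
```

Even a one-degree yaw-induced squint moves the centroid by more than a hundred hertz in this geometry, which is why accurate centroid estimation matters for azimuth compression.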
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ambiguous%20target" title="ambiguous target">ambiguous target</a>, <a href="https://publications.waset.org/abstracts/search?q=Doppler%20Centroid" title=" Doppler Centroid"> Doppler Centroid</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title=" image registration"> image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=Airborne%20SAR" title=" Airborne SAR"> Airborne SAR</a> </p> <a href="https://publications.waset.org/abstracts/62254/airborne-sar-data-analysis-for-impact-of-doppler-centroid-on-image-quality-and-registration-accuracy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">218</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3197</span> Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongyeob%20Han">Dongyeob Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Jungwon%20Huh"> Jungwon Huh</a>, <a href="https://publications.waset.org/abstracts/search?q=Quang%20Huy%20Tran"> Quang Huy Tran</a>, <a href="https://publications.waset.org/abstracts/search?q=Choonghyun%20Kang"> Choonghyun Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. 
In this paper, we present a systematic approach for the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) an image matching technique based on SURF (Speeded Up Robust Features) and RANSAC (Random Sample Consensus), and 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in finding matching points between images and in the bundle adjustment process. Experimental results from UAV images showed that our approach is accurate enough to be applied to facility change detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building" title="building">building</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=temperature" title=" temperature"> temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a> </p> <a href="https://publications.waset.org/abstracts/85064/registration-of-multi-temporal-unmanned-aerial-vehicle-images-for-facility-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3196</span> Study on the Effect of Coupling Fluid Compressible-Deformable Wall
on the Flow of Molten Polymers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Driouich">Mohamed Driouich</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamal%20Gueraoui"> Kamal Gueraoui</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Sammouda"> Mohamed Sammouda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main objective of this work is to establish a numerical code for studying the flow of molten polymers in deformable pipes. Using an iterative numerical method based on finite differences, we determine the profiles of the fluid velocity, the temperature and the apparent viscosity of the fluid. The numerical code presented can also be applied to other industrial applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=numerical%20code" title="numerical code">numerical code</a>, <a href="https://publications.waset.org/abstracts/search?q=molten%20polymers" title=" molten polymers"> molten polymers</a>, <a href="https://publications.waset.org/abstracts/search?q=deformable%20pipes" title=" deformable pipes"> deformable pipes</a>, <a href="https://publications.waset.org/abstracts/search?q=finite%20differences" title=" finite differences"> finite differences</a> </p> <a href="https://publications.waset.org/abstracts/8493/study-on-the-effect-of-coupling-fluid-compressible-deformable-wall-on-the-flow-of-molten-polymers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">574</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3195</span> Faster 
Pedestrian Recognition Using Deformable Part Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Preziosi">Alessandro Preziosi</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonio%20Prioletti"> Antonio Prioletti</a>, <a href="https://publications.waset.org/abstracts/search?q=Luca%20Castangia"> Luca Castangia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%.
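The frequency-domain speedup works because convolution in the spatial domain is multiplication in the Fourier domain; for the large filter banks of a DPM this is much cheaper than direct convolution. A NumPy sketch of the equivalence (illustrative only, not the authors' implementation; array sizes are hypothetical):

```python
import numpy as np

def fft_convolve_valid(image, kernel):
    """'Valid'-mode 2-D convolution computed via the FFT (convolution theorem)."""
    H, W = image.shape
    h, w = kernel.shape
    size = (H + h - 1, W + w - 1)  # full linear-convolution size (zero-padded)
    full = np.fft.irfft2(np.fft.rfft2(image, size) * np.fft.rfft2(kernel, size), size)
    return full[h - 1:H, w - 1:W]  # crop to the fully-overlapping ('valid') region

def direct_convolve_valid(image, kernel):
    """Naive spatial-domain reference: flip the kernel, slide, and sum."""
    k = kernel[::-1, ::-1]
    H, W = image.shape
    h, w = kernel.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * k)
    return out

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((64, 64))   # stand-in for one feature channel
part_filter = rng.standard_normal((15, 15))   # stand-in for one part filter
print(np.allclose(fft_convolve_valid(feature_map, part_filter),
                  direct_convolve_valid(feature_map, part_filter)))  # True
```

The direct loop costs O(HW·hw) multiplications per filter, while the FFT route costs O(HW·log HW) regardless of kernel size, which is where the near order-of-magnitude gain for large filter banks comes from.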
The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision computational time can be easily brought down to our target of 100ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20vehicles" title="autonomous vehicles">autonomous vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=deformable%20part%20model" title=" deformable part model"> deformable part model</a>, <a href="https://publications.waset.org/abstracts/search?q=dpm" title=" dpm"> dpm</a>, <a href="https://publications.waset.org/abstracts/search?q=pedestrian%20detection" title=" pedestrian detection"> pedestrian detection</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20time" title=" real time"> real time</a> </p> <a href="https://publications.waset.org/abstracts/51665/faster-pedestrian-recognition-using-deformable-part-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51665.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">281</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3194</span> Active Deformable Micro-Cutters with Nano-Abrasives </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Pappa">M. Pappa</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Efstathiou"> C. Efstathiou</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Livanos"> G. 
Livanos</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Xidas"> P. Xidas</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Vakondios"> D. Vakondios</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Maravelakis"> E. Maravelakis</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Zervakis"> M. Zervakis</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Antoniadis"> A. Antoniadis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The choice of cutting tools in manufacturing processes is an essential parameter on which the required manufacturing time, the consumed energy, and the tooling cost all depend. If the number of tool changes could be minimized, or even eliminated by using a single convex tool providing multiple profiles, significant savings in time, energy, and tool cost would be achieved. A typical machine contains a variety of tools in order to deal with different curvatures and material removal rates. To minimize the required cutting tool changes, Actively Deformable micro-Cutters (ADmC) will be developed. The design of the Actively Deformable micro-Cutters will be based on the same cutting technique and mounting method as those of typical cutters.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deformable%20cutters" title="deformable cutters">deformable cutters</a>, <a href="https://publications.waset.org/abstracts/search?q=cutting%20tool" title=" cutting tool"> cutting tool</a>, <a href="https://publications.waset.org/abstracts/search?q=milling" title=" milling"> milling</a>, <a href="https://publications.waset.org/abstracts/search?q=turning" title=" turning"> turning</a>, <a href="https://publications.waset.org/abstracts/search?q=manufacturing" title=" manufacturing"> manufacturing</a> </p> <a href="https://publications.waset.org/abstracts/33058/active-deformable-micro-cutters-with-nano-abrasives" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33058.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3193</span> Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=LuoJiaoyang">LuoJiaoyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Hongyang"> Yu Hongyang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied in tasks such as emotion recognition and has shown good results and stability, and it is also the same in identity recognition tasks. 
We believe that data from different modalities can reinforce each other and enhance the model's performance. We extend the bimodal setup to three modalities, aiming to improve the effectiveness of the network by increasing the number of modalities. We also implemented each single-modal identification system separately, tested the data of these different modalities under clean and noisy conditions, and compared the performance with the multimodal model. In designing the multimodal model, we tried a variety of fusion strategies and chose the one with the best performance. The experimental results show that the performance of the multimodal system is better than that of any single modality, especially in dealing with noise, where the multimodal system achieves an average improvement of 5%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=three%20modalities" title=" three modalities"> three modalities</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D" title=" RGB-D"> RGB-D</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20verification" title=" identity verification"> identity verification</a> </p> <a href="https://publications.waset.org/abstracts/163265/identity-verification-based-on-multimodal-machine-learning-on-red-green-blue-rgb-red-green-blue-depth-rgb-d-voice-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">3192</span> Optimizing Pick and Place Operations in a Simulated Work Cell for Deformable 3D Objects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Troels%20Bo%20J%C3%B8rgensen">Troels Bo Jørgensen</a>, <a href="https://publications.waset.org/abstracts/search?q=Preben%20Hagh%20Strunge%20Holm"> Preben Hagh Strunge Holm</a>, <a href="https://publications.waset.org/abstracts/search?q=Henrik%20Gordon%20Petersen"> Henrik Gordon Petersen</a>, <a href="https://publications.waset.org/abstracts/search?q=Norbert%20Kruger"> Norbert Kruger</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a simulation framework for using machine learning techniques to determine robust robotic motions for handling deformable objects. The main focus is on applications in the meat sector, which mainly handles three-dimensional objects. In order to optimize the robotic handling, the robot motions have been parameterized in terms of grasp points, robot trajectory, and robot speed. The motions are evaluated in a dynamic simulation environment for robotic control of deformable objects. The evaluation identifies certain parameter setups that produce robust motions in the simulated environment and that, based on a visual analysis, indicate satisfactory solutions for a real-world system.
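The parameter optimization described here can be sketched as a search over (grasp point, speed, lift) settings scored by a simulator. Everything below is a hypothetical stand-in: the toy `simulate_motion` model, the parameter grid, and the averaging over rollouts are assumptions for illustration, not the paper's setup:

```python
import itertools
import random

def simulate_motion(grasp_offset, speed, lift_height, rng):
    """Hypothetical stand-in for the dynamic simulator: returns a robustness
    score for one parameterized pick-and-place motion (higher is better).
    The toy model prefers centered grasps and moderate speeds."""
    deterministic = -abs(grasp_offset) - 0.5 * (speed - 0.4) ** 2 + 0.1 * lift_height
    return deterministic + rng.gauss(0.0, 0.001)  # small simulated stochasticity

def grid_search(rng):
    """Exhaustively evaluate the parameterized motions, averaging a few
    stochastic rollouts per setting so the choice is robust to noise."""
    best_score, best_params = float('-inf'), None
    for params in itertools.product(
            [-0.02, 0.0, 0.02],   # grasp-point offset along the object (m)
            [0.2, 0.4, 0.8],      # tool speed (m/s)
            [0.05, 0.10]):        # lift height (m)
        score = sum(simulate_motion(*params, rng) for _ in range(5)) / 5
        if score > best_score:
            best_score, best_params = score, params
    return best_params

print(grid_search(random.Random(0)))  # (0.0, 0.4, 0.1) under this toy model
```

In practice the inner call would be a full dynamic simulation of the deformable object, and the grid could be replaced by any black-box optimizer over the same parameterization.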
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deformable%20objects" title="deformable objects">deformable objects</a>, <a href="https://publications.waset.org/abstracts/search?q=robotic%20manipulation" title=" robotic manipulation"> robotic manipulation</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20world%20system" title=" real world system "> real world system </a> </p> <a href="https://publications.waset.org/abstracts/27353/optimizing-pick-and-place-operations-in-a-simulated-work-cell-for-deformable-3d-objects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27353.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">281</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3191</span> Integrating Critical Stylistics and Visual Grammar: A Multimodal Stylistic Approach to the Analysis of Non-Literary Texts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shatha%20Khuzaee">Shatha Khuzaee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study develops multimodal stylistic approach to analyse a number of BBC online news articles reporting some key events from the so called ‘Arab Uprisings’. Critical stylistics (CS) and visual grammar (VG) provide insightful arguments to the ways ideology is projected through different verbal and visual modes, yet they are mode specific because they examine how each mode projects its meaning separately and do not attempt to clarify what happens intersemiotically when the two modes co-occur. 
Therefore, the task undertaken in this research is to propose a multimodal stylistic approach that addresses the issue of ideology construction when the two modes co-occur. Informed by functional grammar and social semiotics, the analysis integrates three linguistic models developed in critical stylistics, namely transitivity choices, prioritizing, and hypothesizing, along with their visual equivalents adopted from visual grammar, to investigate the way ideology is constructed in multimodal texts when text and image participate and interrelate in the process of meaning making on the textual level of analysis. The analysis provides comprehensive theoretical and analytical elaborations on the points of integration between the CS linguistic models and their VG equivalents, which operate on the textual level of analysis to better account for ideology construction in news as non-literary multimodal texts. It is argued that this analysis marks a first step towards the integration of the well-established linguistic models of critical stylistics with those of visual analysis for analysing multimodal texts on the textual level. The two approaches are compatible because both analyse text and image depending on whatever textual evidence is available. This helps the analysis maintain the rigor and replicability needed for a stylistic analysis like the one undertaken in this study.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodality" title="multimodality">multimodality</a>, <a href="https://publications.waset.org/abstracts/search?q=stylistics" title=" stylistics"> stylistics</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20grammar" title=" visual grammar"> visual grammar</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20semiotics" title=" social semiotics"> social semiotics</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20grammar" title=" functional grammar"> functional grammar</a> </p> <a href="https://publications.waset.org/abstracts/77486/integrating-critical-stylistics-and-visual-grammar-a-multimodal-stylistic-approach-to-the-analysis-of-non-literary-texts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3190</span> A Comparative Study on Multimodal Metaphors in Public Service Advertising of China and Germany</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xing%20Lyu">Xing Lyu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal metaphor promotes the further development and refinement of multimodal discourse study. Cultural aspects matter a lot not only in creating but also in comprehending multimodal metaphor. 
By analyzing the target domain and the source domain in 10 Chinese and German public service advertisements about environmental protection, this paper compares the sources used when the targets are alike in each multimodal metaphor, in order to identify similarities and differences across cultures. The findings are as follows: first, the multimodal metaphors center around three major topics: the earth crisis, the consequences of environmental damage, and the appeal for environmental protection; second, the multimodal metaphors are mainly grounded in three universal conceptual metaphors: 'high level is up', 'earth is mother', and 'all lives are precious'. However, five Chinese culture-specific multimodal metaphors are not found in the German ads: 'east is high level', 'a purposeful life is a journey', 'a nation is a person', 'good is clean', and 'water is mother'. Since metaphors are excellent instruments for studying ideology, this study can be helpful for intercultural/cross-cultural communication.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20metaphor" title="multimodal metaphor">multimodal metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=cultural%20aspects" title=" cultural aspects"> cultural aspects</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20service%20advertising" title=" public service advertising"> public service advertising</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-cultural%20communication" title=" cross-cultural communication"> cross-cultural communication</a> </p> <a href="https://publications.waset.org/abstracts/112889/a-comparative-study-on-multimodal-metaphors-in-public-service-advertising-of-china-and-germany" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">174</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3189</span> The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a>, <a href="https://publications.waset.org/abstracts/search?q=Dmitry%20V.%20Egorov"> Dmitry V. Egorov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an original method for the parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial classification results obtained from an assembly of mono-modal classifiers.
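A decision-level fusion scheme of this general kind can be illustrated as a search over decision weights for the combination of mono-modal classifier scores that minimizes the total error rate. The synthetic data and the weighted-sum decision rule below are illustrative assumptions, not the paper's parametric method:

```python
import itertools
import numpy as np

def total_error_rate(weights, scores, labels):
    """Error rate of a weighted-sum decision-level fusion of mono-modal scores."""
    fused = scores @ np.asarray(weights)          # (n_samples,) fused score
    return float(np.mean((fused > 0.5).astype(int) != labels))

# Synthetic validation set: three mono-modal classifiers emitting
# probability-like scores with different reliabilities (hypothetical data).
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
noise_levels = [0.15, 0.3, 0.5]                   # modality 0 strong, 2 weak
scores = np.stack([np.clip(labels + rng.normal(0.0, s, 200), 0.0, 1.0)
                   for s in noise_levels], axis=1)

# Search a simplex grid of decision weights for the minimum total error rate.
grid = [w for w in itertools.product(np.linspace(0, 1, 11), repeat=3)
        if abs(sum(w) - 1.0) < 1e-9]
best = min(grid, key=lambda w: total_error_rate(w, scores, labels))
print(best, total_error_rate(best, scores, labels))
```

By construction the selected weights do at least as well as any single modality on the validation data, which is the intuition behind optimizing the fusion rule rather than fixing it a priori.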
As a result, a multimodal fusion classifier which has the minimum value of the total error rate has been obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification%20accuracy" title="classification accuracy">classification accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion%20solution" title=" fusion solution"> fusion solution</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20error%20rate" title=" total error rate"> total error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion%20classifier" title=" multimodal fusion classifier"> multimodal fusion classifier</a> </p> <a href="https://publications.waset.org/abstracts/26088/the-optimization-of-decision-rules-in-multimodal-decision-level-fusion-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26088.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">466</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3188</span> Barnard Feature Point Detector for Low-Contractperiapical Radiography Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chih-Yi%20Ho">Chih-Yi Ho</a>, <a href="https://publications.waset.org/abstracts/search?q=Tzu-Fang%20Chang"> Tzu-Fang Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih-Chia%20Huang"> Chih-Chia Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chia-Yen%20Lee"> Chia-Yen Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In dental clinics, the dentists use the periapical radiography image to assess the effectiveness of 
endodontic treatment of teeth with chronic apical periodontitis. Periapical radiography images are taken at different times to assess alveolar bone variation before and after root canal treatment and, furthermore, to judge whether the treatment was successful. Current clinical assessment of apical tissue recovery relies only on the dentist's personal experience. It is difficult to obtain standardized, objective interpretations because they depend on the dentist's or radiologist's personal background and knowledge. If periapical radiography images taken at different times could be registered well, the endodontic treatment could be evaluated objectively. In image registration, it is necessary to assign representative control points to the transformation model for good registration performance. However, detecting representative control points (feature points) on periapical radiography images is generally very difficult. Regardless of which traditional detection methods are applied, sufficient feature points may not be detected due to the low-contrast characteristics of the x-ray image. The Barnard detector is a feature point detection algorithm based on grayscale gradients, which can obtain sufficient feature points even when the gray-scale contrast is not obvious. However, the Barnard detector detects too many feature points, and they tend to be clustered. This study uses the local extrema of clustered feature points and a suppression radius to overcome this problem, and compares different feature point detection methods. In preliminary results, the feature points detected by the proposed method could serve as representative control points.
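The selection step (keep local gradient extrema, enforce a suppression radius so the kept points are not clustered) can be sketched as follows; the gradient-magnitude score here is a generic stand-in, not Barnard's exact scoring:

```python
import numpy as np

def gradient_score(image):
    """Grayscale-gradient score map (central differences), in the spirit of
    gradient-based detectors; not Barnard's exact formulation."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def detect_control_points(image, max_points=50, radius=8):
    """Keep the strongest responses while greedily enforcing a suppression
    radius, so the selected control points are not clustered."""
    score = gradient_score(image)
    order = np.argsort(score, axis=None)[::-1]   # strongest responses first
    ys, xs = np.unravel_index(order, score.shape)
    kept = []
    for y, x in zip(ys, xs):
        if score[y, x] <= 0:                     # flat, featureless region: stop
            break
        if all((y - ky) ** 2 + (x - kx) ** 2 >= radius ** 2 for ky, kx in kept):
            kept.append((int(y), int(x)))
            if len(kept) == max_points:
                break
    return kept

# Synthetic low-contrast image: a faint square whose edges carry the gradients.
img = np.zeros((64, 64))
img[20:40, 20:40] = 10.0
points = detect_control_points(img, max_points=10, radius=8)
print(len(points))  # a handful of well-separated edge points
```

The suppression radius is what turns a dense, clustered response map into a spread-out set of candidate control points suitable for fitting a transformation model.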
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20detection" title="feature detection">feature detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Barnard%20detector" title=" Barnard detector"> Barnard detector</a>, <a href="https://publications.waset.org/abstracts/search?q=registration" title=" registration"> registration</a>, <a href="https://publications.waset.org/abstracts/search?q=periapical%20radiography%20image" title=" periapical radiography image"> periapical radiography image</a>, <a href="https://publications.waset.org/abstracts/search?q=endodontic%20treatment" title=" endodontic treatment"> endodontic treatment</a> </p> <a href="https://publications.waset.org/abstracts/67658/barnard-feature-point-detector-for-low-contractperiapical-radiography-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67658.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3187</span> Multimodal Data Fusion Techniques in Audiovisual Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hadeer%20M.%20Sayed">Hadeer M. Sayed</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesham%20E.%20El%20Deeb"> Hesham E. El Deeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Shereen%20A.%20Taie"> Shereen A. Taie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the big data era, we are facing a diversity of datasets from different sources in different domains that describe a single life event. 
These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the concept of integrating information from multiple modalities into a joint representation with the goal of predicting an outcome through a classification or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. The paper provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each. Furthermore, the audiovisual speech recognition task is presented as a case study of multimodal data fusion approaches, and open issues arising from the limitations of current studies are discussed. This paper can serve as a useful guide for researchers interested in multimodal data fusion, and in audiovisual speech recognition in particular. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20data" title="multimodal data">multimodal data</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title=" data fusion"> data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition" title=" audio-visual speech recognition"> audio-visual speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/157362/multimodal-data-fusion-techniques-in-audiovisual-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing 
mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3186</span> Efficient Layout-Aware Pretraining for Multimodal Form Understanding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Armineh%20Nourbakhsh">Armineh Nourbakhsh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sameena%20Shah"> Sameena Shah</a>, <a href="https://publications.waset.org/abstracts/search?q=Carolyn%20Rose"> Carolyn Rose</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Layout-aware language models have been used to create multimodal representations for documents that are in image form, achieving relatively high accuracy in document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performing processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multi-lingual domains. Despite using only 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction. 
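A minimal sketch of the fusion scheme described above, where all names, dimensions, and the random projection are illustrative assumptions rather than the paper's architecture: the layout embedding is computed from bounding-box coordinates alone, and is only concatenated with a text embedding in the downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)

def layout_embedding(bbox, page_w, page_h, W):
    """Embed a token's bounding box (x0, y0, x1, y1) from spatial
    cues alone: normalize to the page, then project linearly."""
    x0, y0, x1, y1 = bbox
    feats = np.array([x0 / page_w, y0 / page_h, x1 / page_w, y1 / page_h,
                      (x1 - x0) / page_w, (y1 - y0) / page_h])  # position + size
    return W @ feats

# Stand-in learned projection and text-encoder output.
W = rng.normal(size=(16, 6))
text_emb = rng.normal(size=32)

# Downstream fusion: concatenate layout and text embeddings only here,
# so no visual backbone is needed at pretraining time.
fused = np.concatenate([layout_embedding((50, 100, 180, 120), 612, 792, W), text_emb])
print(fused.shape)  # (48,)
```

Because the two embeddings meet only at this concatenation, the layout side can be pre-trained independently, which is what allows the parameter reduction the abstract describes.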
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=layout%20understanding" title="layout understanding">layout understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=form%20understanding" title=" form understanding"> form understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20document%20understanding" title=" multimodal document understanding"> multimodal document understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=bias-augmented%20attention" title=" bias-augmented attention"> bias-augmented attention</a> </p> <a href="https://publications.waset.org/abstracts/147955/efficient-layout-aware-pretraining-for-multimodal-form-understanding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147955.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3185</span> Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Latha%20Subbiah">Latha Subbiah</a>, <a href="https://publications.waset.org/abstracts/search?q=Dhanalakshmi%20Samiappan"> Dhanalakshmi Samiappan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose denoising Common Carotid Artery (CCA) B mode ultrasound images by a decomposition approach to curvelet thresholding and automatic segmentation of the intima media thickness and adventitia boundary. By decomposition, the local geometry of the image and its gradient directions are well preserved. 
The components are combined into a single vector-valued function, thus removing noise patches. A double threshold is applied to inherently remove speckle noise in the image. The denoised image is segmented by an active contour without specifying seed points. Combined with level set theory, these provide sub-regions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images. A constrained curve or surface is evolved over the image so that it is pulled toward the required image features. Region-based and boundary-based information are integrated to obtain the contour. The method suppresses the multiplicative speckle noise, as confirmed by objective and subjective quality measurements, and thus leads to better segmentation results. The proposed denoising method gives better performance metrics compared with other state-of-the-art denoising algorithms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curvelet" title="curvelet">curvelet</a>, <a href="https://publications.waset.org/abstracts/search?q=decomposition" title=" decomposition"> decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=levelset" title=" levelset"> levelset</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound" title=" ultrasound"> ultrasound</a> </p> <a href="https://publications.waset.org/abstracts/56351/automated-ultrasound-carotid-artery-image-segmentation-using-curvelet-threshold-decomposition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56351.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">340</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3184</span> OPEN-EmoRec-II-A Multimodal 
Corpus of Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefanie%20Rukavina">Stefanie Rukavina</a>, <a href="https://publications.waset.org/abstracts/search?q=Sascha%20Gruss"> Sascha Gruss</a>, <a href="https://publications.waset.org/abstracts/search?q=Steffen%20Walter"> Steffen Walter</a>, <a href="https://publications.waset.org/abstracts/search?q=Holger%20Hoffmann"> Holger Hoffmann</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20C.%20Traue"> Harald C. Traue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> OPEN-EmoRec-II is an open multimodal corpus with experimentally induced emotions. In the first half of the experiment, emotions were induced with standardized picture material, and in the second half during a human-computer interaction (HCI) realized with a wizard-of-oz design. The induced emotions are based on the dimensional theory of emotions (valence, arousal and dominance). Using these emotional sequences, recorded as multimodal data (mimic reactions, speech, audio and physiological reactions) in a naturalistic-like HCI environment, one can improve classification methods at the multimodal level. This database is the result of an HCI experiment for which 30 subjects in total agreed to publication of their data, including the video material, for research purposes. The now available open corpus contains sensory signals of: video, audio, physiology (SCL, respiration, BVP, EMG Corrugator supercilii, EMG Zygomaticus Major) and mimic annotations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=open%20multimodal%20emotion%20corpus" title="open multimodal emotion corpus">open multimodal emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=annotated%20labels" title=" annotated labels"> annotated labels</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20interaction" title=" intelligent interaction"> intelligent interaction</a> </p> <a href="https://publications.waset.org/abstracts/29365/open-emorec-ii-a-multimodal-corpus-of-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29365.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">416</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3183</span> Teaching and Learning with Picturebooks: Developing Multimodal Literacy with a Community of Primary School Teachers in China</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fuling%20Deng">Fuling Deng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Today’s children are frequently exposed to multimodal texts that adopt diverse modes to communicate myriad meanings within different cultural contexts. To respond to the new textual landscape, scholars have considered new literacy theories which propose picturebooks as important educational resources. Picturebooks are multimodal, with their meaning conveyed through the synchronisation of multiple modes (linguistic, visual, spatial, and gestural), which makes them an entry point to multimodal literacy. Picturebooks have been popular reading materials in primary educational settings in China. 
However, often viewed as “easy” texts directed at the youngest readers, picturebooks remain on the margins of Chinese upper primary classrooms, where they are predominantly used for linguistic tasks, with little value placed on their multimodal affordances. Practices with picturebooks in the upper grades of Chinese primary schools also encounter many challenges associated with curating texts for use, designing curricula, and assessing learning. To respond to these issues, a qualitative study was conducted with a community of Chinese primary teachers using multiple methods such as interviews, focus groups, and document analysis. The findings showed the impact of the teachers’ increased awareness of picturebooks' multimodal affordances on their pedagogical decisions in using picturebooks as educational resources in upper primary classrooms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=picturebook%20education" title="picturebook education">picturebook education</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20literacy" title=" multimodal literacy"> multimodal literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=teachers%27%20response%20to%20contemporary%20picturebooks" title=" teachers&#039; response to contemporary picturebooks"> teachers&#039; response to contemporary picturebooks</a>, <a href="https://publications.waset.org/abstracts/search?q=community%20of%20practice" title=" community of practice"> community of practice</a> </p> <a href="https://publications.waset.org/abstracts/156547/teaching-and-learning-with-picturebooks-developing-multimodal-literacy-with-a-community-of-primary-school-teachers-in-china" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156547.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge 
badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3182</span> Modeling and Tracking of Deformable Structures in Medical Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Said%20Ettaieb">Said Ettaieb</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamel%20Hamrouni"> Kamel Hamrouni</a>, <a href="https://publications.waset.org/abstracts/search?q=Su%20Ruan"> Su Ruan </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a new method, based on both the Active Shape Model and a priori knowledge of spatio-temporal shape variation, for tracking deformable structures in medical imaging. The main idea is to exploit the a priori shape knowledge embedded in the ASM and to introduce new knowledge about shape variation over time. The aim is to define a new, more stable method that allows reliable detection of structures whose shape changes considerably over time. The method can also be used for three-dimensional segmentation by replacing the temporal component with the third spatial axis (z). The proposed method is applied to the functional and morphological study of the heart pump. The functional aspect was studied through temporal sequences of scintigraphic images, and the morphology was studied through MRI volumes. The obtained results are encouraging and show the performance of the proposed method. 
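The point-distribution model underlying the Active Shape Model can be sketched as follows. This is a generic numpy illustration, not the authors' code; the same machinery extends to spatio-temporal variation by including the temporal samples in the training vectors.

```python
import numpy as np

def fit_shape_model(shapes):
    """Build a point-distribution model from aligned training shapes,
    each flattened to a (2*n_points,) vector: x ~ mean + P b."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # Principal modes of variation via SVD of the centered data.
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt  # rows of Vt are the modes P

def generate_shape(mean, modes, b):
    """Instantiate a shape from mode weights b (constrained in
    practice, e.g. |b_k| <= 3*sqrt(lambda_k), to stay plausible)."""
    return mean + b @ modes[:len(b)]

# Toy training set: a square whose width varies over "time".
shapes = [[0, 0, w, 0, w, 1, 0, 1] for w in (1.0, 1.2, 1.4)]
mean, modes = fit_shape_model(shapes)
varied = generate_shape(mean, modes, np.array([0.1]))  # perturb first mode
print(np.round(mean, 2))
```

Tracking then amounts to fitting the mode weights `b` to each frame while the model constrains the contour to plausible shapes, which is what lends the method its stability.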
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=active%20shape%20model" title="active shape model">active shape model</a>, <a href="https://publications.waset.org/abstracts/search?q=a%20priori%20knowledge" title=" a priori knowledge"> a priori knowledge</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiotemporal%20shape%20variation" title=" spatiotemporal shape variation"> spatiotemporal shape variation</a>, <a href="https://publications.waset.org/abstracts/search?q=deformable%20structures" title=" deformable structures"> deformable structures</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a> </p> <a href="https://publications.waset.org/abstracts/29394/modeling-and-tracking-of-deformable-structures-in-medical-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3181</span> Evaluation of Deformable Boundary Condition Using Finite Element Method and Impact Test for Steel Tubes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abed%20Ahmed">Abed Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehrdad%20Asadi"> Mehrdad Asadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jennifer%20Martay"> Jennifer Martay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Stainless steel pipelines are crucial components for transportation and storage in the oil and gas industry. 
However, the rise of random attacks on and vandalism of these pipes, targeted for their valuable contents, has created a need for greater security and protection against incoming surface impacts. Such surface impacts can cause large global deformations of the pipe and place it under strain, leading to eventual failure of the pipeline. Therefore, understanding how these surface impact loads affect the pipes is vital to improving their security and protection. In this study, experimental tests and finite element analysis (FEA) have been carried out on EN3B stainless steel specimens to study the impact behaviour. Low-velocity impact tests at 9 m/s with a 16 kg dome impactor were used to simulate high-momentum impact producing localised failure. FEA models with clamped and deformable boundaries were built to study the effect of the boundary conditions on the pipe's impact behaviour and impact resistance, using both experimental and FEA approaches. Comparison of the experiments and FE simulations shows good correlation for the deformable boundaries, validating the robustness of the FE model for implementation in pipe models with complex anisotropic structure. 
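As a quick sanity check on the quoted test conditions, the impactor's kinetic energy and momentum follow directly from the stated 16 kg mass and 9 m/s velocity:

```python
# Impact parameters quoted in the abstract.
mass = 16.0       # kg, dome impactor
velocity = 9.0    # m/s, low-velocity impact

kinetic_energy = 0.5 * mass * velocity**2   # J
momentum = mass * velocity                  # kg*m/s
print(kinetic_energy, momentum)             # 648.0 J, 144.0 kg*m/s
```

So each test delivers roughly 648 J of impact energy, which is the quantity the clamped and deformable boundary models must absorb.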
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20impact" title="dynamic impact">dynamic impact</a>, <a href="https://publications.waset.org/abstracts/search?q=deformable%20boundary%20conditions" title=" deformable boundary conditions"> deformable boundary conditions</a>, <a href="https://publications.waset.org/abstracts/search?q=finite%20element%20modelling" title=" finite element modelling"> finite element modelling</a>, <a href="https://publications.waset.org/abstracts/search?q=LS-DYNA" title=" LS-DYNA"> LS-DYNA</a>, <a href="https://publications.waset.org/abstracts/search?q=stainless%20steel%20pipe" title=" stainless steel pipe"> stainless steel pipe</a> </p> <a href="https://publications.waset.org/abstracts/116559/evaluation-of-deformable-boundary-condition-using-finite-element-method-and-impact-test-for-steel-tubes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/116559.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3180</span> Legal Warranty in Real Estate Registry in Albania</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elona%20Saliaj">Elona Saliaj</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The registration of real estate in Albania after the 1990s has been a long and costly process for the country. The transition from a centralized registration system to a free-market private system has been accompanied by legal uncertainties that have led to economic instability. 
The reforms undertaken in the area of property rights have been numerous and continuous over the years. Despite these reforms, however, the real estate registration system has failed to meet the standards established by the European Union. The completion of initial registration of real estate, the legal treatment of previous owners, and the legalization of illegal constructions remain among the main problems holding back the development of the country's economy. The performance of the real estate registration system, together with the issues that have appeared before the civil section of the Albanian Court of First Instance, constitutes the core of this analysis. This paper presents a detailed analysis of the registration system chosen for real estate in our country. It also identifies the institution that administers these properties, the management techniques, and the law that determines their functioning. Defining a strategy for creating a modern and functional registration system remains a challenge for the country. Identifying practical problems and providing solutions to them is also a focus, in order to improve and modernize this important system in a state governed by law that aims to become a member of the European Union. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=real%20estates%20registration%20system" title="real estates registration system">real estates registration system</a>, <a href="https://publications.waset.org/abstracts/search?q=comparative%20aspects" title=" comparative aspects"> comparative aspects</a>, <a href="https://publications.waset.org/abstracts/search?q=cadastral%20area" title=" cadastral area"> cadastral area</a>, <a href="https://publications.waset.org/abstracts/search?q=property%20certificate" title=" property certificate"> property certificate</a>, <a href="https://publications.waset.org/abstracts/search?q=legal%20reform" title=" legal reform"> legal reform</a> </p> <a href="https://publications.waset.org/abstracts/23609/legal-warranty-in-real-estate-registry-in-albania" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23609.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">491</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3179</span> Quick Response (QR) Code for Vehicle Registration and Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Malarvizhi">S. Malarvizhi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sadiq%20Basha"> S. Sadiq Basha</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Santhosh%20Kumar"> M. Santhosh Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Saravanan"> K. Saravanan</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Sasikumar"> R. Sasikumar</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Satheesh"> R. 
Satheesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This is a web-based application which provides authorization for vehicle identification and registration. It also provides mutual authentication between the police and users in order to prevent misuse. The QR code generation in this application overcomes the difficulty of manually registering vehicle documents. The generated QR code is placed on the vehicle's number plate and is scanned using a QR reader installed on smart devices. Police officials can check the vehicle details and file cases on accidents, theft and traffic rule violations using the QR code. The application also handles vehicle insurance payments and renewals, and a renewal alert about the payment deadline is sent to the vehicle owner. Non-permitted vehicles can be blocked at the next check-post by sending alert messages. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=QR%20code" title="QR code">QR code</a>, <a href="https://publications.waset.org/abstracts/search?q=QR%20reader" title=" QR reader"> QR reader</a>, <a href="https://publications.waset.org/abstracts/search?q=registration" title=" registration"> registration</a>, <a href="https://publications.waset.org/abstracts/search?q=authentication" title=" authentication"> authentication</a>, <a href="https://publications.waset.org/abstracts/search?q=idenfication" title=" identification"> identification</a> </p> <a href="https://publications.waset.org/abstracts/1704/quick-responseqr-code-for-vehicle-registration-and-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1704.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">494</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span 
class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=106">106</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=107">107</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=deformable%20multimodal%20image%20registration&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { 
/*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
