
Search results for: dynamic PET images

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="dynamic PET images"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 6211</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: dynamic PET images</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6211</span> 3D Guided Image Filtering to Improve Quality of Short-Time Binned Dynamic PET Images Using MRI Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tabassum%20Husain">Tabassum Husain</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Peng%20Li"> Shen Peng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhaolin%20Chen"> Zhaolin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper evaluates the usability of 3D Guided Image Filtering to enhance the quality of short-time binned dynamic PET images by using MRI images. Guided image filtering is an edge-preserving filter proposed to enhance 2D images. The 3D filter is applied on 1 and 5-minute binned images. The results are compared with 15-minute binned images and the Gaussian filtering. The guided image filter enhances the quality of dynamic PET images while also preserving important information of the voxels. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images" title="dynamic PET images">dynamic PET images</a>, <a href="https://publications.waset.org/abstracts/search?q=guided%20image%20filter" title=" guided image filter"> guided image filter</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20preservation%20filtering" title=" information preservation filtering"> information preservation filtering</a> </p> <a href="https://publications.waset.org/abstracts/152864/3d-guided-image-filtering-to-improve-quality-of-short-time-binned-dynamic-pet-images-using-mri-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6210</span> Rhetoric and Renarrative Structure of Digital Images in Trans-Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Geng">Yang Geng</a>, <a href="https://publications.waset.org/abstracts/search?q=Anqi%20Zhao"> Anqi Zhao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The misreading theory of Harold Bloom provides a new diachronic perspective as an approach to the consistency between rhetoric of digital technology, dynamic movement of digital images and uncertain meaning of text. Reinterpreting the diachroneity of 'intertextuality' in the context of misreading theory extended the range of the 'intermediality' of transmedia to the intense tension between digital images and symbolic images throughout history of images. With the analogy between six categories of revisionary ratios and six steps of digital transformation, digital rhetoric might be illustrated as a linear process reflecting dynamic, intensive relations between digital moving images and original static images. Finally, it was concluded that two-way framework of the rhetoric of transformation of digital images and reversed served as a renarrative structure to revive static images by reconnecting them with digital moving images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rhetoric" title="rhetoric">rhetoric</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20art" title=" digital art"> digital art</a>, <a href="https://publications.waset.org/abstracts/search?q=intermediality" title=" intermediality"> intermediality</a>, <a href="https://publications.waset.org/abstracts/search?q=misreading%20theory" title=" misreading theory"> misreading theory</a> </p> <a href="https://publications.waset.org/abstracts/100230/rhetoric-and-renarrative-structure-of-digital-images-in-trans-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">255</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6209</span> Medical Images Enhancement Using New Dynamic Band Pass Filter</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdellatif%20Baba">Abdellatif Baba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to facilitate medical images analysis by improving their quality and readability, we present in this paper a new dynamic band pass filter as a general and suitable operator for different types of medical images. Our objective is to enrich the details of any treated medical image to make it sufficiently clear enough to give an understood and simplified meaning even for unspecialized people in the medical domain. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20image%20enhancement" title="medical image enhancement">medical image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20band%20pass%20filter" title=" dynamic band pass filter"> dynamic band pass filter</a>, <a href="https://publications.waset.org/abstracts/search?q=analysis%20improvement" title=" analysis improvement"> analysis improvement</a> </p> <a href="https://publications.waset.org/abstracts/14660/medical-images-enhancement-using-new-dynamic-band-pass-filter" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14660.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6208</span> Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jie%20Huo">Jie Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Wu"> Jonathan Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment and assessment. 
[6208] Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images
Authors: Jie Huo, Jonathan Wu
Abstract: Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment, and assessment. As a method of incorporating prior knowledge and the anatomical structure similarity between a target image and atlases, multi-atlas segmentation has been successfully applied to a variety of medical images, including brain, cardiac, and abdominal images. The basic idea of multi-atlas segmentation is to transfer the labels in atlases to the coordinates of the target image by matching each target patch to atlas patches in its neighborhood. However, this technique is limited by the pairwise registration between target image and atlases. In this paper, a novel multi-atlas segmentation approach is proposed by introducing a dynamic energy model. First, the target is mapped to each atlas image by minimizing the dynamic energy function; then the segmentation of the target image is generated by weighted fusion based on the energy. The method is tested on the MICCAI 2012 Multi-Atlas Labeling Challenge dataset, which includes 20 target images and 15 atlas images. The paper also analyzes the influence of different parameters of the dynamic energy model on segmentation accuracy and measures the Dice coefficient obtained with different feature terms in the energy model. The highest mean Dice coefficient obtained with the proposed method is 0.861, which is competitive with recently published methods.
Keywords: brain MRI segmentation, dynamic energy model, multi-atlas segmentation, energy minimization
Procedia: https://publications.waset.org/abstracts/68716/multi-atlas-segmentation-based-on-dynamic-energy-model-application-to-brain-mr-images | PDF: https://publications.waset.org/abstracts/68716.pdf | Downloads: 336
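The fusion step maps naturally onto energy-weighted voting. The sketch below assumes each atlas has already been registered to the target and carries a scalar energy E_i from the (unspecified) dynamic energy model; the exp(-beta*E) weighting is an illustrative choice, and the Dice helper mirrors the paper's evaluation metric.

```python
import numpy as np

def fuse_labels(atlas_labels, energies, beta=1.0):
    """Weighted majority voting: each registered atlas label map votes with
    weight exp(-beta * E_i). The energy values are placeholders for the
    paper's dynamic energy model."""
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    w /= w.sum()
    labels = np.unique(np.concatenate([a.ravel() for a in atlas_labels]))
    votes = np.zeros((len(labels),) + atlas_labels[0].shape)
    for wi, lab_map in zip(w, atlas_labels):
        for k, lab in enumerate(labels):
            votes[k] += wi * (lab_map == lab)
    return labels[np.argmax(votes, axis=0)]

def dice(a, b):
    """Dice coefficient between two binary masks, as used for evaluation."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```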
[6207] Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of an 180-Degree Interpolation Method
Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito
Abstract: In general, dynamic SPECT data acquisition needs a few minutes for one rotation, so the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method, which is already used in the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set corresponding to the time halfway between the two rotations. In both a phantom and a patient study, the data points from the interpolated images were in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA
Procedia: https://publications.waset.org/abstracts/12762/increasing-the-apparent-time-resolution-of-tc-99m-diethylenetriamine-pentaacetic-acid-galactosyl-human-serum-albumin-dynamic-spect-by-use-of-an-180-degree-interpolation-method | PDF: https://publications.waset.org/abstracts/12762.pdf | Downloads: 493
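The interleaving itself is a simple indexing operation once projections are grouped by rotation. A minimal sketch, under the assumption that projections are stored rotation by rotation in angular order (array shapes and names are hypothetical):

```python
import numpy as np

def interpolate_180(proj, n_rot, n_ang):
    """proj: (n_rot * n_ang, ...) projections from successive 360-degree
    rotations. Returns (n_rot - 1) interleaved 360-degree data sets: the
    second half (180 deg) of rotation k combined with the first half of
    rotation k+1, i.e. a frame for the time halfway between the rotations."""
    frames = proj.reshape(n_rot, n_ang, *proj.shape[1:])
    half = n_ang // 2
    return np.concatenate(
        [frames[1:, :half],      # first 180 deg taken from the next rotation
         frames[:-1, half:]],    # second 180 deg from the current rotation
        axis=1)

# e.g. mid_frames = interpolate_180(proj, n_rot=5, n_ang=60)
```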
[6206] Detecting Manipulated Media Using Deep Capsule Network
Authors: Joseph Uzuazomaro Oju
Abstract: The ease with which manipulated media can be created, and the increasing difficulty of identifying fake media, make it a great threat. Most of the applications used to create these high-quality fake videos and images are built with deep learning, so the use of deep learning in building a detection mechanism cannot be overemphasized. Detecting fake media before it reaches the public spares people from doubting whether content is genuine and helps preserve the credibility of videos and images. The methodology introduced in this paper approaches manipulated-media detection with a combination of VGG-19 and a deep capsule network. Videos are converted into frames, which are resized and cropped to the face region. These preprocessed images are fed to the VGG-19 network to extract latent features, which are then input to a deep capsule network enhanced with a 3D-convolution dynamic routing agreement. The 3D-convolution dynamic routing agreement algorithm helps reduce the linkages between capsule networks, thereby limiting the poor learning behavior of stacked capsule network layers. The resulting output from the deep capsule network indicates whether a medium is genuine or fake.
Keywords: deep capsule network, dynamic routing, fake media detection, manipulated media
Procedia: https://publications.waset.org/abstracts/123371/detecting-manipulated-media-using-deep-capsule-network | PDF: https://publications.waset.org/abstracts/123371.pdf | Downloads: 132
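For readers unfamiliar with capsule networks, the core of dynamic routing by agreement (Sabour et al.) is a short iterative loop. The sketch below is the standard vector-capsule version in NumPy, not the paper's 3D-convolutional variant.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule non-linearity: shrinks vector length into [0, 1) while keeping direction."""
    n2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iter=3):
    """Routing by agreement over prediction vectors
    u_hat: (n_in, n_out, dim), predictions from lower to upper capsules."""
    n_in, n_out, dim = u_hat.shape
    b = np.zeros((n_in, n_out))                                # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum per output capsule
        v = squash(s)                                          # (n_out, dim) output capsules
        b += (u_hat * v[None]).sum(axis=-1)                    # agreement update
    return v
```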
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20encryption%20standard%20%28AES%29" title="advanced encryption standard (AES)">advanced encryption standard (AES)</a>, <a href="https://publications.waset.org/abstracts/search?q=on%20dynamic%20chaotic%20S-BOX" title=" on dynamic chaotic S-BOX"> on dynamic chaotic S-BOX</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title=" image encryption"> image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20analysis" title=" security analysis"> security analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=ShiftRow%20transformation" title=" ShiftRow transformation"> ShiftRow transformation</a> </p> <a href="https://publications.waset.org/abstracts/16549/on-dynamic-chaotic-s-box-based-advanced-encryption-standard-algorithm-for-image-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">437</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6204</span> Analysis of Vocal Fold Vibrations from High-Speed Digital Images Based on Dynamic Time Warping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20I.%20A.%20Rahman">A. I. A. Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Sh-Hussain%20Salleh"> Sh-Hussain Salleh</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Ahmad"> K. Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Anuar"> K. Anuar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Analysis of vocal fold vibration is essential for understanding the mechanism of voice production and for improving clinical assessment of voice disorders. This paper presents a Dynamic Time Warping (DTW) based approach to analyze and objectively classify vocal fold vibration patterns. The proposed technique was designed and implemented on a Glottal Area Waveform (GAW) extracted from high-speed laryngeal images by delineating the glottal edges for each image frame. Feature extraction from the GAW was performed using Linear Predictive Coding (LPC). Several types of voice reference templates from simulations of clear, breathy, fry, pressed and hyperfunctional voice productions were used. The patterns of the reference templates were first verified using the analytical signal generated through Hilbert transformation of the GAW. Samples from normal speakers’ voice recordings were then used to evaluate and test the effectiveness of this approach. The classification of the voice patterns using the technique of LPC and DTW gave the accuracy of 81%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20time%20warping" title="dynamic time warping">dynamic time warping</a>, <a href="https://publications.waset.org/abstracts/search?q=glottal%20area%20waveform" title=" glottal area waveform"> glottal area waveform</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20predictive%20coding" title=" linear predictive coding"> linear predictive coding</a>, <a href="https://publications.waset.org/abstracts/search?q=high-speed%20laryngeal%20images" title=" high-speed laryngeal images"> high-speed laryngeal images</a>, <a href="https://publications.waset.org/abstracts/search?q=Hilbert%20transform" title=" Hilbert transform"> Hilbert transform</a> </p> <a href="https://publications.waset.org/abstracts/9923/analysis-of-vocal-fold-vibrations-from-high-speed-digital-images-based-on-dynamic-time-warping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9923.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">239</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6203</span> Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20images" title="big images">big images</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20images" title=" binary images"> binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20similarity" title=" image similarity"> image similarity</a> </p> <a href="https://publications.waset.org/abstracts/89963/quick-similarity-measurement-of-binary-images-via-probabilistic-pixel-mapping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6202</span> Dynamic Web-Based 2D Medical Image Visualization and Processing Software</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelhalim.%20N.%20Mohammed">Abdelhalim. N. Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed.%20Y.%20Esmail"> Mohammed. Y. Esmail</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the course of recent decades, medical imaging has been dominated by the use of costly film media for review and archival of medical investigation, however due to developments in networks technologies and common acceptance of a standard digital imaging and communication in medicine (DICOM) another approach in light of World Wide Web was produced. Web technologies successfully used in telemedicine applications, the combination of web technologies together with DICOM used to design a web-based and open source DICOM viewer. The Web server allowance to inquiry and recovery of images and the images viewed/manipulated inside a Web browser without need for any preinstalling software. The dynamic site page for medical images visualization and processing created by using JavaScript and HTML5 advancements. The XAMPP &lsquo;apache server&rsquo; is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer connected to multiples devices through local area network (LAN) to distribute the images inside healthcare facilities. The system offers a few focal points over ordinary picture archiving and communication systems (PACS): easy to introduce, maintain and independently platforms that allow images to display and manipulated efficiently, the system also user-friendly and easy to integrate with an existing system that have already been making use of web technologies. The wavelet-based image compression technique on which 2-D discrete wavelet transform used to decompose the image then wavelet coefficients are transmitted by entropy encoding after threshold to decrease transmission time, stockpiling cost and capacity. The performance of compression was estimated by using images quality metrics such as mean square error &lsquo;MSE&rsquo;, peak signal to noise ratio &lsquo;PSNR&rsquo; and compression ratio &lsquo;CR&rsquo; that achieved (83.86%) when &lsquo;coif3&rsquo; wavelet filter is used. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DICOM" title="DICOM">DICOM</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=PACS" title=" PACS"> PACS</a>, <a href="https://publications.waset.org/abstracts/search?q=HIS" title=" HIS"> HIS</a>, <a href="https://publications.waset.org/abstracts/search?q=LAN" title=" LAN"> LAN</a> </p> <a href="https://publications.waset.org/abstracts/102066/dynamic-web-based-2d-medical-image-visualization-and-processing-software" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6201</span> Automatic Classification Using Dynamic Fuzzy C Means Algorithm and Mathematical Morphology: Application in 3D MRI Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdelkhalek%20Bakkari">Abdelkhalek Bakkari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is a critical step in image processing and pattern recognition. In this paper, we proposed a new robust automatic image classification based on a dynamic fuzzy c-means algorithm and mathematical morphology. The proposed segmentation algorithm (DFCM_MM) has been applied to MR perfusion images. The obtained results show the validity and robustness of the proposed approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic" title=" dynamic"> dynamic</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20c-means" title=" fuzzy c-means"> fuzzy c-means</a>, <a href="https://publications.waset.org/abstracts/search?q=MR%20image" title=" MR image"> MR image</a> </p> <a href="https://publications.waset.org/abstracts/13711/automatic-classification-using-dynamic-fuzzy-c-means-algorithm-and-mathematical-morphology-application-in-3d-mri-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13711.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6200</span> Study on the Morphology and Dynamic Mechanical and Thermal Properties of HIPS/Graphene Nanocomposites</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amirhosein%20Rostampour">Amirhosein Rostampour</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehdi%20Sharif"> Mehdi Sharif</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article, a series of high impact polystyrene/graphene (HIPS/Gr) nanocomposites were prepared by solution mixing method and their morphology and dynamic mechanical properties were investigated as a function of graphene content. SEM images and X-Ray diffraction data confirm that the graphene platelets are well dispersed in HIPS matrix for the nanocomposites with Gr contents up to 5.0 wt%. Mechanical properties analysis demonstrates that yielding strength and initial modulus of HIPS/Gr nanocomposites are highly improved with the increment of Gr content compared to pure HIPS. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nanocomposite" title="nanocomposite">nanocomposite</a>, <a href="https://publications.waset.org/abstracts/search?q=graphene" title=" graphene"> graphene</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20mechanical%20properties" title=" dynamic mechanical properties"> dynamic mechanical properties</a>, <a href="https://publications.waset.org/abstracts/search?q=morphology" title=" morphology"> morphology</a> </p> <a href="https://publications.waset.org/abstracts/29222/study-on-the-morphology-and-dynamic-mechanical-and-thermal-properties-of-hipsgraphene-nanocomposites" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29222.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">536</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6199</span> Reduction of Speckle Noise in Echocardiographic Images: A Survey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fathi%20Kallel">Fathi Kallel</a>, <a href="https://publications.waset.org/abstracts/search?q=Saida%20Khachira"> Saida Khachira</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Ben%20Slima"> Mohamed Ben Slima</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Ben%20Hamida"> Ahmed Ben Hamida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speckle noise is a main characteristic of cardiac ultrasound images, it corresponding to grainy appearance that degrades the image quality. For this reason, the ultrasound images are difficult to use automatically in clinical use, then treatments are required for this type of images. Then a filtering procedure of these images is necessary to eliminate the speckle noise and to improve the quality of ultrasound images which will be then segmented to extract the necessary forms that exist. In this paper, we present the importance of the pre-treatment step for segmentation. This work is applied to cardiac ultrasound images. In a first step, a comparative study of speckle filtering method will be presented and then we use a segmentation algorithm to locate and extract cardiac structures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20image%20processing" title="medical image processing">medical image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound%20images" title=" ultrasound images"> ultrasound images</a>, <a href="https://publications.waset.org/abstracts/search?q=Speckle%20noise" title=" Speckle noise"> Speckle noise</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=speckle%20filtering" title=" speckle filtering"> speckle filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=snakes" title=" snakes"> snakes</a> </p> <a href="https://publications.waset.org/abstracts/19064/reduction-of-speckle-noise-in-echocardiographic-images-a-survey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6198</span> Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emhimed%20Saffor">Emhimed Saffor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of edge detection in digital images is considered. Three methods of edge detection based on mathematical morphology algorithm were applied on two sets (Brain and Chest) CT images. 3x3 filter for first method, 5x5 filter for second method and 7x7 filter for third method under MATLAB programming environment. The results of the above-mentioned methods are subjectively evaluated. The results show these methods are more efficient and satiable for medical images, and they can be used for different other applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection "> edge detection </a> </p> <a href="https://publications.waset.org/abstracts/44926/subjective-evaluation-of-mathematical-morphology-edge-detection-on-computed-tomography-ct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6197</span> Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nidhal%20K.%20Azawi">Nidhal K. Azawi</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20M.%20Gauch"> John M. Gauch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural network to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92-98%. We also show how the removal of noninformative images together with image alignment can aid in the creation of image panoramas and other visualizations of colonoscopy images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colonoscopy%20classification" title="colonoscopy classification">colonoscopy classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20alignment" title=" image alignment"> image alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/92461/automatic-method-for-classification-of-informative-and-noninformative-images-in-colonoscopy-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6196</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noises and reducing the number of colors contained in a JPEG image. Main purpose of this project is to convert color images to monochrome images for the color-blind. We treat the crispy color images like the Tokyo subway map. Each color in the image has an important information. But for the color blinds, similar colors cannot be distinguished. If we can convert those colors to different gray values, they can distinguish them. Therefore we try to convert color images to monochrome images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6195</span> Constructing Masculinity through Images: Content Analysis of Lifestyle Magazines in Croatia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marija%20Lon%C4%8Dar">Marija Lončar</a>, <a href="https://publications.waset.org/abstracts/search?q=Zorana%20%C5%A0uljug%20Vu%C4%8Dica"> Zorana Šuljug Vučica</a>, <a href="https://publications.waset.org/abstracts/search?q=Magdalena%20Nigoevi%C4%87"> Magdalena Nigoević</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diverse social, cultural and economic trends and changes in contemporary societies influence the ways masculinity is represented in a variety of media. Masculinity is constructed within media images as a dynamic process that changes slowly over time and is shaped by various social factors. In many societies, dominant masculinity is still associated with authority, heterosexuality, marriage, professional and financial success, ethnic dominance and physical strength. But contemporary media depict men in ways that suggest a change in the approach to media images. The number of media images of men, which promote men&rsquo;s identity through their body, have increased. With the male body more scrutinized and commodified, it is necessary to highlight how the body is represented and which visual elements are crucial since the body has an important role in the construction of masculinities. The study includes content analysis of male body images in the advertisements of different men&rsquo;s and women&rsquo;s lifestyle magazines available in Croatia. The main aim was to explore how masculinities are currently being portrayed through body regarding age, physical appearance, fashion, touch and gaze. The findings are also discussed in relation to female images since women are central in many of the processes constructing masculinities and according to the recent conceptualization of masculinity. Although the construction of male images varies through body features, almost all of them convey the message that men&rsquo;s identity could be managed through manipulation and by enhancing the appearance. Furthermore, they suggest that men should engage in &ldquo;bodywork&rdquo; through advertised products, activities and/or practices, in order to achieve their preferred social image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=body%20images" title="body images">body images</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20analysis" title=" content analysis"> content analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=lifestyle%20magazines" title=" lifestyle magazines"> lifestyle magazines</a>, <a href="https://publications.waset.org/abstracts/search?q=masculinity" title=" masculinity"> masculinity</a> </p> <a href="https://publications.waset.org/abstracts/56871/constructing-masculinity-through-images-content-analysis-of-lifestyle-magazines-in-croatia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56871.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">245</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6194</span> Effective Texture Features for Segmented Mammogram Images Based on Multi-Region of Interest Segmentation Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramayanam%20Suresh">Ramayanam Suresh</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Nagaraja%20Rao"> A. Nagaraja Rao</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Eswara%20Reddy"> B. Eswara Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture features of mammogram images are useful for finding masses or cancer cases in mammography, which have been used by radiologists. Textures are greatly succeeded for segmented images rather than normal images. It is necessary to perform segmentation for exclusive specification of cancer and non-cancer regions separately. Region of interest (ROI) is most commonly used technique for mammogram segmentation. Limitation of this method is that it is unable to explore segmentation for large collection of mammogram images. Therefore, this paper is proposed multi-ROI segmentation for addressing the above limitation. It supports greatly in finding the best texture features of mammogram images. Experimental study demonstrates the effectiveness of proposed work using benchmarked images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20features" title="texture features">texture features</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-ROI%20segmentation" title=" multi-ROI segmentation"> multi-ROI segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmarked%20images" title=" benchmarked images "> benchmarked images </a> </p> <a href="https://publications.waset.org/abstracts/88666/effective-texture-features-for-segmented-mammogram-images-based-on-multi-region-of-interest-segmentation-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88666.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6193</span> Optimization Query Image Using Search Relevance Re-Ranking Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20G.%20Asmitha%20Chandini">T. G. Asmitha Chandini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Web-based image search re-ranking, as an successful method to get better the results. In a query keyword, the first stair is store the images is first retrieve based on the text-based information. The user to select a query keywordimage, by using this query keyword other images are re-ranked based on their visual properties with images.Now a day to day, people projected to match images in a semantic space which is used attributes or reference classes closely related to the basis of semantic image. though, understanding a worldwide visual semantic space to demonstrate highly different images from the web is difficult and inefficient. The re-ranking images, which automatically offline part learns dissimilar semantic spaces for different query keywords. The features of images are projected into their related semantic spaces to get particular images. At the online stage, images are re-ranked by compare their semantic signatures obtained the semantic précised by the query keyword image. The query-specific semantic signatures extensively improve both the proper and efficiency of image re-ranking. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Query" title="Query">Query</a>, <a href="https://publications.waset.org/abstracts/search?q=keyword" title=" keyword"> keyword</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=re-ranking" title=" re-ranking"> re-ranking</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic" title=" semantic"> semantic</a>, <a href="https://publications.waset.org/abstracts/search?q=signature" title=" signature"> signature</a> </p> <a href="https://publications.waset.org/abstracts/28398/optimization-query-image-using-search-relevance-re-ranking-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28398.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">549</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6192</span> Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Hari%20Prasath">T. Hari Prasath</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Ithaya%20Rani"> P. Ithaya Rani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of visual technology, recognizing emotions from the face images is a challenging task. Several related methods have not utilized the dynamic facial features effectively for high performance. This paper proposes a method for emotions recognition using dynamic facial features with high performance. Initially, local features are captured by Gabor filter with different scale and orientations in each frame for finding the position and scale of face part from different backgrounds. The Gabor features are sent to the ensemble classifier for detecting Gabor facial features. The region of dynamic features is captured from the Gabor facial features in the consecutive frames which represent the dynamic variations of facial appearances. In each region of dynamic features is normalized using Z-score normalization method which is further encoded into binary pattern features with the help of threshold values. The binary features are passed to Multi-class AdaBoost classifier algorithm with the well-trained database contain happiness, sadness, surprise, fear, anger, disgust, and neutral expressions to classify the discriminative dynamic features for emotions recognition. The developed method is deployed on the Ryerson Multimedia Research Lab and Cohn-Kanade databases and they show significant performance improvement owing to their dynamic features when compared with the existing methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detecting%20face" title="detecting face">detecting face</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-class%20AdaBoost%20classifier" title=" multi-class AdaBoost classifier"> multi-class AdaBoost classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=Z-score%20normalization" title=" Z-score normalization"> Z-score normalization</a> </p> <a href="https://publications.waset.org/abstracts/85005/dynamic-gabor-filter-facial-features-based-recognition-of-emotion-in-video-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85005.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6191</span> Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edgardo%20V.%20Gubatanga%20Jr.">Edgardo V. Gubatanga Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Joshua%20Salvacion"> Mark Joshua Salvacion</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, the presence of atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may cause absence of aerial photographs. These leave areas having terrain information but lacking aerial photographs. Intensity images can be derived from LiDAR data but they are only grayscale images. A deep learning model is developed to create a complex function in a form of a deep neural network relating the pixel values of LiDAR-derived intensity images and true-color images. This complex function can then be used to predict the true-color images of a certain area using intensity images from LiDAR data. The predicted true-color images do not necessarily need to be accurate compared to the real world. They are only intended to look realistic so that they can be used as base maps. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20LiDAR" title="aerial LiDAR">aerial LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=colorization" title=" colorization"> colorization</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20images" title=" intensity images"> intensity images</a> </p> <a href="https://publications.waset.org/abstracts/94116/application-of-deep-learning-in-colorization-of-lidar-derived-intensity-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94116.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6190</span> Monocular Depth Estimation Benchmarking with Thermal Dataset</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Akyar">Ali Akyar</a>, <a href="https://publications.waset.org/abstracts/search?q=Osman%20Serdar%20Gedik"> Osman Serdar Gedik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera. It predicts how far each pixel in the 2D image is from the capturing point. There are some important Monocular Depth Estimation (MDE) studies that are based on Vision Transformers (ViT). We benchmark three major studies. The first work aims to build a simple and powerful foundation model that deals with any images under any condition. The second work proposes a method by mixing multiple datasets during training and a robust training objective. The third work combines generalization performance and state-of-the-art results on specific datasets. Although there are studies with thermal images too, we wanted to benchmark these three non-thermal, state-of-the-art studies with a hybrid image dataset which is taken by Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing together the thermal and visual spectrums. Using this technology, our dataset images are not blur and poorly detailed as the normal thermal images. On the other hand, they are not taken at the perfect light conditions as RGB images. We compared three methods under test with our thermal dataset which was not done before. Additionally, we propose an image enhancement deep learning model for thermal data. This model helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of these three methods under test increased significantly for thermal image depth prediction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20depth%20estimation" title="monocular depth estimation">monocular depth estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20dataset" title=" thermal dataset"> thermal dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmarking" title=" benchmarking"> benchmarking</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformers" title=" vision transformers"> vision transformers</a> </p> <a href="https://publications.waset.org/abstracts/186398/monocular-depth-estimation-benchmarking-with-thermal-dataset" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186398.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6189</span> Dynamic Contrast-Enhanced Breast MRI Examinations: Clinical Use and Technical Challenges</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janet%20Wing-Chong%20Wai">Janet Wing-Chong Wai</a>, <a href="https://publications.waset.org/abstracts/search?q=Alex%20Chiu-Wing%20Lee"> Alex Chiu-Wing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Hailey%20Hoi-Ching%20Tsang"> Hailey Hoi-Ching Tsang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jeffrey%20Chiu"> Jeffrey Chiu</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwok-Wing%20Tang"> Kwok-Wing Tang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Mammography has limited sensitivity and specificity though it is the primary imaging technique for detection of early breast cancer. Ultrasound imaging and contrast-enhanced MRI are useful adjunct tools to mammography. The advantage of breast MRI is high sensitivity for invasive breast cancer. Therefore, indications for and use of breast magnetic resonance imaging have increased over the past decade. Objectives: 1. Cases demonstration on different indications for breast MR imaging. 2. To review of the common artifacts and pitfalls in breast MR imaging. Materials and Methods: This is a retrospective study including all patients underwent dynamic contrast-enhanced breast MRI examination in our centre, performed from Jan 2011 to Dec 2017. The clinical data and radiological images were retrieved from the EPR (electronic patient record), RIS (Radiology Information System) and PACS (Picture Archiving and Communication System). Results and Discussion: Cases including (1) Screening of the contralateral breast in patient with a new breast malignancy (2) Breast augmentation with free injection of unknown foreign materials (3) Finding of axillary adenopathy with an unknown site of primary malignancy (4) Neo-adjuvant chemotherapy: before, during, and after chemotherapy to evaluate treatment response and extent of residual disease prior to operation. Relevant images will be included and illustrated in the presentation. As with other types of MR imaging, there are different artifacts and pitfalls that can potentially limit interpretation of the images. 
Because of the coils and software specific to breast MR imaging, some technical considerations are unique to MR imaging of the breast. Case demonstration images will be available in the presentation. Conclusion: Breast MR imaging is a highly sensitive and reasonably specific method for the detection of breast cancer. Adherence to appropriate clinical indications and technical optimization are crucial for achieving satisfactory images for interpretation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MRI" title="MRI">MRI</a>, <a href="https://publications.waset.org/abstracts/search?q=breast" title=" breast"> breast</a>, <a href="https://publications.waset.org/abstracts/search?q=clinical" title=" clinical"> clinical</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer" title=" cancer"> cancer</a> </p> <a href="https://publications.waset.org/abstracts/86879/dynamic-contrast-enhanced-breast-mri-examinations-clinical-use-and-technical-challenges" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">241</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6188</span> Comparison of Vessel Detection in Standard vs Ultra-WideField Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maher%20un%20Nisa">Maher un Nisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahsan%20Khawaja"> Ahsan Khawaja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal imaging with Ultra-WideField (UWF) view technology has opened up new avenues in the field of retinal pathology detection. Recent developments in retinal imaging, such as the Optos California imaging device, help in acquiring high-resolution images of the retina, assisting ophthalmologists in diagnosing and analyzing eye-related pathologies more accurately. This paper investigates the acquired retinal details by comparing vessel detection in standard 45° color fundus images with state-of-the-art 200° UWF retinal images. 
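<p class="card-text">A classical vessel-detection sketch for fundus images (illustrative, not the paper's pipeline; file name and kernel size are assumptions): vessels are dark, elongated structures, so a black-hat transform on the contrast-enhanced green channel highlights them:</p> <pre><code>
import cv2

img = cv2.imread("fundus.png")                     # assumed retinal image
green = img[:, :, 1]                               # vessels contrast best in green
green = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
blackhat = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)  # dark vessels
_, vessels = cv2.threshold(blackhat, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
</code></pre>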
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20fundus" title="color fundus">color fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title=" retinal images"> retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=ultra-widefield" title=" ultra-widefield"> ultra-widefield</a>, <a href="https://publications.waset.org/abstracts/search?q=vessel%20detection" title=" vessel detection"> vessel detection</a> </p> <a href="https://publications.waset.org/abstracts/33520/comparison-of-vessel-detection-in-standard-vs-ultra-widefield-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">448</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6187</span> Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Razan%20Manofely"> Razan Manofely</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajab%20M.%20Ben%20Yousef"> Rajab M. Ben Yousef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> X-Ray images are very popular as a first tool for diagnosis. Automating the process of analysis of such images is important in order to help physician procedures. In this practice, teeth segmentation from the radiographic images and feature extraction are essential steps. The main objective of this study was to study correction preprocessing of x-rays images using local adaptive filters in order to evaluate contrast enhancement pattern in different x-rays images such as grey color and to evaluate the usage of new nonlinear approach for contrast enhancement of soft tissues in x-rays images. The data analyzed by using MatLab program to enhance the contrast within the soft tissues, the gray levels in both enhanced and unenhanced images and noise variance. The main techniques of enhancement used in this study were contrast enhancement filtering and deblurring images using the blind deconvolution algorithm. In this paper, prominent constraints are firstly preservation of image's overall look; secondly, preservation of the diagnostic content in the image and thirdly detection of small low contrast details in diagnostic content of the image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=enhancement" title="enhancement">enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=x-rays" title=" x-rays"> x-rays</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20intensity%20values" title=" pixel intensity values"> pixel intensity values</a>, <a href="https://publications.waset.org/abstracts/search?q=MatLab" title=" MatLab"> MatLab</a> </p> <a href="https://publications.waset.org/abstracts/31031/enhancement-of-x-rays-images-intensity-using-pixel-values-adjustments-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31031.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6186</span> Filtering and Reconstruction System for Grey-Level Forensic Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahd%20Aljarf">Ahd Aljarf</a>, <a href="https://publications.waset.org/abstracts/search?q=Saad%20Amin"> Saad Amin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Images are important source of information used as evidence during any investigation process. Their clarity and accuracy is essential and of the utmost importance for any investigation. Images are vulnerable to losing blocks and having noise added to them either after alteration or when the image was taken initially, therefore, having a high performance image processing system and it is implementation is very important in a forensic point of view. This paper focuses on improving the quality of the forensic images. For different reasons packets that store data can be affected, harmed or even lost because of noise. For example, sending the image through a wireless channel can cause loss of bits. These types of errors might give difficulties generally for the visual display quality of the forensic images. Two of the images problems: noise and losing blocks are covered. However, information which gets transmitted through any way of communication may suffer alteration from its original state or even lose important data due to the channel noise. Therefore, a developed system is introduced to improve the quality and clarity of the forensic images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20filtering" title="image filtering">image filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=forensic%20images" title=" forensic images"> forensic images</a> </p> <a href="https://publications.waset.org/abstracts/15654/filtering-and-reconstruction-system-for-grey-level-forensic-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6185</span> Mage Fusion Based Eye Tumor Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Ashit">Ahmed Ashit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image fusion is a significant and efficient image processing method used for detecting different types of tumors. This method has been used as an effective combination technique for obtaining high quality images that combine anatomy and physiology of an organ. It is the main key in the huge biomedical machines for diagnosing cancer such as PET-CT machine. This thesis aims to develop an image analysis system for the detection of the eye tumor. Different image processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is subtracted, to be then added to the original, results in a brighter area of interest or tumor area. The images are adjusted in order to increase the intensity of their pixels which lead to clearer and brighter images. once the images are enhanced, the edges of the images are detected using canny operators results in a segmented image comprises only of the pupil and the tumor for the abnormal images, and the pupil only for the normal images that have no tumor. The images of normal and abnormal images are collected from two sources: “Miles Research” and “Eye Cancer”. The computerized experimental results show that the developed image fusion based eye tumor detection system is capable of detecting the eye tumor and segment it to be superimposed on the original image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20tumor" title=" eye tumor"> eye tumor</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20operators" title=" canny operators"> canny operators</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposed" title=" superimposed"> superimposed</a> </p> <a href="https://publications.waset.org/abstracts/30750/mage-fusion-based-eye-tumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30750.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6184</span> Using Deep Learning in Lyme Disease Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teja%20Koduru">Teja Koduru</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease is therefore crucial to early diagnosis and treatment. In this study, we aim to utilize deep learning frameworks including Tensorflow and Keras to create deep convolutional neural networks (DCNN) to detect images of acute Lyme Disease from images of erythema migrans. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images of EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, resulting in a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non EM. Finally, this model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lyme" title="Lyme">Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=untreated%20Lyme" title=" untreated Lyme"> untreated Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=erythema%20migrans%20rash" title=" erythema migrans rash"> erythema migrans rash</a>, <a href="https://publications.waset.org/abstracts/search?q=EM%20rash" title=" EM rash"> EM rash</a> </p> <a href="https://publications.waset.org/abstracts/135383/using-deep-learning-in-lyme-disease-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135383.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">239</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6183</span> Clustering-Based Detection of Alzheimer&#039;s Disease Using Brain MR Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Matoug">Sofia Matoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Amr%20Abdel-Dayem"> Amr Abdel-Dayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a comprehensive survey of recent research studies to segment and classify brain MR (magnetic resonance) images in order to detect significant changes to brain ventricles. The paper also presents a general framework for detecting regions that atrophy, which can help neurologists in detecting and staging Alzheimer. Furthermore, a prototype was implemented to segment brain MR images in order to extract the region of interest (ROI) and then, a classifier was employed to differentiate between normal and abnormal brain tissues. Experimental results show that the proposed scheme can provide a reliable second opinion that neurologists can benefit from. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alzheimer" title="Alzheimer">Alzheimer</a>, <a href="https://publications.waset.org/abstracts/search?q=brain%20images" title=" brain images"> brain images</a>, <a href="https://publications.waset.org/abstracts/search?q=classification%20techniques" title=" classification techniques"> classification techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=Magnetic%20Resonance%20Images%20MRI" title=" Magnetic Resonance Images MRI"> Magnetic Resonance Images MRI</a> </p> <a href="https://publications.waset.org/abstracts/49930/clustering-based-detection-of-alzheimers-disease-using-brain-mr-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6182</span> Medical Imaging Fusion: A Teaching-Learning Simulation Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cristina%20Maria%20Ribeiro%20Martins%20Pereira%20Caridade">Cristina Maria Ribeiro Martins Pereira Caridade</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Rita%20Ferreira%20Morais"> Ana Rita Ferreira Morais</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with healthcare facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool developed in MATLAB using a graphical user interface for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application uses different algorithms and medical fusion techniques in real time, allowing you to view original images and fusion images, compare processed and original images, adjust parameters, and save images. The tool proposed in an innovative teaching and learning environment consists of a dynamic and motivating teaching simulation for biomedical engineering students to acquire knowledge about medical image fusion techniques and necessary skills for the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fusion images and the possibility to test, evaluate and progress the student’s knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching-learning%20simulation%20tool" title=" teaching-learning simulation tool"> teaching-learning simulation tool</a>, <a href="https://publications.waset.org/abstracts/search?q=biomedical%20engineering%20education" title=" biomedical engineering education"> biomedical engineering education</a> </p> <a href="https://publications.waset.org/abstracts/164987/medical-imaging-fusion-a-teaching-learning-simulation-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164987.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=207">207</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=208">208</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
