Search results for: material images
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="material images"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 8885</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: material images</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8885</span> Grain Boundary Detection Based on Superpixel Merges</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaokai%20Liu">Gaokai Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The distribution of material grain sizes reflects the strength, fracture, corrosion and other properties, and the grain size can be acquired via the grain boundary. In recent years, the automatic grain boundary detection is widely required instead of complex experimental operations. In this paper, an effective solution is applied to acquire the grain boundary of material images. First, the initial superpixel segmentation result is obtained via a superpixel approach. Then, a region merging method is employed to merge adjacent regions based on certain similarity criterions, the experimental results show that the merging strategy improves the superpixel segmentation result on material datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=grain%20boundary%20detection" title="grain boundary detection">grain boundary detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=material%20images" title=" material images"> material images</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20merging" title=" region merging"> region merging</a> </p> <a href="https://publications.waset.org/abstracts/133188/grain-boundary-detection-based-on-superpixel-merges" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133188.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8884</span> Rhetoric and Renarrative Structure of Digital Images in Trans-Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Geng">Yang Geng</a>, <a href="https://publications.waset.org/abstracts/search?q=Anqi%20Zhao"> Anqi Zhao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The misreading theory of Harold Bloom provides a new diachronic perspective as an approach to the consistency between rhetoric of digital technology, dynamic movement of digital images and uncertain meaning of text. Reinterpreting the diachroneity of 'intertextuality' in the context of misreading theory extended the range of the 'intermediality' of transmedia to the intense tension between digital images and symbolic images throughout history of images. With the analogy between six categories of revisionary ratios and six steps of digital transformation, digital rhetoric might be illustrated as a linear process reflecting dynamic, intensive relations between digital moving images and original static images. Finally, it was concluded that two-way framework of the rhetoric of transformation of digital images and reversed served as a renarrative structure to revive static images by reconnecting them with digital moving images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rhetoric" title="rhetoric">rhetoric</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20art" title=" digital art"> digital art</a>, <a href="https://publications.waset.org/abstracts/search?q=intermediality" title=" intermediality"> intermediality</a>, <a href="https://publications.waset.org/abstracts/search?q=misreading%20theory" title=" misreading theory"> misreading theory</a> </p> <a href="https://publications.waset.org/abstracts/100230/rhetoric-and-renarrative-structure-of-digital-images-in-trans-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8883</span> Synthesis and Performance Study of Co3O4 as a Bi-Functional Next Generation Material</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shrikaant%20Kulkarni">Shrikaant Kulkarni</a>, <a href="https://publications.waset.org/abstracts/search?q=Akshata%20Naik%20Nimbalkar"> Akshata Naik Nimbalkar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this worki a method protocol has been developed for the synthesis of innovative Co3O4 material by using a method of chemical synthesis followed by calcination. The effect of calcination temperature on the morphology, structure and catalytic performance on material in question is investigated by using characterization tools like scanning electron microscopy (SEM), X-ray diffraction (XRD) spectroscopy and electrochemical techniques. The SEM images reveal that the morphology of the Co3O4 material undergoes a change from the rod to a beadlike shape on calcination at temperature of 700 °C. The XRD image shows that although the morphology of synthesized Co3O4 material exhibits a cubic phase but it differs in crystallinity depending upon morphology. Similarly spherical beadlike Co3O4 material has exhibited better activity than its rodlike counterpart which is reflected from electrochemical findings. Further, its performance in terms of bifunctional nature and hlods a lot much of promise as a excellent electrode material in the next generation batteries and fuel cells. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bifunctional" title="bifunctional">bifunctional</a>, <a href="https://publications.waset.org/abstracts/search?q=next%20generation%20material" title=" next generation material"> next generation material</a>, <a href="https://publications.waset.org/abstracts/search?q=Co3O4" title=" Co3O4"> Co3O4</a>, <a href="https://publications.waset.org/abstracts/search?q=XRD" title=" XRD"> XRD</a> </p> <a href="https://publications.waset.org/abstracts/16208/synthesis-and-performance-study-of-co3o4-as-a-bi-functional-next-generation-material" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16208.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">379</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8882</span> Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20images" title="big images">big images</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20images" title=" binary images"> binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20similarity" title=" image similarity"> image similarity</a> </p> <a href="https://publications.waset.org/abstracts/89963/quick-similarity-measurement-of-binary-images-via-probabilistic-pixel-mapping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8881</span> 3D Guided Image Filtering to Improve Quality of Short-Time Binned Dynamic PET Images Using MRI Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tabassum%20Husain">Tabassum Husain</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Peng%20Li"> Shen Peng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhaolin%20Chen"> Zhaolin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper evaluates the usability of 3D Guided Image Filtering to enhance the quality of short-time binned dynamic PET images by using MRI images. Guided image filtering is an edge-preserving filter proposed to enhance 2D images. The 3D filter is applied on 1 and 5-minute binned images. The results are compared with 15-minute binned images and the Gaussian filtering. The guided image filter enhances the quality of dynamic PET images while also preserving important information of the voxels. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images" title="dynamic PET images">dynamic PET images</a>, <a href="https://publications.waset.org/abstracts/search?q=guided%20image%20filter" title=" guided image filter"> guided image filter</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20preservation%20filtering" title=" information preservation filtering"> information preservation filtering</a> </p> <a href="https://publications.waset.org/abstracts/152864/3d-guided-image-filtering-to-improve-quality-of-short-time-binned-dynamic-pet-images-using-mri-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8880</span> Practical Guidelines for Utilizing WipFrag Software to Assess Oversize Blast Material Using Both Orthomosaic and Digital Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Blessing%20Olamide%20Taiwo">Blessing Olamide Taiwo</a>, <a href="https://publications.waset.org/abstracts/search?q=Andrew%20Palangio"> Andrew Palangio</a>, <a href="https://publications.waset.org/abstracts/search?q=Chirag%20Savaliya"> Chirag Savaliya</a>, <a href="https://publications.waset.org/abstracts/search?q=Jenil%20Patel"> Jenil Patel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Oversized material resulting from blasting presents a notable drawback in the transportation of run-off-mine material due to increased expenses associated with handling, decreased efficiency in loading, and greater wear on digging equipment. Its irregular size and weight demand additional resources and time for secondary breakage, impacting overall productivity and profitability. This paper addresses the limitations of interpreting image analysis software results and applying them to the assessment of blast-generated oversized materials. This comprehensive guide utilizes both ortho mosaic and digital photos to provide critical approaches for optimizing fragmentation analysis and improving decision-making in mining operations. It briefly covers post-blast assessment, blast block heat map interpretation, and material loading decision-making recommendations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blast%20result%20assessment" title="blast result assessment">blast result assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=WipFrag" title=" WipFrag"> WipFrag</a>, <a href="https://publications.waset.org/abstracts/search?q=oversize%20identification" title=" oversize identification"> oversize identification</a>, <a href="https://publications.waset.org/abstracts/search?q=orthomosaic%20images" title=" orthomosaic images"> orthomosaic images</a>, <a href="https://publications.waset.org/abstracts/search?q=production%20optimization" title=" production optimization"> production optimization</a> </p> <a href="https://publications.waset.org/abstracts/187904/practical-guidelines-for-utilizing-wipfrag-software-to-assess-oversize-blast-material-using-both-orthomosaic-and-digital-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187904.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">39</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8879</span> Reduction of Speckle Noise in Echocardiographic Images: A Survey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fathi%20Kallel">Fathi Kallel</a>, <a href="https://publications.waset.org/abstracts/search?q=Saida%20Khachira"> Saida Khachira</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Ben%20Slima"> Mohamed Ben Slima</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Ben%20Hamida"> Ahmed Ben Hamida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speckle noise is a main characteristic of cardiac ultrasound images, it corresponding to grainy appearance that degrades the image quality. For this reason, the ultrasound images are difficult to use automatically in clinical use, then treatments are required for this type of images. Then a filtering procedure of these images is necessary to eliminate the speckle noise and to improve the quality of ultrasound images which will be then segmented to extract the necessary forms that exist. In this paper, we present the importance of the pre-treatment step for segmentation. This work is applied to cardiac ultrasound images. In a first step, a comparative study of speckle filtering method will be presented and then we use a segmentation algorithm to locate and extract cardiac structures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20image%20processing" title="medical image processing">medical image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound%20images" title=" ultrasound images"> ultrasound images</a>, <a href="https://publications.waset.org/abstracts/search?q=Speckle%20noise" title=" Speckle noise"> Speckle noise</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=speckle%20filtering" title=" speckle filtering"> speckle filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=snakes" title=" snakes"> snakes</a> </p> <a href="https://publications.waset.org/abstracts/19064/reduction-of-speckle-noise-in-echocardiographic-images-a-survey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">530</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8878</span> Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emhimed%20Saffor">Emhimed Saffor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the problem of edge detection in digital images is considered. Three methods of edge detection based on mathematical morphology algorithm were applied on two sets (Brain and Chest) CT images. 3x3 filter for first method, 5x5 filter for second method and 7x7 filter for third method under MATLAB programming environment. The results of the above-mentioned methods are subjectively evaluated. The results show these methods are more efficient and satiable for medical images, and they can be used for different other applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT%20images" title="CT images">CT images</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection "> edge detection </a> </p> <a href="https://publications.waset.org/abstracts/44926/subjective-evaluation-of-mathematical-morphology-edge-detection-on-computed-tomography-ct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8877</span> Characterization of Kopff Crater Using Remote Sensing Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreekumari%20Patel">Shreekumari Patel</a>, <a href="https://publications.waset.org/abstracts/search?q=Prabhjot%20Kaur"> Prabhjot Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Paras%20Solanki"> Paras Solanki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Moon Mineralogy Mapper (M3), Miniature Radio Frequency (Mini-RF), Kaguya Terrain Camera images, Lunar Orbiter Laser Altimeter (LOLA) digital elevation model (DEM) and Lunar Reconnaissance Orbiter Camera (LROC)- Narrow angle camera (NAC) and Wide angle camera (WAC) images were used to study mineralogy, surface physical properties, and age of the 42 km diameter Kopff crater. M3 indicates the low albedo crater floor to be high-Ca pyroxene dominated associated with floor fracture suggesting the igneous activity of the gabbroic material. Signature of anorthositic material is sampled on the eastern edge as target material is excavated from ~3 km diameter impact crater providing access to the crustal composition. Several occurrences of spinel were detected in northwestern rugged terrain. Our observation can be explained by exposure of spinel by this crater that impacted onto the inner rings of Orientale basin. Spinel was part of the pre-impact target, an intrinsic unit of basin ring. Crater floor was dated by crater counts performed on Kaguya TC images. Nature of surface was studied in detail with LROC NAC and Mini-RF. Freshly exposed surface and boulder or debris seen in LROC NAC images have enhanced radar signal in comparison to mature terrain of Kopff crater. This multidisciplinary analysis of remote sensing data helps to assess lunar surface in detail. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=crater" title="crater">crater</a>, <a href="https://publications.waset.org/abstracts/search?q=mineralogy" title=" mineralogy"> mineralogy</a>, <a href="https://publications.waset.org/abstracts/search?q=moon" title=" moon"> moon</a>, <a href="https://publications.waset.org/abstracts/search?q=radar%20observations" title=" radar observations"> radar observations</a> </p> <a href="https://publications.waset.org/abstracts/96879/characterization-of-kopff-crater-using-remote-sensing-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">160</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8876</span> Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nidhal%20K.%20Azawi">Nidhal K. Azawi</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20M.%20Gauch"> John M. Gauch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural network to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92-98%. We also show how the removal of noninformative images together with image alignment can aid in the creation of image panoramas and other visualizations of colonoscopy images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=colonoscopy%20classification" title="colonoscopy classification">colonoscopy classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20alignment" title=" image alignment"> image alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/92461/automatic-method-for-classification-of-informative-and-noninformative-images-in-colonoscopy-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8875</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noises and reducing the number of colors contained in a JPEG image. Main purpose of this project is to convert color images to monochrome images for the color-blind. We treat the crispy color images like the Tokyo subway map. Each color in the image has an important information. But for the color blinds, similar colors cannot be distinguished. If we can convert those colors to different gray values, they can distinguish them. Therefore we try to convert color images to monochrome images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8874</span> Effective Texture Features for Segmented Mammogram Images Based on Multi-Region of Interest Segmentation Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramayanam%20Suresh">Ramayanam Suresh</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Nagaraja%20Rao"> A. Nagaraja Rao</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Eswara%20Reddy"> B. Eswara Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture features of mammogram images are useful for finding masses or cancer cases in mammography, which have been used by radiologists. Textures are greatly succeeded for segmented images rather than normal images. It is necessary to perform segmentation for exclusive specification of cancer and non-cancer regions separately. Region of interest (ROI) is most commonly used technique for mammogram segmentation. Limitation of this method is that it is unable to explore segmentation for large collection of mammogram images. Therefore, this paper is proposed multi-ROI segmentation for addressing the above limitation. It supports greatly in finding the best texture features of mammogram images. Experimental study demonstrates the effectiveness of proposed work using benchmarked images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20features" title="texture features">texture features</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-ROI%20segmentation" title=" multi-ROI segmentation"> multi-ROI segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmarked%20images" title=" benchmarked images "> benchmarked images </a> </p> <a href="https://publications.waset.org/abstracts/88666/effective-texture-features-for-segmented-mammogram-images-based-on-multi-region-of-interest-segmentation-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88666.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">311</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8873</span> Optimization Query Image Using Search Relevance Re-Ranking Process</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20G.%20Asmitha%20Chandini">T. G. Asmitha Chandini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Web-based image search re-ranking, as an successful method to get better the results. In a query keyword, the first stair is store the images is first retrieve based on the text-based information. The user to select a query keywordimage, by using this query keyword other images are re-ranked based on their visual properties with images.Now a day to day, people projected to match images in a semantic space which is used attributes or reference classes closely related to the basis of semantic image. though, understanding a worldwide visual semantic space to demonstrate highly different images from the web is difficult and inefficient. The re-ranking images, which automatically offline part learns dissimilar semantic spaces for different query keywords. The features of images are projected into their related semantic spaces to get particular images. At the online stage, images are re-ranked by compare their semantic signatures obtained the semantic précised by the query keyword image. The query-specific semantic signatures extensively improve both the proper and efficiency of image re-ranking. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Query" title="Query">Query</a>, <a href="https://publications.waset.org/abstracts/search?q=keyword" title=" keyword"> keyword</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=re-ranking" title=" re-ranking"> re-ranking</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic" title=" semantic"> semantic</a>, <a href="https://publications.waset.org/abstracts/search?q=signature" title=" signature"> signature</a> </p> <a href="https://publications.waset.org/abstracts/28398/optimization-query-image-using-search-relevance-re-ranking-process" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28398.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">552</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8872</span> Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edgardo%20V.%20Gubatanga%20Jr.">Edgardo V. Gubatanga Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Joshua%20Salvacion"> Mark Joshua Salvacion</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, the presence of atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may cause absence of aerial photographs. These leave areas having terrain information but lacking aerial photographs. Intensity images can be derived from LiDAR data but they are only grayscale images. A deep learning model is developed to create a complex function in a form of a deep neural network relating the pixel values of LiDAR-derived intensity images and true-color images. This complex function can then be used to predict the true-color images of a certain area using intensity images from LiDAR data. The predicted true-color images do not necessarily need to be accurate compared to the real world. They are only intended to look realistic so that they can be used as base maps. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20LiDAR" title="aerial LiDAR">aerial LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=colorization" title=" colorization"> colorization</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=intensity%20images" title=" intensity images"> intensity images</a> </p> <a href="https://publications.waset.org/abstracts/94116/application-of-deep-learning-in-colorization-of-lidar-derived-intensity-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94116.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8871</span> Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heeba%20A.%20Gurku">Heeba A. Gurku</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Cone Beam CT(CBCT) images play an integral part in proper patient positioning in cancer patients undergoing radiation therapy treatment. But these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA) 1) Lung cancer dataset of 20 patients (with full view CBCT images) and 2) Pancreatic cancer dataset of 40 patients (only 27 patients having limited view images were included in the study). Cycle Generative Adversarial Networks (GAN) and its variant Attention Guided Generative Adversarial Networks (AGGAN) models were used to generate the synthetic CTs. Models were evaluated by visual evaluation and on four metrics, Structural Similarity Index Measure (SSIM), Peak Signal Noise Ratio (PSNR) Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), to compare the synthetic CT and original CT images. Results: For pancreatic dataset with limited view CBCT images, our study showed that in Cycle GAN model, MAE, RMSE, PSNR improved from 12.57to 8.49, 20.94 to 15.29 and 21.85 to 24.63, respectively but structural similarity only marginally increased from 0.78 to 0.79. Similar, results were achieved with AGGAN with no improvement over Cycle GAN. However, for lung dataset with full view CBCT images Cycle GAN was able to reduce MAE significantly from 89.44 to 15.11 and AGGAN was able to reduce it to 19.77. Similarly, RMSE was also decreased from 92.68 to 23.50 in Cycle GAN and to 29.02 in AGGAN. SSIM and PSNR also improved significantly from 0.17 to 0.59 and from 8.81 to 21.06 in Cycle GAN respectively while in AGGAN SSIM increased to 0.52 and PSNR increased to 19.31. In both datasets, GAN models were able to reduce artifacts, reduce noise, have better resolution, and better contrast enhancement. Conclusion and Recommendation: Both Cycle GAN and AGGAN were significantly able to reduce MAE, RMSE and PSNR in both datasets. 
8870) Comparison of Vessel Detection in Standard vs. Ultra-Widefield Retinal Images
Authors: Maher un Nisa, Ahsan Khawaja
Abstract: Retinal imaging with ultra-widefield (UWF) view technology has opened up new avenues in the field of retinal pathology detection. Recent developments in retinal imaging, such as the Optos California imaging device, help acquire high-resolution images of the retina to assist ophthalmologists in diagnosing and analyzing eye-related pathologies more accurately. This paper investigates the acquired retinal details by comparing vessel detection in standard 45° color fundus images with state-of-the-art 200° UWF retinal images.
Keywords: color fundus, retinal images, ultra-widefield, vessel detection
Procedia: https://publications.waset.org/abstracts/33520/comparison-of-vessel-detection-in-standard-vs-ultra-widefield-retinal-images | PDF: https://publications.waset.org/abstracts/33520.pdf | Downloads: 448

8869) Enhancement of X-Ray Image Intensity Using a Pixel-Value Adjustment Technique
Authors: Yousif Mohamed Y. Abdallah, Razan Manofely, Rajab M. Ben Yousef
Abstract: X-ray images are very popular as a first tool for diagnosis, and automating their analysis is important in order to support physicians' procedures. In this practice, teeth segmentation from radiographic images and feature extraction are essential steps. The main objective of this study was to examine correction preprocessing of X-ray images using local adaptive filters, in order to evaluate contrast enhancement patterns in different X-ray images, such as gray-level images, and to evaluate a new nonlinear approach for contrast enhancement of soft tissues in X-ray images. The data were analyzed using MATLAB to enhance the contrast within the soft tissues and to measure the gray levels in both enhanced and unenhanced images as well as the noise variance. The main enhancement techniques used in this study were contrast enhancement filtering and deblurring using the blind deconvolution algorithm. The prominent constraints are, firstly, preservation of the image's overall look; secondly, preservation of the diagnostic content of the image; and thirdly, detection of small, low-contrast details in the diagnostic content of the image.
Keywords: enhancement, X-rays, pixel intensity values, MATLAB
Procedia: https://publications.waset.org/abstracts/31031/enhancement-of-x-rays-images-intensity-using-pixel-values-adjustments-technique | PDF: https://publications.waset.org/abstracts/31031.pdf | Downloads: 485
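The contrast-enhancement side of such a pipeline can be sketched in Python as below (the study itself works in MATLAB). A percentile stretch followed by local adaptive equalization (CLAHE) is an assumed, representative combination; the blind deconvolution step, e.g. MATLAB's deconvblind, is not reproduced here.

```python
import numpy as np
from skimage import exposure, img_as_float

def enhance_xray(img):
    img = img_as_float(img)                      # scale intensities to [0, 1]
    # Stretch the 2nd-98th percentile range, then apply local (adaptive)
    # histogram equalization to bring out low-contrast soft-tissue detail.
    p2, p98 = np.percentile(img, (2, 98))
    stretched = exposure.rescale_intensity(img, in_range=(p2, p98),
                                           out_range=(0.0, 1.0))
    return exposure.equalize_adapthist(stretched, clip_limit=0.02)
```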
8868) Filtering and Reconstruction System for Grey-Level Forensic Images
Authors: Ahd Aljarf, Saad Amin
Abstract: Images are an important source of information used as evidence during any investigation process, so their clarity and accuracy are of the utmost importance. Images are vulnerable to losing blocks and having noise added to them, either after alteration or when the image was originally taken; therefore, a high-performance image processing system and its implementation are very important from a forensic point of view. This paper focuses on improving the quality of forensic images. For different reasons, packets that store image data can be affected, harmed, or even lost because of noise; for example, sending an image through a wireless channel can cause the loss of bits. These types of errors degrade the visual display quality of forensic images. Two image problems are covered: noise and lost blocks. Since information transmitted through any means of communication may be altered from its original state or lose important data due to channel noise, a system is introduced to improve the quality and clarity of forensic images.
Keywords: image filtering, image reconstruction, image processing, forensic images
Procedia: https://publications.waset.org/abstracts/15654/filtering-and-reconstruction-system-for-grey-level-forensic-images | PDF: https://publications.waset.org/abstracts/15654.pdf | Downloads: 366

8867) Image Fusion Based Eye Tumor Detection
Authors: Ahmed Ashit
Abstract: Image fusion is a significant and efficient image processing method used for detecting different types of tumors. It has been used as an effective combination technique for obtaining high-quality images that combine the anatomy and physiology of an organ, and it is a key element of large biomedical machines for diagnosing cancer, such as PET-CT scanners. This thesis aims to develop an image analysis system for the detection of eye tumors. Different image processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is then subtracted and the difference added back to the original, resulting in a brighter area of interest or tumor area. The images are adjusted to increase the intensity of their pixels, which leads to clearer and brighter images. Once the images are enhanced, edges are detected using Canny operators, producing a segmented image comprising only the pupil and the tumor for the abnormal images, and the pupil alone for the normal images with no tumor. The images of normal and abnormal eyes were collected from two sources: "Miles Research" and "Eye Cancer". The computerized experimental results show that the developed image-fusion-based eye tumor detection system is capable of detecting an eye tumor and segmenting it to be superimposed on the original image.
Keywords: image fusion, eye tumor, canny operators, superimposed
Procedia: https://publications.waset.org/abstracts/30750/mage-fusion-based-eye-tumor-detection | PDF: https://publications.waset.org/abstracts/30750.pdf | Downloads: 363
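A rough Python sketch of the processing chain described in that abstract (median smoothing, background subtraction with brightening, intensity adjustment, Canny edges) follows. The blur-based background estimate and all parameter values are loose assumptions; they stand in for details the abstract does not give.

```python
import numpy as np
from skimage import img_as_float, filters, feature, morphology, exposure

def segment_tumor(gray_eye):
    img = img_as_float(gray_eye)
    smoothed = filters.median(img, morphology.disk(3))      # remove speckle
    # Estimate the background with a heavy blur, subtract it, and add the
    # difference back so the region of interest appears brighter.
    background = filters.gaussian(smoothed, sigma=25)
    brightened = np.clip(smoothed + (smoothed - background), 0.0, 1.0)
    adjusted = exposure.rescale_intensity(brightened)       # intensity boost
    return feature.canny(adjusted, sigma=2)                 # pupil/tumor edges
```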
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20tumor" title=" eye tumor"> eye tumor</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20operators" title=" canny operators"> canny operators</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposed" title=" superimposed"> superimposed</a> </p> <a href="https://publications.waset.org/abstracts/30750/mage-fusion-based-eye-tumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30750.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8866</span> Using Deep Learning in Lyme Disease Diagnosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Teja%20Koduru">Teja Koduru</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease is therefore crucial to early diagnosis and treatment. In this study, we aim to utilize deep learning frameworks including Tensorflow and Keras to create deep convolutional neural networks (DCNN) to detect images of acute Lyme Disease from images of erythema migrans. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images of EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, resulting in a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non EM. Finally, this model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lyme" title="Lyme">Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=untreated%20Lyme" title=" untreated Lyme"> untreated Lyme</a>, <a href="https://publications.waset.org/abstracts/search?q=erythema%20migrans%20rash" title=" erythema migrans rash"> erythema migrans rash</a>, <a href="https://publications.waset.org/abstracts/search?q=EM%20rash" title=" EM rash"> EM rash</a> </p> <a href="https://publications.waset.org/abstracts/135383/using-deep-learning-in-lyme-disease-diagnosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135383.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">240</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8865</span> Clustering-Based Detection of Alzheimer's Disease Using Brain MR Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sofia%20Matoug">Sofia Matoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Amr%20Abdel-Dayem"> Amr Abdel-Dayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a comprehensive survey of recent research studies to segment and classify brain MR (magnetic resonance) images in order to detect significant changes to brain ventricles. The paper also presents a general framework for detecting regions that atrophy, which can help neurologists in detecting and staging Alzheimer. Furthermore, a prototype was implemented to segment brain MR images in order to extract the region of interest (ROI) and then, a classifier was employed to differentiate between normal and abnormal brain tissues. Experimental results show that the proposed scheme can provide a reliable second opinion that neurologists can benefit from. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alzheimer" title="Alzheimer">Alzheimer</a>, <a href="https://publications.waset.org/abstracts/search?q=brain%20images" title=" brain images"> brain images</a>, <a href="https://publications.waset.org/abstracts/search?q=classification%20techniques" title=" classification techniques"> classification techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=Magnetic%20Resonance%20Images%20MRI" title=" Magnetic Resonance Images MRI"> Magnetic Resonance Images MRI</a> </p> <a href="https://publications.waset.org/abstracts/49930/clustering-based-detection-of-alzheimers-disease-using-brain-mr-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8864</span> Subjective versus Objective Assessment for Magnetic Resonance (MR) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heshalini%20Rajagopal">Heshalini Rajagopal</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Sze%20Chow"> Li Sze Chow</a>, <a href="https://publications.waset.org/abstracts/search?q=Raveendran%20Paramesran"> Raveendran Paramesran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Magnetic Resonance Imaging (MRI) is one of the most important medical imaging modality. Subjective assessment of the image quality is regarded as the gold standard to evaluate MR images. In this study, a database of 210 MR images which contains ten reference images and 200 distorted images is presented. The reference images were distorted with four types of distortions: Rician Noise, Gaussian White Noise, Gaussian Blur and DCT compression. The 210 images were assessed by ten subjects. The subjective scores were presented in Difference Mean Opinion Score (DMOS). The DMOS values were compared with four FR-IQA metrics. We have used Pearson Linear Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) to validate the DMOS values. The high correlation values of PLCC and SROCC shows that the DMOS values are close to the objective FR-IQA metrics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=medical%20resonance%20%28MR%29%20images" title="medical resonance (MR) images">medical resonance (MR) images</a>, <a href="https://publications.waset.org/abstracts/search?q=difference%20mean%20opinion%20score%20%28DMOS%29" title=" difference mean opinion score (DMOS)"> difference mean opinion score (DMOS)</a>, <a href="https://publications.waset.org/abstracts/search?q=full%20reference%20image%20quality%20assessment%20%28FR-IQA%29" title=" full reference image quality assessment (FR-IQA)"> full reference image quality assessment (FR-IQA)</a> </p> <a href="https://publications.waset.org/abstracts/39606/subjective-versus-objective-assessment-for-magnetic-resonance-mr-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">458</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8863</span> Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haoqi%20Gao">Haoqi Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Koichi%20Ogawara"> Koichi Ogawara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal vascular segmentation of color fundus is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems. Early screening of fundus diseases has great value for clinical medical diagnosis. The traditional methods depend on the experience of the doctor, which is time-consuming, labor-intensive, and inefficient. Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new Generative Adversarial Network based on CycleGAN for retinal fundus images. This method can generate not only synthetic fundus images but also generate corresponding segmentation masks, which has certain application value and challenge in computer vision and computer graphics. In the results, we evaluate our proposed method from both quantitative and qualitative. For generated segmented images, our method achieves dice coefficient of 0.81 and PR of 0.89 on DRIVE dataset. For generated synthetic fundus images, we use ”Toy Experiment” to verify the state-of-the-art performance of our method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20vascular%20segmentations" title="retinal vascular segmentations">retinal vascular segmentations</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20ad-versarial%20network" title=" generative ad-versarial network"> generative ad-versarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclegan" title=" cyclegan"> cyclegan</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a> </p> <a href="https://publications.waset.org/abstracts/110591/generative-adversarial-network-for-bidirectional-mappings-between-retinal-fundus-images-and-vessel-segmented-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8862</span> Timescape-Based Panoramic View for Historic Landmarks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Ali">H. Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Whitehead"> A. Whitehead</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents a unique challenge of dealing with images that span a long period, from the 1800’s up to the present. This work presents the concept of temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Utilization of this panorama requires a collection of hundreds of thousands of landmark images from the Internet comprised of historic images and modern images of the digital age. These images have to be classified for subset selection to keep the more suitable images that chronologically document a landmark’s history. Processing of historic images captured using older analog technology under various different capturing conditions represents a big challenge when they have to be used with modern digital images. Successful processing of historic images to prepare them for next steps of temporal panorama creation represents an active contribution in cultural heritage preservation through the fulfillment of one of UNESCO goals in preservation and displaying famous worldwide landmarks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cultural%20heritage" title="cultural heritage">cultural heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title=" image registration"> image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subset%20selection" title=" image subset selection"> image subset selection</a>, <a href="https://publications.waset.org/abstracts/search?q=registered%20image%20similarity" title=" registered image similarity"> registered image similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20panorama" title=" temporal panorama"> temporal panorama</a>, <a href="https://publications.waset.org/abstracts/search?q=timescapes" title=" timescapes"> timescapes</a> </p> <a href="https://publications.waset.org/abstracts/101930/timescape-based-panoramic-view-for-historic-landmarks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8861</span> Security Analysis and Implementation of Achterbahn-128 for Images Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aissa%20Belmeguenai">Aissa Belmeguenai</a>, <a href="https://publications.waset.org/abstracts/search?q=Oulaya%20Berrak"> Oulaya Berrak</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaled%20Mansouri"> Khaled Mansouri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, efficiency implementation and security evaluation of the keystream generator of Achterbahn-128 for images encryption and decryption was introduced. The implementation for this simulated project is written with MATLAB.7.5. First of all, two different original images are used to validate the proposed design. The developed program is used to transform the original images data into digital image file. Finally, the proposed program is implemented to encrypt and decrypt images data. Several tests are done to prove the design performance, including visual tests and security evaluation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Achterbahn-128" title="Achterbahn-128">Achterbahn-128</a>, <a href="https://publications.waset.org/abstracts/search?q=keystream%20generator" title=" keystream generator"> keystream generator</a>, <a href="https://publications.waset.org/abstracts/search?q=stream%20cipher" title=" stream cipher"> stream cipher</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title=" image encryption"> image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20analysis" title=" security analysis"> security analysis</a> </p> <a href="https://publications.waset.org/abstracts/38107/security-analysis-and-implementation-of-achterbahn-128-for-images-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8860</span> Roof Material Detection Based on Object-Based Approach Using WorldView-2 Satellite Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ebrahim%20Taherzadeh">Ebrahim Taherzadeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Helmi%20Z.%20M.%20Shafri"> Helmi Z. M. Shafri</a>, <a href="https://publications.waset.org/abstracts/search?q=Kaveh%20Shahi"> Kaveh Shahi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the most important tasks in urban area remote sensing is detection of impervious surface (IS), such as building roof and roads. However, detection of IS in heterogeneous areas still remains as one of the most challenging works. In this study, detection of concrete roof using an object-oriented approach was proposed. A new rule-based classification was developed to detect concrete roof tile. The proposed rule-based classification was applied to WorldView-2 image. Results showed that the proposed rule has good potential to predict concrete roof material from WorldView-2 images with 85% accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=object-based" title="object-based">object-based</a>, <a href="https://publications.waset.org/abstracts/search?q=roof%20material" title=" roof material"> roof material</a>, <a href="https://publications.waset.org/abstracts/search?q=concrete%20tile" title=" concrete tile"> concrete tile</a>, <a href="https://publications.waset.org/abstracts/search?q=WorldView-2" title=" WorldView-2"> WorldView-2</a> </p> <a href="https://publications.waset.org/abstracts/13685/roof-material-detection-based-on-object-based-approach-using-worldview-2-satellite-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13685.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8859</span> Integral Image-Based Differential Filters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Inoue">Kohei Inoue</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenji%20Hara"> Kenji Hara</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiichi%20Urahama"> Kiichi Urahama</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We describe a relationship between integral images and differential images. First, we derive a simple difference filter from conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are the same, and therefore, we can execute the integration and differentiation by solving the simultaneous equations. We applied the relationship to an image fusion problem, and experimentally verified the effectiveness of the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=integral%20images" title="integral images">integral images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20images" title=" differential images"> differential images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20filters" title=" differential filters"> differential filters</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title=" image fusion"> image fusion</a> </p> <a href="https://publications.waset.org/abstracts/8531/integral-image-based-differential-filters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8858</span> Multimodal Database of Retina Images for Africa: The First Open Access Digital Repository for Retina Images in Sub Saharan Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simon%20Arunga">Simon Arunga</a>, <a href="https://publications.waset.org/abstracts/search?q=Teddy%20Kwaga"> Teddy Kwaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Rita%20Kageni"> Rita Kageni</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Gichangi"> Michael Gichangi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nyawira%20Mwangi"> Nyawira Mwangi</a>, <a href="https://publications.waset.org/abstracts/search?q=Fred%20Kagwa"> Fred Kagwa</a>, <a href="https://publications.waset.org/abstracts/search?q=Rogers%20Mwavu"> Rogers Mwavu</a>, <a href="https://publications.waset.org/abstracts/search?q=Amos%20Baryashaba"> Amos Baryashaba</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20F.%20Nakayama"> Luis F. Nakayama</a>, <a href="https://publications.waset.org/abstracts/search?q=Katharine%20Morley"> Katharine Morley</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20Morley"> Michael Morley</a>, <a href="https://publications.waset.org/abstracts/search?q=Leo%20A.%20Celi"> Leo A. Celi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jessica%20Haberer"> Jessica Haberer</a>, <a href="https://publications.waset.org/abstracts/search?q=Celestino%20Obua"> Celestino Obua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: The main aim for creating the Multimodal Database of Retinal Images for Africa (MoDRIA) was to provide a publicly available repository of retinal images for responsible researchers to conduct algorithm development in a bid to curb the challenges of ophthalmic artificial intelligence (AI) in Africa. Methods: Data and retina images were ethically sourced from sites in Uganda and Kenya. Data on medical history, visual acuity, ocular examination, blood pressure, and blood sugar were collected. Retina images were captured using fundus cameras (Foru3-nethra and Canon CR-Mark-1). Images were stored on a secure online database. Results: The database consists of 7,859 retinal images in portable network graphics format from 1,988 participants. 
Images from patients with human immunodeficiency virus accounted for 18.9% of the database, 18.2% of the images were from hypertensive patients, 12.8% from diabetic patients, and the rest from ‘normal’ participants. Conclusion: Publicly available data repositories are a valuable asset in the development of AI technology. Therefore, there is a need to expand MoDRIA to provide larger datasets that are more representative of Sub-Saharan data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retina%20images" title="retina images">retina images</a>, <a href="https://publications.waset.org/abstracts/search?q=MoDRIA" title=" MoDRIA"> MoDRIA</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20repository" title=" image repository"> image repository</a>, <a href="https://publications.waset.org/abstracts/search?q=African%20database" title=" African database"> African database</a> </p> <a href="https://publications.waset.org/abstracts/169515/multimodal-database-of-retina-images-for-africa-the-first-open-access-digital-repository-for-retina-images-in-sub-saharan-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169515.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8857</span> Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdolvahab%20Ehsani%20Rad">Abdolvahab Ehsani Rad</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Shafry%20Mohd%20Rahim"> Mohd Shafry Mohd Rahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Alireza%20Norouzi"> Alireza Norouzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical image analysis is one of the most important applications of computer image processing. Analyzing medical images involves several steps, of which segmentation is one of the most challenging and important. In this paper, a segmentation method is proposed for dental radiograph images. A thresholding method is applied to simplify the images, and a morphological opening of the binary image is performed to eliminate unnecessary regions. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph. Segmentation is then performed by applying the level set method to each extracted image. Experimental results with 90% accuracy demonstrate that the proposed method is accurate and promising.
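<p class="card-text">The pre-segmentation pipeline described above (thresholding, morphological opening, integral projections) is straightforward to sketch with scikit-image; the file name is hypothetical and the level set step itself is omitted:</p>
<pre><code class="language-python">
import numpy as np
from skimage import filters, io, morphology

radiograph = io.imread("dental_radiograph.png", as_gray=True)

# Thresholding simplifies the image to a binary map of bright structures.
binary = radiograph > filters.threshold_otsu(radiograph)

# Morphological opening removes small, unnecessary regions.
opened = morphology.binary_opening(binary, morphology.disk(3))

# Integral projections: column and row sums of the binary image.
v_proj = opened.sum(axis=0)  # vertical projection (per column)
h_proj = opened.sum(axis=1)  # horizontal projection (per row)

# Columns with little foreground are candidate gaps between teeth.
gaps = np.where(v_proj < 0.1 * v_proj.max())[0]
print("candidate inter-tooth columns:", gaps[:10])
</code></pre>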
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=integral%20production" title="integral production">integral production</a>, <a href="https://publications.waset.org/abstracts/search?q=level%20set%20method" title=" level set method"> level set method</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20operation" title=" morphological operation"> morphological operation</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/3681/level-set-and-morphological-operation-techniques-in-application-of-dental-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3681.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">317</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8856</span> An Algorithm for Removal of Noise from X-Ray Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sajidullah%20Khan">Sajidullah Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Najeeb%20Ullah"> Najeeb Ullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Yin%20Chai"> Wang Yin Chai</a>, <a href="https://publications.waset.org/abstracts/search?q=Chai%20Soo%20See"> Chai Soo See</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an approach to remove impulse and Poisson noise from X-ray images. Many filters have been used for impulse noise removal from color and gray scale images with their own strengths and weaknesses but X-ray images contain Poisson noise and unfortunately there is no intelligent filter which can detect impulse and Poisson noise from X-ray images. Our proposed filter uses the upgraded layer discrimination approach to detect both Impulse and Poisson noise corrupted pixels in X-ray images and then restores only those detected pixels with a simple efficient and reliable one line equation. Our Proposed algorithms are very effective and much more efficient than all existing filters used only for Impulse noise removal. The proposed method uses a new powerful and efficient noise detection method to determine whether the pixel under observation is corrupted or noise free. Results from computer simulations are used to demonstrate pleasing performance of our proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=X-ray%20image%20de-noising" title="X-ray image de-noising">X-ray image de-noising</a>, <a href="https://publications.waset.org/abstracts/search?q=impulse%20noise" title=" impulse noise"> impulse noise</a>, <a href="https://publications.waset.org/abstracts/search?q=poisson%20noise" title=" poisson noise"> poisson noise</a>, <a href="https://publications.waset.org/abstracts/search?q=PRWF" title=" PRWF"> PRWF</a> </p> <a href="https://publications.waset.org/abstracts/54256/an-algorithm-for-removal-of-noise-from-x-ray-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54256.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">383</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=296">296</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=297">297</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=material%20images&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>