<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: low contrast image</title> <meta name="description" content="Search results for: low contrast image"> <meta name="keywords" content="low contrast image"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="low contrast image" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="low contrast image"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4128</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: low contrast image</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4128</span> Edge Detection in Low Contrast Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Koushlendra%20Kumar%20Singh">Koushlendra Kumar Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Manish%20Kumar%20Bajpai"> Manish Kumar Bajpai</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajesh%20K.%20Pandey"> Rajesh K. Pandey</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The edges of low contrast images are not clearly distinguishable to the human eye. It is difficult to find the edges and boundaries in it. The present work encompasses a new approach for low contrast images. 
A Chebyshev-polynomial-based fractional-order filter has been used to preprocess the input image. The Laplacian of Gaussian (LoG) method has then been applied to the preprocessed image for edge detection. The algorithm has been tested on two test images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image" title="low contrast image">low contrast image</a>, <a href="https://publications.waset.org/abstracts/search?q=fractional%20order%20differentiator" title="fractional order differentiator">fractional order differentiator</a>, <a href="https://publications.waset.org/abstracts/search?q=Laplacian%20of%20Gaussian%20%28LoG%29%20method" title="Laplacian of Gaussian (LoG) method">Laplacian of Gaussian (LoG) method</a>, <a href="https://publications.waset.org/abstracts/search?q=chebyshev%20polynomial" title=" Chebyshev polynomial"> Chebyshev polynomial</a> </p> <a href="https://publications.waset.org/abstracts/21264/edge-detection-in-low-contrast-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">636</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4127</span> New Variational Approach for Contrast Enhancement of Color Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wanhyun%20Cho">Wanhyun Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Seongchae%20Seo"> Seongchae Seo</a>, <a href="https://publications.waset.org/abstracts/search?q=Soonja%20Kang"> Soonja Kang</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> In this work, we propose a variational technique for image contrast enhancement that utilizes global and local information around each pixel. The energy functional is defined as a weighted linear combination of three terms: a local contrast term, a global contrast term, and a dispersion term. The local contrast term improves the contrast of the input image by increasing the grey-level differences between each pixel and its neighbors, exploiting contextual information around each pixel. The global contrast term enhances the contrast of the image by minimizing the difference between its empirical distribution function and a cumulative distribution function, so that the probability distribution of pixel values becomes symmetric about the median. The dispersion term controls the departure of each new pixel value from the original pixel value, preserving the characteristics of the original image as far as possible. We then derive the Euler-Lagrange equation for the true image that minimizes the proposed functional, using the fundamental lemma of the calculus of variations, and solve this equation with a gradient descent method, one of the dynamic approximation techniques. Finally, through various experiments, we demonstrate that the proposed method enhances the contrast of colour images better than existing techniques. 
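As an illustration of the kind of scheme this abstract describes, the following is a minimal sketch (not the authors' implementation) of gradient descent on a three-term energy: a local contrast term, a global term pulling toward a histogram-equalized target, and a dispersion term anchoring the result to the original. All parameter values are assumptions chosen for numerical stability.

```python
import numpy as np

def enhance(img, alpha=0.05, beta=0.4, gamma=0.3, steps=100, lr=0.1):
    """Toy gradient descent on a three-term contrast-enhancement energy.
    `img` is a 2-D array with values in [0, 1]; weights are illustrative."""
    u = img.astype(float).copy()
    # Global-contrast target: rank-based (histogram-equalized) version of the input.
    ranks = np.argsort(np.argsort(img.ravel())).reshape(img.shape)
    target = ranks / (img.size - 1)
    for _ in range(steps):
        # 4-neighbor Laplacian (periodic boundaries for simplicity).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        # Descending on minus the sum of squared neighbor differences
        # contributes +alpha*lap to the gradient (a sharpening force);
        # the beta term pulls toward the equalized target, the gamma
        # term keeps the result near the original image.
        grad = alpha * lap + beta * (u - target) + gamma * (u - img)
        u -= lr * grad
    return np.clip(u, 0.0, 1.0)
```

With the small `alpha` used here the update is stable; larger sharpening weights would require a smaller step size.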
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20image" title="color image">color image</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20enhancement%20technique" title=" contrast enhancement technique"> contrast enhancement technique</a>, <a href="https://publications.waset.org/abstracts/search?q=variational%20approach" title=" variational approach"> variational approach</a>, <a href="https://publications.waset.org/abstracts/search?q=Euler-Lagrang%20equation" title=" Euler-Lagrange equation"> Euler-Lagrange equation</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20approximation%20method" title=" dynamic approximation method"> dynamic approximation method</a>, <a href="https://publications.waset.org/abstracts/search?q=EME%20measure" title=" EME measure"> EME measure</a> </p> <a href="https://publications.waset.org/abstracts/10574/new-variational-approach-for-contrast-enhancement-of-color-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10574.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">449</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4126</span> Comparative Study of Different Enhancement Techniques for Computed Tomography Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20G.%20Jinimole">C. G. Jinimole</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Harsha"> A. Harsha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. 
Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better representation for further processing. Contrast enhancement is one of the accepted methods of image enhancement in various medical applications, helping to visualize and extract details of brain infarctions, tumors, and cancers from the CT image. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. These techniques are compared with each other to find out which provides better contrast for CT images, using the parameters Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE). Logarithmic Transformation provided the clearest and best-quality image of all the techniques studied and achieved the highest PSNR. The comparison concludes with the more promising approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries. 
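The Logarithmic Transformation, Contrast Stretching, and the PSNR/MSE comparison metrics mentioned in this abstract are standard operations and can be sketched as follows (a generic illustration, not the paper's code):

```python
import numpy as np

def log_transform(img):
    # s = c * log(1 + r), with c chosen so the output spans [0, 255].
    c = 255.0 / np.log1p(float(img.max()))
    return c * np.log1p(img.astype(float))

def contrast_stretch(img, lo=2, hi=98):
    # Map the [lo, hi] percentile window linearly onto [0, 255].
    a, b = np.percentile(img, [lo, hi])
    return np.clip((img.astype(float) - a) * 255.0 / (b - a), 0, 255)

def mse(a, b):
    # Mean Square Error between two images of equal shape.
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    # Peak Signal to Noise Ratio in dB; infinite for identical images.
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

A higher PSNR (lower MSE) against a reference indicates less distortion, which is how the paper ranks the five techniques.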
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computed%20tomography" title="computed tomography">computed tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=enhancement%20techniques" title=" enhancement techniques"> enhancement techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=increasing%20contrast" title=" increasing contrast"> increasing contrast</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR%20and%20MSE" title=" PSNR and MSE"> PSNR and MSE</a> </p> <a href="https://publications.waset.org/abstracts/69868/comparative-study-of-different-enhancement-techniques-for-computed-tomography-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/69868.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">314</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4125</span> Contrast Enhancement of Color Images with Color Morphing Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javed%20Khan">Javed Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Aamir%20Saeed%20Malik"> Aamir Saeed Malik</a>, <a href="https://publications.waset.org/abstracts/search?q=Nidal%20Kamel"> Nidal Kamel</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarat%20Chandra%20Dass"> Sarat Chandra Dass</a>, <a href="https://publications.waset.org/abstracts/search?q=Azura%20Mohd%20Affandi"> Azura Mohd Affandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Low contrast images can result from the wrong setting of image acquisition or poor illumination conditions. 
Such images may not be visually appealing and can be difficult for feature extraction. Contrast enhancement of color images can be useful in the medical area for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into normalized RGB color space. An adaptive histogram equalization technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels of the original (low contrast) image and of the contrast-enhanced image produced with adaptive histogram equalization (AHE) are morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results are analyzed using cumulative variance and contrast improvement factor measures, and are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20enhacement" title="contrast enhancement">contrast enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=normalized%20RGB" title=" normalized RGB"> normalized RGB</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20histogram%20equalization" title=" adaptive histogram equalization"> adaptive histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=cumulative%20variance."
 title=" cumulative variance."> cumulative variance.</a> </p> <a href="https://publications.waset.org/abstracts/42755/contrast-enhancement-of-color-images-with-color-morphing-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42755.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">377</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4124</span> Color Image Enhancement Using Multiscale Retinex and Image Fusion Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chang-Hsing%20Lee">Chang-Hsing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng-Chang%20Lien"> Cheng-Chang Lien</a>, <a href="https://publications.waset.org/abstracts/search?q=Chin-Chuan%20Han"> Chin-Chuan Han</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an edge-strength guided multiscale retinex (EGMSR) approach is proposed for color image contrast enhancement. In EGMSR, the pixel-dependent weight associated with each pixel in the single-scale retinex output image is computed according to the edge strength around that pixel, to prevent over-enhancement of the noise contained in the smooth dark/bright regions. Further, by fusing together the enhanced results of EGMSR and adaptive multiscale retinex (AMSR), we obtain a natural fused image with high contrast and proper tonal rendition. Experimental results on several low-contrast images have shown that our proposed approach can produce natural and appealing enhanced images. 
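For orientation, a plain multiscale retinex can be sketched as below. Note this is a generic sketch: it omits the edge-strength weighting and the adaptive variant (EGMSR/AMSR), which are the actual contributions of the paper, and the scale choices are assumptions.

```python
import numpy as np

def gauss_kernel(sigma):
    # Normalized 1-D Gaussian kernel truncated at 3 sigma.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur: convolve columns, then rows.
    k = gauss_kernel(sigma)
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, tmp)

def multiscale_retinex(img, sigmas=(2, 8, 32)):
    # Average of single-scale retinex outputs: log(image) - log(illumination),
    # with the illumination estimated by Gaussian blurring at each scale.
    img = img.astype(float) + 1.0  # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(blur(img, s))
    return out / len(sigmas)
```

The retinex output is typically rescaled back to display range afterwards; the EGMSR weighting would replace the uniform average with per-pixel, edge-dependent weights.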
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=multiscale%20retinex" title=" multiscale retinex"> multiscale retinex</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title=" image fusion"> image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=EGMSR" title=" EGMSR"> EGMSR</a> </p> <a href="https://publications.waset.org/abstracts/15139/color-image-enhancement-using-multiscale-retinex-and-image-fusion-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15139.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">458</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4123</span> Improvement of Bone Scintography Image Using Image Texture Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Eltayeb%20Wagallah"> Eltayeb Wagallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image enhancement allows the observer to see details in images that may not be immediately observable in the original image. Image enhancement is the transformation or mapping of one image to another. The enhancement of certain features in images is accompanied by undesirable effects. 
To achieve maximum image quality after denoising, a new low-order, locally adaptive Gaussian scale mixture model combined with a median filter is presented, together with a new nonlinear approach for contrast enhancement of bones in bone scan images using both gamma correction and negative transform methods. The usual assumption of gamma and Poisson statistics leads to overestimation of the noise variance in regions of low intensity but to underestimation in regions of high intensity, and therefore to non-optimal results. The contrast enhancement results were obtained and evaluated using MATLAB on nuclear medicine images of the bones. The optimal number of bins, in particular the number of gray levels, is chosen automatically using entropy and the average distance between the histogram of the original gray-level distribution and the contrast enhancement function’s curve. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bone%20scan" title="bone scan">bone scan</a>, <a href="https://publications.waset.org/abstracts/search?q=nuclear%20medicine" title=" nuclear medicine"> nuclear medicine</a>, <a href="https://publications.waset.org/abstracts/search?q=Matlab" title=" Matlab"> Matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing%20technique" title=" image processing technique"> image processing technique</a> </p> <a href="https://publications.waset.org/abstracts/13956/improvement-of-bone-scintography-image-using-image-texture-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13956.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">507</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">4122</span> Adaptive Dehazing Using Fusion Strategy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Ramesh%20Kanthan">M. Ramesh Kanthan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Naga%20Nandini%20Sujatha"> S. Naga Nandini Sujatha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of haze removal algorithms is to enhance and recover details of the scene from a foggy image. The proposed enhancement method focuses on two main components: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) an image-edge-strengthened gradient model. In many circumstances accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm that first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using this fusion methodology, and an interpolation method is used in the output reconstruction to increase accuracy. A promising retrieval performance is achieved, especially in particular examples. 
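A per-pixel weighted fusion of enhanced inputs, of the general kind described here, can be sketched as follows. The weight choice (local Laplacian magnitude as a crude contrast measure) and the two example inputs are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def hist_equalize(img, bins=256):
    # Global histogram equalization on an image with values in [0, 1].
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / img.size
    return np.interp(img, edges[:-1], cdf)

def local_contrast(img):
    # Laplacian magnitude as a per-pixel contrast weight (epsilon avoids
    # a zero denominator in perfectly flat regions).
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap) + 1e-6

def fuse(inputs):
    # Per-pixel weighted average of the input images; weights come from
    # local contrast and are normalized to sum to one at every pixel.
    weights = [local_contrast(x) for x in inputs]
    total = sum(weights)
    return sum(w / total * x for w, x in zip(weights, inputs))

# A haze-free estimate might then fuse an equalized and a gamma-brightened
# version of the hazy input, e.g. fuse([hist_equalize(img), img ** 0.7]).
```

Multi-scale variants blend the weight maps across image pyramids to avoid seams, which matches the "multi-scale fusion" and "weight map" keywords below.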
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20image" title="single image">single image</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=dehazing" title=" dehazing"> dehazing</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20fusion" title=" multi-scale fusion"> multi-scale fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=per-pixel" title=" per-pixel"> per-pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20map" title=" weight map"> weight map</a> </p> <a href="https://publications.waset.org/abstracts/32544/adaptive-dehazing-using-fusion-strategy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4121</span> New Method to Increase Contrast of Electromicrograph of Rat Tissues Sections</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lise%20Paule%20Lab%C3%A9jof">Lise Paule Labéjof</a>, <a href="https://publications.waset.org/abstracts/search?q=Ra%C3%ADza%20Sales%20Pereira%20Bizerra"> Raíza Sales Pereira Bizerra</a>, <a href="https://publications.waset.org/abstracts/search?q=Galileu%20Barbosa%20Costa"> Galileu Barbosa Costa</a>, <a href="https://publications.waset.org/abstracts/search?q=Tha%C3%ADsa%20Barros%20dos%20Santos"> Thaísa Barros dos Santos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the beginning of the microscopy, improving the image 
quality has always been a concern of its users. For transmission electron microscopy (TEM) in particular, the problem is even more important due to the complexity of the sample preparation technique and the many variables that can affect the preservation of structures, the proper operation of the equipment, and hence the quality of the images obtained. Because animal tissues are transparent, it is necessary to apply a contrast agent in order to identify the elements of their ultrastructural morphology. Several methods of contrast staining of tissues for TEM imaging have already been developed; the most used are “in block” and “in situ” staining. This report presents an alternative technique in which the contrast agent is applied in vivo, i.e. before sampling. With this new method, the electron micrographs of the tissue sections have better contrast than those stained in situ and show no precipitation artefacts from the contrast agent. Another advantage is that only a small amount of contrast agent is needed to get a good result, which matters given that most contrast agents are expensive and extremely toxic. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20quality" title="image quality">image quality</a>, <a href="https://publications.waset.org/abstracts/search?q=microscopy%20research" title=" microscopy research"> microscopy research</a>, <a href="https://publications.waset.org/abstracts/search?q=staining%20technique" title=" staining technique"> staining technique</a>, <a href="https://publications.waset.org/abstracts/search?q=ultra%20thin%20section" title=" ultra thin section"> ultra thin section</a> </p> <a href="https://publications.waset.org/abstracts/26993/new-method-to-increase-contrast-of-electromicrograph-of-rat-tissues-sections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26993.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4120</span> Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tian%20Xia">Tian Xia</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Yan%20Tang"> Yuan Yan Tang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to obtain higher small target detection accuracy, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance the target signal and suppress background clutter simultaneously. In the first stage, an enhanced image is obtained using the proposed Weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. 
Experimental results on two challenging image sequences show that the proposed method can detect bright and dark targets simultaneously and is not sensitive to the sea-sky line of the infrared image, making it well suited to small infrared target detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=small%20target%20detection" title="small target detection">small target detection</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20contrast" title=" local contrast"> local contrast</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20vision%20system" title=" human vision system"> human vision system</a>, <a href="https://publications.waset.org/abstracts/search?q=Laplacian%20of%20Gaussian" title=" Laplacian of Gaussian"> Laplacian of Gaussian</a> </p> <a href="https://publications.waset.org/abstracts/19199/biologically-inspired-small-infrared-target-detection-using-local-contrast-mechanisms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19199.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4119</span> Contrast Enhancement of Masses in Mammograms Using Multiscale Morphology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amit%20Kamra">Amit Kamra</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20K.%20Jain"> V. K. Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=Pragya"> Pragya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mammography is a widely used technique for breast cancer screening. 
There are various other techniques for breast cancer screening, but mammography is the most reliable and effective. The images obtained through mammography are of low contrast, which makes them difficult for radiologists to interpret. Hence, a high-quality image is mandatory for processing and for extracting any kind of information from it. Many contrast enhancement algorithms have been developed over the years. In the present work, an efficient morphology-based technique is proposed for contrast enhancement of masses in mammographic images. The proposed method is based on multiscale morphology and takes into consideration the scale of the structuring element. It is compared with other state-of-the-art techniques, and the experimental results show that it is better both qualitatively and quantitatively than the other standard contrast enhancement techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=enhancement" title="enhancement">enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale" title=" multi-scale"> multi-scale</a>, <a href="https://publications.waset.org/abstracts/search?q=mathematical%20morphology" title=" mathematical morphology"> mathematical morphology</a> </p> <a href="https://publications.waset.org/abstracts/29677/contrast-enhancement-of-masses-in-mammograms-using-multiscale-morphology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge 
badge-info">4118</span> Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Razan%20Manofely"> Razan Manofely</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajab%20M.%20Ben%20Yousef"> Rajab M. Ben Yousef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> X-ray images are very popular as a first tool for diagnosis. Automating the analysis of such images is important in order to assist physicians. In this practice, teeth segmentation from the radiographic images and feature extraction are essential steps. The main objective of this study was to examine correction preprocessing of X-ray images using local adaptive filters, to evaluate contrast enhancement patterns in different grayscale X-ray images, and to evaluate the use of a new nonlinear approach for contrast enhancement of soft tissues in X-ray images. The data were analyzed using MATLAB to enhance the contrast within the soft tissues and to assess the gray levels and noise variance in both enhanced and unenhanced images. The main enhancement techniques used in this study were contrast enhancement filtering and deblurring with the blind deconvolution algorithm. The prominent constraints are, first, preservation of the image's overall look; second, preservation of the diagnostic content of the image; and third, detection of small low-contrast details in the diagnostic content of the image. 
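A minimal sketch of the kind of local adaptive filtering the study describes (a simplified 1-D stand-in, not the authors' MATLAB code): each sample is pushed away from its neighborhood mean, raising the visibility of small low-contrast details:

```python
def local_contrast(row, k=3, gain=2.0):
    # local adaptive contrast: amplify each sample's deviation
    # from the mean of its k-wide neighborhood
    r = k // 2
    out = []
    for i in range(len(row)):
        win = row[max(0, i - r):i + r + 1]
        mean = sum(win) / len(win)
        out.append(mean + gain * (row[i] - mean))
    return out

# a faint detail at index 2 stands out more after filtering
print(local_contrast([10, 10, 20, 10, 10], k=3, gain=2.0))
```

Flat background regions are left essentially unchanged, while deviations from the local mean are amplified by the gain.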
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=enhancement" title="enhancement">enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=x-rays" title=" x-rays"> x-rays</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20intensity%20values" title=" pixel intensity values"> pixel intensity values</a>, <a href="https://publications.waset.org/abstracts/search?q=MatLab" title=" MatLab"> MatLab</a> </p> <a href="https://publications.waset.org/abstracts/31031/enhancement-of-x-rays-images-intensity-using-pixel-values-adjustments-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31031.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4117</span> Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Z.%20Mortezaie">Z. Mortezaie</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Hassanpour"> H. Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Asadi%20Amiri"> S. Asadi Amiri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost the image contrast and to improve digital images suffering from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. 
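The classic unsharp masking operation just described can be sketched in a few lines (a hedged 1-D illustration with a fixed gain; the paper's adaptive per-pixel gain rule is not reproduced here):

```python
def unsharp(row, gain=1.0):
    # unsharp masking: add the scaled high-frequency residual
    # (sample minus its 3-tap local mean) back to the signal
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - 1), min(n, i + 2)
        high = row[i] - sum(row[lo:hi]) / (hi - lo)
        out.append(row[i] + gain * high)
    return out

# the step edge is over/undershot, which the eye reads as sharpening
print(unsharp([0, 0, 100, 100]))
```

An adaptive variant, as the abstract outlines, would replace the constant `gain` with a value computed per pixel from local gradient variations.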
The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed, considering the gradient variations, for individual pixels of the image. Subjective and objective image quality assessments are used to compare the performance of the proposed method both with the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method has a better performance in comparison to the other existing methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unsharp%20masking" title="unsharp masking">unsharp masking</a>, <a href="https://publications.waset.org/abstracts/search?q=blur%20image" title=" blur image"> blur image</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-region%20gradient" title=" sub-region gradient"> sub-region gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a> </p> <a href="https://publications.waset.org/abstracts/73795/contrast-enhancement-in-digital-images-using-an-adaptive-unsharp-masking-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">214</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4116</span> An Image Enhancement Method Based on Curvelet Transform for CBCT-Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Shahriar%20Farzam">Shahriar Farzam</a>, <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Rastgarpour"> Maryam Rastgarpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image denoising plays an extremely important role in digital image processing. Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on fast discrete curvelet transforms (FDCT) that work through the Unequally Spaced Fast Fourier Transform (USFFT). These transforms return a table of curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies the two-dimensional FDCT, via the unequally spaced fast Fourier transform, to the input image and then thresholds the curvelet coefficients to enhance the CBCT images. Consequently, applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to the existing ones in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME). 
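A full curvelet transform is beyond a short sketch, but the coefficient-thresholding idea at the heart of such methods can be illustrated with a one-level 1-D Haar transform standing in for the FDCT (an assumption made purely for brevity; all names are illustrative):

```python
def haar_fwd(sig):
    # one level of the 1-D Haar transform: pairwise averages + details
    avg = [(a + b) / 2 for a, b in zip(sig[::2], sig[1::2])]
    det = [(a - b) / 2 for a, b in zip(sig[::2], sig[1::2])]
    return avg, det

def haar_inv(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def threshold_denoise(sig, thresh):
    # zero the small detail coefficients (noise), keep large ones (edges)
    avg, det = haar_fwd(sig)
    det = [d if abs(d) > thresh else 0.0 for d in det]
    return haar_inv(avg, det)

print(threshold_denoise([10, 12, 10, 10], thresh=2))  # small ripple smoothed away
```

Curvelet thresholding works the same way, except the coefficients are additionally indexed by scale and orientation, which is what makes the transform well suited to edges.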
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curvelet%20transform" title="curvelet transform">curvelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=CBCT" title=" CBCT"> CBCT</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20denoising" title=" image denoising"> image denoising</a> </p> <a href="https://publications.waset.org/abstracts/69244/an-image-enhancement-method-based-on-curvelet-transform-for-cbct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/69244.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4115</span> Deepnic, A Method to Transform Each Variable into Image for Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nguyen%20J.%20M.">Nguyen J. M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucas%20G."> Lucas G.</a>, <a href="https://publications.waset.org/abstracts/search?q=Brunner%20M."> Brunner M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruan%20S."> Ruan S.</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonioli%20D."> Antonioli D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning based on convolutional neural networks (CNN) is a very powerful technique for classifying information from an image. 
We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct an image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tabular%20data" title="tabular data">tabular data</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=perfect%20trees" title=" perfect trees"> perfect trees</a>, <a href="https://publications.waset.org/abstracts/search?q=NICS" title=" NICS"> NICS</a> </p> <a href="https://publications.waset.org/abstracts/152479/deepnic-a-method-to-transform-each-variable-into-image-for-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152479.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4114</span> Simulation of X-Ray Tissue Contrast and Dose Optimisation in Radiological Physics to Improve Medical Imaging Students’ Skills</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Peter%20J.%20Riley">Peter J. Riley</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical Imaging students must understand the roles of Photo-electric Absorption (PE) and Compton Scatter (CS) interactions in patients to enable optimal X-ray imaging in clinical practice. A simulator has been developed that shows relative interaction probabilities, color bars for patient dose from PE, % penetration to the detector, and obscuring CS as Peak Kilovoltage (kVp) changes. Additionally, an anthropomorphic chest X-ray image shows the relative tissue contrasts and overlying CS-fog at that kVp, which determine the detectability of a lesion in the image. 
A series of interactive exercises with MCQs evaluates the students' understanding; the simulation has improved student perception of the need to acquire "sufficient" rather than maximal contrast to enable patient dose reduction at higher kVp. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=patient%20dose%20optimization" title="patient dose optimization">patient dose optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=radiological%20physics" title=" radiological physics"> radiological physics</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=tissue%20contrast" title=" tissue contrast"> tissue contrast</a> </p> <a href="https://publications.waset.org/abstracts/165659/simulation-of-x-ray-tissue-contrast-and-dose-optimisation-in-radiological-physics-to-improve-medical-imaging-students-skills" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165659.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4113</span> Multi-Spectral Medical Images Enhancement Using a Weber’s law</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muna%20F.%20Al-Sammaraie">Muna F. Al-Sammaraie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this research is to present multi-spectral image enhancement methods for digital images that populate only a small portion of the available range of digital values. 
Also, a quantitative measure of image enhancement is presented. This measure is related to concepts of Weber's Law of the human visual system. For decades, several image enhancement techniques have been proposed. Although most techniques require numerous advanced and critical steps, the resulting perceived images are often unsatisfactory. This study involves changing the original values so that more of the available range is used, which then increases the contrast between features and their backgrounds. It consists of reading the image pixel by pixel (byte-wise) and displaying it, calculating the statistics of the image, automatically enhancing the color of the image based on the calculated statistics, and working with the RGB color bands. Finally, the enhanced image is displayed along with its histogram. A number of experimental results illustrate the performance of these algorithms. In particular, the quantitative measure has helped to select the optimal processing parameters and transform. 
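A hedged sketch of the core idea (spread the occupied value range over the full 0-255 range, then score a feature against its background with a Weber-style contrast ratio; the paper's exact measure is not reproduced here):

```python
def stretch(channel, lo=0, hi=255):
    # linear contrast stretch: map [min, max] of the data onto [lo, hi]
    cmin, cmax = min(channel), max(channel)
    if cmax == cmin:
        return list(channel)
    scale = (hi - lo) / (cmax - cmin)
    return [round(lo + (v - cmin) * scale) for v in channel]

def weber_contrast(target, background):
    # Weber's-law-style contrast of a feature against its background
    return (target - background) / background

vals = [50, 100, 150]   # occupies only a narrow band of the 0-255 range
print(stretch(vals))    # full-range version of the narrow-band input
```

Applied per RGB band, this is the "use more of the available range" step the abstract describes.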
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-spectral" title=" multi-spectral"> multi-spectral</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB" title=" RGB"> RGB</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a> </p> <a href="https://publications.waset.org/abstracts/8574/multi-spectral-medical-images-enhancement-using-a-webers-law" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8574.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">328</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4112</span> Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abder-Rahman%20Ali">Abder-Rahman Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Ad%C3%A9la%C3%AFde%20Albouy-Kissi"> Adélaïde Albouy-Kissi</a>, <a href="https://publications.waset.org/abstracts/search?q=Manuel%20Grand-Brochier"> Manuel Grand-Brochier</a>, <a href="https://publications.waset.org/abstracts/search?q=Viviane%20Ladan-Marcus"> Viviane Ladan-Marcus</a>, <a href="https://publications.waset.org/abstracts/search?q=Christine%20Hoeffl"> Christine Hoeffl</a>, <a href="https://publications.waset.org/abstracts/search?q=Claude%20Marcus"> Claude Marcus</a>, <a href="https://publications.waset.org/abstracts/search?q=Antoine%20Vacavant"> Antoine Vacavant</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Jean-Yves%20Boire"> Jean-Yves Boire</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle uncertainty due to the image modality (presence of speckle noise, low contrast, etc.), and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces the segmentation errors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=defuzzification" title="defuzzification">defuzzification</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20clustering" title=" fuzzy clustering"> fuzzy clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=type-II%20fuzzy%20sets" title=" type-II fuzzy sets"> type-II fuzzy sets</a> </p> <a href="https://publications.waset.org/abstracts/32293/liver-lesion-extraction-with-fuzzy-thresholding-in-contrast-enhanced-ultrasound-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4111</span> Image Classification with Localization Using Convolutional Neural 
Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhuyain%20Mobarok%20Hossain">Bhuyain Mobarok Hossain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is an important research topic in computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image with a sliding window. In contrast to architectures that regress a bounding box around the area of interest, we treat the problem as a classification problem in which each pixel of the image is a separate section. Image classification is the task of predicting a single category for an image or a group of data points. For example, an image can be classified as a day or night shot; likewise, images of cars and motorbikes can be automatically sorted into their respective collections. Deep learning for image classification generally relies on convolutional layers; a network built from such layers is referred to as a convolutional neural network (CNN). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/139288/image-classification-with-localization-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4110</span> A Study on Real-Time Fluorescence-Photoacoustic Imaging System for Mouse Thrombosis Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sang%20Hun%20Park">Sang Hun Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Moung%20Young%20Lee"> Moung Young Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Su%20Min%20Yu"> Su Min Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyun%20Sang%20Jo"> Hyun Sang Jo</a>, <a href="https://publications.waset.org/abstracts/search?q=Ji%20Hyeon%20Kim"> Ji Hyeon Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chul%20Gyu%20Song"> Chul Gyu Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A near-infrared light source used as a light source in the 
fluorescence imaging system is suitable for real-time use during surgery since it does not interfere with surgical vision. However, fluorescence images do not have depth information. In this paper, we configured a molecular imaging system for monitoring thrombi using both fluorescence and photoacoustic imaging. Fluorescence imaging was performed in a phantom experiment to find the exact location, and photoacoustic imaging was used to detect the depth. The phantom experiments confirmed that the fluorescence image was sharper at a contrast agent concentration of 25 μg/ml. The phantom experiment demonstrated the feasibility of combined fluorescence and photoacoustic imaging using an indocyanine green contrast agent. For early diagnosis of cardiovascular diseases, more active research on the fusion of different molecular imaging devices is required. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fluorescence" title="fluorescence">fluorescence</a>, <a href="https://publications.waset.org/abstracts/search?q=photoacoustic" title=" photoacoustic"> photoacoustic</a>, <a href="https://publications.waset.org/abstracts/search?q=indocyanine%20green" title=" indocyanine green"> indocyanine green</a>, <a href="https://publications.waset.org/abstracts/search?q=carotid%20artery" title=" carotid artery"> carotid artery</a> </p> <a href="https://publications.waset.org/abstracts/93152/a-study-on-real-time-fluorescence-photoacoustic-imaging-system-for-mouse-thrombosis-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93152.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 
mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4109</span> A Comparison between Underwater Image Enhancement Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ouafa%20Benaida">Ouafa Benaida</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhamid%20Loukil"> Abdelhamid Loukil</a>, <a href="https://publications.waset.org/abstracts/search?q=Adda%20Ali%20Pacha"> Adda Ali Pacha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, scientists' growing interest in the processing and analysis of underwater images and videos has been strengthened by the emergence of new underwater exploration techniques, such as autonomous underwater vehicles and underwater image sensors that facilitate the exploration of underwater mineral resources as well as the search for new species of aquatic life by biologists. Indeed, underwater images and videos have several defects and must be preprocessed before their analysis. Underwater landscapes are usually darkened due to the interaction of light with the marine environment: light is absorbed as it travels through deep water, depending on its wavelength. Additionally, light does not follow a linear path but is scattered by its interaction with microparticles in the water, resulting in low contrast, low brightness, color distortion, and restricted visibility. Improving the underwater image is therefore more than necessary in order to facilitate its analysis. The research presented in this paper aims to implement and evaluate a set of classical techniques used in the field of improving the quality of underwater images in several color representation spaces. 
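One of the classical techniques typically included in such comparisons, global histogram equalization, can be sketched as follows (a hedged pure-Python version for a single 8-bit channel; CLAHE and retinex variants are more involved):

```python
def equalize(pixels, levels=256):
    # global histogram equalization of one 8-bit channel
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:              # flat image: nothing to spread
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# a low-contrast band around 100-102 is spread over the full 0-255 range
print(equalize([100, 100, 101, 101, 102, 102]))
```

For color underwater images, this would be applied per channel or, better, only to a luminance channel in a space such as HSV or Lab.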
These methods have the particularity of being simple to implement and do not require prior knowledge of the physical model at the origin of the degradation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=underwater%20image%20enhancement" title="underwater image enhancement">underwater image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20normalization" title=" histogram normalization"> histogram normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20equalization" title=" histogram equalization"> histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20limited%20adaptive%20histogram%20equalization" title=" contrast limited adaptive histogram equalization"> contrast limited adaptive histogram equalization</a>, <a href="https://publications.waset.org/abstracts/search?q=single-scale%20retinex" title=" single-scale retinex"> single-scale retinex</a> </p> <a href="https://publications.waset.org/abstracts/163524/a-comparison-between-underwater-image-enhancement-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163524.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4108</span> Image Enhancement of Histological Slides by Using Nonlinear Transfer Function</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Suman">D. Suman</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Nikitha"> B. Nikitha</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Sarvani"> J. 
Sarvani</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Archana"> V. Archana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Histological slides have provided clinical diagnostic information about subjects since ancient times. Even with the advent of high-resolution imaging cameras, the images tend to have some background noise, which makes the analysis complex. In this study, histological slides are examined using a nonlinear transfer function based image enhancement method. The method processes the raw, color images acquired from the biological microscope, which, in general, are associated with background noise. The images usually appear blurred and do not convey the intended information. In this regard, an enhancement method is proposed and implemented on 50 histological slides of human tissue using the nonlinear transfer function method. The histological image is converted into an HSV color image. The luminance (V component) of the image is enhanced, because changing the H and S components could alter the color balance between the HSV components. The HSV image is divided into smaller blocks for carrying out dynamic range compression using a linear transformation function. Each pixel in a block is enhanced based on the contrast of the center pixel and its neighborhood. After processing the V component, the HSV image is transformed back into a color image. The study has shown improvement in the characteristics of the image, so that the significant details of the histological images were enhanced. 
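The V-channel-only strategy described above can be sketched per pixel with the standard-library `colorsys` module (a hedged illustration; the paper's block-wise transfer function is replaced here by a plain gain):

```python
import colorsys

def boost_v(rgb, gain=1.3):
    # enhance only the luminance (V) channel so the H/S color
    # balance of the pixel is preserved
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    v = min(1.0, v * gain)
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

print(boost_v((100, 50, 25)))  # brighter pixel, same hue
```

Boosting V while leaving H and S untouched is exactly why the conversion to HSV is done first: a naive per-channel RGB gain would shift the color balance.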
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HSV%20space" title="HSV space">HSV space</a>, <a href="https://publications.waset.org/abstracts/search?q=histology" title=" histology"> histology</a>, <a href="https://publications.waset.org/abstracts/search?q=enhancement" title=" enhancement"> enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a> </p> <a href="https://publications.waset.org/abstracts/12167/image-enhancement-of-histological-slides-by-using-nonlinear-transfer-function" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4107</span> Enhancing the Bionic Eye: A Real-time Image Optimization Framework to Encode Color and Spatial Information Into Retinal Prostheses</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=William%20Huang">William Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal prostheses are currently limited to low resolution grayscale images that lack color and spatial information. This study develops a novel real-time image optimization framework and tools to encode maximum information to the prostheses which are constrained by the number of electrodes. One key idea is to localize main objects in images while reducing unnecessary background noise through region-contrast saliency maps. A novel color depth mapping technique was developed through MiniBatchKmeans clustering and color space selection. 
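The clustering step just mentioned can be sketched with a tiny plain-Python k-means standing in for scikit-learn's MiniBatchKMeans (hedged: the deterministic two-color initialization and all names here are illustrative, not the authors' pipeline):

```python
def kmeans_palette(pixels, k=2, iters=10):
    # quantize RGB pixels to k palette colors with plain k-means
    centers = [min(pixels), max(pixels)][:k]   # deterministic init (shown for k=2)
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in pixels:
            # assign each pixel to its nearest center (squared distance)
            j = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # move each center to the mean of its assigned pixels
        centers = [tuple(sum(ch) / len(g) for ch in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

pix = [(0, 0, 0), (10, 10, 10), (5, 5, 5),
       (250, 250, 250), (240, 240, 240), (245, 245, 245)]
print(sorted(kmeans_palette(pix)))  # dark and light palette colors
```

Mapping every pixel to its nearest palette color then yields the reduced-color-depth image; MiniBatchKMeans does the same on random subsets of pixels for speed.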
The resulting image was downsampled using bicubic interpolation to reduce image size while preserving color quality. In comparison to current schemes, the proposed framework demonstrated better visual quality in tested images. The use of the region-contrast saliency map showed improvements in efficacy up to 30%. Finally, the computational speed of this algorithm is less than 380 ms on tested cases, making real-time retinal prostheses feasible. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20implants" title="retinal implants">retinal implants</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20processing%20unit" title=" virtual processing unit"> virtual processing unit</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=saliency%20maps" title=" saliency maps"> saliency maps</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20quantization" title=" color quantization"> color quantization</a> </p> <a href="https://publications.waset.org/abstracts/147972/enhancing-the-bionic-eye-a-real-time-image-optimization-framework-to-encode-color-and-spatial-information-into-retinal-prostheses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147972.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4106</span> Contrast Media Effects and Radiation Dose Assessment in Contrast Enhanced Computed Tomography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Buhari%20Samaila">Buhari Samaila</a>, <a href="https://publications.waset.org/abstracts/search?q=Sabiu%20Abdullahi"> Sabiu Abdullahi</a>, <a href="https://publications.waset.org/abstracts/search?q=Buhari%20Maidamma"> Buhari Maidamma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Contrast-enhanced computed tomography (CE-CT) is a technique that uses contrast media to improve image quality and diagnostic accuracy. It is a widely used imaging modality in medical diagnostics, offering high-resolution images for accurate diagnosis. However, concerns regarding the potential adverse effects of contrast media and radiation dose exposure have prompted ongoing investigation and assessment. It is important to assess the effects of contrast media and radiation dose in CE-CT procedures. Objective: This study aims to assess the effects of contrast media and radiation dose in contrast-enhanced computed tomography (CECT) procedures. Methods: A comprehensive review of the literature was conducted to identify studies related to contrast media effects and radiation dose assessment in CECT. Relevant data, including location, type of research, objective, method, findings, conclusion, authors, and year of publications, were extracted, analyzed, and reported. Results: The findings revealed that several studies have investigated the impacts of contrast media and radiation doses in CECT procedures, with iodinated contrast agents being the most commonly employed. Adverse effects associated with contrast media administration were reported, including allergic reactions, nephrotoxicity, and thyroid dysfunction, albeit at relatively low incidence rates. Additionally, radiation dose levels varied depending on the imaging protocol and anatomical region scanned. Efforts to minimize radiation exposure through optimization techniques were evident across studies. 
Conclusion: Contrast-enhanced computed tomography (CECT) remains an invaluable tool in medical imaging; however, careful consideration of contrast media effects and radiation dose exposure is imperative. Healthcare practitioners should weigh the diagnostic benefits against potential risks, employing strategies to mitigate adverse effects and optimize radiation dose levels for patient safety and effective diagnosis. Further research is warranted to enhance the understanding and management of contrast media effects and radiation dose optimization in CECT procedures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CT" title="CT">CT</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20media" title=" contrast media"> contrast media</a>, <a href="https://publications.waset.org/abstracts/search?q=radiation%20dose" title=" radiation dose"> radiation dose</a>, <a href="https://publications.waset.org/abstracts/search?q=effect%20of%20radiation" title=" effect of radiation"> effect of radiation</a> </p> <a href="https://publications.waset.org/abstracts/192678/contrast-media-effects-and-radiation-dose-assessment-in-contrast-enhanced-computed-tomography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192678.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">21</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4105</span> Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Dhana%20Lakshmi">M. 
Dhana Lakshmi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sakthivel%20Murugan"> S. Sakthivel Murugan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As light passes from source to observer in the water medium, it is scattered by suspended particulate matter. This scattering plagues the captured images with non-uniform illumination, blurred details, halo artefacts, weak edges, etc. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the considered datasets were collected at two locations. For those images, the maximum and minimum pixel intensity values are computed and normalized; then the AUM filter is applied to strengthen the blurred edges. Finally, the enhanced image is obtained with good illumination and contrast. Thus, the proposed technique removes the effect of scattering, called de-hazing, and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses are performed using the standard no-reference metrics underwater image sharpness measure (UISM) and underwater image quality measure (UIQM), which assess color, sharpness, and contrast for the images from both locations. It is observed that the proposed technique shows overwhelming performance, in an adaptive manner, compared to other deep-learning-based enhancement networks and traditional techniques. 
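The normalize-then-sharpen pipeline described in this abstract can be sketched in a few lines. This is a minimal illustration only: the paper's Amended Unsharp Mask filter is not specified here, so a generic unsharp mask built on a 3x3 box blur stands in for it.

```python
import numpy as np

def minmax_normalize(img):
    """Stretch pixel intensities to span the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def unsharp_mask(img, amount=1.0):
    """Generic unsharp mask: add back the difference between the
    image and a 3x3 box blur of itself to strengthen weak edges."""
    padded = np.pad(img, 1, mode="edge")
    blur = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# Low-contrast test patch: a faint bright square on a grey background
hazy = np.full((8, 8), 0.5)
hazy[2:6, 2:6] = 0.6
enhanced = unsharp_mask(minmax_normalize(hazy))
```

After normalization the faint square spans the full intensity range, and the unsharp mask then emphasizes its edges.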
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=underwater%20drone%20imagery" title="underwater drone imagery">underwater drone imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20normalization" title=" pixel normalization"> pixel normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=thresholding" title=" thresholding"> thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=masking" title=" masking"> masking</a>, <a href="https://publications.waset.org/abstracts/search?q=unsharp%20mask%20filter" title=" unsharp mask filter"> unsharp mask filter</a> </p> <a href="https://publications.waset.org/abstracts/142413/enhancement-of-underwater-haze-image-with-edge-reveal-using-pixel-normalization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142413.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4104</span> Improvement of Brain Tumors Detection Using Markers and Boundaries Transform </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousif%20Mohamed%20Y.%20Abdallah">Yousif Mohamed Y. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mommen%20A.%20Alkhir"> Mommen A. Alkhir</a>, <a href="https://publications.waset.org/abstracts/search?q=Amel%20S.%20Algaddal"> Amel S. Algaddal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This was an experimental study of brain segmentation in MRI images using edge detection and morphology filters. 
Each brain MRI film was scanned with a digitizer and then processed in MATLAB, where the segmentation was studied. The scanned image was saved in TIFF format to preserve image quality. Brain tissue can be detected easily in an MRI image if it has sufficient contrast with the background. We used edge detection and basic morphology tools to detect the brain. The segmentation steps using detection and morphology filters were: image reading, detection of the entire brain, dilation of the image, filling of interior gaps, removal of objects connected to the borders, and smoothing of the object (the brain). The results showed that an alternative way to display the segmented object is to place an outline around the segmented brain. These filtering approaches can help remove unwanted background information and increase the diagnostic information of brain MRI. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=improvement" title="improvement">improvement</a>, <a href="https://publications.waset.org/abstracts/search?q=brain" title=" brain"> brain</a>, <a href="https://publications.waset.org/abstracts/search?q=matlab" title=" matlab"> matlab</a>, <a href="https://publications.waset.org/abstracts/search?q=markers" title=" markers"> markers</a>, <a href="https://publications.waset.org/abstracts/search?q=boundaries" title=" boundaries"> boundaries</a> </p> <a href="https://publications.waset.org/abstracts/31036/improvement-of-brain-tumors-detection-using-markers-and-boundaries-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31036.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">4103</span> Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20Jurado">David Jurado</a>, <a href="https://publications.waset.org/abstracts/search?q=Carlos%20%C3%81vila"> Carlos Ávila</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications that are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design which extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images due to its low sensitivity to noisy images and low-contrast mammograms. 
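The circle Hough transform named in this abstract can be illustrated in a few lines. This is a sketch on a synthetic edge map only; the paper's edge extraction and mask-design stages are not reproduced, and the fixed radius and grid size are arbitrary choices for the demonstration.

```python
import numpy as np

def circle_hough(edge_points, shape, radius):
    """Circle Hough transform for a single fixed radius: every edge
    pixel votes for all candidate centres lying `radius` away from
    it; accumulator peaks mark likely circle centres."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # accumulate votes
    return acc

# Synthetic edge map: 36 points on a circle of radius 5 centred at (15, 15)
angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
edges = [(int(round(15 + 5 * np.sin(a))), int(round(15 + 5 * np.cos(a))))
         for a in angles]
acc = circle_hough(edges, (30, 30), radius=5)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator peak recovers the circle centre, which is how roughly circular microcalcifications can be localized from an edge map.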
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title="breast cancer">breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=X-ray%20imaging" title=" X-ray imaging"> X-ray imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=hough%20transform" title=" hough transform"> hough transform</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a> </p> <a href="https://publications.waset.org/abstracts/162732/automatic-early-breast-cancer-segmentation-enhancement-by-image-analysis-and-hough-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162732.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4102</span> Review of Ultrasound Image Processing Techniques for Speckle Noise Reduction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kwazikwenkosi%20Sikhakhane">Kwazikwenkosi Sikhakhane</a>, <a href="https://publications.waset.org/abstracts/search?q=Suvendi%20Rimer"> Suvendi Rimer</a>, <a href="https://publications.waset.org/abstracts/search?q=Mpho%20Gololo"> Mpho Gololo</a>, <a href="https://publications.waset.org/abstracts/search?q=Khmaies%20Oahada"> Khmaies Oahada</a>, <a href="https://publications.waset.org/abstracts/search?q=Adnan%20Abu-Mahfouz"> Adnan Abu-Mahfouz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Medical ultrasound imaging is a crucial diagnostic 
technique due to its affordability and non-invasiveness compared to other imaging methods. However, the presence of speckle noise, which is a form of multiplicative noise, poses a significant obstacle to obtaining clear and accurate images in ultrasound imaging. Speckle noise reduces image quality by decreasing contrast, resolution, and signal-to-noise ratio (SNR). This makes it difficult for medical professionals to interpret ultrasound images accurately. To address this issue, various techniques have been developed to reduce speckle noise in ultrasound images, which improves image quality. This paper aims to review some of these techniques, highlighting the advantages and disadvantages of each algorithm and identifying the scenarios in which they work most effectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=noise" title=" noise"> noise</a>, <a href="https://publications.waset.org/abstracts/search?q=speckle" title=" speckle"> speckle</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound" title=" ultrasound"> ultrasound</a> </p> <a href="https://publications.waset.org/abstracts/166509/review-of-ultrasound-image-processing-techniques-for-speckle-noise-reduction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166509.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">110</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4101</span> Design and Implementation of Image Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images while avoiding artifacts or blurring, and deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is the process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled-down images in the image domain, its effects on Fourier-based techniques remain unknown. Super-resolution substantially improved the spatial resolution of patient LGE images by sharpening the edges of the heart and the scar. This paper investigates the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. First, a dictionary is obtained in a training phase from pairs of low-resolution and high-resolution images. In the test phase, patches are extracted, and the difference between the high-resolution image and an image interpolated from the low-resolution input is computed. A simulated image is then obtained by convolving the dictionary with the extracted patches. Finally, the super-resolution image is obtained by combining the fused image with the difference between the high-resolution and interpolated images. Super-resolution reduces image errors and improves image quality. 
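Of the steps listed in this abstract, only the interpolation baseline is fully specified; the dictionary-based fusion is the authors' own. A minimal bilinear-interpolation sketch of that baseline, the "interpolation image" that super-resolution methods start from and then sharpen, might look like:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Bilinear interpolation upscale of a 2-D grayscale image."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)   # sample rows in source coords
    xs = np.linspace(0, w - 1, w * factor)   # sample cols in source coords
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                  # vertical blend weights
    wx = (xs - x0)[None, :]                  # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

low = np.array([[0.0, 1.0],
                [1.0, 0.0]])
high = bilinear_upscale(low, 2)
```

A super-resolution method would then add high-frequency detail (here, the dictionary-and-patch machinery) on top of this smooth estimate.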
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4100</span> Dark and Bright Envelopes for Dehazing Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zihan%20Yu">Zihan Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Inoue"> Kohei Inoue</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiichi%20Urahama"> Kiichi Urahama</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a method for de-hazing images. A dark envelope image is derived with the bilateral minimum filter and a bright envelope is derived with the bilateral maximum filter. The ambient light and transmission of the scene are estimated from these two envelope images. An image without haze is reconstructed from the estimated ambient light and transmission. 
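The envelope-based estimation in this abstract can be sketched as follows. This is a simplified illustration: plain local min/max filters stand in for the paper's bilateral versions (no range weighting), and the standard haze model with an assumed `omega` retention parameter is used, since the paper's exact estimation formulas are not given here.

```python
import numpy as np

def local_filter(img, size, fn):
    """Plain local minimum/maximum filter, a simplification of the
    bilateral min/max filters described in the abstract."""
    r = size // 2
    padded = np.pad(img, r, mode="edge")
    stack = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(size) for j in range(size)])
    return fn(stack, axis=0)

def dehaze(img, omega=0.9):
    """Invert the haze model I = J*t + A*(1 - t): ambient light A is
    estimated from the bright envelope, transmission t from the dark
    envelope, and the haze-free image J is recovered."""
    dark = local_filter(img, 3, np.min)      # dark envelope
    bright = local_filter(img, 3, np.max)    # bright envelope
    A = float(bright.max())                  # ambient light estimate
    t = np.maximum(1.0 - omega * dark / A, 0.1)
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = np.array([[0.6, 0.6, 0.7],
                 [0.6, 0.9, 0.7],
                 [0.7, 0.7, 0.8]])
clear = dehaze(hazy)
```

On this toy patch the recovered image has a wider intensity spread than the hazy input, the expected effect of removing the ambient-light veil.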
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dehazing" title="image dehazing">image dehazing</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral%20minimum%20filter" title=" bilateral minimum filter"> bilateral minimum filter</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral%20maximum%20filter" title=" bilateral maximum filter"> bilateral maximum filter</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20contrast" title=" local contrast"> local contrast</a> </p> <a href="https://publications.waset.org/abstracts/8981/dark-and-bright-envelopes-for-dehazing-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8981.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">263</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4099</span> CT Doses Pre and Post SAFIRE: Sinogram Affirmed Iterative Reconstruction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20Noroozian">N. Noroozian</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Halim"> M. Halim</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Holloway"> B. Holloway</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Computed Tomography (CT) has become the largest source of radiation exposure in modern countries however, recent technological advances have created new methods to reduce dose without negatively affecting image quality. SAFIRE has emerged as a new software package which utilizes full raw data projections for iterative reconstruction, thereby allowing for lower CT dose to be used. 
This audit was performed to compare CT doses in certain examinations before and after the introduction of SAFIRE at our radiology department. It showed that CT doses were significantly lower with SAFIRE than with pre-SAFIRE software at the SAFIRE 3 setting for the following studies: CSKUH unenhanced brain scans (-20.9%), CABPEC abdomen and pelvis with contrast (-21.5%), CCHAPC chest with contrast (-24.4%), CCHAPC abdomen and pelvis with contrast (-16.1%), and CCHAPC total chest, abdomen and pelvis (-18.7%). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dose%20reduction" title="dose reduction">dose reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=iterative%20reconstruction" title=" iterative reconstruction"> iterative reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20dose%20CT%20techniques" title=" low dose CT techniques"> low dose CT techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=SAFIRE" title=" SAFIRE"> SAFIRE</a> </p> <a href="https://publications.waset.org/abstracts/18344/ct-doses-pre-and-post-safire-sinogram-affirmed-iterative-reconstruction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18344.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">285</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=3">3</a></li> <li class="page-item"><a 
class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=137">137</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=138">138</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=low%20contrast%20image&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a 
href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
