Search results for: Image Quality
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Image Quality"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 12081</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Image Quality</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12081</span> A New Categorization of Image Quality Metrics Based on a Model of Human Quality Perception</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Grazia%20Albanesi">Maria Grazia Albanesi</a>, <a href="https://publications.waset.org/abstracts/search?q=Riccardo%20Amadeo"> Riccardo Amadeo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study presents a new model of the human image quality assessment process: the aim is to highlight the foundations of the image quality metrics proposed in literature, by identifying the cognitive/physiological or mathematical principles of their development and the relation with the actual human quality assessment process. The model allows to create a novel categorization of objective and subjective image quality metrics. Our work includes an overview of the most used or effective objective metrics in literature, and, for each of them, we underline its main characteristics, with reference to the rationale of the proposed model and categorization. From the results of this operation, we underline a problem that affects all the presented metrics: the fact that many aspects of human biases are not taken in account at all. We then propose a possible methodology to address this issue. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye-tracking" title="eye-tracking">eye-tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20assessment%20metric" title=" image quality assessment metric"> image quality assessment metric</a>, <a href="https://publications.waset.org/abstracts/search?q=MOS" title=" MOS"> MOS</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20of%20user%20experience" title=" quality of user experience"> quality of user experience</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a href="https://publications.waset.org/abstracts/8906/a-new-categorization-of-image-quality-metrics-based-on-a-model-of-human-quality-perception" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8906.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12080</span> Design and Implementation of Image Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images, avoiding artifacts or blurring, and deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is a process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled down images in the image domain, its effects on the Fourier-based technique remains unknown. Super-resolution substantially improved the spatial resolution of the patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single image super-resolution on Fourier-based and image based methods of scale-up. In this paper, first, generate a training phase of the low-resolution image and high-resolution image to obtain dictionary. In the test phase, first, generate a patch and then difference of high-resolution image and interpolation image from the low-resolution image. Next simulation of the image is obtained by applying convolution method to the dictionary creation image and patch extracted the image. Finally, super-resolution image is obtained by combining the fused image and difference of high-resolution and interpolated image. Super-resolution reduces image errors and improves the image quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12079</span> The Mediation Effect of Customer Satisfaction in the Relationship between Service Quality, Corporate Image to Customer Loyalty</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rizwan%20Ali">Rizwan Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Hammad%20Zafar"> Hammad Zafar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this research is to investigate the mediation effect of customer satisfaction in the relationship between service quality, corporate image to customer loyalty, in Pakistan banking sector. The population of this research is banking customers and sample size of 210 respondents. This research uses the SPSS, Correlation, ANOVA and regression analysis techniques along with AMOS methods. The service quality and corporate image applied by the banks are not all variables can directly affect customer loyalty, but must first going through satisfaction. Which means that banks must first need to understand what the customer basic needs through variable service quality and corporate image so that the customers feel loyal when the level of satisfaction is resolved. The service quality provided by the banking industry needs to be improved in order to improve customer satisfaction and loyalty of banking services, especially for banks in Pakistan. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=customer%20loyalty" title="customer loyalty">customer loyalty</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20quality" title=" service quality"> service quality</a>, <a href="https://publications.waset.org/abstracts/search?q=corporate%20image" title=" corporate image"> corporate image</a>, <a href="https://publications.waset.org/abstracts/search?q=customer%20satisfaction" title=" customer satisfaction"> customer satisfaction</a> </p> <a href="https://publications.waset.org/abstracts/154550/the-mediation-effect-of-customer-satisfaction-in-the-relationship-between-service-quality-corporate-image-to-customer-loyalty" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154550.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12078</span> 3D Guided Image Filtering to Improve Quality of Short-Time Binned Dynamic PET Images Using MRI Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tabassum%20Husain">Tabassum Husain</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Peng%20Li"> Shen Peng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhaolin%20Chen"> Zhaolin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper evaluates the usability of 3D Guided Image Filtering to enhance the quality of short-time binned dynamic PET images by using MRI images. Guided image filtering is an edge-preserving filter proposed to enhance 2D images. The 3D filter is applied on 1 and 5-minute binned images. The results are compared with 15-minute binned images and the Gaussian filtering. The guided image filter enhances the quality of dynamic PET images while also preserving important information of the voxels. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20PET%20images" title="dynamic PET images">dynamic PET images</a>, <a href="https://publications.waset.org/abstracts/search?q=guided%20image%20filter" title=" guided image filter"> guided image filter</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20preservation%20filtering" title=" information preservation filtering"> information preservation filtering</a> </p> <a href="https://publications.waset.org/abstracts/152864/3d-guided-image-filtering-to-improve-quality-of-short-time-binned-dynamic-pet-images-using-mri-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">132</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12077</span> The Effect of Compensating Filter on Image Quality in Lateral Projection of Thoracolumbar Radiography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Noor%20Arda%20Adrina%20Daud">Noor Arda Adrina Daud</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Hanafi%20Ali"> Mohd Hanafi Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The compensating filter is placed between the patient and X-ray tube to compensate various density and thickness of human body. The main purpose of this project is to study the effect of compensating filter on image quality in lateral projection of thoracolumbar radiography. The study was performed by an X-ray unit where different thicknesses of aluminum were used as compensating filter. Specifically the relationship between thickness of aluminum, density and noise were evaluated. Results show different thickness of aluminum compensating filter improved the image quality of lateral projection thoracolumbar radiography. The compensating filter of 8.2 mm was considered as the optimal filter to compensate the thoracolumbar junction (T12-L1), 1 mm to compensate lumbar region and 5.9 mm to compensate thorax region. The aluminum wedge compensating filter was designed resulting in an acceptable image quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=compensating%20filter" title="compensating filter">compensating filter</a>, <a href="https://publications.waset.org/abstracts/search?q=aluminum" title=" aluminum"> aluminum</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality" title=" image quality"> image quality</a>, <a href="https://publications.waset.org/abstracts/search?q=lateral" title=" lateral"> lateral</a>, <a href="https://publications.waset.org/abstracts/search?q=thoracolumbar" title=" thoracolumbar "> thoracolumbar </a> </p> <a href="https://publications.waset.org/abstracts/6135/the-effect-of-compensating-filter-on-image-quality-in-lateral-projection-of-thoracolumbar-radiography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6135.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">514</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12076</span> Digital Image Steganography with Multilayer Security</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amar%20Partap%20Singh%20Pharwaha">Amar Partap Singh Pharwaha</a>, <a href="https://publications.waset.org/abstracts/search?q=Balkrishan%20Jindal"> Balkrishan Jindal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new method is developed for hiding image in a digital image with multilayer security. In the proposed method, the secret image is encrypted in the first instance using a flexible matrix based symmetric key to add first layer of security. Then another layer of security is added to the secret data by encrypting the ciphered data using Pythagorean Theorem method. The ciphered data bits (4 bits) produced after double encryption are then embedded within digital image in the spatial domain using Least Significant Bits (LSBs) substitution. To improve the image quality of the stego-image, an improved form of pixel adjustment process is proposed. To evaluate the effectiveness of the proposed method, image quality metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), entropy, correlation, mean value and Universal Image Quality Index (UIQI) are measured. It has been found experimentally that the proposed method provides higher security as well as robustness. In fact, the results of this study are quite promising. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pythagorean%20theorem" title="Pythagorean theorem">Pythagorean theorem</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20adjustment" title=" pixel adjustment"> pixel adjustment</a>, <a href="https://publications.waset.org/abstracts/search?q=ciphered%20data" title=" ciphered data"> ciphered data</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20hiding" title=" image hiding"> image hiding</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20significant%20bit" title=" least significant bit"> least significant bit</a>, <a href="https://publications.waset.org/abstracts/search?q=flexible%20matrix" title=" flexible matrix"> flexible matrix</a> </p> <a href="https://publications.waset.org/abstracts/31493/digital-image-steganography-with-multilayer-security" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">337</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12075</span> Quality Assurance in Cardiac Disorder Detection Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anam%20Naveed">Anam Naveed</a>, <a href="https://publications.waset.org/abstracts/search?q=Asma%20Andleeb"> Asma Andleeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehreen%20Sirshar"> Mehreen Sirshar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the article, Image processing techniques have been applied on cardiac images for enhancing the image quality. Two types of methodologies considers for survey, invasive techniques and non-invasive techniques. Different image processes for improvement of cardiac image quality and reduce the amount of radiation exposure for invasive techniques are explored. Different image processing algorithms for enhancing the noninvasive cardiac image qualities are described. Beside these two methodologies, third methodology has applied on live streaming of heart rate on ECG window for extracting necessary information, removing noise and enhancing quality. Sensitivity analyses have been carried out to investigate the impacts of cardiac images for diagnosis of cardiac arteries disease and how the enhancement on images will help the cardiologist to diagnoses disease. The paper evaluates strengths and weaknesses of different techniques applied for improved the image quality and draw a conclusion. Some specific limitations must be considered for whole survey, like the patient heart beat must be 70-75 beats/minute while doing the angiography, similarly patient weight and exposure radiation amount has some limitation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cardiac%20images" title="cardiac images">cardiac images</a>, <a href="https://publications.waset.org/abstracts/search?q=CT%20angiography" title=" CT angiography"> CT angiography</a>, <a href="https://publications.waset.org/abstracts/search?q=critical%20analysis" title=" critical analysis"> critical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=exposure%20radiation" title=" exposure radiation"> exposure radiation</a>, <a href="https://publications.waset.org/abstracts/search?q=invasive%20techniques" title=" invasive techniques"> invasive techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=invasive%20techniques" title=" invasive techniques"> invasive techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=non-invasive%20techniques" title=" non-invasive techniques"> non-invasive techniques</a> </p> <a href="https://publications.waset.org/abstracts/26171/quality-assurance-in-cardiac-disorder-detection-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12074</span> The Mediating Role of Bank Image in Customer Satisfaction Building</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Emari">H. Emari</a>, <a href="https://publications.waset.org/abstracts/search?q=Z.%20Emari"> Z. Emari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main objective of this research was to determine the dimensions of service quality in the banking industry of Iran. For this purpose, the study empirically examined the European perspective suggesting that service quality consists of three dimensions, technical, functional and image. This research is an applied research and its strategy is casual strategy. A standard questionnaire was used for collecting the data. 287 customers of Melli Bank of Northwest were selected through cluster sampling and were studied. The results from a banking service sample revealed that the overall service quality is influenced more by a consumer’s perception of technical quality than functional quality. Accordingly, the Gronroos model is a more appropriate representation of service quality than the American perspective with its limited concentration on the dimension of functional quality in the banking industry of Iran. So, knowing the key dimensions of the quality of services in this industry and planning for their improvement can increase the satisfaction of customers and productivity of this industry. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=technical%20quality" title="technical quality">technical quality</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20quality" title=" functional quality"> functional quality</a>, <a href="https://publications.waset.org/abstracts/search?q=banking" title=" banking"> banking</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=mediating%20role" title=" mediating role"> mediating role</a> </p> <a href="https://publications.waset.org/abstracts/30318/the-mediating-role-of-bank-image-in-customer-satisfaction-building" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30318.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">369</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12073</span> Optimizing Exposure Parameters in Digital Mammography: A Study in Morocco </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Talbi%20Mohammed">Talbi Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Oustous%20Aziz"> Oustous Aziz</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Messaoud%20Mounir"> Ben Messaoud Mounir</a>, <a href="https://publications.waset.org/abstracts/search?q=Sebihi%20Rajaa"> Sebihi Rajaa</a>, <a href="https://publications.waset.org/abstracts/search?q=Khalis%20Mohammed"> Khalis Mohammed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Breast cancer is the leading cause of death for women around the world. Screening mammography is the reference examination, due to its sensitivity for detecting small lesions and micro-calcifications. Therefore, it is essential to ensure quality mammographic examinations with the most optimal dose. These conditions depend on the choice of exposure parameters. Clinically, practices must be evaluated in order to determine the most appropriate exposure parameters. Material and Methods: We performed our measurements on a mobile mammography unit (PLANMED Sofie-classic.) in Morocco. A solid dosimeter (AGMS Radcal) and a MTM 100 phantom allow to quantify the delivered dose and the image quality. For image quality assessment, scores are defined by the rate of visible inserts (MTM 100 phantom), obtained and compared for each acquisition. Results: The results show that the parameters of the mammography unit on which we have made our measurements can be improved in order to offer a better compromise between image quality and breast dose. The last one can be reduced up from 13.27% to 22.16%, while preserving comparable image quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mammography" title="Mammography">Mammography</a>, <a href="https://publications.waset.org/abstracts/search?q=Breast%20Dose" title=" Breast Dose"> Breast Dose</a>, <a href="https://publications.waset.org/abstracts/search?q=Image%20Quality" title=" Image Quality"> Image Quality</a>, <a href="https://publications.waset.org/abstracts/search?q=Phantom" title=" Phantom"> Phantom</a> </p> <a href="https://publications.waset.org/abstracts/116596/optimizing-exposure-parameters-in-digital-mammography-a-study-in-morocco" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/116596.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">172</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12072</span> A Multi Sensor Monochrome Video Fusion Using Image Quality Assessment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Prema%20Kumar">M. Prema Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Rajesh%20Kumar"> P. Rajesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The increasing interest in image fusion (combining images of two or more modalities such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. This paper gives a novel approach of merging the information content from several videos taken from the same scene in order to rack up a combined video that contains the finest information coming from different source videos. This process is known as video fusion which helps in providing superior quality (The term quality, connote measurement on the particular application.) image than the source images. In this technique different sensors (whose redundant information can be reduced) are used for various cameras that are imperative for capturing the required images and also help in reducing. In this paper Image fusion technique based on multi-resolution singular value decomposition (MSVD) has been used. The image fusion by MSVD is almost similar to that of wavelets. The idea behind MSVD is to replace the FIR filters in wavelet transform with singular value decomposition (SVD). It is computationally very simple and is well suited for real time applications like in remote sensing and in astronomy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi%20sensor%20image%20fusion" title="multi sensor image fusion">multi sensor image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=MSVD" title=" MSVD"> MSVD</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20video" title=" monochrome video"> monochrome video</a> </p> <a href="https://publications.waset.org/abstracts/14866/a-multi-sensor-monochrome-video-fusion-using-image-quality-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">572</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12071</span> Digital Retinal Images: Background and Damaged Areas Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20A.%20Gani">Eman A. Gani</a>, <a href="https://publications.waset.org/abstracts/search?q=Loay%20E.%20George"> Loay E. George</a>, <a href="https://publications.waset.org/abstracts/search?q=Faisel%20G.%20Mohammed"> Faisel G. Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamal%20H.%20Sager"> Kamal H. Sager</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital retinal images are more appropriate for automatic screening of diabetic retinopathy systems. Unfortunately, a significant percentage of these images are poor quality that hinders further analysis due to many factors (such as patient movement, inadequate or non-uniform illumination, acquisition angle and retinal pigmentation). The retinal images of poor quality need to be enhanced before the extraction of features and abnormalities. So, the segmentation of retinal image is essential for this purpose, the segmentation is employed to smooth and strengthen image by separating the background and damaged areas from the overall image thus resulting in retinal image enhancement and less processing time. In this paper, methods for segmenting colored retinal image are proposed to improve the quality of retinal image diagnosis. The methods generate two segmentation masks; i.e., background segmentation mask for extracting the background area and poor quality mask for removing the noisy areas from the retinal image. The standard retinal image databases DIARETDB0, DIARETDB1, STARE, DRIVE and some images obtained from ophthalmologists have been used to test the validation of the proposed segmentation technique. Experimental results indicate the introduced methods are effective and can lead to high segmentation accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title="retinal images">retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20segmentation" title=" background segmentation"> background segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=damaged%20areas%20segmentation" title=" damaged areas segmentation"> damaged areas segmentation</a> </p> <a href="https://publications.waset.org/abstracts/12289/digital-retinal-images-background-and-damaged-areas-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12070</span> Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Z.%20Mortezaie">Z. Mortezaie</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Hassanpour"> H. Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Asadi%20Amiri"> S. Asadi Amiri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost the image contrast and to improve digital images suffering from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed, considering the gradient variations, for individual pixels of the image. Subjective and objective image quality assessments are used to compare the performance of the proposed method both with the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method has a better performance in comparison to the other existing methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unsharp%20masking" title="unsharp masking">unsharp masking</a>, <a href="https://publications.waset.org/abstracts/search?q=blur%20image" title=" blur image"> blur image</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-region%20gradient" title=" sub-region gradient"> sub-region gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a> </p> <a href="https://publications.waset.org/abstracts/73795/contrast-enhancement-in-digital-images-using-an-adaptive-unsharp-masking-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">214</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12069</span> A Novel Combination Method for Computing the Importance Map of Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Absetan">Ahmad Absetan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Nooshyar"> Mahdi Nooshyar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The importance map is an image-based measure and is a core part of the resizing algorithm. Importance measures include image gradients, saliency and entropy, as well as high level cues such as face detectors, motion detectors and more. In this work we proposed a new method to calculate the importance map, the importance map is generated automatically using a novel combination of image edge density and Harel saliency measurement. Experiments of different type images demonstrate that our method effectively detects prominent areas can be used in image resizing applications to aware important areas while preserving image quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-aware%20image%20resizing" title="content-aware image resizing">content-aware image resizing</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20saliency" title=" visual saliency"> visual saliency</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20density" title=" edge density"> edge density</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20warping" title=" image warping"> image warping</a> </p> <a href="https://publications.waset.org/abstracts/35692/a-novel-combination-method-for-computing-the-importance-map-of-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">582</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12068</span> Blind Data Hiding Technique Using Interpolation of Subsampled Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Singara%20Singh%20Kasana">Singara Singh Kasana</a>, <a href="https://publications.waset.org/abstracts/search?q=Pankaj%20Garg"> Pankaj Garg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a blind data hiding technique based on interpolation of sub sampled versions of a cover image is proposed. Sub sampled image is taken as a reference image and an interpolated image is generated from this reference image. Then difference between original cover image and interpolated image is used to embed secret data. Comparisons with the existing interpolation based techniques show that proposed technique provides higher embedding capacity and better visual quality marked images. Moreover, the performance of the proposed technique is more stable for different images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interpolation" title="interpolation">interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subsampling" title=" image subsampling"> image subsampling</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=SIM" title=" SIM"> SIM</a> </p> <a href="https://publications.waset.org/abstracts/18926/blind-data-hiding-technique-using-interpolation-of-subsampled-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">578</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12067</span> Performance of Hybrid Image Fusion: Implementation of Dual-Tree Complex Wavelet Transform Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Gupta">Manoj Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nirmendra%20Singh%20Bhadauria"> Nirmendra Singh Bhadauria</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most of the applications in image processing require high spatial and high spectral resolution in a single image. For example satellite image system, the traffic monitoring system, and long range sensor fusion system all use image processing. However, most of the available equipment is not capable of providing this type of data. The sensor in the surveillance system can only cover the view of a small area for a particular focus, yet the demanding application of this system requires a view with a high coverage of the field. Image fusion provides the possibility of combining different sources of information. In this paper, we have decomposed the image using DTCWT and then fused using average and hybrid of (maxima and average) pixel level techniques and then compared quality of both the images using PSNR. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/abstracts/search?q=DT-CWT" title=" DT-CWT"> DT-CWT</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20image%20fusion" title=" average image fusion"> average image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20image%20fusion" title=" hybrid image fusion"> hybrid image fusion</a> </p> <a href="https://publications.waset.org/abstracts/19207/performance-of-hybrid-image-fusion-implementation-of-dual-tree-complex-wavelet-transform-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12066</span> Body Image Impact on Quality of Life and Adolescents’ Binge Eating: The Indirect Role of Body Image Coping Strategies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dora%20Bianchi">Dora Bianchi</a>, <a href="https://publications.waset.org/abstracts/search?q=Anthony%20Schinelli"> Anthony Schinelli</a>, <a href="https://publications.waset.org/abstracts/search?q=Laura%20Maria%20Fatta"> Laura Maria Fatta</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonia%20Lonigro"> Antonia Lonigro</a>, <a href="https://publications.waset.org/abstracts/search?q=Fabio%20Lucidi"> Fabio Lucidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Fiorenzo%20Laghi"> Fiorenzo Laghi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: The role of body image in adolescent binge eating is widely confirmed, albeit the various facets of this relationship are still mostly unexplored. Within the multidimensional body image framework, this study hypothesized the indirect effects of three body image coping strategies (positive rational acceptance, appearance fixing, avoidance) in the expected relationship between the perceived impact of body image on individuals’ quality of life and binge eating symptoms. Methods: Participants were 715 adolescents aged 15-21 years (49.1% girls) recruited in Italian schools. An anonymous self-report online survey was administered. A multiple mediation model was tested. Results: A more positive perceived impact of body image on quality of life was a negative predictor of adolescents’ binge eating, controlling for individual levels of body satisfaction. Three indirect effects were found in this relationship: on one hand, the positive body image impact reduced binge eating via increasing positive rational acceptance (M1), and via reducing avoidance (M2); on the contrary, the positive body image impact also enhanced binge eating via increasing appearance fixing (M3). 
Conclusions: The impact of body image on quality of life can be either protective, when adaptive coping is solicited and maladaptive strategies are reduced, or a risk factor that may increase binge eating by soliciting appearance fixing.
Keywords: binge eating, body image satisfaction, quality of life, coping strategies, adolescents
Procedia: https://publications.waset.org/abstracts/172611/body-image-impact-on-quality-of-life-and-adolescents-binge-eating-the-indirect-role-of-body-image-coping-strategies | PDF: https://publications.waset.org/abstracts/172611.pdf | Downloads: 81

12065. Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction
Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho
Abstract: Computed tomography and laminography are heavily investigated within compressive-sensing-based image reconstruction frameworks to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are actively working on optimizing compressive-sensing-based iterative image reconstruction algorithms to obtain better-quality images. However, the effects of the sampled data's properties on the quality of the reconstructed image, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigate the effects of two data properties, sampling density and data incoherence, on images reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We find that, in a compressive-sensing-based image reconstruction framework, image quality mainly depends on data incoherence when the data is uniformly sampled.
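The "data incoherence" that the abstract identifies as the dominant factor is commonly quantified, in compressive sensing, by the mutual coherence of the sensing matrix: the largest normalised inner product between distinct columns. The sketch below computes it for two toy sampling matrices; these matrices are synthetic stand-ins, not laminography system matrices.

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence of a sensing matrix: the largest absolute inner
    product between distinct normalised columns. Lower coherence generally
    favours compressive-sensing recovery."""
    cols = A / np.linalg.norm(A, axis=0, keepdims=True)
    gram = np.abs(cols.T @ cols)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(0)
random_sampling = rng.normal(size=(64, 256))                    # incoherent-ish columns
clustered = np.repeat(rng.normal(size=(64, 32)), 8, axis=1)     # highly correlated columns
print(mutual_coherence(random_sampling), mutual_coherence(clustered))
```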
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computed%20tomography" title="computed tomography">computed tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=computed%20laminography" title=" computed laminography"> computed laminography</a>, <a href="https://publications.waset.org/abstracts/search?q=compressive%20sending" title=" compressive sending"> compressive sending</a>, <a href="https://publications.waset.org/abstracts/search?q=low-dose" title=" low-dose"> low-dose</a> </p> <a href="https://publications.waset.org/abstracts/13025/efects-of-data-corelation-in-a-sparse-view-compresive-sensing-based-image-reconstruction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13025.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12064</span> Image Quality and Dose Optimisations in Digital and Computed Radiography X-ray Radiography Using Lumbar Spine Phantom</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elhussaien%20Elshiekh">Elhussaien Elshiekh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A study was performed to management and compare radiation doses and image quality during Lumbar spine PA and Lumbar spine LAT, x- ray radiography using Computed Radiography (CR) and Digital Radiography (DR). Standard exposure factors such as kV, mAs and FFD used for imaging the Lumbar spine anthropomorphic phantom obtained from average exposure factors that were used with CR in five radiology centres. Lumbar spine phantom was imaged using CR and DR systems. Entrance surface air kerma (ESAK) was calculated X-ray tube output and patient exposure factor. Images were evaluated using visual grading system based on the European Guidelines on Quality Criteria for diagnostic radiographic images. The ESAK corresponding to each image was measured at the surface of the phantom. Six experienced specialists evaluated hard copies of all the images, the image score (IS) was calculated for each image by finding the average score of the Six evaluators. The IS value also was used to determine whether an image was diagnostically acceptable. The optimum recommended exposure factors founded here for Lumbar spine PA and Lumbar spine LAT, with respectively (80 kVp,25 mAs at 100 cm FFD) and (75 kVp,15 mAs at 100 cm FFD) for CR system, and (80 kVp,15 mAs at100 cm FFD) and (75 kVp,10 mAs at 100 cm FFD) for DR system. For Lumbar spine PA, the lowest ESAK value required to obtain a diagnostically acceptable image were 0.80 mGy for DR and 1.20 mGy for CR systems. Similarly for Lumbar spine LAT projection, the lowest ESAK values to obtain a diagnostically acceptable image were 0.62 mGy for DR and 0.76 mGy for CR systems. At standard kVp and mAs values, the image quality did not vary significantly between the CR and the DR system, but at higher kVp and mAs values, the DR images were found to be of better quality than CR images. In addition, the lower limit of entrance skin dose consistent with diagnostically acceptable DR images was 40% lower than that for CR images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20quality" title="image quality">image quality</a>, <a href="https://publications.waset.org/abstracts/search?q=dosimetry" title=" dosimetry"> dosimetry</a>, <a href="https://publications.waset.org/abstracts/search?q=radiation%20protection" title=" radiation protection"> radiation protection</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20radiography" title=" digital radiography"> digital radiography</a>, <a href="https://publications.waset.org/abstracts/search?q=computed%20radiography" title=" computed radiography"> computed radiography</a> </p> <a href="https://publications.waset.org/abstracts/185317/image-quality-and-dose-optimisations-in-digital-and-computed-radiography-x-ray-radiography-using-lumbar-spine-phantom" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185317.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">50</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12063</span> Pre-Processing of Ultrasonography Image Quality Improvement in Cases of Cervical Cancer Using Image Enhancement </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Retno%20Supriyanti">Retno Supriyanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Teguh%20Budiono"> Teguh Budiono</a>, <a href="https://publications.waset.org/abstracts/search?q=Yogi%20Ramadhani"> Yogi Ramadhani</a>, <a href="https://publications.waset.org/abstracts/search?q=Haris%20B.%20Widodo"> Haris B. Widodo</a>, <a href="https://publications.waset.org/abstracts/search?q=Arwita%20Mulyawati"> Arwita Mulyawati</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cervical cancer is the leading cause of mortality in cancer-related diseases. In this diagnosis doctors usually perform several tests to determine the presence of cervical cancer in a patient. However, these checks require support equipment to get the results in more detail. One is by using ultrasonography. However, for the developing countries most of the existing ultrasonography has a low resolution. The goal of this research is to obtain abnormalities on low-resolution ultrasound images especially for cervical cancer case. In this paper, we emphasize our work to use Image Enhancement for pre-processing image quality improvement. The result shows that pre-processing stage is promising to support further analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cervical%20cancer" title="cervical cancer">cervical cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=mortality" title=" mortality"> mortality</a>, <a href="https://publications.waset.org/abstracts/search?q=low-resolution" title=" low-resolution"> low-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement." title=" image enhancement. "> image enhancement. 
</a> </p> <a href="https://publications.waset.org/abstracts/26523/pre-processing-of-ultrasonography-image-quality-improvement-in-cases-of-cervical-cancer-using-image-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26523.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">636</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12062</span> Efficient Corporate Image as a Strategy for Enhancing Profitability in Hotels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lucila%20T.%20Magalong">Lucila T. Magalong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The hotel industry has been using their corporate image and reputation to maintain service quality, customer satisfaction, and customer loyalty and to leverage themselves against competitors and facilitate their growth strategies. With the increasing pressure to perform, hotels have even created hybrid service strategy to fight in the niche markets across pricing and level-off service parameters. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=corporate%20image" title="corporate image">corporate image</a>, <a href="https://publications.waset.org/abstracts/search?q=hotel%20industry" title=" hotel industry"> hotel industry</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20quality" title=" service quality"> service quality</a>, <a href="https://publications.waset.org/abstracts/search?q=customer%20expectations" title=" customer expectations"> customer expectations</a> </p> <a href="https://publications.waset.org/abstracts/16183/efficient-corporate-image-as-a-strategy-for-enhancing-profitability-in-hotels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16183.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12061</span> Filtering and Reconstruction System for Grey-Level Forensic Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahd%20Aljarf">Ahd Aljarf</a>, <a href="https://publications.waset.org/abstracts/search?q=Saad%20Amin"> Saad Amin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Images are important source of information used as evidence during any investigation process. Their clarity and accuracy is essential and of the utmost importance for any investigation. Images are vulnerable to losing blocks and having noise added to them either after alteration or when the image was taken initially, therefore, having a high performance image processing system and it is implementation is very important in a forensic point of view. This paper focuses on improving the quality of the forensic images. For different reasons packets that store data can be affected, harmed or even lost because of noise. For example, sending the image through a wireless channel can cause loss of bits. 
These types of errors generally degrade the visual display quality of forensic images. Two of these image problems, noise and block loss, are covered. However, information transmitted through any means of communication may suffer alteration from its original state or even lose important data due to channel noise. Therefore, a system is introduced to improve the quality and clarity of forensic images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20filtering" title="image filtering">image filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=forensic%20images" title=" forensic images"> forensic images</a> </p> <a href="https://publications.waset.org/abstracts/15654/filtering-and-reconstruction-system-for-grey-level-forensic-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12060</span> Optimizing Machine Learning Through Python Based Image Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Srinidhi.%20A">Srinidhi. A</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Twinkle%20Hareendran"> Twinkle Hareendran</a>, <a href="https://publications.waset.org/abstracts/search?q=Vriksha%20Prakash"> Vriksha Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper looks into these in great detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure robust performance of models. Further, we discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, which essentially involves the preprocessing techniques that are interrelated with model performance. Conclusions from this study pinpoint best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models. 
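<p class="card-text"><strong>Illustration:</strong> One widely used sharpness proxy of the kind the abstract refers to is the variance of the Laplacian; a minimal OpenCV sketch follows, where the file name and the cut-off value are assumptions for illustration.</p> <pre><code class="language-python"># Minimal sharpness check for dataset preparation: variance of the Laplacian.
# Higher values indicate sharper images.
import cv2

def sharpness_score(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

print("sharpness (variance of Laplacian):", sharpness_score("sample.jpg"))
# Frames scoring well below a dataset-specific cut-off (e.g. around 100, an assumed
# value) can be flagged as too blurry to include in training data.
</code></pre>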
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning%20applications" title=" machine learning applications"> machine learning applications</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20matching" title=" template matching"> template matching</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a> </p> <a href="https://publications.waset.org/abstracts/193107/optimizing-machine-learning-through-python-based-image-processing-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">13</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12059</span> Arbitrarily Shaped Blur Kernel Estimation for Single Image Blind Deblurring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aftab%20Khan">Aftab Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashfaq%20Khan"> Ashfaq Khan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research paper focuses on an interesting challenge faced in Blind Image Deblurring (BID). It relates to the estimation of arbitrarily shaped or non-parametric Point Spread Functions (PSFs) of motion blur caused by camera handshake. These PSFs exhibit much more complex shapes than their parametric counterparts and deblurring in this case requires intricate ways to estimate the blur and effectively remove it. This research work introduces a novel blind deblurring scheme visualized for deblurring images corrupted by arbitrarily shaped PSFs. It is based on Genetic Algorithm (GA) and utilises the Blind/Reference-less Image Spatial QUality Evaluator (BRISQUE) measure as the fitness function for arbitrarily shaped PSF estimation. The proposed BID scheme has been compared with other single image motion deblurring schemes as benchmark. Validation has been carried out on various blurred images. Results of both benchmark and real images are presented. Non-reference image quality measures were used to quantify the deblurring results. For benchmark images, the proposed BID scheme using BRISQUE converges in close vicinity of the original blurring functions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20deconvolution" title="blind deconvolution">blind deconvolution</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20image%20deblurring" title=" blind image deblurring"> blind image deblurring</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20restoration" title=" image restoration"> image restoration</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measures" title=" image quality measures"> image quality measures</a> </p> <a href="https://publications.waset.org/abstracts/37142/arbitrarily-shaped-blur-kernel-estimation-for-single-image-blind-deblurring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">443</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12058</span> Framework for Performance Measure of Super Resolution Imaging</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Varsha%20Hemant%20Patil">Varsha Hemant Patil</a>, <a href="https://publications.waset.org/abstracts/search?q=Swati%20A.%20Bhavsar"> Swati A. Bhavsar</a>, <a href="https://publications.waset.org/abstracts/search?q=Abolee%20H.%20Patil"> Abolee H. Patil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image quality assessment plays an important role in image evaluation. This paper aims to present an investigation of classic techniques in use for image quality assessment, especially for super-resolution imaging. Researchers have contributed a lot towards the development of super-resolution imaging techniques. However, not much attention is paid to the development of metrics for testing the performance of developed techniques. In this paper, the study report of existing image quality measures is given. The paper classifies reviewed approaches according to functionality and suitability for super-resolution imaging. Probable modifications and improvements of these to suit super-resolution imaging are presented. The prime goal of the paper is to provide a comprehensive reference source for researchers working towards super-resolution imaging and suggest a better framework for measuring the performance of super-resolution imaging techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interpolation" title="interpolation">interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=MSE" title=" MSE"> MSE</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=SSIM" title=" SSIM"> SSIM</a>, <a href="https://publications.waset.org/abstracts/search?q=super%20resolution" title=" super resolution"> super resolution</a> </p> <a href="https://publications.waset.org/abstracts/159819/framework-for-performance-measure-of-super-resolution-imaging" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159819.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">98</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12057</span> Influence of High-Resolution Satellites Attitude Parameters on Image Quality</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Walid%20Wahballah">Walid Wahballah</a>, <a href="https://publications.waset.org/abstracts/search?q=Taher%20Bazan"> Taher Bazan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fawzy%20Eltohamy"> Fawzy Eltohamy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the important functions of the satellite attitude control system is to provide the required pointing accuracy and attitude stability for optical remote sensing satellites to achieve good image quality. Although offering noise reduction and increased sensitivity, time delay and integration (TDI) charge coupled devices (CCDs) utilized in high-resolution satellites (HRS) are prone to introduce large amounts of pixel smear due to the instability of the line of sight. During on-orbit imaging, as a result of the Earth’s rotation and the satellite platform instability, the moving direction of the TDI-CCD linear array and the imaging direction of the camera become different. The speed of the image moving on the image plane (focal plane) represents the image motion velocity whereas the angle between the two directions is known as the drift angle (β). The drift angle occurs due to the rotation of the earth around its axis during satellite imaging; affecting the geometric accuracy and, consequently, causing image quality degradation. Therefore, the image motion velocity vector and the drift angle are two important factors used in the assessment of the image quality of TDI-CCD based optical remote sensing satellites. A model for estimating the image motion velocity and the drift angle in HRS is derived. The six satellite attitude control parameters represented in the derived model are the (roll angle φ, pitch angle θ, yaw angle ψ, roll angular velocity φ֗, pitch angular velocity θ֗ and yaw angular velocity ψ֗ ). The influence of these attitude parameters on the image quality is analyzed by establishing a relationship between the image motion velocity vector, drift angle and the six satellite attitude parameters. The influence of the satellite attitude parameters on the image quality is assessed by the presented model in terms of modulation transfer function (MTF) in both cross- and along-track directions. 
Three different cases representing the effect of pointing accuracy (φ, θ, ψ) bias are considered using four different sets of pointing accuracy typical values, while the satellite attitude stability parameters are ideal. In the same manner, the influence of satellite attitude stability (φ֗, θ֗, ψ֗) on image quality is also analysed for ideal pointing accuracy parameters. The results reveal that cross-track image quality is influenced seriously by the yaw angle bias and the roll angular velocity bias, while along-track image quality is influenced only by the pitch angular velocity bias. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high-resolution%20satellites" title="high-resolution satellites">high-resolution satellites</a>, <a href="https://publications.waset.org/abstracts/search?q=pointing%20accuracy" title=" pointing accuracy"> pointing accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=attitude%20stability" title=" attitude stability"> attitude stability</a>, <a href="https://publications.waset.org/abstracts/search?q=TDI-CCD" title=" TDI-CCD"> TDI-CCD</a>, <a href="https://publications.waset.org/abstracts/search?q=smear" title=" smear"> smear</a>, <a href="https://publications.waset.org/abstracts/search?q=MTF" title=" MTF"> MTF</a> </p> <a href="https://publications.waset.org/abstracts/79548/influence-of-high-resolution-satellites-attitude-parameters-on-image-quality" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12056</span> An Object-Based Image Resizing Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chin-Chen%20Chang">Chin-Chen Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=I-Ta%20Lee"> I-Ta Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Tsung-Ta%20Ke"> Tsung-Ta Ke</a>, <a href="https://publications.waset.org/abstracts/search?q=Wen-Kai%20Tai"> Wen-Kai Tai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Common methods for resizing image size include scaling and cropping. However, these two approaches have some quality problems for reduced images. In this paper, we propose an image resizing algorithm by separating the main objects and the background. First, we extract two feature maps, namely, an enhanced visual saliency map and an improved gradient map from an input image. After that, we integrate these two feature maps to an importance map. Finally, we generate the target image using the importance map. The proposed approach can obtain desired results for a wide range of images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=energy%20map" title="energy map">energy map</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20saliency" title=" visual saliency"> visual saliency</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20map" title=" gradient map"> gradient map</a>, <a href="https://publications.waset.org/abstracts/search?q=seam%20carving" title=" seam carving"> seam carving</a> </p> <a href="https://publications.waset.org/abstracts/8953/an-object-based-image-resizing-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8953.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12055</span> Blind Super-Resolution Reconstruction Based on PSF Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Osama%20A.%20Omer">Osama A. Omer</a>, <a href="https://publications.waset.org/abstracts/search?q=Amal%20Hamed"> Amal Hamed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Successful blind image Super-Resolution algorithms require the exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imagery system and the true image; this estimation is normally done by trial and error experimentation until an acceptable restored image quality is obtained. Multi-frame blind Super-Resolution algorithms often have disadvantages of slow convergence and sensitiveness to complex noises. This paper presents a Super-Resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The estimation of PSF is performed by the knife-edge method and it is implemented by measuring spreading of the edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach is using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experiment results show that the proposed method can outperform other previous work robustly and efficiently. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind" title="blind">blind</a>, <a href="https://publications.waset.org/abstracts/search?q=PSF" title=" PSF"> PSF</a>, <a href="https://publications.waset.org/abstracts/search?q=super-resolution" title=" super-resolution"> super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=knife-edge" title=" knife-edge"> knife-edge</a>, <a href="https://publications.waset.org/abstracts/search?q=blurring" title=" blurring"> blurring</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral" title=" bilateral"> bilateral</a>, <a href="https://publications.waset.org/abstracts/search?q=L1%20norm" title=" L1 norm"> L1 norm</a> </p> <a href="https://publications.waset.org/abstracts/1385/blind-super-resolution-reconstruction-based-on-psf-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1385.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12054</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capturing by non-professionals will lead to unanticipated effects. Such as image distortion, image blurring etc. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos .A stable output video will be attained without the effect of jitter which is caused due to shaking of handheld camera during video recording. Firstly, salient points from each frame from the input video are identified and processed followed by optimizing and stabilize the video. Optimization includes the quality of the video stabilization. This method has shown good result in terms of stabilization and it discarded distortion from the output videos recorded in different circumstances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12053</span> Data Hiding in Gray Image Using ASCII Value and Scanning Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20K.%20Pateriya">R. K. Pateriya</a>, <a href="https://publications.waset.org/abstracts/search?q=Jyoti%20Bharti"> Jyoti Bharti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach for data hiding methods which provides a secret communication between sender and receiver. The data is hidden in gray-scale images and the boundary of gray-scale image is used to store the mapping information. In this an approach data is in ASCII format and the mapping is in between ASCII value of hidden message and pixel value of cover image, since pixel value of an image as well as ASCII value is in range of 0 to 255 and this mapping information is occupying only 1 bit per character of hidden message as compared to 8 bit per character thus maintaining good quality of stego image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ASCII%20value" title="ASCII value">ASCII value</a>, <a href="https://publications.waset.org/abstracts/search?q=cover%20image" title=" cover image"> cover image</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20value" title=" pixel value"> pixel value</a>, <a href="https://publications.waset.org/abstracts/search?q=stego%20image" title=" stego image"> stego image</a>, <a href="https://publications.waset.org/abstracts/search?q=secret%20message" title=" secret message"> secret message</a> </p> <a href="https://publications.waset.org/abstracts/50472/data-hiding-in-gray-image-using-ascii-value-and-scanning-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12052</span> MRI Quality Control Using Texture Analysis and Spatial Metrics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kumar%20Kanudkuri">Kumar Kanudkuri</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sandhya"> A. Sandhya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Typically, in a MRI clinical setting, there are several protocols run, each indicated for a specific anatomy and disease condition. However, these protocols or parameters within them can change over time due to changes to the recommendations by the physician groups or updates in the software or by the availability of new technologies. Most of the time, the changes are performed by the MRI technologist to account for either time, coverage, physiological, or Specific Absorbtion Rate (SAR ) reasons. However, giving properly guidelines to MRI technologist is important so that they do not change the parameters that negatively impact the image quality. Typically a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and might change amongst operators as well as for the same operator at various times. Therefore, overcoming these constraints is essential for a more impartial evaluation of quality. This makes quantitative estimation of image quality (IQ) metrics for MRI quality control is very important. So in order to solve this problem, we proposed that there is a need for a robust, open-source, and automated MRI image control tool. The Designed and developed an automatic analysis tool for measuring MRI image quality (IQ) metrics like Signal to Noise Ratio (SNR), Signal to Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray level co-occurrence matrix (GLCM), slice thickness accuracy, slice position accuracy, High contrast spatial resolution) provided good accuracy assessment. A standardized quality report has generated that incorporates metrics that impact diagnostic quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ACR%20MRI%20phantom" title="ACR MRI phantom">ACR MRI phantom</a>, <a href="https://publications.waset.org/abstracts/search?q=MRI%20image%20quality%20metrics" title=" MRI image quality metrics"> MRI image quality metrics</a>, <a href="https://publications.waset.org/abstracts/search?q=SNRU" title=" SNRU"> SNRU</a>, <a href="https://publications.waset.org/abstracts/search?q=VIF" title=" VIF"> VIF</a>, <a href="https://publications.waset.org/abstracts/search?q=FSIM" title=" FSIM"> FSIM</a>, <a href="https://publications.waset.org/abstracts/search?q=GLCM" title=" GLCM"> GLCM</a>, <a href="https://publications.waset.org/abstracts/search?q=slice%20thickness%20accuracy" title=" slice thickness accuracy"> slice thickness accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=slice%20position%20accuracy" title=" slice position accuracy"> slice position accuracy</a> </p> <a href="https://publications.waset.org/abstracts/163983/mri-quality-control-using-texture-analysis-and-spatial-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163983.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=402">402</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=403">403</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Image%20Quality&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About 
Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false 
}).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>