CINXE.COM
Search results for: Image training.
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Image training.</title> <meta name="description" content="Search results for: Image training."> <meta name="keywords" content="Image training."> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Image training." name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Image training."> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2449</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Image training.</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2449</span> Medical Image Registration by Minimizing Divergence Measure Based on Tsallis Entropy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Shaoyan%20Sun">Shaoyan Sun</a>, <a href="https://publications.waset.org/search?q=Liwei%20Zhang"> Liwei Zhang</a>, <a href="https://publications.waset.org/search?q=Chonghui%20Guo"> Chonghui Guo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>As the use of registration packages spreads, the number of the aligned image pairs in image databases (either by manual or automatic methods) increases dramatically. These image pairs can serve as a set of training data. Correspondingly, the images that are to be registered serve as testing data. In this paper, a novel medical image registration method is proposed which is based on the a priori knowledge of the expected joint intensity distribution estimated from pre-aligned training images. The goal of the registration is to find the optimal transformation such that the distance between the observed joint intensity distribution obtained from the testing image pair and the expected joint intensity distribution obtained from the corresponding training image pair is minimized. The distance is measured using the divergence measure based on Tsallis entropy. Experimental results show that, compared with the widely-used Shannon mutual information as well as Tsallis mutual information, the proposed method is computationally more efficient without sacrificing registration accuracy.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Multimodality%20images" title="Multimodality images">Multimodality images</a>, <a href="https://publications.waset.org/search?q=image%20registration" title=" image registration"> image registration</a>, <a href="https://publications.waset.org/search?q=Shannonentropy" title=" Shannonentropy"> Shannonentropy</a>, <a href="https://publications.waset.org/search?q=Tsallis%20entropy" title=" Tsallis entropy"> Tsallis entropy</a>, <a href="https://publications.waset.org/search?q=mutual%20information" title=" mutual information"> mutual information</a>, <a href="https://publications.waset.org/search?q=Powell%20optimization." title=" Powell optimization."> Powell optimization.</a> </p> <a href="https://publications.waset.org/11164/medical-image-registration-by-minimizing-divergence-measure-based-on-tsallis-entropy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11164/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11164/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11164/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11164/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11164/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11164/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11164/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11164/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11164/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11164/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11164.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1636</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2448</span> Low Resolution Single Neural Network Based Face Recognition </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Jahan%20Zeb">Jahan Zeb</a>, <a href="https://publications.waset.org/search?q=Muhammad%20Younus%20Javed"> Muhammad Younus Javed</a>, <a href="https://publications.waset.org/search?q=Usman%20Qayyum"> Usman Qayyum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research paper deals with the implementation of face recognition using neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts original images into blurry image using average filter and equalizes the histogram of those image (lighting normalization). The bi-cubic interpolation function is applied onto equalized image to get resized image. The resized image is actually low-resolution image providing faster processing for training and testing. The preprocessed image becomes the input to neural network classifier, which uses back-propagation algorithm to recognize the familiar faces. The crux of proposed algorithm is its beauty to use single neural network as classifier, which produces straightforward approach towards face recognition. The single neural network consists of three layers with Log sigmoid, Hyperbolic tangent sigmoid and Linear transfer function respectively. The training function, which is incorporated in our work, is Gradient descent with momentum (adaptive learning rate) back propagation. The proposed algorithm was trained on ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results provide the accuracy of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with time delay of 0.0934 sec per image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Average%20filtering" title="Average filtering">Average filtering</a>, <a href="https://publications.waset.org/search?q=Bicubic%20Interpolation" title=" Bicubic Interpolation"> Bicubic Interpolation</a>, <a href="https://publications.waset.org/search?q=Neurons" title=" Neurons"> Neurons</a>, <a href="https://publications.waset.org/search?q=vectorization." title=" vectorization."> vectorization.</a> </p> <a href="https://publications.waset.org/6651/low-resolution-single-neural-network-based-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6651/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6651/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6651/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6651/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6651/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6651/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6651/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6651/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6651/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6651/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6651.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1750</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2447</span> Image Ranking to Assist Object Labeling for Training Detection Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tonislav%20Ivanov">Tonislav Ivanov</a>, <a href="https://publications.waset.org/search?q=Oleksii%20Nedashkivskyi"> Oleksii Nedashkivskyi</a>, <a href="https://publications.waset.org/search?q=Denis%20Babeshko"> Denis Babeshko</a>, <a href="https://publications.waset.org/search?q=Vadim%20Pinskiy"> Vadim Pinskiy</a>, <a href="https://publications.waset.org/search?q=Matthew%20Putman"> Matthew Putman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Computer%20vision" title="Computer vision">Computer vision</a>, <a href="https://publications.waset.org/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/search?q=semiconductor." title=" semiconductor."> semiconductor.</a> </p> <a href="https://publications.waset.org/10011847/image-ranking-to-assist-object-labeling-for-training-detection-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011847/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011847/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011847/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011847/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011847/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011847/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011847/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011847/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011847/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011847/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">829</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2446</span> Tests and Measurements of Image Acquisition Characteristics for Image Sensors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Seongsoo%20Lee">Seongsoo Lee</a>, <a href="https://publications.waset.org/search?q=Jong-Bae%20Lee"> Jong-Bae Lee</a>, <a href="https://publications.waset.org/search?q=Wookkang%20Lee"> Wookkang Lee</a>, <a href="https://publications.waset.org/search?q=Duyen%20Hai%20Pham"> Duyen Hai Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In the image sensors, the acquired image often differs from the real image in luminance or chrominance due to fabrication defects or nonlinear characteristics, which often lead to pixel defects or sensor failure. Therefore, the image acquisition characteristics of image sensors should be measured and tested before they are mounted on the target product. In this paper, the standardized test and measurement methods of image sensors are introduced. It applies standard light source to the image sensor under test, and the characteristics of the acquired image is compared with ideal values.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Sensor" title="Image Sensor">Image Sensor</a>, <a href="https://publications.waset.org/search?q=Image%20Acquisition%20Characteristics" title=" Image Acquisition Characteristics"> Image Acquisition Characteristics</a>, <a href="https://publications.waset.org/search?q=Defect" title=" Defect"> Defect</a>, <a href="https://publications.waset.org/search?q=Failure" title=" Failure"> Failure</a>, <a href="https://publications.waset.org/search?q=Standard" title=" Standard"> Standard</a>, <a href="https://publications.waset.org/search?q=Test" title=" Test"> Test</a>, <a href="https://publications.waset.org/search?q=Measurement." title=" Measurement."> Measurement.</a> </p> <a href="https://publications.waset.org/17198/tests-and-measurements-of-image-acquisition-characteristics-for-image-sensors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/17198/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/17198/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/17198/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/17198/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/17198/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/17198/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/17198/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/17198/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/17198/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/17198/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/17198.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1689</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2445</span> A Comparative Study of Image Segmentation Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mehdi%20Hosseinzadeh">Mehdi Hosseinzadeh</a>, <a href="https://publications.waset.org/search?q=Parisa%20Khoshvaght"> Parisa Khoshvaght</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In some applications, such as image recognition or compression, segmentation refers to the process of partitioning a digital image into multiple segments. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. Image segmentation is to classify or cluster an image into several parts (regions) according to the feature of image, for example, the pixel value or the frequency response. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Several image segmentation algorithms were proposed to segment an image before recognition or compression. Up to now, many image segmentation algorithms exist and be extensively applied in science and daily life. According to their segmentation method, we can approximately categorize them into region-based segmentation, data clustering, and edge-base segmentation. In this paper, we give a study of several popular image segmentation algorithms that are available. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Segmentation" title="Image Segmentation">Image Segmentation</a>, <a href="https://publications.waset.org/search?q=hierarchical%20segmentation" title=" hierarchical segmentation"> hierarchical segmentation</a>, <a href="https://publications.waset.org/search?q=partitional%20segmentation" title=" partitional segmentation"> partitional segmentation</a>, <a href="https://publications.waset.org/search?q=density%20estimation." title=" density estimation."> density estimation.</a> </p> <a href="https://publications.waset.org/10002407/a-comparative-study-of-image-segmentation-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002407/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002407/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002407/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002407/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002407/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002407/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002407/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002407/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002407/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002407/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002407.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2918</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2444</span> Survey on Image Mining Using Genetic Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Jyoti%20Dua">Jyoti Dua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>One image is worth more than thousand words. Images if analyzed can reveal useful information. Low level image processing deals with the extraction of specific feature from a single image. Now the question arises: What technique should be used to extract patterns of very large and detailed image database? The answer of the question is: “Image Mining”. Image Mining deals with the extraction of image data relationship, implicit knowledge, and another pattern from the collection of images or image database. It is nothing but the extension of Data Mining. In the following paper, not only we are going to scrutinize the current techniques of image mining but also present a new technique for mining images using Genetic Algorithm.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Mining" title="Image Mining">Image Mining</a>, <a href="https://publications.waset.org/search?q=Data%20Mining" title=" Data Mining"> Data Mining</a>, <a href="https://publications.waset.org/search?q=Genetic%20Algorithm." title=" Genetic Algorithm."> Genetic Algorithm.</a> </p> <a href="https://publications.waset.org/10000598/survey-on-image-mining-using-genetic-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000598/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000598/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000598/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000598/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000598/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000598/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000598/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000598/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000598/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000598/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000598.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2445</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2443</span> 2D Spherical Spaces for Face Relighting under Harsh Illumination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Amr%20Almaddah">Amr Almaddah</a>, <a href="https://publications.waset.org/search?q=Sadi%20Vural"> Sadi Vural</a>, <a href="https://publications.waset.org/search?q=Yasushi%20Mae"> Yasushi Mae</a>, <a href="https://publications.waset.org/search?q=Kenichi%20Ohara"> Kenichi Ohara</a>, <a href="https://publications.waset.org/search?q=Tatsuo%20Arai"> Tatsuo Arai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a robust face relighting technique by using spherical space properties. The proposed method is done for reducing the illumination effects on face recognition. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face spherical illumination coefficients. First, an internal training illumination database is generated by computing face albedo and face normal from 2D images under different lighting conditions. Based on the generated database, we analyze the target face pixels and compare them with the training bootstrap by using pre-generated tiles. In this work, practical real time processing speed and small image size were considered when designing the framework. In contrast to other works, our technique requires no 3D face models for the training process and takes a single 2D image as an input. Experimental results on publicly available databases show that the proposed technique works well under severe lighting conditions with significant improvements on the face recognition rates. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20synthesis%20and%20recognition" title="Face synthesis and recognition">Face synthesis and recognition</a>, <a href="https://publications.waset.org/search?q=Face%20illumination%0Arecovery" title=" Face illumination recovery"> Face illumination recovery</a>, <a href="https://publications.waset.org/search?q=2D%20spherical%20spaces" title=" 2D spherical spaces"> 2D spherical spaces</a>, <a href="https://publications.waset.org/search?q=Vision%20for%20graphics." title=" Vision for graphics."> Vision for graphics.</a> </p> <a href="https://publications.waset.org/6318/2d-spherical-spaces-for-face-relighting-under-harsh-illumination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6318/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6318/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6318/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6318/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6318/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6318/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6318/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6318/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6318/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6318/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6318.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1754</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2442</span> A Complexity-Based Approach in Image Compression using Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hadi%20Veisi">Hadi Veisi</a>, <a href="https://publications.waset.org/search?q=Mansour%20Jamzad"> Mansour Jamzad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present an adaptive method for image compression that is based on complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In adaptive approach different Back-Propagation artificial neural networks are used as compressor and de-compressor and this is done by dividing the image into blocks, computing the complexity of each block and then selecting one network for each block according to its complexity value. Three complexity measure methods, called Entropy, Activity and Pattern-based are used to determine the level of complexity in image blocks and their ability in complexity estimation are evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is another alternative in selecting compressor network for image blocks in evolution phase which chooses one of the trained networks such that results best SNR in compressing the input image block. In our evaluations, best results are obtained when overlapping the blocks is allowed and choosing the networks in compressor is based on the Best-SNR. In this case, the results demonstrate superiority of this method comparing with previous similar works and JPEG standard coding. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Adaptive%20image%20compression" title="Adaptive image compression">Adaptive image compression</a>, <a href="https://publications.waset.org/search?q=Image%20complexity" title=" Image complexity"> Image complexity</a>, <a href="https://publications.waset.org/search?q=Multi-layer%20perceptron%20neural%20network" title="Multi-layer perceptron neural network">Multi-layer perceptron neural network</a>, <a href="https://publications.waset.org/search?q=JPEG%20Standard" title=" JPEG Standard"> JPEG Standard</a>, <a href="https://publications.waset.org/search?q=PSNR." title=" PSNR."> PSNR.</a> </p> <a href="https://publications.waset.org/4800/a-complexity-based-approach-in-image-compression-using-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4800/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4800/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4800/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4800/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4800/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4800/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4800/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4800/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4800/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4800/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4800.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2222</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2441</span> Medical Imaging Fusion: A Teaching-Learning Simulation Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Cristina%20M.%20R.%20Caridade">Cristina M. R. Caridade</a>, <a href="https://publications.waset.org/search?q=Ana%20Rita%20F.%20Morais"> Ana Rita F. Morais</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with health care facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB using Graphical User Interface, for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application uses different algorithms and medical fusion techniques in real time, allowing to view original images and fusion images, compare processed and original images, adjust parameters and save images. The tool proposed in an innovative teaching and learning environment, consists of a dynamic and motivating teaching simulation for biomedical engineering students to acquire knowledge about medical image fusion techniques, necessary skills for the training of biomedical engineers. In conclusion, the developed simulation tool provides a real-time visualization of the original and fusion images and the possibility to test, evaluate and progress the student鈥檚 knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20fusion" title="Image fusion">Image fusion</a>, <a href="https://publications.waset.org/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/search?q=teaching-learning%20simulation%20tool" title=" teaching-learning simulation tool"> teaching-learning simulation tool</a>, <a href="https://publications.waset.org/search?q=biomedical%20engineering%20education." title=" biomedical engineering education."> biomedical engineering education.</a> </p> <a href="https://publications.waset.org/10013903/medical-imaging-fusion-a-teaching-learning-simulation-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10013903/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10013903/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10013903/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10013903/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10013903/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10013903/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10013903/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10013903/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10013903/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10013903/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10013903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">21</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2440</span> A Study on Neural Network Training Algorithm for Multiface Detection in Static Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Zulhadi%20Zakaria">Zulhadi Zakaria</a>, <a href="https://publications.waset.org/search?q=Nor%20Ashidi%20Mat%20Isa"> Nor Ashidi Mat Isa</a>, <a href="https://publications.waset.org/search?q=Shahrel%20A.%20Suandi"> Shahrel A. Suandi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper reports the study results on neural network training algorithm of numerical optimization techniques multiface detection in static images. The training algorithms involved are scale gradient conjugate backpropagation, conjugate gradient backpropagation with Polak-Riebre updates, conjugate gradient backpropagation with Fletcher-Reeves updates, one secant backpropagation and resilent backpropagation. The final result of each training algorithms for multiface detection application will also be discussed and compared. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=training%20algorithm" title="training algorithm">training algorithm</a>, <a href="https://publications.waset.org/search?q=multiface" title=" multiface"> multiface</a>, <a href="https://publications.waset.org/search?q=static%20image" title=" static image"> static image</a>, <a href="https://publications.waset.org/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/10292/a-study-on-neural-network-training-algorithm-for-multiface-detection-in-static-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10292/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10292/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10292/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10292/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10292/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10292/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10292/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10292/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10292/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10292/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10292.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2571</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2439</span> A New Approach to Steganography using Sinc-Convolution Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ahmad%20R.%20Naghsh-Nilchi">Ahmad R. Naghsh-Nilchi</a>, <a href="https://publications.waset.org/search?q=Latifeh%20Pourmohammadbagher"> Latifeh Pourmohammadbagher</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Both image steganography and image encryption have advantages and disadvantages. Steganograhy allows us to hide a desired image containing confidential information in a covered or host image while image encryption is decomposing the desired image to a non-readable, non-comprehended manner. The encryption methods are usually much more robust than the steganographic ones. However, they have a high visibility and would provoke the attackers easily since it usually is obvious from an encrypted image that something is hidden! The combination of steganography and encryption will cover both of their weaknesses and therefore, it increases the security. In this paper an image encryption method based on sinc-convolution along with using an encryption key of 128 bit length is introduced. Then, the encrypted image is covered by a host image using a modified version of JSteg steganography algorithm. This method could be applied to almost all image formats including TIF, BMP, GIF and JPEG. The experiment results show that our method is able to hide a desired image with high security and low visibility. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Sinc%20Approximation" title="Sinc Approximation">Sinc Approximation</a>, <a href="https://publications.waset.org/search?q=Image%20Encryption" title=" Image Encryption"> Image Encryption</a>, <a href="https://publications.waset.org/search?q=Sincconvolution" title=" Sincconvolution"> Sincconvolution</a>, <a href="https://publications.waset.org/search?q=Image%20Steganography" title="Image Steganography">Image Steganography</a>, <a href="https://publications.waset.org/search?q=JSTEG." title=" JSTEG."> JSTEG.</a> </p> <a href="https://publications.waset.org/12344/a-new-approach-to-steganography-using-sinc-convolution-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12344/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12344/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12344/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12344/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12344/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12344/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12344/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12344/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12344/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12344/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12344.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1828</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2438</span> Codebook Generation for Vector Quantization on Orthogonal Polynomials based Transform Coding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=R.%20Krishnamoorthi">R. Krishnamoorthi</a>, <a href="https://publications.waset.org/search?q=N.%20Kannan"> N. Kannan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, a new algorithm for generating codebook is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted by using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process the feature vectors are subjected to inverse transformation with the help of basis functions of the proposed Orthogonal Polynomials based transformation to get back the approximated input image training vectors. The results of the proposed coding are compared with the VQ using Discrete Cosine Transform (DCT) and Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Orthogonal%20Polynomials" title="Orthogonal Polynomials">Orthogonal Polynomials</a>, <a href="https://publications.waset.org/search?q=Image%20Coding" title=" Image Coding"> Image Coding</a>, <a href="https://publications.waset.org/search?q=Vector%20Quantization" title=" Vector Quantization"> Vector Quantization</a>, <a href="https://publications.waset.org/search?q=TSVQ" title=" TSVQ"> TSVQ</a>, <a href="https://publications.waset.org/search?q=Binary%20Tree%20Classifier" title=" Binary Tree Classifier"> Binary Tree Classifier</a> </p> <a href="https://publications.waset.org/5503/codebook-generation-for-vector-quantization-on-orthogonal-polynomials-based-transform-coding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5503/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5503/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5503/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5503/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5503/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5503/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5503/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5503/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5503/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5503/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5503.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2150</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2437</span> Effectiveness of Dominant Color Descriptor Technique in Medical Image Retrieval Application</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mohd%20Kamir%20Yusof">Mohd Kamir Yusof</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a dominant color descriptor technique for medical image retrieval. The medical image system will collect and store into medical database. The purpose of dominant color descriptor (DCD) technique is to retrieve medical image and to display similar image using queried image. First, this technique will search and retrieve medical image based on keyword entered by user. After image is found, the system will assign this image as a queried image. DCD technique will calculate the image value of dominant color. Then, system will search and retrieve again medical image based on value of dominant color query image. Finally, the system will display similar images with the queried image to user. Simple application has been developed and tested using dominant color descriptor. Result based on experiment indicates this technique is effective and can be used for medical image retrieval. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Medical%20Image%20Retrieval" title="Medical Image Retrieval">Medical Image Retrieval</a>, <a href="https://publications.waset.org/search?q=Dominant%20ColorDescriptor." title=" Dominant ColorDescriptor."> Dominant ColorDescriptor.</a> </p> <a href="https://publications.waset.org/10303/effectiveness-of-dominant-color-descriptor-technique-in-medical-image-retrieval-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10303/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10303/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10303/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10303/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10303/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10303/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10303/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10303/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10303/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10303/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10303.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1742</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2436</span> Blind Low Frequency Watermarking Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Dimitar%20Taskovski">Dimitar Taskovski</a>, <a href="https://publications.waset.org/search?q=Sofija%20Bogdanova"> Sofija Bogdanova</a>, <a href="https://publications.waset.org/search?q=Momcilo%20Bogdanov"> Momcilo Bogdanov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a low frequency watermarking method adaptive to image content. The image content is analyzed and properties of HVS are exploited to generate a visual mask of the same size as the approximation image. Using this mask we embed the watermark in the approximation image without degrading the image quality. Watermark detection is performed without using the original image. Experimental results show that the proposed watermarking method is robust against most common image processing operations, which can be easily implemented and usually do not degrade the image quality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Blind" title="Blind">Blind</a>, <a href="https://publications.waset.org/search?q=digital%20watermarking" title=" digital watermarking"> digital watermarking</a>, <a href="https://publications.waset.org/search?q=low%20frequency" title=" low frequency"> low frequency</a>, <a href="https://publications.waset.org/search?q=visualmask." title=" visualmask."> visualmask.</a> </p> <a href="https://publications.waset.org/8605/blind-low-frequency-watermarking-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8605/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8605/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8605/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8605/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8605/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8605/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8605/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8605/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8605/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8605/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8605.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1542</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2435</span> A Comparative Study of Image Segmentation using Edge-Based Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Rajiv%20Kumar">Rajiv Kumar</a>, <a href="https://publications.waset.org/search?q=Arthanariee%20A.%20M."> Arthanariee A. M.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Image segmentation is the process to segment a given image into several parts so that each of these parts present in the image can be further analyzed. There are numerous techniques of image segmentation available in literature. In this paper, authors have been analyzed the edge-based approach for image segmentation. They have been implemented the different edge operators like Prewitt, Sobel, LoG, and Canny on the basis of their threshold parameter. The results of these operators have been shown for various images.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Edge%20Operator" title="Edge Operator">Edge Operator</a>, <a href="https://publications.waset.org/search?q=Edge-based%20Segmentation" title=" Edge-based Segmentation"> Edge-based Segmentation</a>, <a href="https://publications.waset.org/search?q=Image%20Segmentation" title=" Image Segmentation"> Image Segmentation</a>, <a href="https://publications.waset.org/search?q=Matlab%2010.4." title=" Matlab 10.4."> Matlab 10.4.</a> </p> <a href="https://publications.waset.org/16809/a-comparative-study-of-image-segmentation-using-edge-based-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/16809/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/16809/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/16809/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/16809/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/16809/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/16809/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/16809/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/16809/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/16809/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/16809/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/16809.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3606</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2434</span> Approximation Incremental Training Algorithm Based on a Changeable Training Set</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Yi-Fan%20Zhu">Yi-Fan Zhu</a>, <a href="https://publications.waset.org/search?q=Wei%20Zhang"> Wei Zhang</a>, <a href="https://publications.waset.org/search?q=Xuan%20Zhou"> Xuan Zhou</a>, <a href="https://publications.waset.org/search?q=Qun%20Li"> Qun Li</a>, <a href="https://publications.waset.org/search?q=Yong-Lin%20Lei"> Yong-Lin Lei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The quick training algorithms and accurate solution procedure for incremental learning aim at improving the efficiency of training of SVR, whereas there are some disadvantages for them, i.e. the nonconvergence of the formers for changeable training set and the inefficiency of the latter for a massive dataset. In order to handle the problems, a new training algorithm for a changeable training set, named Approximation Incremental Training Algorithm (AITA), was proposed. This paper explored the reason of nonconvergence theoretically and discussed the realization of AITA, and finally demonstrated the benefits of AITA both on precision and efficiency. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=support%20vector%20regression" title="support vector regression">support vector regression</a>, <a href="https://publications.waset.org/search?q=incremental%20learning" title=" incremental learning"> incremental learning</a>, <a href="https://publications.waset.org/search?q=changeable%20training%20set" title="changeable training set">changeable training set</a>, <a href="https://publications.waset.org/search?q=quick%20training%20algorithm" title=" quick training algorithm"> quick training algorithm</a>, <a href="https://publications.waset.org/search?q=accurate%20solutionprocedure" title=" accurate solutionprocedure"> accurate solutionprocedure</a> </p> <a href="https://publications.waset.org/5883/approximation-incremental-training-algorithm-based-on-a-changeable-training-set" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5883/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5883/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5883/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5883/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5883/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5883/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5883/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5883/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5883/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5883/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5883.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1484</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2433</span> Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hossein%20Nezamabadi-pour">Hossein Nezamabadi-pour</a>, <a href="https://publications.waset.org/search?q=Saeid%20Saryazdi"> Saeid Saryazdi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we present a new and effective image indexing technique that extracts features directly from DCT domain. Our proposed approach is an object-based image indexing. For each block of size 8*8 in DCT domain a feature vector is extracted. Then, feature vectors of all blocks of image using a k-means algorithm is clustered into groups. Each cluster represents a special object of the image. Then we select some clusters that have largest members after clustering. The centroids of the selected clusters are taken as image feature vectors and indexed into the database. Also, we propose an approach for using of proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Object-based%20image%20retrieval" title="Object-based image retrieval">Object-based image retrieval</a>, <a href="https://publications.waset.org/search?q=DCT%20domain" title=" DCT domain"> DCT domain</a>, <a href="https://publications.waset.org/search?q=Image%20indexing" title=" Image indexing"> Image indexing</a>, <a href="https://publications.waset.org/search?q=Image%20classification." title=" Image classification."> Image classification.</a> </p> <a href="https://publications.waset.org/4766/object-based-image-indexing-and-retrieval-in-dct-domain-using-clustering-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4766/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4766/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4766/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4766/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4766/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4766/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4766/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4766/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4766/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4766/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4766.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2025</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2432</span> A VR Cybersecurity Training Knowledge-Based Ontology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Shaila%20Rana">Shaila Rana</a>, <a href="https://publications.waset.org/search?q=Wasim%20Alhamdani"> Wasim Alhamdani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Effective cybersecurity learning relies on an engaging, interactive, and entertaining activity that fosters positive learning outcomes. VR cybersecurity training may provide a training format that is engaging, interactive, and entertaining. A methodological approach and framework are needed to allow trainers and educators to employ VR cybersecurity training methods to promote positive learning outcomes. Thus, this paper aims to create an approach that cybersecurity trainers can follow to create a VR cybersecurity training module. This methodology utilizes concepts from other cybersecurity training frameworks, such as NICE and CyTrONE. Other cybersecurity training frameworks do not incorporate the use of VR. VR training proposes unique challenges that cannot be addressed in current cybersecurity training frameworks. Subsequently, this ontology utilizes concepts to develop VR training to create a relevant methodology for creating VR cybersecurity training modules.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Virtual%20reality%20cybersecurity%20training" title="Virtual reality cybersecurity training">Virtual reality cybersecurity training</a>, <a href="https://publications.waset.org/search?q=VR%20cybersecurity%20training" title=" VR cybersecurity training"> VR cybersecurity training</a>, <a href="https://publications.waset.org/search?q=traditional%20cybersecurity%20training" title=" traditional cybersecurity training"> traditional cybersecurity training</a>, <a href="https://publications.waset.org/search?q=ontology." title=" ontology."> ontology.</a> </p> <a href="https://publications.waset.org/10012600/a-vr-cybersecurity-training-knowledge-based-ontology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012600/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012600/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012600/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012600/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012600/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012600/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012600/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012600/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012600/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012600/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012600.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2431</span> Prediction of a Human Facial Image by ANN using Image Data and its Content on Web Pages</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Chutimon%20Thitipornvanid">Chutimon Thitipornvanid</a>, <a href="https://publications.waset.org/search?q=Siripun%20Sanguansintukul"> Siripun Sanguansintukul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Choosing the right metadata is a critical, as good information (metadata) attached to an image will facilitate its visibility from a pile of other images. The image-s value is enhanced not only by the quality of attached metadata but also by the technique of the search. This study proposes a technique that is simple but efficient to predict a single human image from a website using the basic image data and the embedded metadata of the image-s content appearing on web pages. The result is very encouraging with the prediction accuracy of 95%. This technique may become a great assist to librarians, researchers and many others for automatically and efficiently identifying a set of human images out of a greater set of images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Metadata" title="Metadata">Metadata</a>, <a href="https://publications.waset.org/search?q=Prediction" title=" Prediction"> Prediction</a>, <a href="https://publications.waset.org/search?q=Multi-layer%20perceptron" title=" Multi-layer perceptron"> Multi-layer perceptron</a>, <a href="https://publications.waset.org/search?q=Human%20facial%20image" title=" Human facial image"> Human facial image</a>, <a href="https://publications.waset.org/search?q=Image%20mining." title=" Image mining."> Image mining.</a> </p> <a href="https://publications.waset.org/4909/prediction-of-a-human-facial-image-by-ann-using-image-data-and-its-content-on-web-pages" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4909/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4909/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4909/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4909/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4909/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4909/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4909/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4909/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4909/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4909/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1214</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2430</span> Changes in Vocational Teacher Training in Hungary: Challenges and Possibilities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20Bacsa-B%C3%A1n">A. Bacsa-B谩n</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The training of vocational education teachers in Hungary was a special training system before the Bologna system, but under the influence of the Bologna system, the structure and content of the training changed significantly. The training of vocational teachers, including engineering teacher and vocational trainers, is considerably different when compared to the training of public education teachers. This study aims to present these differences and peculiarities, problems, and issues of the training as well as to outline the possibilities of further development. During the study the following methods were implemented: empirical research among students and graduates of vocational teacher training, as well as analysis of the relevant literature. The study summarizes the research and theoretical results related to Vocational Education and Training (VET) teacher training over the past 15 years, with the aim of developing the training and mapping new directions in the field.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Bologna%20system" title="Bologna system">Bologna system</a>, <a href="https://publications.waset.org/search?q=vocational%20educators" title=" vocational educators"> vocational educators</a>, <a href="https://publications.waset.org/search?q=vocational%20teachers" title=" vocational teachers"> vocational teachers</a>, <a href="https://publications.waset.org/search?q=vocational%20teacher%20training." title=" vocational teacher training."> vocational teacher training.</a> </p> <a href="https://publications.waset.org/10012766/changes-in-vocational-teacher-training-in-hungary-challenges-and-possibilities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012766/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012766/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012766/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012766/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012766/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012766/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012766/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012766/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012766/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012766/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012766.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2429</span> Medical Image Edge Detection Based on Neuro-Fuzzy Approach </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=J.%20Mehena">J. Mehena</a>, <a href="https://publications.waset.org/search?q=M.%20C.%20Adhikary"> M. C. Adhikary</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Edge detection is one of the most important tasks in image processing. Medical image edge detection plays an important role in segmentation and object recognition of the human organs. It refers to the process of identifying and locating sharp discontinuities in medical images. In this paper, a neuro-fuzzy based approach is introduced to detect the edges for noisy medical images. This approach uses desired number of neuro-fuzzy subdetectors with a postprocessor for detecting the edges of medical images. The internal parameters of the approach are optimized by training pattern using artificial images. The performance of the approach is evaluated on different medical images and compared with popular edge detection algorithm. From the experimental results, it is clear that this approach has better performance than those of other competing edge detection algorithms for noisy medical images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Edge%20detection" title="Edge detection">Edge detection</a>, <a href="https://publications.waset.org/search?q=neuro-fuzzy" title=" neuro-fuzzy"> neuro-fuzzy</a>, <a href="https://publications.waset.org/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/search?q=artificial%20image" title=" artificial image"> artificial image</a>, <a href="https://publications.waset.org/search?q=object%20recognition." title=" object recognition."> object recognition.</a> </p> <a href="https://publications.waset.org/10004525/medical-image-edge-detection-based-on-neuro-fuzzy-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004525/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004525/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004525/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004525/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004525/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004525/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004525/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004525/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004525/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004525/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004525.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1282</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2428</span> A Quantum Algorithm of Constructing Image Histogram</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Yi%20Zhang">Yi Zhang</a>, <a href="https://publications.waset.org/search?q=Kai%20Lu"> Kai Lu</a>, <a href="https://publications.waset.org/search?q=Ying-hui%20Gao"> Ying-hui Gao</a>, <a href="https://publications.waset.org/search?q=Mo%20Wang"> Mo Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Histogram plays an important statistical role in digital image processing. However, the existing quantum image models are deficient to do this kind of image statistical processing because different gray scales are not distinguishable. In this paper, a novel quantum image representation model is proposed firstly in which the pixels with different gray scales can be distinguished and operated simultaneously. Based on the new model, a fast quantum algorithm of constructing histogram for quantum image is designed. Performance comparison reveals that the new quantum algorithm could achieve an approximately quadratic speedup than the classical counterpart. The proposed quantum model and algorithm have significant meanings for the future researches of quantum image processing. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Quantum%20Image%20Representation" title="Quantum Image Representation">Quantum Image Representation</a>, <a href="https://publications.waset.org/search?q=Quantum%0AAlgorithm" title=" Quantum Algorithm"> Quantum Algorithm</a>, <a href="https://publications.waset.org/search?q=Image%20Histogram." title=" Image Histogram."> Image Histogram.</a> </p> <a href="https://publications.waset.org/4536/a-quantum-algorithm-of-constructing-image-histogram" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4536/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4536/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4536/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4536/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4536/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4536/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4536/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4536/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4536/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4536/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4536.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2356</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2427</span> Modified Vector Quantization Method for Image Compression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=K.Somasundaram">K.Somasundaram</a>, <a href="https://publications.waset.org/search?q=S.Domnic"> S.Domnic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A low bit rate still image compression scheme by compressing the indices of Vector Quantization (VQ) and generating residual codebook is proposed. The indices of VQ are compressed by exploiting correlation among image blocks, which reduces the bit per index. A residual codebook similar to VQ codebook is generated that represents the distortion produced in VQ. Using this residual codebook the distortion in the reconstructed image is removed, thereby increasing the image quality. Our scheme combines these two methods. Experimental results on standard image Lena show that our scheme can give a reconstructed image with a PSNR value of 31.6 db at 0.396 bits per pixel. Our scheme is also faster than the existing VQ variants. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20compression" title="Image compression">Image compression</a>, <a href="https://publications.waset.org/search?q=Vector%20Quantization" title=" Vector Quantization"> Vector Quantization</a>, <a href="https://publications.waset.org/search?q=Residual%0ACodebook." title=" Residual Codebook."> Residual Codebook.</a> </p> <a href="https://publications.waset.org/9419/modified-vector-quantization-method-for-image-compression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9419/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9419/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9419/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9419/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9419/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9419/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9419/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9419/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9419/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9419/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9419.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1439</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2426</span> Image Similarity: A Genetic Algorithm Based Approach </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=R.%20C.%20Joshi">R. C. Joshi</a>, <a href="https://publications.waset.org/search?q=Shashikala%20Tapaswi"> Shashikala Tapaswi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper proposes an approach using genetic algorithm for computing the region based image similarity. The image is denoted using a set of segmented regions reflecting color and texture properties of an image. An image is associated with a family of image features corresponding to the regions. The resemblance of two images is then defined as the overall similarity between two families of image features, and quantified by a similarity measure, which integrates properties of all the regions in the images. A genetic algorithm is applied to decide the most plausible matching. The performance of the proposed method is illustrated using examples from an image database of general-purpose images, and is shown to produce good results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Features" title="Image Features">Image Features</a>, <a href="https://publications.waset.org/search?q=color%20descriptor" title=" color descriptor"> color descriptor</a>, <a href="https://publications.waset.org/search?q=segmented%20classes" title=" segmented classes"> segmented classes</a>, <a href="https://publications.waset.org/search?q=texture%20descriptors" title="texture descriptors">texture descriptors</a>, <a href="https://publications.waset.org/search?q=genetic%20algorithm." title=" genetic algorithm."> genetic algorithm.</a> </p> <a href="https://publications.waset.org/5728/image-similarity-a-genetic-algorithm-based-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5728/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5728/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5728/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5728/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5728/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5728/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5728/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5728/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5728/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5728/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5728.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2326</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2425</span> A Novel Dual-Purpose Image Watermarking Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Maha%20Sharkas">Maha Sharkas</a>, <a href="https://publications.waset.org/search?q=Dahlia%20R.%20ElShafie"> Dahlia R. ElShafie</a>, <a href="https://publications.waset.org/search?q=Nadder%20Hamdy"> Nadder Hamdy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image watermarking has proven to be quite an efficient tool for the purpose of copyright protection and authentication over the last few years. In this paper, a novel image watermarking technique in the wavelet domain is suggested and tested. To achieve more security and robustness, the proposed techniques relies on using two nested watermarks that are embedded into the image to be watermarked. A primary watermark in form of a PN sequence is first embedded into an image (the secondary watermark) before being embedded into the host image. The technique is implemented using Daubechies mother wavelets where an arbitrary embedding factor 伪 is introduced to improve the invisibility and robustness. The proposed technique has been applied on several gray scale images where a PSNR of about 60 dB was achieved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20watermarking" title="Image watermarking">Image watermarking</a>, <a href="https://publications.waset.org/search?q=Multimedia%20Security" title=" Multimedia Security"> Multimedia Security</a>, <a href="https://publications.waset.org/search?q=Wavelets" title="Wavelets">Wavelets</a>, <a href="https://publications.waset.org/search?q=Image%20Processing." title=" Image Processing."> Image Processing.</a> </p> <a href="https://publications.waset.org/4108/a-novel-dual-purpose-image-watermarking-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4108/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4108/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4108/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4108/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4108/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4108/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4108/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4108/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4108/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4108/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4108.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1699</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2424</span> Objective Performance of Compressed Image Quality Assessments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ratchakit%20Sakuldee">Ratchakit Sakuldee</a>, <a href="https://publications.waset.org/search?q=Somkait%20Udomhunsakul"> Somkait Udomhunsakul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Measurement of the quality of image compression is important for image processing application. In this paper, we propose an objective image quality assessment to measure the quality of gray scale compressed image, which is correlation well with subjective quality measurement (MOS) and least time taken. The new objective image quality measurement is developed from a few fundamental of objective measurements to evaluate the compressed image quality based on JPEG and JPEG2000. The reliability between each fundamental objective measurement and subjective measurement (MOS) is found. From the experimental results, we found that the Maximum Difference measurement (MD) and a new proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are the suitable measurements that can be used to evaluate the quality of JPEG200 and JPEG compressed image, respectively. In addition, MD and SCLMSE measurements are scaled to make them equivalent to MOS, given the rate of compressed image quality from 1 to 5 (unacceptable to excellent quality).</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=JPEG" title="JPEG">JPEG</a>, <a href="https://publications.waset.org/search?q=JPEG2000" title=" JPEG2000"> JPEG2000</a>, <a href="https://publications.waset.org/search?q=objective%20image%20quality%20measurement" title=" objective image quality measurement"> objective image quality measurement</a>, <a href="https://publications.waset.org/search?q=subjective%20image%20quality%20measurement" title=" subjective image quality measurement"> subjective image quality measurement</a>, <a href="https://publications.waset.org/search?q=correlation%20coefficients." title=" correlation coefficients."> correlation coefficients.</a> </p> <a href="https://publications.waset.org/10171/objective-performance-of-compressed-image-quality-assessments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10171/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10171/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10171/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10171/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10171/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10171/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10171/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10171/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10171/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10171/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2188</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2423</span> MAP-Based Image Super-resolution Reconstruction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Xueting%20Liu">Xueting Liu</a>, <a href="https://publications.waset.org/search?q=Daojin%20Song"> Daojin Song</a>, <a href="https://publications.waset.org/search?q=Chuandai%20Dong"> Chuandai Dong</a>, <a href="https://publications.waset.org/search?q=Hongkui%20Li"> Hongkui Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>From a set of shifted, blurred, and decimated image , super-resolution image reconstruction can get a high-resolution image. So it has become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be combined to make the problem well-posed, which contributes to some regularization methods. In the regularization methods at present, however, regularization parameter was selected by experience in some cases and other techniques have too heavy computation cost for computing the parameter. In this paper, we construct a new super-resolution algorithm by transforming the solving of the System stem <em>袆</em>=<em>An </em>into the solving of the equations X+A*X-1A=I , and propose an inverse iterative method.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=High-resolution%20MAP%20image" title="High-resolution MAP image">High-resolution MAP image</a>, <a href="https://publications.waset.org/search?q=Reconstruction" title=" Reconstruction"> Reconstruction</a>, <a href="https://publications.waset.org/search?q=Image%20interpolation" title=" Image interpolation"> Image interpolation</a>, <a href="https://publications.waset.org/search?q=Motion%20Estimation" title=" Motion Estimation"> Motion Estimation</a>, <a href="https://publications.waset.org/search?q=Hermitian%20positive%20definite%20solutions." title=" Hermitian positive definite solutions."> Hermitian positive definite solutions.</a> </p> <a href="https://publications.waset.org/2034/map-based-image-super-resolution-reconstruction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/2034/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/2034/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/2034/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/2034/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/2034/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/2034/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/2034/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/2034/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/2034/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/2034/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/2034.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2156</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2422</span> Enhancing Multi-Frame Images Using Self-Delaying Dynamic Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Lewis%20E.%20Hibell">Lewis E. Hibell</a>, <a href="https://publications.waset.org/search?q=Honghai%20Liu"> Honghai Liu</a>, <a href="https://publications.waset.org/search?q=David%20J.%20Brown"> David J. Brown</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the use of a newly created network structure known as a Self-Delaying Dynamic Network (SDN) to create a high resolution image from a set of time stepped input frames. These SDNs are non-recurrent temporal neural networks which can process time sampled data. SDNs can store input data for a lifecycle and feature dynamic logic based connections between layers. Several low resolution images and one high resolution image of a scene were presented to the SDN during training by a Genetic Algorithm. The SDN was trained to process the input frames in order to recreate the high resolution image. The trained SDN was then used to enhance a number of unseen noisy image sets. The quality of high resolution images produced by the SDN is compared to that of high resolution images generated using Bi-Cubic interpolation. The SDN produced images are superior in several ways to the images produced using Bi-Cubic interpolation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20Enhancement" title="Image Enhancement">Image Enhancement</a>, <a href="https://publications.waset.org/search?q=Neural%20Networks" title=" Neural Networks"> Neural Networks</a>, <a href="https://publications.waset.org/search?q=Multi-Frame." title=" Multi-Frame."> Multi-Frame.</a> </p> <a href="https://publications.waset.org/4055/enhancing-multi-frame-images-using-self-delaying-dynamic-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4055/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4055/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4055/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4055/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4055/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4055/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4055/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4055/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4055/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4055/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4055.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1194</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2421</span> Image Search by Features of Sorted Gray level Histogram Polynomial Curve </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Awais%20Adnan">Awais Adnan</a>, <a href="https://publications.waset.org/search?q=Muhammad%20Ali"> Muhammad Ali</a>, <a href="https://publications.waset.org/search?q=Amir%20Hanif%20Dar"> Amir Hanif Dar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Image Searching was always a problem specially when these images are not properly managed or these are distributed over different locations. Currently different techniques are used for image search. On one end, more features of the image are captured and stored to get better results. Storing and management of such features is itself a time consuming job. While on the other extreme if fewer features are stored the accuracy rate is not satisfactory. Same image stored with different visual properties can further reduce the rate of accuracy. In this paper we present a new concept of using polynomials of sorted histogram of the image. This approach need less overhead and can cope with the difference in visual features of image.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Sorted%20Histogram" title="Sorted Histogram">Sorted Histogram</a>, <a href="https://publications.waset.org/search?q=Polynomial%20Curves" title=" Polynomial Curves"> Polynomial Curves</a>, <a href="https://publications.waset.org/search?q=feature%20pointsof%20images" title=" feature pointsof images"> feature pointsof images</a>, <a href="https://publications.waset.org/search?q=Grayscale" title=" Grayscale"> Grayscale</a>, <a href="https://publications.waset.org/search?q=visual%20properties%20of%20image." title=" visual properties of image."> visual properties of image.</a> </p> <a href="https://publications.waset.org/770/image-search-by-features-of-sorted-gray-level-histogram-polynomial-curve" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/770/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/770/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/770/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/770/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/770/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/770/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/770/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/770/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/770/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/770/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/770.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1428</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2420</span> Using Self Organizing Feature Maps for Classification in RGB Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hassan%20Masoumi">Hassan Masoumi</a>, <a href="https://publications.waset.org/search?q=Ahad%20Salimi"> Ahad Salimi</a>, <a href="https://publications.waset.org/search?q=Nazanin%20Barhemmat"> Nazanin Barhemmat</a>, <a href="https://publications.waset.org/search?q=Babak%20Gholami"> Babak Gholami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity, multi input and output mapping characteristics. In fact, most feedforward networks with nonlinear nodal functions have been proved to be universal approximates. In this paper, we propose a new supervised method for color image classification based on selforganizing feature maps (SOFM). This algorithm is based on competitive learning. The method partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system entered into RGB image. Experiments with simulated data showed that separability of classes increased when increasing training time. In additional, the result shows proposed algorithms are effective for color image classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Classification" title="Classification">Classification</a>, <a href="https://publications.waset.org/search?q=SOFM" title=" SOFM"> SOFM</a>, <a href="https://publications.waset.org/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/search?q=RGB%20images." title=" RGB images."> RGB images.</a> </p> <a href="https://publications.waset.org/10002035/using-self-organizing-feature-maps-for-classification-in-rgb-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002035/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002035/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002035/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002035/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002035/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002035/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002035/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002035/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002035/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002035/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002035.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2319</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=81">81</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=82">82</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Image%20training.&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>