<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: image memorability</title> <meta name="description" content="Search results for: image memorability"> <meta name="keywords" content="image memorability"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="image memorability" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image memorability"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2773</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image memorability</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2773</span> Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elham%20Bagheri">Elham Bagheri</a>, <a href="https://publications.waset.org/abstracts/search?q=Yalda%20Mohsenzadeh"> Yalda Mohsenzadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. 
It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. 
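The measures just described, per-image reconstruction error, latent-space distinctiveness (nearest-neighbour Euclidean distance), and their correlation with memorability scores, can be sketched with a minimal numpy illustration. This is not the study's VGG-based pipeline; the arrays stand in for its images and latent representations:

```python
import numpy as np

def reconstruction_error(originals, reconstructions):
    """Per-image mean squared error between original and reconstructed pixels
    (the study also considers structural and perceptual losses)."""
    diff = (originals - reconstructions).reshape(len(originals), -1)
    return (diff ** 2).mean(axis=1)

def distinctiveness(latents):
    """Euclidean distance from each latent vector to its nearest neighbor
    within the latent space."""
    dists = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # an image is not its own neighbor
    return dists.min(axis=1)

def pearson(x, y):
    """Correlation of a computed measure with memorability scores."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float((xs * ys).mean())
```

Here `pearson` is a plain Pearson correlation; in the study the reconstructions and latent vectors come from the fine-tuned VGG autoencoder.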
The results indicate a strong correlation between images' reconstruction error and latent-space distinctiveness on the one hand and their memorability scores on the other. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacity, are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error achieved relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they contain features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability that could impact industries reliant on visual content, and they mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autoencoder" title="autoencoder">autoencoder</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20vision" title=" computational vision"> computational vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20memorability" title=" image memorability"> image memorability</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20reconstruction" title=" image reconstruction"> image reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=memory%20retention" title=" memory retention"> memory retention</a>, <a href="https://publications.waset.org/abstracts/search?q=reconstruction%20error" title=" reconstruction error"> reconstruction error</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a
href="https://publications.waset.org/abstracts/175805/modeling-visual-memorability-assessment-with-autoencoders-reveals-characteristics-of-memorable-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175805.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2772</span> Electroencephalography Correlates of Memorability While Viewing Advertising Content</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victor%20N.%20Anisimov">Victor N. Anisimov</a>, <a href="https://publications.waset.org/abstracts/search?q=Igor%20E.%20Serov"> Igor E. Serov</a>, <a href="https://publications.waset.org/abstracts/search?q=Ksenia%20M.%20Kolkova"> Ksenia M. Kolkova</a>, <a href="https://publications.waset.org/abstracts/search?q=Natalia%20V.%20Galkina"> Natalia V. Galkina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of memorability of the advertising content is closely connected with the key issues of neuromarketing. The memorability of the advertising content contributes to the marketing effectiveness of the promoted product. Significant directions of studying the phenomenon of memorability are the memorability of the brand (detected through the memorability of the logo) and the memorability of the product offer (detected through the memorization of dynamic audiovisual advertising content - commercial). The aim of this work is to reveal the predictors of memorization of static and dynamic audiovisual stimuli (logos and commercials). 
An important aim of the research was to reveal differences in the psychophysiological correlates of memorability between static and dynamic audiovisual stimuli. We assumed that static and dynamic images are perceived in different ways and may differ in the memorization process. Objective methods of recording psychophysiological parameters while watching static and dynamic audiovisual materials are well suited to this aim. Electroencephalography (EEG) was performed to identify correlates of the memorability of various stimuli in the electrical activity of the cerebral cortex. All stimuli (in the static and dynamic groups separately) were divided into two groups, remembered and not remembered, based on the results of a questionnaire. Participants filled out the questionnaires not immediately after viewing the stimuli but after a time interval, in order to detect stimuli retained in long-term memory. Using statistical methods, we developed a classifier (statistical model) that predicts which group (remembered or not remembered) a stimulus falls into, based on the psychophysiological correlates of its perception. The output of the statistical model was compared with the questionnaire results. Conclusions: predictors of the memorability of static and dynamic stimuli have been identified, which allows prediction of which stimuli have a higher probability of being remembered. A further development of this study will be the creation of a stimulus memory model capable of recognizing a stimulus as previously seen or new. Thus, in modeling the memorization of a stimulus, we plan to take into account the stimulus recognition factor, which is one of the most important tasks for neuromarketing.
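The abstract does not name the statistical model behind the classifier. Purely as an illustration, a remembered/not-remembered classifier over hypothetical EEG feature vectors could be sketched as a simple logistic regression trained by gradient descent:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=3000):
    """Fit a logistic-regression classifier by gradient descent on the log-loss.
    X: (n_stimuli, n_features) EEG-derived features; y: 1 = remembered, 0 = not."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(remembered)
        w -= lr * X.T @ (p - y) / len(y)        # gradient of the log-loss in w
        b -= lr * float(np.mean(p - y))         # gradient in the bias
    return w, b

def predict(X, w, b):
    """Assign each stimulus to the 'remembered' (1) or 'not remembered' (0) group."""
    return (X @ w + b > 0).astype(int)
```

The predicted group labels can then be compared against the questionnaire-derived labels, mirroring the comparison described in the abstract.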
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=memory" title="memory">memory</a>, <a href="https://publications.waset.org/abstracts/search?q=commercials" title=" commercials"> commercials</a>, <a href="https://publications.waset.org/abstracts/search?q=neuromarketing" title=" neuromarketing"> neuromarketing</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG" title=" EEG"> EEG</a>, <a href="https://publications.waset.org/abstracts/search?q=branding" title=" branding"> branding</a> </p> <a href="https://publications.waset.org/abstracts/91017/electroencephalography-correlates-of-memorability-while-viewing-advertising-content" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">251</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2771</span> The Desire for Significance & Memorability in Popular Culture: A Cognitive Psychological Study of Contemporary Literature, Art, and Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Israel%20B.%20Bitton">Israel B. Bitton</a> </p> <p class="card-text"><strong>Abstract:</strong></p> “Memory” is associated with various phenomena, from physical to mental, personal to collective and historical to cultural. 
As part of a broader exploration of memory studies in philosophy and science (slated for academic publication October 2021), this specific study employs analytical methods of cognitive psychology and philosophy of memory to theorize that A) the primary human will (drive) is to significance, in that every human action and expression can be rooted in a most primal desire to be cosmically significant (however that is individually perceived); and B) the will to significance manifests as the will to memorability, an innate desire to be remembered by others after death. In support of these broad claims, a review of various popular culture “touchpoints”—historic and contemporary records spanning literature, film and television, traditional news media, and social media—is presented to demonstrate how this very theory is repeatedly and commonly expressed (and has been for a long time) by many popular public figures as well as “everyday people.” Though the theory was developed before COVID, the crisis only increased its relevance: so many people were forced to die alone, leaving them and their loved ones to face even greater existential angst than what ordinarily accompanies death, since the usual expectations for one’s “final moments” were shattered. To underscore this issue of, and response to, what can be considered a sociocultural “memory gap,” this study concludes with a summary of several projects launched by journalists at the height of the pandemic to document the memorable human stories behind COVID’s tragic warp-speed death toll. When analyzed through the lens of Viktor E. Frankl’s psychoanalytical perspective on “existential meaning,” these projects show how countless individuals were robbed of the last wills and testaments to their self-significance and memorability typically afforded to the dying and the aggrieved.
The resulting insight ought to inform how government and public health officials determine what is truly “non-essential” to human health, physical and mental, at times of crisis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20psychology" title="cognitive psychology">cognitive psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=covid" title=" covid"> covid</a>, <a href="https://publications.waset.org/abstracts/search?q=neuroscience" title=" neuroscience"> neuroscience</a>, <a href="https://publications.waset.org/abstracts/search?q=philosophy%20of%20memory" title=" philosophy of memory"> philosophy of memory</a> </p> <a href="https://publications.waset.org/abstracts/139374/the-desire-for-significance-memorability-in-popular-culture-a-cognitive-psychological-study-of-contemporary-literature-art-and-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139374.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2770</span> Design and Implementation of Image Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. 
Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images while avoiding artifacts or blurring; it deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is the process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve the quality of scaled-down images in the image domain, its effect on Fourier-based techniques remains unknown. Super-resolution substantially improved the spatial resolution of patient LGE images by sharpening the edges of the heart and the scar. This paper investigates the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. In the training phase, pairs of low-resolution and high-resolution images are used to learn a dictionary. In the test phase, patches are extracted from the low-resolution input, and the difference between the high-resolution image and an interpolated version of the low-resolution image is estimated by applying a convolution method to the learned dictionary and the extracted patches. Finally, the super-resolution image is obtained by combining the interpolated image with this estimated high-frequency difference. Super-resolution reduces image errors and improves the image quality.
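The final combination step, a baseline interpolated image plus an estimated high-frequency residual, can be sketched as follows. Nearest-neighbour interpolation stands in here for the paper's interpolation method, and the residual is assumed to come from the learned patch dictionary rather than being computed:

```python
import numpy as np

def interpolate_up(lr_img, factor):
    """Baseline upscaling of the low-resolution image (nearest neighbour:
    each pixel is replicated into a factor x factor block)."""
    return np.kron(lr_img, np.ones((factor, factor)))

def super_resolve(lr_img, residual, factor=2):
    """Combine the interpolated image with the high-frequency residual that,
    in the paper, is estimated from patches via the learned dictionary."""
    return interpolate_up(lr_img, factor) + residual
```

With a zero residual this reduces to plain interpolation; the quality gain of the method comes entirely from how well the dictionary predicts the missing high-frequency detail.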
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2769</span> A Method of the Semantic on Image Auto-Annotation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Huo">Lin Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianwei%20Liu"> Xianwei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingxiong%20Zhou"> Jingxiong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, due to the existence of a semantic gap between image visual features and human concepts, the semantics of image auto-annotation has become an important topic. Image auto-annotation by search is a popular approach: first, low-level visual features are extracted from the image and mapped, by a corresponding hash method, into hash codes, which are finally transformed into binary strings and stored. We use this approach to design and implement a method of image semantic auto-annotation. Tests based on the Corel image set show that the method is effective. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20auto-annotation" title="image auto-annotation">image auto-annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20correlograms" title=" color correlograms"> color correlograms</a>, <a href="https://publications.waset.org/abstracts/search?q=Hash%20code" title=" Hash code"> Hash code</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/15628/a-method-of-the-semantic-on-image-auto-annotation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2768</span> Deployment of Matrix Transpose in Digital Image Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Okike%20Benjamin">Okike Benjamin</a>, <a href="https://publications.waset.org/abstracts/search?q=Garba%20E%20J.%20D."> Garba E J.
D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Encryption is used to conceal information from prying eyes. Presently, the encryption of information and data is common due to the volume of data in transit across the globe on a daily basis. Image encryption, however, has yet to receive the attention it deserves from researchers; as a result, image, video, and multimedia documents remain exposed to unauthorized access. The authors propose image encryption using the matrix transpose, and an algorithm that performs this encryption is developed. In the proposed technique, the image to be encrypted is split into parts based on the image size, and each part is encrypted separately using the matrix transpose. The actual encryption operates on the picture elements (pixels) that make up the image. After each part of the image is encrypted, the positions of the encrypted parts are swapped before transmission takes place. Swapping the positions makes the encrypted image more difficult for a cryptanalyst to decrypt.
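The two operations described, transposing each part and swapping part positions, can be sketched as follows. This is a minimal numpy illustration of the idea, not the authors' algorithm; the block size and the example permutation are assumptions:

```python
import numpy as np

def encrypt(img, block=2, perm=None):
    """Split a square image into block x block tiles, transpose each tile,
    then place tile k at position perm[k] (a permutation of tile indices)."""
    n = img.shape[0] // block
    tiles = [img[i*block:(i+1)*block, j*block:(j+1)*block].T
             for i in range(n) for j in range(n)]
    if perm is None:
        perm = list(range(len(tiles)))[::-1]  # example swap: reverse the order
    out = np.zeros_like(img)
    for k, t in enumerate(tiles):
        i, j = divmod(perm[k], n)
        out[i*block:(i+1)*block, j*block:(j+1)*block] = t
    return out, perm

def decrypt(enc, perm, block=2):
    """Invert the tile permutation, then transpose each tile back."""
    n = enc.shape[0] // block
    out = np.zeros_like(enc)
    for k in range(len(perm)):
        i, j = divmod(perm[k], n)            # where tile k was placed
        t = enc[i*block:(i+1)*block, j*block:(j+1)*block]
        oi, oj = divmod(k, n)                # tile k's original position
        out[oi*block:(oi+1)*block, oj*block:(oj+1)*block] = t.T
    return out
```

A receiver holding the permutation (the key) recovers the image exactly, since both the transpose and the position swap are invertible.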
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=matrices" title=" matrices"> matrices</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel" title=" pixel"> pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20transpose" title=" matrix transpose "> matrix transpose </a> </p> <a href="https://publications.waset.org/abstracts/48717/deployment-of-matrix-transpose-in-digital-image-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2767</span> Performance of Hybrid Image Fusion: Implementation of Dual-Tree Complex Wavelet Transform Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Gupta">Manoj Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nirmendra%20Singh%20Bhadauria"> Nirmendra Singh Bhadauria</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most of the applications in image processing require high spatial and high spectral resolution in a single image. For example satellite image system, the traffic monitoring system, and long range sensor fusion system all use image processing. However, most of the available equipment is not capable of providing this type of data. 
A sensor in a surveillance system can cover only a small area at a particular focus, yet the demanding applications of such systems require high coverage of the field of view. Image fusion provides the possibility of combining information from different sources. In this paper, we decompose the images using the dual-tree complex wavelet transform (DT-CWT), fuse them using average and hybrid (maxima and average) pixel-level techniques, and then compare the quality of the two fused images using PSNR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/abstracts/search?q=DT-CWT" title=" DT-CWT"> DT-CWT</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20image%20fusion" title=" average image fusion"> average image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20image%20fusion" title=" hybrid image fusion"> hybrid image fusion</a> </p> <a href="https://publications.waset.org/abstracts/19207/performance-of-hybrid-image-fusion-implementation-of-dual-tree-complex-wavelet-transform-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2766</span> Assessment of Image Databases Used for Human Skin Detection Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Saleh%20Alshehri">Saleh Alshehri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection is a vital step in many applications. Some of these applications are critical, especially those related to security, which underscores the importance of a high-performance detection algorithm. To validate the accuracy of such an algorithm, image databases are usually used. However, the suitability of these image databases is still questionable. It is suggested that suitability can be measured mainly by how much of the color space the database covers. This research investigates the validity of three famous image databases. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20databases" title="image databases">image databases</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/87836/assessment-of-image-databases-used-for-human-skin-detection-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">271</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2765</span> A Novel Combination Method for Computing the Importance Map of Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Ahmad%20Absetan">Ahmad Absetan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Nooshyar"> Mahdi Nooshyar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The importance map is an image-based measure and a core part of resizing algorithms. Importance measures include image gradients, saliency, and entropy, as well as high-level cues such as face detectors, motion detectors, and more. In this work, we propose a new method to calculate the importance map: it is generated automatically using a novel combination of image edge density and the Harel saliency measure. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image-resizing applications to preserve important areas while maintaining image quality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-aware%20image%20resizing" title="content-aware image resizing">content-aware image resizing</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20saliency" title=" visual saliency"> visual saliency</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20density" title=" edge density"> edge density</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20warping" title=" image warping"> image warping</a> </p> <a href="https://publications.waset.org/abstracts/35692/a-novel-combination-method-for-computing-the-importance-map-of-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">582</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">2764</span> Blind Data Hiding Technique Using Interpolation of Subsampled Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Singara%20Singh%20Kasana">Singara Singh Kasana</a>, <a href="https://publications.waset.org/abstracts/search?q=Pankaj%20Garg"> Pankaj Garg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a blind data hiding technique based on interpolation of subsampled versions of a cover image is proposed. A subsampled image is taken as the reference image, and an interpolated image is generated from this reference image. The difference between the original cover image and the interpolated image is then used to embed secret data. Comparisons with existing interpolation-based techniques show that the proposed technique provides higher embedding capacity and better visual quality of marked images. Moreover, the performance of the proposed technique is more stable across different images. 
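The embed/extract cycle this abstract describes can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' exact algorithm: the subsampled reference pixels are left untouched, each remaining pixel hides one bit in the parity of its difference from a nearest-neighbour interpolation of the reference, and the receiver blindly rebuilds the same reference from the stego image. All function names are illustrative.

```python
import numpy as np

def embed(cover, bits):
    """Hide `bits` in the non-reference pixels of a grayscale cover image."""
    h, w = cover.shape
    stego = cover.astype(np.int32).copy()
    ref = cover[::2, ::2]                      # subsampled reference image
    interp = np.repeat(np.repeat(ref, 2, axis=0), 2, axis=1)[:h, :w]
    k = 0
    for i in range(h):
        for j in range(w):
            if (i % 2 == 0 and j % 2 == 0) or k >= len(bits):
                continue                       # reference pixels stay untouched
            d = int(stego[i, j]) - int(interp[i, j])
            if (d & 1) != bits[k]:             # encode the bit in the parity
                stego[i, j] += 1 if stego[i, j] < 255 else -1
            k += 1
    return stego.astype(np.uint8)

def extract(stego, n):
    """Blind extraction: the reference is rebuilt from the stego image itself."""
    h, w = stego.shape
    ref = stego[::2, ::2]
    interp = np.repeat(np.repeat(ref, 2, axis=0), 2, axis=1)[:h, :w]
    bits = []
    for i in range(h):
        for j in range(w):
            if (i % 2 == 0 and j % 2 == 0) or len(bits) >= n:
                continue
            bits.append((int(stego[i, j]) - int(interp[i, j])) & 1)
    return bits
```

Because only non-reference pixels are modified, the receiver recomputes exactly the same interpolated image, which is what makes the extraction blind; schemes of the kind the paper compares against typically embed multiple bits per pixel based on the difference magnitude to raise capacity.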
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interpolation" title="interpolation">interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subsampling" title=" image subsampling"> image subsampling</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=SIM" title=" SIM"> SIM</a> </p> <a href="https://publications.waset.org/abstracts/18926/blind-data-hiding-technique-using-interpolation-of-subsampled-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">578</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2763</span> Self-Image of Police Officers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leo%20Carlo%20B.%20Rondina">Leo Carlo B. Rondina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Self-image is an important factor in improving the self-esteem of personnel. The purpose of this study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen through random sampling. Using Exploratory Factor Analysis (EFA), latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicates statistical significance for ages 21-40, which means that the age of the respondent statistically improves self-image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=police%20image" title="police image">police image</a>, <a href="https://publications.waset.org/abstracts/search?q=exploratory%20factor%20analysis" title=" exploratory factor analysis"> exploratory factor analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=ordinal%20regression" title=" ordinal regression"> ordinal regression</a>, <a href="https://publications.waset.org/abstracts/search?q=Galatea%20effect" title=" Galatea effect"> Galatea effect</a> </p> <a href="https://publications.waset.org/abstracts/75550/self-image-of-police-officers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75550.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2762</span> Evaluating Classification with Efficacy Metrics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guofan%20Shao">Guofan Shao</a>, <a href="https://publications.waset.org/abstracts/search?q=Lina%20Tang"> Lina Tang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Zhang"> Hao Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The values of image classification accuracy are affected by class size distributions and classification schemes, making it difficult to compare the performance of classification algorithms across different remote sensing data sources and classification systems. Based on the term efficacy from medicine and pharmacology, we have developed the metrics of image classification efficacy at the map and class levels. 
The novelty of this approach is that a baseline classification is involved in computing image classification efficacies so that the effects of class statistics are reduced. Furthermore, the image classification efficacies are interpretable and comparable, and thus, strengthen the assessment of image data classification methods. We use real-world and hypothetical examples to explain the use of image classification efficacies. The metrics of image classification efficacy meet the critical need to rectify the strategy for the assessment of image classification performance as image classification methods are becoming more diversified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy%20assessment" title="accuracy assessment">accuracy assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=efficacy" title=" efficacy"> efficacy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty" title=" uncertainty"> uncertainty</a> </p> <a href="https://publications.waset.org/abstracts/142555/evaluating-classification-with-efficacy-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142555.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2761</span> Texture Analysis of Grayscale Co-Occurrence Matrix on Mammographic Indexed Image</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sushma">S. Sushma</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Balasubramanian"> S. Balasubramanian</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20C.%20Latha"> K. C. Latha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The mammographic image of breast cancer is compressed and synthesized to obtain coefficient values, which are converted into a 5x5 matrix to obtain an ROI image in which the highest value identifies the affected region. With the same approach, the technique has been extended to differentiate between calcification and normal cell images using the mean value derived from the 5x5 matrix values. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20analysis" title="texture analysis">texture analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=mammographic%20image" title=" mammographic image"> mammographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=partitioned%20gray%20scale%20co-oocurance%20matrix" title=" partitioned gray scale co-occurrence matrix"> partitioned gray scale co-occurrence matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=co-efficient" title=" co-efficient "> co-efficient </a> </p> <a href="https://publications.waset.org/abstracts/17516/texture-analysis-of-grayscale-co-occurrence-matrix-on-mammographic-indexed-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">533</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2760</span> Size Reduction of Images Using
Constraint Optimization Approach for Machine Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chee%20Sun%20Won">Chee Sun Won</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the size reduction of images for machine-to-machine communications. Here, the salient image regions to be preserved include the image patches of the key-points such as corners and blobs. Based on a saliency image map from the key-points and their image patches, an axis-aligned grid-size optimization is proposed for the reduction of image size. To increase the size-reduction efficiency the aspect ratio constraint is relaxed in the constraint optimization framework. The proposed method yields higher matching accuracy after the size reduction than the conventional content-aware image size-reduction methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20compression" title="image compression">image compression</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description" title=" key-point detection and description"> key-point detection and description</a>, <a href="https://publications.waset.org/abstracts/search?q=machine-to-machine%20communication" title=" machine-to-machine communication"> machine-to-machine communication</a> </p> <a href="https://publications.waset.org/abstracts/67605/size-reduction-of-images-using-constraint-optimization-approach-for-machine-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67605.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge 
badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2759</span> A Review on Artificial Neural Networks in Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Afsharipoor">B. Afsharipoor</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Nazemi"> E. Nazemi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artificial neural networks (ANNs) are powerful prediction tools that can be trained on a set of examples and are thus useful for nonlinear image processing. The present paper reviews several papers on applications of ANNs in image processing to shed light on the advantages and disadvantages of ANNs in this field. Different steps in the image processing chain, including pre-processing, enhancement, segmentation, object recognition, image understanding, and optimization using ANNs, are summarized. Furthermore, results on using multiple artificial neural networks (MANN) are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20understanding" title=" image understanding"> image understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=MANN" title=" MANN"> MANN</a> </p> <a href="https://publications.waset.org/abstracts/36843/a-review-on-artificial-neural-networks-in-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36843.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">407</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2758</span> Definition, Structure, and Core Functions of the State Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rosa%20Nurtazina">Rosa Nurtazina</a>, <a href="https://publications.waset.org/abstracts/search?q=Yerkebulan%20Zhumashov"> Yerkebulan Zhumashov</a>, <a href="https://publications.waset.org/abstracts/search?q=Maral%20Tomanova"> Maral Tomanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Humanity is entering an era when 'virtual reality' as the 
image of the world created by the media with the help of the Internet does not match reality in many respects, and when new communication technologies create a fundamentally different and previously unknown 'global space'. With these technologies, the state begins to change the basic modes of political communication between the state and society, and between state and state. Nowadays, the image of the state has become the most important tool and technology. An image is purposefully created to grant a political object (a person, organization, country, etc.) certain social and political values and to promote a more emotional perception. The political image of the state plays an important role in international relations. The success of the country's foreign policy and the development of trade and economic relations with other countries depend on whether this image is positive or negative. The foreign policy image also has an impact on political processes taking place within the state: a negative image of the country can be used by opposition forces as one of the arguments to criticize the government and its policies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20country" title="image of the country">image of the country</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20classification" title=" country's image classification"> country's image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image" title=" function of the country image"> function of the country image</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20components" title=" country's image components"> country's image components</a> </p> <a href="https://publications.waset.org/abstracts/5104/definition-structure-and-core-functions-of-the-state-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5104.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2757</span> Bitplanes Gray-Level Image Encryption Approach Using Arnold Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Abdrhman%20M.%20Ukasha">Ali Abdrhman M. Ukasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data security is needed in data transmission, storage, and communication to ensure confidentiality. The single-step parallel contour extraction (SSPCE) method is used to create the edge map as a key image from a gray-level/binary image. An XOR operation is then performed between the key image and each bit plane of the original image to change the image pixel values. 
The Arnold transform is used to change the locations of image pixels as an image scrambling process. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D gray-level image and completely reconstruct it without any distortion. The analysis also shows that the algorithm offers extremely strong security against attacks such as salt-and-pepper noise and JPEG compression. This proves that a gray-level image can be protected at a higher security level. The presented method allows easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SSPCE%20method" title="SSPCE method">SSPCE method</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20compression-salt-%20peppers%20attacks" title=" image compression-salt- peppers attacks"> image compression-salt- peppers attacks</a>, <a href="https://publications.waset.org/abstracts/search?q=bitplanes%20decomposition" title=" bitplanes decomposition"> bitplanes decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnold%20transform" title=" Arnold transform"> Arnold transform</a>, <a href="https://publications.waset.org/abstracts/search?q=lossless%20image%20encryption" title=" lossless image encryption"> lossless image encryption</a> </p> <a href="https://publications.waset.org/abstracts/14573/bitplanes-gray-level-image-encryption-approach-using-arnold-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2756</span> Integral
Image-Based Differential Filters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Inoue">Kohei Inoue</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenji%20Hara"> Kenji Hara</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiichi%20Urahama"> Kiichi Urahama</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are the same, and therefore, we can execute the integration and differentiation by solving the simultaneous equations. We applied the relationship to an image fusion problem and experimentally verified the effectiveness of the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=integral%20images" title="integral images">integral images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20images" title=" differential images"> differential images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20filters" title=" differential filters"> differential filters</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title=" image fusion"> image fusion</a> </p> <a href="https://publications.waset.org/abstracts/8531/integral-image-based-differential-filters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2755</span> Bitplanes Image Encryption/Decryption Using Edge Map (SSPCE Method) and Arnold Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20A.%20Ukasha">Ali A. Ukasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data security is needed in data transmission, storage, and communication to ensure confidentiality. The single-step parallel contour extraction (SSPCE) method is used to create the edge map as a key image from a gray-level/binary image. An XOR operation is then performed between the key image and each bit plane of the original image to change the image pixel values. The Arnold transform is used to change the locations of image pixels as an image scrambling process. 
Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D gray-level image and completely reconstruct it without any distortion. The analysis also shows that the algorithm offers extremely strong security against attacks such as salt-and-pepper noise and JPEG compression. This proves that a gray-level image can be protected at a higher security level. The presented method allows easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SSPCE%20method" title="SSPCE method">SSPCE method</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20compression" title=" image compression"> image compression</a>, <a href="https://publications.waset.org/abstracts/search?q=salt%20and%0D%0Apeppers%20attacks" title=" salt and peppers attacks"> salt and peppers attacks</a>, <a href="https://publications.waset.org/abstracts/search?q=bitplanes%20decomposition" title=" bitplanes decomposition"> bitplanes decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnold%20transform" title=" Arnold transform"> Arnold transform</a>, <a href="https://publications.waset.org/abstracts/search?q=lossless%20image%20encryption" title=" lossless image encryption"> lossless image encryption</a> </p> <a href="https://publications.waset.org/abstracts/14570/bitplanes-image-encryptiondecryption-using-edge-map-sspce-method-and-arnold-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14570.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2754</span> 
Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian">M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy"> M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach to super-resolve a low-resolution (LR) image, which is very useful in multimedia communication, medical image enhancement, and satellite image enhancement for obtaining a clear view of the information in the image. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. This method uses the B-Spline technique and crispening. This work is evaluated qualitatively and quantitatively using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different noise types and noise factors. 
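As a rough sketch of the pipeline this abstract describes (interpolation-based upscaling followed by crispening, evaluated with MSE/PSNR), the snippet below substitutes separable linear interpolation for the paper's Advanced B-Spline step and a simple unsharp mask for crispening; all names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def upscale2x(img):
    """Double the image size with separable linear interpolation
    (a stand-in for the cubic B-spline interpolation step)."""
    h, w = img.shape
    xs = np.linspace(0, h - 1, 2 * h)
    ys = np.linspace(0, w - 1, 2 * w)
    tmp = np.empty((2 * h, w))
    for j in range(w):                         # interpolate along rows
        tmp[:, j] = np.interp(xs, np.arange(h), img[:, j])
    out = np.empty((2 * h, 2 * w))
    for i in range(2 * h):                     # then along columns
        out[i, :] = np.interp(ys, np.arange(w), tmp[i, :])
    return out

def crispen(img, amount=1.0):
    """Unsharp masking to restore edges smoothed by interpolation."""
    blur = img.copy()
    blur[1:-1, 1:-1] = sum(                    # 3x3 box blur on the interior
        img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return img + amount * (img - blur)

def psnr(a, b, peak=255.0):
    """Peak Signal to Noise Ratio in dB between two images."""
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A quality evaluation of the kind the abstract mentions would decimate a reference image, super-resolve it with `upscale2x` plus `crispen`, and report `psnr(reference, reconstruction)`.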
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20b-spline" title="advanced b-spline">advanced b-spline</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=mean%20square%20error%20%28MSE%29" title=" mean square error (MSE)"> mean square error (MSE)</a>, <a href="https://publications.waset.org/abstracts/search?q=peak%20signal%20to%20noise%20ratio%20%28PSNR%29" title=" peak signal to noise ratio (PSNR)"> peak signal to noise ratio (PSNR)</a>, <a href="https://publications.waset.org/abstracts/search?q=resolution%20down%20converter" title=" resolution down converter"> resolution down converter</a> </p> <a href="https://publications.waset.org/abstracts/59499/design-and-performance-analysis-of-advanced-b-spline-algorithm-for-image-resolution-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2753</span> Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=L.%20Hamsaveni"> L. 
Hamsaveni</a>, <a href="https://publications.waset.org/abstracts/search?q=Navya%20Prakash"> Navya Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresha"> Suresha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Document image analysis recognizes text and graphics in documents acquired as images. In this paper, an approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and obtain an original document with complete information. If the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. The YCbCr image storage format is used as a tool in converting the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents, and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents from the same source. 
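For reference, the grayscale/RGB/YCbCr relationship the abstract leans on can be written down directly from the standard ITU-R BT.601 full-range transform; this is textbook material, not code from the paper, and the function names are illustrative.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr, all channels in 0..255."""
    y = 0.299 * r + 0.587 * g + 0.114 * b                 # luma
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b    # blue-difference chroma
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b    # red-difference chroma
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse BT.601 full-range transform."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return r, g, b
```

A grayscale pixel (v, v, v) maps to Y = v with Cb = Cr = 128, which is why the Y plane alone reproduces the grayscale image while the chroma planes carry the color information added during conversion.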
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=grayscale%20image%20format" title="grayscale image format">grayscale image format</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusing" title=" image fusing"> image fusing</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20image%20format" title=" RGB image format"> RGB image format</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF%20detection" title=" SURF detection"> SURF detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YCbCr%20image%20format" title=" YCbCr image format"> YCbCr image format</a> </p> <a href="https://publications.waset.org/abstracts/64187/degraded-document-analysis-and-extraction-of-original-text-document-an-approach-without-optical-character-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">377</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2752</span> Secure Image Retrieval Based on Orthogonal Decomposition under Cloud Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Xu">Y. Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Xiong"> L. Xiong</a>, <a href="https://publications.waset.org/abstracts/search?q=Z.%20Xu"> Z. Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to protect data privacy, image with sensitive or private information needs to be encrypted before being outsourced to the cloud. However, this causes difficulties in image retrieval and data management. 
A secure image retrieval method based on orthogonal decomposition is proposed in this paper. The image is divided into two different components, for which encryption and feature extraction are executed separately. As a result, the cloud server can extract features from an encrypted image directly and compare them with the features of the queried images, so that the user can obtain the desired image. Unlike other methods, the proposed method places no special requirements on the encryption algorithm. Experimental results show that the proposed method achieves both better security and better retrieval precision. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=secure%20image%20retrieval" title="secure image retrieval">secure image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20search" title=" secure search"> secure search</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20decomposition" title=" orthogonal decomposition"> orthogonal decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20cloud%20computing" title=" secure cloud computing"> secure cloud computing</a> </p> <a href="https://publications.waset.org/abstracts/29115/secure-image-retrieval-based-on-orthogonal-decomposition-under-cloud-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29115.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2751</span> Structure Analysis of Text-Image Connection in Jalayrid Period Illustrated Manuscripts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Mahsa%20Khani%20Oushani">Mahsa Khani Oushani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Text and image are two important elements of Iranian art, and the two components have always been manifested together. The image narrates the text, the text shapes the formation of the image, and the two are closely related. In the tradition of Iranian manuscript arrangement, the connection between text and image is interactive and two-way. The interaction between the narrative description and the image scene results from a direct and close connection between text and image, which has a descriptive aspect in addition to its decorative one. This article discusses the connection between the text element and the image element in light of the theory of Roland Barthes, the structuralist theorist. The study investigates how the connection between text and image in illustrated manuscripts of the Jalayrid period is defined according to Barthes’ theory, and what kind of proportion the artist created between text and image in the composition. Based on a review of the data, it can be inferred that in the Jalayrid period the image has a referential connection: although it is of major importance on the page, it maintains a close connection with the text and is placed in a particular proportion, one that is not necessarily balanced or symmetrical and sometimes uses imbalance for compositional effect. The research follows a descriptive-analytical method based on library sources.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=structure" title="structure">structure</a>, <a href="https://publications.waset.org/abstracts/search?q=text" title=" text"> text</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=Jalayrid" title=" Jalayrid"> Jalayrid</a>, <a href="https://publications.waset.org/abstracts/search?q=painter" title=" painter"> painter</a> </p> <a href="https://publications.waset.org/abstracts/138869/structure-analysis-of-text-image-connection-in-jalayrid-period-illustrated-manuscripts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138869.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">233</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2750</span> Robust Image Design Based Steganographic System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadiq%20J.%20Abou-Loukh">Sadiq J. Abou-Loukh</a>, <a href="https://publications.waset.org/abstracts/search?q=Hanan%20M.%20Habbi"> Hanan M. Habbi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a steganographic system that hides the transmitted information without arousing suspicion and illustrates how the level of secrecy can be increased by applying cryptographic techniques. The proposed system is implemented by first encrypting the image file with a one-time-pad key and then encrypting the message to be hidden, so that encryption is followed by image embedding.
A new image file is then created from the original image using the four-triangles operation, and the new image is processed by one of two image processing techniques: thresholding or differential predictive coding (DPC). Afterwards, encryption and decryption keys are generated by a functional key generator; each generated key is used only once. The encrypted text is hidden in the regions not used for image processing and key generation, and the system achieves a high embedding rate (0.1875 characters per pixel) for true-color images (24-bit depth). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=encryption" title="encryption">encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=thresholding" title=" thresholding"> thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%0D%0Apredictive%20coding" title=" differential predictive coding"> differential predictive coding</a>, <a href="https://publications.waset.org/abstracts/search?q=four%20triangles%20operation" title=" four triangles operation "> four triangles operation </a> </p> <a href="https://publications.waset.org/abstracts/16654/robust-image-design-based-steganographic-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2749</span> Multi-Spectral Medical Images Enhancement Using a Weber’s law</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muna%20F.%20Al-Sammaraie">Muna F.
Al-Sammaraie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this research is to present multi-spectral image enhancement methods for digital images whose values populate only a small portion of the available range, together with a quantitative measure of image enhancement. This measure is related to concepts from Weber's Law of the human visual system. For decades, several image enhancement techniques have been proposed; although most require many advanced and critical steps, the results for the perceived image are often unsatisfactory. This study involves changing the original values so that more of the available range is used, thereby increasing the contrast between features and their backgrounds. It consists of reading the image pixel data byte-wise and displaying it, calculating the statistics of the image, automatically enhancing the color of the image based on those statistics, and working with the RGB color bands. Finally, the enhanced image is displayed along with its histogram. A number of experimental results illustrate the performance of these algorithms. In particular, the quantitative measure has helped to select the optimal processing parameters and transform.
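The range-stretching step the abstract describes (remapping each band so the full range of digital values is used) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation; the function name `stretch_contrast` is hypothetical:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly rescale each color band so its values span the full
    0-255 range, increasing contrast between features and background.
    `img` is an H x W x bands uint8 array (e.g. RGB)."""
    out = np.empty_like(img, dtype=np.uint8)
    for band in range(img.shape[2]):          # process R, G, B separately
        lo = img[..., band].min()
        hi = img[..., band].max()
        if hi == lo:                          # flat band: nothing to stretch
            out[..., band] = img[..., band]
            continue
        scaled = (img[..., band].astype(np.float64) - lo) * 255.0 / (hi - lo)
        out[..., band] = scaled.astype(np.uint8)
    return out
```

A band whose values occupy, say, only 100-150 is mapped onto 0-255, which is the "using more of the available range" idea; a histogram of the result would show the spread directly.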
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-spectral" title=" multi-spectral"> multi-spectral</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB" title=" RGB"> RGB</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a> </p> <a href="https://publications.waset.org/abstracts/8574/multi-spectral-medical-images-enhancement-using-a-webers-law" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8574.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">328</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2748</span> High Speed Image Rotation Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hee-Choul%20Kwon">Hee-Choul Kwon</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyungjin%20Cho"> Hyungjin Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Heeyong%20Kwon"> Heeyong Kwon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image rotation is one of the main pre-processing steps in image processing and image pattern recognition. It is implemented with a rotation matrix multiplication; however, this requires many floating-point arithmetic operations and trigonometric function calculations, so it takes a long time to execute. We propose a new high-speed image rotation algorithm that avoids these two major time-consuming operations, and we compare the proposed algorithm with the conventional rotation method on images of various sizes.
Experimental results show that the proposed algorithm outperforms the conventional rotation method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high%20speed%20rotation%20operation" title="high speed rotation operation">high speed rotation operation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20rotation" title=" image rotation"> image rotation</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=transformation%20matrix" title=" transformation matrix"> transformation matrix</a> </p> <a href="https://publications.waset.org/abstracts/25258/high-speed-image-rotation-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25258.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2747</span> Image Rotation Using an Augmented 2-Step Shear Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hee-Choul%20Kwon">Hee-Choul Kwon</a>, <a href="https://publications.waset.org/abstracts/search?q=Heeyong%20Kwon"> Heeyong Kwon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image rotation is one of the main pre-processing steps for image processing and image pattern recognition. It is implemented with a rotation matrix multiplication.
This requires many floating-point arithmetic operations and trigonometric calculations, so it takes a long time to execute. There has therefore been a need for a high-speed image rotation algorithm that avoids these two major time-consuming operations. Such rotated images, however, have a drawback: distortion. We solved this problem using an augmented two-step shear transform and compared the presented algorithm with conventional rotation on images of various sizes. Experimental results show that the presented algorithm is superior to the conventional one. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high-speed%20rotation%20operation" title="high-speed rotation operation">high-speed rotation operation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20rotation" title=" image rotation"> image rotation</a>, <a href="https://publications.waset.org/abstracts/search?q=transform%20matrix" title=" transform matrix"> transform matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/64167/image-rotation-using-an-augmented-2-step-shear-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">277</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2746</span> Analysis of Various Copy Move Image Forgery Techniques for Better Detection Accuracy</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Grishma%20D.%20Solanki">Grishma D. Solanki</a>, <a href="https://publications.waset.org/abstracts/search?q=Karshan%20Kandoriya"> Karshan Kandoriya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the modern information age, digitization has advanced like never before. Powerful computers, advanced photo-editing software packages, and high-resolution capturing devices have made manipulation of digital images incredibly easy. Within image forensics, one of the most actively researched areas is the detection of copy-move forgeries, and high computational complexity is a major drawback of existing detection techniques. A copy-move forgery is usually performed in three steps: copying a region of an image, pasting it elsewhere in the same image, and applying post-processing such as rotation, scaling, shifting, or noise. Consequently, pseudo-Zernike moments are used as a feature extraction method for matching image blocks and are a primary factor on which the performance of detection algorithms depends.
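The block-matching core of copy-move detection can be sketched as follows. Note the simplifying assumption: the paper relies on pseudo-Zernike moments as robust block features, whereas this illustration indexes overlapping blocks by their exact pixel content, so it only finds verbatim copies; the function name `find_duplicate_blocks` is hypothetical:

```python
import numpy as np
from collections import defaultdict

def find_duplicate_blocks(gray, block=8):
    """Simplified copy-move detection on a 2-D grayscale array: index
    every overlapping block by its raw bytes and report groups of
    distinct positions holding identical blocks. A robust detector
    would replace the byte key with block features such as
    pseudo-Zernike moments to survive rotation, scaling, and noise."""
    h, w = gray.shape
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = gray[y:y + block, x:x + block].tobytes()
            seen[key].append((y, x))
    # keep only blocks that occur at more than one position
    return [positions for positions in seen.values() if len(positions) > 1]
```

Each returned group is a set of block origins sharing identical content; a pair of distant origins is the signature of a copy-paste.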
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=copy-move%20image%20forgery" title="copy-move image forgery">copy-move image forgery</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20forensics" title=" digital forensics"> digital forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title=" image forensics"> image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20forgery" title=" image forgery"> image forgery</a> </p> <a href="https://publications.waset.org/abstracts/49539/analysis-of-various-copy-move-image-forgery-techniques-for-better-detection-accuracy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49539.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2745</span> The Image as an Initial Element of the Cognitive Understanding of Words</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Pesina">S. Pesina</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Solonchak"> T. Solonchak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper analyzes word semantics, focusing on the invariance of underlying imagery across several pressing problems. Interest in the language of imagery has been prompted by the introduction into linguistics of a new paradigm whose center is the personality of the speaker (the subject of the language).
Particularly noteworthy is the question of the place of the image in discussions of lexical and phraseological meanings and of the relationship between imagery and metaphor. In part, the formation of a metaphor, as an interaction between two intellective entities, occurs at the cognitive level, and it is the category of the image, with its cognitive roots, that aids in the correct interpretation of the results of this process at the lexical-semantic level. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image" title="image">image</a>, <a href="https://publications.waset.org/abstracts/search?q=metaphor" title=" metaphor"> metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=concept" title=" concept"> concept</a>, <a href="https://publications.waset.org/abstracts/search?q=creation%20of%20a%20metaphor" title=" creation of a metaphor"> creation of a metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20linguistics" title=" cognitive linguistics"> cognitive linguistics</a>, <a href="https://publications.waset.org/abstracts/search?q=erased%20image" title=" erased image"> erased image</a>, <a href="https://publications.waset.org/abstracts/search?q=vivid%20image" title=" vivid image"> vivid image</a> </p> <a href="https://publications.waset.org/abstracts/10617/the-image-as-an-initial-element-of-the-cognitive-understanding-of-words" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10617.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">361</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2744</span> Image Classification with Localization Using Convolutional Neural Networks</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhuyain%20Mobarok%20Hossain">Bhuyain Mobarok Hossain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image using a sliding window. Most localization networks regress toward a bounding box around the area of interest; in contrast to this architecture, we treat the problem as a classification problem in which each pixel of the image is a separate section. Image classification is the task of predicting an individual category for an image from a collection of data points, assigning labels to the image as a whole: an image can be classified as a day or a night shot, or, likewise, images of cars and motorbikes can be automatically placed in their respective collections. Deep learning models for image classification generally include convolutional layers; such a model is referred to as a convolutional neural network (CNN).
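The sliding-window step of the localization scheme described above can be sketched as follows. This is a minimal illustration: `score_fn` is a placeholder standing in for a trained CNN's forward pass, the function name `sliding_window_detect` is hypothetical, and a real system would additionally repeat the scan over multiple image scales:

```python
import numpy as np

def sliding_window_detect(image, window=32, stride=16, score_fn=None):
    """Slide a window over `image`, score each patch with a classifier,
    and return the highest-scoring window as (x, y, w, h) plus its score."""
    if score_fn is None:
        score_fn = lambda patch: patch.mean()   # toy stand-in for a CNN score
    best_score, best_box = -np.inf, None
    h, w = image.shape[:2]
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            s = score_fn(image[y:y + window, x:x + window])
            if s > best_score:
                best_score, best_box = s, (x, y, window, window)
    return best_box, best_score
```

Swapping `score_fn` for a CNN evaluated on each patch, and looping over resized copies of the image, turns this scan into the multi-scale, multi-location classification the abstract outlines.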
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/139288/image-classification-with-localization-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=7">7</a></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=92">92</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=93">93</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20memorability&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a 
href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div 
class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>