Search results for: image of the past
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image of the past"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 5596</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image of the past</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5596</span> Research Approaches for Identifying Images of the Past in the Built Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Al-Zoabi">Ahmad Al-Zoabi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Development of research approaches for identifying images of the past in the built environment is at a beginning stage, and a review of the current literature reveals a limited body of research in this area. This study seeks to make a contribution to fill this void. It investigates the theoretical and empirical studies that examine the built environment as a medium for communicating the past in order to understand how images of the past are operationalized in these studies. Findings revealed that image could be operationalized in several ways depending on the focus of the study. Three concerns were addressed in this study when defining the image of the past: (a) to investigate an 'everyday' popular image of the past; (b) to look at the building's image as an integrated part of a larger image for the city; and (c) to find patterns within residents' images of the past. This study concludes that a future study is needed to address the effects of different scales (size and depth of history) of cities and of different cultural backgrounds of images of the past. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=architecture" title="architecture">architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=built%20environment" title=" built environment"> built environment</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past" title=" image of the past"> image of the past</a>, <a href="https://publications.waset.org/abstracts/search?q=research%20approaches" title=" research approaches"> research approaches</a> </p> <a href="https://publications.waset.org/abstracts/66594/research-approaches-for-identifying-images-of-the-past-in-the-built-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66594.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5595</span> Image Segmentation Techniques: Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lindani%20Mbatha">Lindani Mbatha</a>, <a href="https://publications.waset.org/abstracts/search?q=Suvendi%20Rimer"> Suvendi Rimer</a>, <a href="https://publications.waset.org/abstracts/search?q=Mpho%20Gololo"> Mpho Gololo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is the process of dividing an image into several sections, such as the object's background and the foreground. It is a critical technique in both image-processing tasks and computer vision. Most of the image segmentation algorithms have been developed for gray-scale images and little research and algorithms have been developed for the color images. Most image segmentation algorithms or techniques vary based on the input data and the application. Nearly all of the techniques are not suitable for noisy environments. Most of the work that has been done uses the Markov Random Field (MRF), which involves the computations and is said to be robust to noise. In the past recent years' image segmentation has been brought to tackle problems such as easy processing of an image, interpretation of the contents of an image, and easy analysing of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed in the past years. The techniques include neural networks (CNN), edge-based techniques, region growing, clustering, and thresholding techniques and so on. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications and potential future developments that can be done around image segmentation. This review article concludes with the fact that no technique is perfectly suitable for the segmentation of all different types of images, but the use of hybrid techniques yields more accurate and efficient results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering-based" title="clustering-based">clustering-based</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution-network" title=" convolution-network"> convolution-network</a>, <a href="https://publications.waset.org/abstracts/search?q=edge-based" title=" edge-based"> edge-based</a>, <a href="https://publications.waset.org/abstracts/search?q=region-growing" title=" region-growing"> region-growing</a> </p> <a href="https://publications.waset.org/abstracts/166513/image-segmentation-techniques-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166513.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">96</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5594</span> Reliving Historical Events Using Augmented Reality Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Josep%20Domenech%20Mingot">Josep Domenech Mingot</a>, <a href="https://publications.waset.org/abstracts/search?q=Francisco%20Javier%20Esclapes%20Jover"> Francisco Javier Esclapes Jover</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The arrival of the age of information and new technologies allowed humanity to see what the future has in store, but occasionally it also brings the opportunity to look through a window to the past, an opportunity to relive history. This paper introduces a prototype of a digital system that lets us peek into our past making use of augmented reality technologies. A 3D scene will be modeled and animated based on an old image, depicting an event of historical significance. From this scene, a video will be rendered, recreating the events that were taking place at the time. Also, a smartphone app will be created. This app will detect the original image with the smartphone’s camera, overlay the rendered video so that it fully covers it and track the detected image, so that the overlaying video can keep covering the image. The recreation of Alicante’s Central Market bombing during the Spanish Civil War is presented as a case study. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality" title="augmented reality">augmented reality</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20heritage" title=" digital heritage"> digital heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=history" title=" history"> history</a>, <a href="https://publications.waset.org/abstracts/search?q=multimedia" title=" multimedia"> multimedia</a>, <a href="https://publications.waset.org/abstracts/search?q=smartphone" title=" smartphone"> smartphone</a> </p> <a href="https://publications.waset.org/abstracts/86915/reliving-historical-events-using-augmented-reality-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">220</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5593</span> Design and Implementation of Image Super-Resolution for Myocardial Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy">M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian"> M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique of intelligently upscaling images, avoiding artifacts or blurring, and deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is a process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled down images in the image domain, its effects on the Fourier-based technique remains unknown. Super-resolution substantially improved the spatial resolution of the patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single image super-resolution on Fourier-based and image based methods of scale-up. In this paper, first, generate a training phase of the low-resolution image and high-resolution image to obtain dictionary. In the test phase, first, generate a patch and then difference of high-resolution image and interpolation image from the low-resolution image. Next simulation of the image is obtained by applying convolution method to the dictionary creation image and patch extracted the image. Finally, super-resolution image is obtained by combining the fused image and difference of high-resolution and interpolated image. Super-resolution reduces image errors and improves the image quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20dictionary%20creation" title="image dictionary creation">image dictionary creation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=LGE%20images" title=" LGE images"> LGE images</a>, <a href="https://publications.waset.org/abstracts/search?q=patch%20extraction" title=" patch extraction"> patch extraction</a> </p> <a href="https://publications.waset.org/abstracts/59494/design-and-implementation-of-image-super-resolution-for-myocardial-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59494.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5592</span> Quest for Literary Past: A Study of Byatt’s Possession</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chen%20Jun">Chen Jun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Antonia Susan Byatt’s Possession: A Romance has been misread as a postmodern pastiche novel since its publication because there are epics, epigraphs, lyrics, fairy tales, epistles, and even critical articles swollen in this work. The word ‘pastiche’ suggests messy, disorganized, and chaotic, which buries its artistic excellence while overlooking its subtitle, A Romance. The center of romance is the quest that the hero sets forth to conquer the adversity, hardship, and danger to accomplish a task to prove his identity or social worth. This paper argues that Byatt’s Possession is not a postmodern pastiche novel but rather a postmodern romance in which the characters in the academic world set forth their quest into the Victorian literary past that is nostalgically identified by Byatt as the Golden Age of English literature. In doing so, these five following issues are addressed: first, the origin of the protagonist Roland, and consequently, the nature of his quest; second, the central image of the dragon created by the fictional Victorian poet Henry Ash; third, Melusine as an image of female serpent created by the fictional Victorian poet Christabel LaMotte; fourth, the images of the two ladies; last, the image of water that links the dragon and the serpent. In Possession, the past is reinvented not as an unfortunate fall but as a Golden Age presented in the imaginative academic adventure. The dragon, a stereotypical symbol of evil, becomes the symbol of life in Byatt’s work, which parallels with the image of the mythical phoenix that can resurrect from its own ash. At the same time, the phoenix symbolizes Byatt’s efforts to revive the Victorian poetic art that is supposed to be dead in the post-capitalism society when the novel is the dominating literary genre and poetry becomes the minority. The fictional Victorian poet Ash is in fact Byatt’s own poetic mask through which she breathes life into the lost poetic artistry in the postmodern era. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Byatt" title="Byatt">Byatt</a>, <a href="https://publications.waset.org/abstracts/search?q=possession" title=" possession"> possession</a>, <a href="https://publications.waset.org/abstracts/search?q=postmodern%20romance" title=" postmodern romance"> postmodern romance</a>, <a href="https://publications.waset.org/abstracts/search?q=literary%20past" title=" literary past"> literary past</a> </p> <a href="https://publications.waset.org/abstracts/29962/quest-for-literary-past-a-study-of-byatts-possession" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29962.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5591</span> A Method of the Semantic on Image Auto-Annotation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Huo">Lin Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianwei%20Liu"> Xianwei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingxiong%20Zhou"> Jingxiong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, due to the existence of semantic gap between image visual features and human concepts, the semantic of image auto-annotation has become an important topic. Firstly, by extract low-level visual features of the image, and the corresponding Hash method, mapping the feature into the corresponding Hash coding, eventually, transformed that into a group of binary string and store it, image auto-annotation by search is a popular method, we can use it to design and implement a method of image semantic auto-annotation. Finally, Through the test based on the Corel image set, and the results show that, this method is effective. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20auto-annotation" title="image auto-annotation">image auto-annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20correlograms" title=" color correlograms"> color correlograms</a>, <a href="https://publications.waset.org/abstracts/search?q=Hash%20code" title=" Hash code"> Hash code</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/15628/a-method-of-the-semantic-on-image-auto-annotation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5590</span> Deployment of Matrix Transpose in Digital Image Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Okike%20Benjamin">Okike Benjamin</a>, <a href="https://publications.waset.org/abstracts/search?q=Garba%20E%20J.%20D."> Garba E J. D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Encryption is used to conceal information from prying eyes. Presently, information and data encryption are common due to the volume of data and information in transit across the globe on daily basis. Image encryption is yet to receive the attention of the researchers as deserved. In other words, video and multimedia documents are exposed to unauthorized accessors. The authors propose image encryption using matrix transpose. An algorithm that would allow image encryption is developed. In this proposed image encryption technique, the image to be encrypted is split into parts based on the image size. Each part is encrypted separately using matrix transpose. The actual encryption is on the picture elements (pixel) that make up the image. After encrypting each part of the image, the positions of the encrypted images are swapped before transmission of the image can take place. Swapping the positions of the images is carried out to make the encrypted image more robust for any cryptanalyst to decrypt. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=matrices" title=" matrices"> matrices</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel" title=" pixel"> pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20transpose" title=" matrix transpose "> matrix transpose </a> </p> <a href="https://publications.waset.org/abstracts/48717/deployment-of-matrix-transpose-in-digital-image-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5589</span> Performance of Hybrid Image Fusion: Implementation of Dual-Tree Complex Wavelet Transform Technique </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manoj%20Gupta">Manoj Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Nirmendra%20Singh%20Bhadauria"> Nirmendra Singh Bhadauria</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most of the applications in image processing require high spatial and high spectral resolution in a single image. For example satellite image system, the traffic monitoring system, and long range sensor fusion system all use image processing. However, most of the available equipment is not capable of providing this type of data. The sensor in the surveillance system can only cover the view of a small area for a particular focus, yet the demanding application of this system requires a view with a high coverage of the field. Image fusion provides the possibility of combining different sources of information. In this paper, we have decomposed the image using DTCWT and then fused using average and hybrid of (maxima and average) pixel level techniques and then compared quality of both the images using PSNR. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/abstracts/search?q=DT-CWT" title=" DT-CWT"> DT-CWT</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=average%20image%20fusion" title=" average image fusion"> average image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20image%20fusion" title=" hybrid image fusion"> hybrid image fusion</a> </p> <a href="https://publications.waset.org/abstracts/19207/performance-of-hybrid-image-fusion-implementation-of-dual-tree-complex-wavelet-transform-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5588</span> Factors Influencing the Development and Implementation of Radiology Technologist Specialist Role in Image Interpretation in Sudan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Awad%20Elkhadir">Awad Elkhadir</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajab%20M.%20Ben%20Yousef"> Rajab M. Ben Yousef</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The production of high-quality medical images by radiology technologists is useful in diagnosing and treating various injuries and diseases. However, the factors affecting the role of radiology technologists in image interpretation in Sudan have not been investigated widely. Methods: Cross-sectional study has been employed by recruiting ten radiology college deans in Sudan. The questionnaire was distributed online, and obtained data were analyzed using Microsoft Excel and IBM-SPSS version 16.0 to generate descriptive statistics. Results: The study results have shown that half of the deans were doubtful about the readiness of Sudan to implement the role of radiology technologist specialist in image interpretation. The majority of them (60%) believed that this issue had been most strongly pushed by researchers over the past decade. The factors affecting the implementation of the radiology technologist specialist role in image interpretation included; education/training (100%), recognition (30%), technical issues (30%), people-related issues (20%), management changes (30%), government role (30%), costs (10%), and timings (20%). Conclusion: The study concluded that there is a need for a change in image interpretation by radiology technologists in Sudan. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=development" title="development">development</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20interpretation" title=" image interpretation"> image interpretation</a>, <a href="https://publications.waset.org/abstracts/search?q=implementation" title=" implementation"> implementation</a>, <a href="https://publications.waset.org/abstracts/search?q=radiology%20technologist%20specialist" title=" radiology technologist specialist"> radiology technologist specialist</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudan" title=" Sudan"> Sudan</a> </p> <a href="https://publications.waset.org/abstracts/160811/factors-influencing-the-development-and-implementation-of-radiology-technologist-specialist-role-in-image-interpretation-in-sudan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160811.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5587</span> Assessment of Image Databases Used for Human Skin Detection Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saleh%20Alshehri">Saleh Alshehri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection is a vital step in many applications. Some of the applications are critical especially those related to security. This leverages the importance of a high-performance detection algorithm. To validate the accuracy of the algorithm, image databases are usually used. However, the suitability of these image databases is still questionable. It is suggested that the suitability can be measured mainly by the span the database covers of the color space. This research investigates the validity of three famous image databases. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20databases" title="image databases">image databases</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/87836/assessment-of-image-databases-used-for-human-skin-detection-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87836.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">271</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5586</span> A Novel Combination Method for Computing the Importance Map of Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Absetan">Ahmad Absetan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Nooshyar"> Mahdi Nooshyar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The importance map is an image-based measure and is a core part of the resizing algorithm. Importance measures include image gradients, saliency and entropy, as well as high level cues such as face detectors, motion detectors and more. In this work we proposed a new method to calculate the importance map, the importance map is generated automatically using a novel combination of image edge density and Harel saliency measurement. Experiments of different type images demonstrate that our method effectively detects prominent areas can be used in image resizing applications to aware important areas while preserving image quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-aware%20image%20resizing" title="content-aware image resizing">content-aware image resizing</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20saliency" title=" visual saliency"> visual saliency</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20density" title=" edge density"> edge density</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20warping" title=" image warping"> image warping</a> </p> <a href="https://publications.waset.org/abstracts/35692/a-novel-combination-method-for-computing-the-importance-map-of-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">582</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5585</span> Blind Data Hiding Technique Using Interpolation of Subsampled Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Singara%20Singh%20Kasana">Singara Singh Kasana</a>, <a href="https://publications.waset.org/abstracts/search?q=Pankaj%20Garg"> Pankaj Garg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a blind data hiding technique based on interpolation of sub sampled versions of a cover image is proposed. Sub sampled image is taken as a reference image and an interpolated image is generated from this reference image. Then difference between original cover image and interpolated image is used to embed secret data. Comparisons with the existing interpolation based techniques show that proposed technique provides higher embedding capacity and better visual quality marked images. Moreover, the performance of the proposed technique is more stable for different images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interpolation" title="interpolation">interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subsampling" title=" image subsampling"> image subsampling</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=SIM" title=" SIM"> SIM</a> </p> <a href="https://publications.waset.org/abstracts/18926/blind-data-hiding-technique-using-interpolation-of-subsampled-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18926.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">578</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5584</span> Self-Image of Police Officers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leo%20Carlo%20B.%20Rondina">Leo Carlo B. Rondina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Self-image is an important factor to improve the self-esteem of the personnel. 

5584. Self-Image of Police Officers
Authors: Leo Carlo B. Rondina
Abstract: Self-image is an important factor in improving the self-esteem of personnel. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen through random sampling. Using Exploratory Factor Analysis (EFA), latent constructs of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicated a statistically significant effect for respondents aged 21-40, meaning that the age of the respondent statistically improves self-image.
Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect
Procedia: https://publications.waset.org/abstracts/75550/self-image-of-police-officers | PDF: https://publications.waset.org/abstracts/75550.pdf | Downloads: 288
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy%20assessment" title="accuracy assessment">accuracy assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=efficacy" title=" efficacy"> efficacy</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty" title=" uncertainty"> uncertainty</a> </p> <a href="https://publications.waset.org/abstracts/142555/evaluating-classification-with-efficacy-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142555.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5582</span> Texture Analysis of Grayscale Co-Occurrence Matrix on Mammographic Indexed Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sushma">S. Sushma</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Balasubramanian"> S. Balasubramanian</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20C.%20Latha"> K. C. Latha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The mammographic image of breast cancer compressed and synthesized to get co-efficient values which will be converted (5x5) matrix to get ROI image where we get the highest value of effected region and with the same ideology the technique has been extended to differentiate between Calcification and normal cell image using mean value derived from 5x5 matrix values <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=texture%20analysis" title="texture analysis">texture analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=mammographic%20image" title=" mammographic image"> mammographic image</a>, <a href="https://publications.waset.org/abstracts/search?q=partitioned%20gray%20scale%20co-oocurance%20matrix" title=" partitioned gray scale co-oocurance matrix"> partitioned gray scale co-oocurance matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=co-efficient" title=" co-efficient "> co-efficient </a> </p> <a href="https://publications.waset.org/abstracts/17516/texture-analysis-of-grayscale-co-occurrence-matrix-on-mammographic-indexed-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">533</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5581</span> Size Reduction of Images Using Constraint Optimization Approach for Machine Communications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chee%20Sun%20Won">Chee Sun Won</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> This paper presents the size reduction of images for machine-to-machine communications. Here, the salient image regions to be preserved include the image patches of the key-points such as corners and blobs. Based on a saliency image map from the key-points and their image patches, an axis-aligned grid-size optimization is proposed for the reduction of image size. To increase the size-reduction efficiency the aspect ratio constraint is relaxed in the constraint optimization framework. The proposed method yields higher matching accuracy after the size reduction than the conventional content-aware image size-reduction methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20compression" title="image compression">image compression</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description" title=" key-point detection and description"> key-point detection and description</a>, <a href="https://publications.waset.org/abstracts/search?q=machine-to-machine%20communication" title=" machine-to-machine communication"> machine-to-machine communication</a> </p> <a href="https://publications.waset.org/abstracts/67605/size-reduction-of-images-using-constraint-optimization-approach-for-machine-communications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67605.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5580</span> A Review on Artificial Neural Networks in Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Afsharipoor">B. Afsharipoor</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Nazemi"> E. Nazemi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Artificial neural networks (ANNs) are powerful tool for prediction which can be trained based on a set of examples and thus, it would be useful for nonlinear image processing. The present paper reviews several paper regarding applications of ANN in image processing to shed the light on advantage and disadvantage of ANNs in this field. Different steps in the image processing chain including pre-processing, enhancement, segmentation, object recognition, image understanding and optimization by using ANN are summarized. Furthermore, results on using multi artificial neural networks are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20understanding" title=" image understanding"> image understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=MANN" title=" MANN"> MANN</a> </p> <a href="https://publications.waset.org/abstracts/36843/a-review-on-artificial-neural-networks-in-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36843.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">407</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5579</span> Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Wei">Hui Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Dong"> Zheng Dong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20model" title="biological model">biological model</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20neural%20network" title=" multi-layer neural network"> multi-layer neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a> </p> <a href="https://publications.waset.org/abstracts/25221/local-image-features-emerging-from-brain-inspired-multi-layer-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">542</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5578</span> Definition, Structure, and Core Functions of the State Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rosa%20Nurtazina">Rosa Nurtazina</a>, <a href="https://publications.waset.org/abstracts/search?q=Yerkebulan%20Zhumashov"> Yerkebulan Zhumashov</a>, <a href="https://publications.waset.org/abstracts/search?q=Maral%20Tomanova"> Maral Tomanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Humanity is entering an era when 'virtual reality' as the image of the world created by the media with the help of the Internet does not match the reality in many respects, when new communication technologies create a fundamentally different and previously unknown 'global space'. According to these technologies, the state begins to change the basic technology of political communication of the state and society, the state and the state. Nowadays, image of the state becomes the most important tool and technology. Image is a purposefully created image granting political object (person, organization, country, etc.) certain social and political values and promoting more emotional perception. Political image of the state plays an important role in international relations. The success of the country's foreign policy, development of trade and economic relations with other countries depends on whether it is positive or negative. Foreign policy image has an impact on political processes taking place in the state: the negative image of the countries can be used by opposition forces as one of the arguments to criticize the government and its policies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20country" title="image of the country">image of the country</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20classification" title=" country's image classification"> country's image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=function%20of%20the%20country%20image" title=" function of the country image"> function of the country image</a>, <a href="https://publications.waset.org/abstracts/search?q=country%27s%20image%20components" title=" country's image components"> country's image components</a> </p> <a href="https://publications.waset.org/abstracts/5104/definition-structure-and-core-functions-of-the-state-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5104.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5577</span> Bitplanes Gray-Level Image Encryption Approach Using Arnold Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Abdrhman%20M.%20Ukasha">Ali Abdrhman M. Ukasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data security needed in data transmission, storage, and communication to ensure the security. The single step parallel contour extraction (SSPCE) method is used to create the edge map as a key image from the different Gray level/Binary image. Performing the X-OR operation between the key image and each bit plane of the original image for image pixel values change purpose. The Arnold transform used to changes the locations of image pixels as image scrambling process. Experiments have demonstrated that proposed algorithm can fully encrypt 2D Gary level image and completely reconstructed without any distortion. Also shown that the analyzed algorithm have extremely large security against some attacks like salt & pepper and JPEG compression. Its proof that the Gray level image can be protected with a higher security level. The presented method has easy hardware implementation and suitable for multimedia protection in real time applications such as wireless networks and mobile phone services. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SSPCE%20method" title="SSPCE method">SSPCE method</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20compression-salt-%20peppers%20attacks" title=" image compression-salt- peppers attacks"> image compression-salt- peppers attacks</a>, <a href="https://publications.waset.org/abstracts/search?q=bitplanes%20decomposition" title=" bitplanes decomposition"> bitplanes decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnold%20transform" title=" Arnold transform"> Arnold transform</a>, <a href="https://publications.waset.org/abstracts/search?q=lossless%20image%20encryption" title=" lossless image encryption"> lossless image encryption</a> </p> <a href="https://publications.waset.org/abstracts/14573/bitplanes-gray-level-image-encryption-approach-using-arnold-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5576</span> Integral Image-Based Differential Filters</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Inoue">Kohei Inoue</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenji%20Hara"> Kenji Hara</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiichi%20Urahama"> Kiichi Urahama</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We describe a relationship between integral images and differential images. First, we derive a simple difference filter from conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are the same, and therefore, we can execute the integration and differentiation by solving the simultaneous equations. We applied the relationship to an image fusion problem, and experimentally verified the effectiveness of the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=integral%20images" title="integral images">integral images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20images" title=" differential images"> differential images</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%20filters" title=" differential filters"> differential filters</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title=" image fusion"> image fusion</a> </p> <a href="https://publications.waset.org/abstracts/8531/integral-image-based-differential-filters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5575</span> Bitplanes Image Encryption/Decryption Using Edge Map (SSPCE Method) and Arnold Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20A.%20Ukasha">Ali A. Ukasha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data security needed in data transmission, storage, and communication to ensure the security. The single step parallel contour extraction (SSPCE) method is used to create the edge map as a key image from the different Gray level/Binary image. Performing the X-OR operation between the key image and each bit plane of the original image for image pixel values change purpose. The Arnold transform used to changes the locations of image pixels as image scrambling process. Experiments have demonstrated that proposed algorithm can fully encrypt 2D Gary level image and completely reconstructed without any distortion. Also shown that the analyzed algorithm have extremely large security against some attacks like salt & pepper and JPEG compression. Its proof that the Gray level image can be protected with a higher security level. The presented method has easy hardware implementation and suitable for multimedia protection in real time applications such as wireless networks and mobile phone services. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SSPCE%20method" title="SSPCE method">SSPCE method</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20compression" title=" image compression"> image compression</a>, <a href="https://publications.waset.org/abstracts/search?q=salt%20and%0D%0Apeppers%20attacks" title=" salt and peppers attacks"> salt and peppers attacks</a>, <a href="https://publications.waset.org/abstracts/search?q=bitplanes%20decomposition" title=" bitplanes decomposition"> bitplanes decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=Arnold%20transform" title=" Arnold transform"> Arnold transform</a>, <a href="https://publications.waset.org/abstracts/search?q=lossless%20image%20encryption" title=" lossless image encryption"> lossless image encryption</a> </p> <a href="https://publications.waset.org/abstracts/14570/bitplanes-image-encryptiondecryption-using-edge-map-sspce-method-and-arnold-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14570.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5574</span> Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yasuyuki%20Takahashi">Yasuyuki Takahashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hayato%20Ishimura"> Hayato Ishimura</a>, <a href="https://publications.waset.org/abstracts/search?q=Masao%20Miyagawa"> Masao Miyagawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Teruhito%20Mochizuki"> Teruhito Mochizuki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Quantitative measurement of myocardium perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of <sup>99m</sup>Tc-tetrofosmin in the liver may make it difficult to assess that accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semi- conductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of <sup>99m</sup>Tc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections during which time the liver accumulation dominates (0.5~2.5 minutes SPECT image-5~10 minutes SPECT image). 
The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (the 5~10 minute SPECT image minus the liver-only image). Time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, the inferior myocardium was often undiagnosable because of overlapping liver accumulation. Using our time subtraction method, the image quality of the <sup>99m</sup>Tc-tetrofosmin myocardial SPECT image is considerably improved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=99mTc-tetrofosmin" title="99mTc-tetrofosmin">99mTc-tetrofosmin</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20SPECT" title=" dynamic SPECT"> dynamic SPECT</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20subtraction" title=" time subtraction"> time subtraction</a>, <a href="https://publications.waset.org/abstracts/search?q=semiconductor%20detector" title=" semiconductor detector"> semiconductor detector</a> </p> <a href="https://publications.waset.org/abstracts/32340/improving-99mtc-tetrofosmin-myocardial-perfusion-images-by-time-subtraction-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32340.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">335</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5573</span> Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Z.%20Kurian">M. Z. Kurian</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20V.%20Chidananda%20Murthy"> M. V. Chidananda Murthy</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20S.%20Guruprasad"> H. S. Guruprasad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An approach to super-resolve a low-resolution (LR) image is presented in this paper; it is useful in multimedia communication, medical image enhancement, and satellite image enhancement, where a clear view of the information in the image is needed. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain higher-frequency components such as edges. The method uses a B-Spline technique together with crispening. This work is evaluated qualitatively and quantitatively using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR), and the method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different noise types and noise factors.
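<p class="card-text">A rough sketch of the general pipeline behind this kind of approach, spline interpolation followed by a crispening (unsharp-masking) step and a PSNR check, is shown below. The paper's exact Advanced B-Spline filter is not given in the abstract, so SciPy's spline-based zoom is used as a stand-in and the parameter values are assumptions:</p>
<pre><code>
import numpy as np
from scipy import ndimage

def bspline_upscale(lr_image, factor=2, spline_order=3):
    # Cubic B-spline interpolation to the target resolution (stand-in for the
    # paper's Advanced B-Spline stage, whose exact kernel is not specified here).
    return ndimage.zoom(lr_image.astype(float), factor, order=spline_order)

def crispen(image, amount=0.7, sigma=1.0):
    # Crispening / unsharp masking: add back a scaled high-frequency residual
    blurred = ndimage.gaussian_filter(image, sigma=sigma)
    return np.clip(image + amount * (image - blurred), 0, 255)

def psnr(reference, estimate):
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Usage: hr = crispen(bspline_upscale(lr, factor=2)); quality = psnr(ground_truth, hr)
</code></pre>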
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20b-spline" title="advanced b-spline">advanced b-spline</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20super-resolution" title=" image super-resolution"> image super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=mean%20square%20error%20%28MSE%29" title=" mean square error (MSE)"> mean square error (MSE)</a>, <a href="https://publications.waset.org/abstracts/search?q=peak%20signal%20to%20noise%20ratio%20%28PSNR%29" title=" peak signal to noise ratio (PSNR)"> peak signal to noise ratio (PSNR)</a>, <a href="https://publications.waset.org/abstracts/search?q=resolution%20down%20converter" title=" resolution down converter"> resolution down converter</a> </p> <a href="https://publications.waset.org/abstracts/59499/design-and-performance-analysis-of-advanced-b-spline-algorithm-for-image-resolution-enhancement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5572</span> Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=L.%20Hamsaveni"> L. Hamsaveni</a>, <a href="https://publications.waset.org/abstracts/search?q=Navya%20Prakash"> Navya Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresha"> Suresha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Document Image Analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as Image Fusing and Speeded Up Robust Features (SURF) Detection to identify and extract the degraded regions from a set of document images to obtain an original document with complete information. In case, degraded document image captured is skewed, it has to be straightened (deskew) to perform further process. A special format of image storing known as YCbCr is used as a tool to convert the Grayscale image to RGB image format. The presented algorithm is tested on various types of degraded documents such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents of the same source. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=grayscale%20image%20format" title="grayscale image format">grayscale image format</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusing" title=" image fusing"> image fusing</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20image%20format" title=" RGB image format"> RGB image format</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF%20detection" title=" SURF detection"> SURF detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YCbCr%20image%20format" title=" YCbCr image format"> YCbCr image format</a> </p> <a href="https://publications.waset.org/abstracts/64187/degraded-document-analysis-and-extraction-of-original-text-document-an-approach-without-optical-character-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">377</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5571</span> Secure Image Retrieval Based on Orthogonal Decomposition under Cloud Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Y.%20Xu">Y. Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20Xiong"> L. Xiong</a>, <a href="https://publications.waset.org/abstracts/search?q=Z.%20Xu"> Z. Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to protect data privacy, image with sensitive or private information needs to be encrypted before being outsourced to the cloud. However, this causes difficulties in image retrieval and data management. A secure image retrieval method based on orthogonal decomposition is proposed in the paper. The image is divided into two different components, for which encryption and feature extraction are executed separately. As a result, cloud server can extract features from an encrypted image directly and compare them with the features of the queried images, so that the user can thus obtain the image. Different from other methods, the proposed method has no special requirements to encryption algorithms. Experimental results prove that the proposed method can achieve better security and better retrieval precision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=secure%20image%20retrieval" title="secure image retrieval">secure image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20search" title=" secure search"> secure search</a>, <a href="https://publications.waset.org/abstracts/search?q=orthogonal%20decomposition" title=" orthogonal decomposition"> orthogonal decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20cloud%20computing" title=" secure cloud computing"> secure cloud computing</a> </p> <a href="https://publications.waset.org/abstracts/29115/secure-image-retrieval-based-on-orthogonal-decomposition-under-cloud-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29115.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">485</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5570</span> Structure Analysis of Text-Image Connection in Jalayrid Period Illustrated Manuscripts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20Khani%20Oushani">Mahsa Khani Oushani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Text and image are two important elements in the field of Iranian art, the text component and the image component have always been manifested together. The image narrates the text and the text is the factor in the formation of the image and they are closely related to each other. The connection between text and image is an interactive and two-way connection in the tradition of Iranian manuscript arrangement. The interaction between the narrative description and the image scene is the result of a direct and close connection between the text and the image, which in addition to the decorative aspect, also has a descriptive aspect. In this article the connection between the text element and the image element and its adaptation to the theory of Roland Barthes, the structuralism theorist, in this regard will be discussed. This study tends to investigate the question of how the connection between text and image in illustrated manuscripts of the Jalayrid period is defined according to Barthes’ theory. And what kind of proportion has the artist created in the composition between text and image. Based on the results of reviewing the data of this study, it can be inferred that in the Jalayrid period, the image has a reference connection and although it is of major importance on the page, it also maintains a close connection with the text and is placed in a special proportion. It is not necessarily balanced and symmetrical and sometimes uses imbalance for composition. This research has been done by descriptive-analytical method, which has been done by library collection method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=structure" title="structure">structure</a>, <a href="https://publications.waset.org/abstracts/search?q=text" title=" text"> text</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=Jalayrid" title=" Jalayrid"> Jalayrid</a>, <a href="https://publications.waset.org/abstracts/search?q=painter" title=" painter"> painter</a> </p> <a href="https://publications.waset.org/abstracts/138869/structure-analysis-of-text-image-connection-in-jalayrid-period-illustrated-manuscripts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138869.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">233</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5569</span> Robust Image Design Based Steganographic System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadiq%20J.%20Abou-Loukh">Sadiq J. Abou-Loukh</a>, <a href="https://publications.waset.org/abstracts/search?q=Hanan%20M.%20Habbi"> Hanan M. Habbi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a steganography to hide the transmitted information without excite suspicious and also illustrates the level of secrecy that can be increased by using cryptography techniques. The proposed system has been implemented firstly by encrypted image file one time pad key and secondly encrypted message that hidden to perform encryption followed by image embedding. Then the new image file will be created from the original image by using four triangles operation, the new image is processed by one of two image processing techniques. The proposed two processing techniques are thresholding and differential predictive coding (DPC). Afterwards, encryption or decryption keys are generated by functional key generator. The generator key is used one time only. Encrypted text will be hidden in the places that are not used for image processing and key generation system has high embedding rate (0.1875 character/pixel) for true color image (24 bit depth). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=encryption" title="encryption">encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=thresholding" title=" thresholding"> thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=differential%0D%0Apredictive%20coding" title=" differential predictive coding"> differential predictive coding</a>, <a href="https://publications.waset.org/abstracts/search?q=four%20triangles%20operation" title=" four triangles operation "> four triangles operation </a> </p> <a href="https://publications.waset.org/abstracts/16654/robust-image-design-based-steganographic-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5568</span> Multi-Spectral Medical Images Enhancement Using a Weber’s law</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muna%20F.%20Al-Sammaraie">Muna F. Al-Sammaraie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this research is to present a multi spectral image enhancement methods used to achieve highly real digital image populates only a small portion of the available range of digital values. Also, a quantitative measure of image enhancement is presented. This measure is related with concepts of the Webers Low of the human visual system. For decades, several image enhancement techniques have been proposed. Although most techniques require profuse amount of advance and critical steps, the result for the perceive image are not as satisfied. This study involves changing the original values so that more of the available range is used; then increases the contrast between features and their backgrounds. It consists of reading the binary image on the basis of pixels taking them byte-wise and displaying it, calculating the statistics of an image, automatically enhancing the color of the image based on statistics calculation using algorithms and working with RGB color bands. Finally, the enhanced image is displayed along with image histogram. A number of experimental results illustrated the performance of these algorithms. Particularly the quantitative measure has helped to select optimal processing parameters: the best parameters and transform. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-spectral" title=" multi-spectral"> multi-spectral</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB" title=" RGB"> RGB</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a> </p> <a href="https://publications.waset.org/abstracts/8574/multi-spectral-medical-images-enhancement-using-a-webers-law" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8574.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">328</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5567</span> High Speed Image Rotation Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hee-Choul%20Kwon">Hee-Choul Kwon</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyungjin%20Cho"> Hyungjin Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Heeyong%20Kwon"> Heeyong Kwon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image rotation is one of main pre-processing step in image processing or image pattern recognition. It is implemented with rotation matrix multiplication. However it requires lots of floating point arithmetic operations and trigonometric function calculations, so it takes long execution time. We propose a new high speed image rotation algorithm without two major time-consuming operations. We compare the proposed algorithm with the conventional rotation one with various size images. Experimental results show that the proposed algorithm is superior to the conventional rotation ones. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high%20speed%20rotation%20operation" title="high speed rotation operation">high speed rotation operation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20rotation" title=" image rotation"> image rotation</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=transformation%20matrix" title=" transformation matrix"> transformation matrix</a> </p> <a href="https://publications.waset.org/abstracts/25258/high-speed-image-rotation-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25258.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">506</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=186">186</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=187">187</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20of%20the%20past&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>