CINXE.COM
Search results for: image segmentation
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: image segmentation</title> <meta name="description" content="Search results for: image segmentation"> <meta name="keywords" content="image segmentation"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="image segmentation" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="image segmentation"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2960</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: image segmentation</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2750</span> On the Implementation of The Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hala%20Zaghloul">Hala Zaghloul</a>, <a href="https://publications.waset.org/abstracts/search?q=Taymoor%20Nazmy"> Taymoor Nazmy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting the cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network, PCNN, is a relative new ANN model that derived from a neural mammal model with a great potential in the area of image processing as well as target recognition, feature extraction, speech recognition, combinatorial optimization, compressed encoding. PCNN has unique feature among other types of neural network, which make it a candid to be an important approach for perceiving in cognitive systems. This work show and emphasis on the potentials of PCNN to perform different tasks related to image processing. The main drawback or the obstacle that prevent the direct implementation of such technique, is the need to find away to control the PCNN parameters toward perform a specific task. This paper will evaluate the performance of PCNN standard model for processing images with different properties, and select the important parameters that give a significant result, also, the approaches towards find a way for the adaptation of the PCNN parameters to perform a specific task. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20system" title="cognitive system">cognitive system</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=PCNN%20kernels" title=" PCNN kernels"> PCNN kernels</a> </p> <a href="https://publications.waset.org/abstracts/53579/on-the-implementation-of-the-pulse-coupled-neural-network-pcnn-in-the-vision-of-cognitive-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2749</span> Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lamees%20Nasser">Lamees Nasser</a>, <a href="https://publications.waset.org/abstracts/search?q=Yago%20Diez"> Yago Diez</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20Mart%C3%AD"> Robert Martí</a>, <a href="https://publications.waset.org/abstracts/search?q=Joan%20Mart%C3%AD"> Joan Martí</a>, <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20Sadek"> Ibrahim Sadek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging because of its common characteristics shared with other ultrasound modalities in addition to the three orthogonal planes (i.e., axial, sagittal, and coronal) that are useful in analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. Nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in some applications for example evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on color and shape of the nipple, besides an automatic approach to detect the ribs. In point of fact, rib detection is considered as one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or a set of images. Second, the normalized images are smoothed by using anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluation obtained from a total of 22 cases is performed. For all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on rib borders are 15.1188 mm and 14.7184 mm respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Automated%203D%20Breast%20Ultrasound" title="Automated 3D Breast Ultrasound">Automated 3D Breast Ultrasound</a>, <a href="https://publications.waset.org/abstracts/search?q=Eigenvalues%20of%20Hessian%20matrix" title=" Eigenvalues of Hessian matrix"> Eigenvalues of Hessian matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=Nipple%20detection" title=" Nipple detection"> Nipple detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Rib%20detection" title=" Rib detection"> Rib detection</a> </p> <a href="https://publications.waset.org/abstracts/41104/robust-segmentation-of-salient-features-in-automatic-breast-ultrasound-abus-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41104.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">330</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2748</span> Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ainouna%20Bouziane">Ainouna Bouziane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The ability of Electron Tomography to recover the 3D structure of catalysts, with spatial resolution in the subnanometer scale, has been widely explored and reviewed in the last decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM) have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing-total variation minimization) algorithms to reveal more reliable quantitative information out of the 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not been properly addressed yet, because a perfectly known reference is needed. The problem particularly complicates in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters like the range of tilt angles, image noise level or object orientation. The approach is based on the analysis of material-realistic, 3D phantoms, which include the most relevant features of the system under analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electron%20tomography" title="electron tomography">electron tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=supported%20catalysts" title=" supported catalysts"> supported catalysts</a>, <a href="https://publications.waset.org/abstracts/search?q=nanometrology" title=" nanometrology"> nanometrology</a>, <a href="https://publications.waset.org/abstracts/search?q=error%20assessment" title=" error assessment"> error assessment</a> </p> <a href="https://publications.waset.org/abstracts/170855/quantitative-evaluation-of-supported-catalysts-key-properties-from-electron-tomography-studies-assessing-accuracy-using-material-realistic-3d-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170855.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">86</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2747</span> Automatic Target Recognition in SAR Images Based on Sparse Representation Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmet%20Karagoz">Ahmet Karagoz</a>, <a href="https://publications.waset.org/abstracts/search?q=Irfan%20Karagoz"> Irfan Karagoz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, regardless of day and night. In this study, SAR images of military vehicles with different azimuth and descent angles are pre-processed at the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filters are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is parsed with the brightest 20% pixel value of 255 and the other pixel values of 0. In addition, by using appropriate parameters of statistical region merging algorithm, segmentation comparison is performed. In the step of feature extraction, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency and angle values. A number of Gabor filters are created by changing the orientation, frequency and angle parameters of the Gabor filters to extract important features of the images that form the distinctive parts. Finally, images are classified by sparse representation method. In the study, l₁ norm analysis of sparse representation is used. A joint database of the feature vectors generated by the target images of military vehicle types is obtained side by side and this database is transformed into the matrix form. In order to classify the vehicles in a similar way, the test images of each vehicle is converted to the vector form and l₁ norm analysis of the sparse representation method is applied through the existing database matrix form. As a result, correct recognition has been performed by matching the target images of military vehicles with the test images by means of the sparse representation method. 97% classification success of SAR images of different military vehicle types is obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20target%20recognition" title="automatic target recognition">automatic target recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=SAR%20images" title=" SAR images"> SAR images</a> </p> <a href="https://publications.waset.org/abstracts/71185/automatic-target-recognition-in-sar-images-based-on-sparse-representation-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71185.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2746</span> Multi-scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment a Very High Resolution Images for Extraction of New Degraded Zones. Application to The Region of Mécheria in The South-West of Algeria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bensaid%20A.">Bensaid A.</a>, <a href="https://publications.waset.org/abstracts/search?q=Mostephaoui%20T."> Mostephaoui T.</a>, <a href="https://publications.waset.org/abstracts/search?q=Nedjai%20R."> Nedjai R.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A considerable area of Algerian lands are threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheriadepartment generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, as well as sand encroachment on urban development zones. In this study, we attempt to investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cords based on the numerical processing of PlanetScope PSB.SB sensors images by September 29, 2021. As a second step, we prospect the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired on heterogeneous surfaces that vary according to human influence on the environment. We have used the fractal net evolution approach (FNEA) algorithm to segment images (Baatz&Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (Dune, dunes fields, nebka, and barkhane). It is important to note that each auxiliary data contributed to improve the segmentation at different scales. The silted areas were classified using a nearest neighbor approach over the Naâma area using imagery. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=land%20development" title="land development">land development</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS" title=" GIS"> GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=sand%20dunes" title=" sand dunes"> sand dunes</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a> </p> <a href="https://publications.waset.org/abstracts/152493/multi-scale-geographic-object-based-image-analysis-geobia-approach-to-segment-a-very-high-resolution-images-for-extraction-of-new-degraded-zones-application-to-the-region-of-mecheria-in-the-south-west-of-algeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">109</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2745</span> Image Classification with Localization Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhuyain%20Mobarok%20Hossain">Bhuyain Mobarok Hossain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we will be applying a convolutional neural network of multiple scales at multiple locations in the image in one sliding window. Most translation networks move away from the bounding box around the area of interest. In contrast to this architecture, we consider the problem to be a classification problem where each pixel of the image is a separate section. Image classification is the method of predicting an individual category or specifying by a shoal of data points. Image classification is a part of the classification problem, including any labels throughout the image. The image can be classified as a day or night shot. Or, likewise, images of cars and motorbikes will be automatically placed in their collection. The deep learning of image classification generally includes convolutional layers; the invention of it is referred to as a convolutional neural network (CNN). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/139288/image-classification-with-localization-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2744</span> Detecting the Edge of Multiple Images in Parallel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prakash%20K.%20Aithal">Prakash K. Aithal</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20Dinesh%20Acharya"> U. Dinesh Acharya</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajesh%20Gopakumar"> Rajesh Gopakumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Edge is variation of brightness in an image. Edge detection is useful in many application areas such as finding forests, rivers from a satellite image, detecting broken bone in a medical image etc. The paper discusses about finding edge of multiple aerial images in parallel .The proposed work tested on 38 images 37 colored and one monochrome image. The time taken to process N images in parallel is equivalent to time taken to process 1 image in sequential. The proposed method achieves pixel level parallelism as well as image level parallelism. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title="edge detection">edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=multicore" title=" multicore"> multicore</a>, <a href="https://publications.waset.org/abstracts/search?q=gpu" title=" gpu"> gpu</a>, <a href="https://publications.waset.org/abstracts/search?q=opencl" title=" opencl"> opencl</a>, <a href="https://publications.waset.org/abstracts/search?q=mpi" title=" mpi"> mpi</a> </p> <a href="https://publications.waset.org/abstracts/30818/detecting-the-edge-of-multiple-images-in-parallel" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30818.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2743</span> Image Processing of Scanning Electron Microscope Micrograph of Ferrite and Pearlite Steel for Recognition of Micro-Constituents</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subir%20Gupta">Subir Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Subhas%20Ganguly"> Subhas Ganguly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we demonstrate the new area of application of image processing in metallurgical images to develop the more opportunity for structure-property correlation based approaches of alloy design. The present exercise focuses on the development of image processing tools suitable for phrase segmentation, grain boundary detection and recognition of micro-constituents in SEM micrographs of ferrite and pearlite steels. A comprehensive data of micrographs have been experimentally developed encompassing the variation of ferrite and pearlite volume fractions and taking images at different magnification (500X, 1000X, 15000X, 2000X, 3000X and 5000X) under scanning electron microscope. The variation in the volume fraction has been achieved using four different plain carbon steel containing 0.1, 0.22, 0.35 and 0.48 wt% C heat treated under annealing and normalizing treatments. The obtained data pool of micrographs arbitrarily divided into two parts to developing training and testing sets of micrographs. The statistical recognition features for ferrite and pearlite constituents have been developed by learning from training set of micrographs. The obtained features for microstructure pattern recognition are applied to test set of micrographs. The analysis of the result shows that the developed strategy can successfully detect the micro constitutes across the wide range of magnification and variation of volume fractions of the constituents in the structure with an accuracy of about +/- 5%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SEM%20micrograph" title="SEM micrograph">SEM micrograph</a>, <a href="https://publications.waset.org/abstracts/search?q=metallurgical%20image%20processing" title=" metallurgical image processing"> metallurgical image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=ferrite%20pearlite%20steel" title=" ferrite pearlite steel"> ferrite pearlite steel</a>, <a href="https://publications.waset.org/abstracts/search?q=microstructure" title=" microstructure"> microstructure</a> </p> <a href="https://publications.waset.org/abstracts/71497/image-processing-of-scanning-electron-microscope-micrograph-of-ferrite-and-pearlite-steel-for-recognition-of-micro-constituents" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">199</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2742</span> Content Based Video Retrieval System Using Principal Object Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Thinh%20Bui">Van Thinh Bui</a>, <a href="https://publications.waset.org/abstracts/search?q=Anh%20Tuan%20Tran"> Anh Tuan Tran</a>, <a href="https://publications.waset.org/abstracts/search?q=Quoc%20Viet%20Ngo"> Quoc Viet Ngo</a>, <a href="https://publications.waset.org/abstracts/search?q=The%20Bao%20Pham"> The Bao Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video retrieval is a searching problem on videos or clips based on content in which they are relatively close to an input image or video. The application of this retrieval consists of selecting video in a folder or recognizing a human in security camera. However, some recent approaches have been in challenging problem due to the diversity of video types, frame transitions and camera positions. Besides, that an appropriate measures is selected for the problem is a question. In order to overcome all obstacles, we propose a content-based video retrieval system in some main steps resulting in a good performance. From a main video, we process extracting keyframes and principal objects using Segmentation of Aggregating Superpixels (SAS) algorithm. After that, Speeded Up Robust Features (SURF) are selected from those principal objects. Then, the model “Bag-of-words” in accompanied by SVM classification are applied to obtain the retrieval result. Our system is performed on over 300 videos in diversity from music, history, movie, sports, and natural scene to TV program show. The performance is evaluated in promising comparison to the other approaches. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title="video retrieval">video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20objects" title=" principal objects"> principal objects</a>, <a href="https://publications.waset.org/abstracts/search?q=keyframe" title=" keyframe"> keyframe</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20of%20aggregating%20superpixels" title=" segmentation of aggregating superpixels"> segmentation of aggregating superpixels</a>, <a href="https://publications.waset.org/abstracts/search?q=speeded%20up%20robust%20features" title=" speeded up robust features"> speeded up robust features</a>, <a href="https://publications.waset.org/abstracts/search?q=bag-of-words" title=" bag-of-words"> bag-of-words</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/59753/content-based-video-retrieval-system-using-principal-object-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59753.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">301</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2741</span> Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaeyoung%20Lee">Jaeyoung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perceived performance in driving environments that vary from time to season. The image segmentation method using deep learning, which has recently evolved rapidly, provides high recognition performance in various road environments stably. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrade performance in embedded processor environments equipped simple hardware accelerators. In this paper, a semantic segmentation network, matrix multiplication accelerator network (MMANet), optimized for matrix multiplication accelerator (MMA) on Texas instrument digital signal processors (TI DSP) is proposed to improve the recognition performance of autonomous driving system. The proposed method is designed to maximize the number of layers that can be performed in a limited time to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since MMA is a fixed, it may be more efficient for normal convolution than depthwise separable convolution depending on memory access overhead. Thus, a convolution type is decided according to output stride to increase network depth. In addition, memory access time is minimized by processing operations only in L3 cache. Lastly, reliable contexts are extracted using the extended atrous spatial pyramid pooling (ASPP). The suggested method gets stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high quality contexts using the restored shape without global average pooling paths since the layer uses MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation sets. The proposed network can process an image with 640 x 480 resolution for 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle as 20 frame per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU) which is the highest recognition rate among embedded networks on the Cityscapes validation set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20network" title="edge network">edge network</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20network" title=" embedded network"> embedded network</a>, <a href="https://publications.waset.org/abstracts/search?q=MMA" title=" MMA"> MMA</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20multiplication%20accelerator" title=" matrix multiplication accelerator"> matrix multiplication accelerator</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation%20network" title=" semantic segmentation network"> semantic segmentation network</a> </p> <a href="https://publications.waset.org/abstracts/125967/embedded-semantic-segmentation-network-optimized-for-matrix-multiplication-accelerator" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2740</span> Speeding-up Gray-Scale FIC by Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20A.%20Al-Hilo">Eman A. Al-Hilo</a>, <a href="https://publications.waset.org/abstracts/search?q=Hawraa%20H.%20Al-Waelly"> Hawraa H. Al-Waelly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, fractal compression (FIC) technique is introduced based on using moment features to block indexing the zero-mean range-domain blocks. The moment features have been used to speed up the IFS-matching stage. Its moments ratio descriptor is used to filter the domain blocks and keep only the blocks that are suitable to be IFS matched with tested range block. The results of tests conducted on Lena picture and Cat picture (256 pixels, resolution 24 bits/pixel) image showed a minimum encoding time (0.89 sec for Lena image and 0.78 of Cat image) with appropriate PSNR (30.01dB for Lena image and 29.8 of Cat image). The reduction in ET is about 12% for Lena and 67% for Cat image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractal%20gray%20level%20image" title="fractal gray level image">fractal gray level image</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20compression%20technique" title=" fractal compression technique"> fractal compression technique</a>, <a href="https://publications.waset.org/abstracts/search?q=iterated%20function%20system" title=" iterated function system"> iterated function system</a>, <a href="https://publications.waset.org/abstracts/search?q=moments%20feature" title=" moments feature"> moments feature</a>, <a href="https://publications.waset.org/abstracts/search?q=zero-mean%20range-domain%20block" title=" zero-mean range-domain block"> zero-mean range-domain block</a> </p> <a href="https://publications.waset.org/abstracts/19903/speeding-up-gray-scale-fic-by-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">492</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2739</span> The Trajectory of the Ball in Football Game</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Motahari">Mahdi Motahari</a>, <a href="https://publications.waset.org/abstracts/search?q=Mojtaba%20Farzaneh"> Mojtaba Farzaneh</a>, <a href="https://publications.waset.org/abstracts/search?q=Ebrahim%20Sepidbar"> Ebrahim Sepidbar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tracking of moving and flying targets is one of the most important issues in image processing topic. Estimating of trajectory of desired object in short-term and long-term scale is more important than tracking of moving and flying targets. In this paper, a new way of identifying and estimating of future trajectory of a moving ball in long-term scale is estimated by using synthesis and interaction of image processing algorithms including noise removal and image segmentation, Kalman filter algorithm in order to estimating of trajectory of ball in football game in short-term scale and intelligent adaptive neuro-fuzzy algorithm based on time series of traverse distance. The proposed system attain more than 96% identify accuracy by using aforesaid methods and relaying on aforesaid algorithms and data base video in format of synthesis and interaction. Although the present method has high precision, it is time consuming. By comparing this method with other methods we realize the accuracy and efficiency of that. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tracking" title="tracking">tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20targets%20and%20flying" title=" moving targets and flying"> moving targets and flying</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligent%20systems" title=" artificial intelligent systems"> artificial intelligent systems</a>, <a href="https://publications.waset.org/abstracts/search?q=estimating%20of%20trajectory" title=" estimating of trajectory"> estimating of trajectory</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a> </p> <a href="https://publications.waset.org/abstracts/4185/the-trajectory-of-the-ball-in-football-game" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4185.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2738</span> Digital Image Forensics: Discovering the History of Digital Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gurinder%20Singh">Gurinder Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Kulbir%20Singh"> Kulbir Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital multimedia contents such as image, video, and audio can be tampered easily due to the availability of powerful editing softwares. Multimedia forensics is devoted to analyze these contents by using various digital forensic techniques in order to validate their authenticity. Digital image forensics is dedicated to investigate the reliability of digital images by analyzing the integrity of data and by reconstructing the historical information of an image related to its acquisition phase. In this paper, a survey is carried out on the forgery detection by considering the most recent and promising digital image forensic techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Computer%20Forensics" title="Computer Forensics">Computer Forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=Multimedia%20Forensics" title=" Multimedia Forensics"> Multimedia Forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=Image%20Ballistics" title=" Image Ballistics"> Image Ballistics</a>, <a href="https://publications.waset.org/abstracts/search?q=Camera%20Source%20Identification" title=" Camera Source Identification"> Camera Source Identification</a>, <a href="https://publications.waset.org/abstracts/search?q=Forgery%20Detection" title=" Forgery Detection"> Forgery Detection</a> </p> <a href="https://publications.waset.org/abstracts/76669/digital-image-forensics-discovering-the-history-of-digital-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76669.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2737</span> Gray Level Image Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Roza%20Afarin">Roza Afarin</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeed%20Mozaffari"> Saeed Mozaffari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is image encryption using Genetic Algorithm (GA). The proposed encryption method consists of two phases. In modification phase, pixels locations are altered to reduce correlation among adjacent pixels. Then, pixels values are changed in the diffusion phase to encrypt the input image. Both phases are performed by GA with binary chromosomes. For modification phase, these binary patterns are generated by Local Binary Pattern (LBP) operator while for diffusion phase binary chromosomes are obtained by Bit Plane Slicing (BPS). Initial population in GA includes rows and columns of the input image. Instead of subjective selection of parents from this initial population, a random generator with predefined key is utilized. It is necessary to decrypt the coded image and reconstruct the initial input image. Fitness function is defined as average of transition from 0 to 1 in LBP image and histogram uniformity in modification and diffusion phases, respectively. Randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=correlation%20coefficients" title="correlation coefficients">correlation coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title=" image encryption"> image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20entropy" title=" image entropy"> image entropy</a> </p> <a href="https://publications.waset.org/abstracts/10723/gray-level-image-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10723.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">330</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2736</span> Data Hiding in Gray Image Using ASCII Value and Scanning Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20K.%20Pateriya">R. K. Pateriya</a>, <a href="https://publications.waset.org/abstracts/search?q=Jyoti%20Bharti"> Jyoti Bharti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach for data hiding methods which provides a secret communication between sender and receiver. The data is hidden in gray-scale images and the boundary of gray-scale image is used to store the mapping information. In this an approach data is in ASCII format and the mapping is in between ASCII value of hidden message and pixel value of cover image, since pixel value of an image as well as ASCII value is in range of 0 to 255 and this mapping information is occupying only 1 bit per character of hidden message as compared to 8 bit per character thus maintaining good quality of stego image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ASCII%20value" title="ASCII value">ASCII value</a>, <a href="https://publications.waset.org/abstracts/search?q=cover%20image" title=" cover image"> cover image</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20value" title=" pixel value"> pixel value</a>, <a href="https://publications.waset.org/abstracts/search?q=stego%20image" title=" stego image"> stego image</a>, <a href="https://publications.waset.org/abstracts/search?q=secret%20message" title=" secret message"> secret message</a> </p> <a href="https://publications.waset.org/abstracts/50472/data-hiding-in-gray-image-using-ascii-value-and-scanning-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2735</span> High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amal%20Khalifa">Amal Khalifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicolas%20Vana%20Santos"> Nicolas Vana Santos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Steganography has been known for centuries as an efficient approach for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNN) to hide a secret image into a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transform (DWT) of both colored images and eventually be able to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image and yet maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion which outperformed similar deep-learning-based methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=steganography" title=" steganography"> steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title=" discrete wavelet transform"> discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/170293/high-capacity-image-steganography-using-wavelet-based-fusion-on-deep-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2734</span> Typology of Gaming Tourists Based on the Perception of Destination Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mi%20Ju%20Choi">Mi Ju Choi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated the perception of gaming tourists toward Macau and developed a typology of gaming tourists. The 1,497 responses from tourists in Macau were collected through convenience sampling method. The dimensions of multi-culture, convenience, economy, gaming, and unsafety, were subsequently extracted as the factors of perception of gaming tourists in Macau. Cluster analysis was performed using the delineated factors (perception of tourists on Macau). Four heterogonous groups were generated, namely, gaming lovers (n = 467, 31.2%), exotic lovers (n = 509, 34.0%), reasonable budget seekers (n = 269, 18.0%), and convenience seekers (n = 252, 16.8%). Further analysis was performed to investigate any difference in gaming behavior and tourist activities. The findings are expected to contribute to the efforts of destination marketing organizations (DMOs) in establishing effective business strategies, provide a profile of gaming tourists in certain market segments, and assist DMOs and casino managers in establishing more effective marketing strategies for target markets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=destination%20image" title="destination image">destination image</a>, <a href="https://publications.waset.org/abstracts/search?q=gaming%20tourists" title=" gaming tourists"> gaming tourists</a>, <a href="https://publications.waset.org/abstracts/search?q=Macau" title=" Macau"> Macau</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/62322/typology-of-gaming-tourists-based-on-the-perception-of-destination-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62322.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">301</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2733</span> Simulation and Performance Evaluation of Transmission Lines with Shield Wire Segmentation against Atmospheric Discharges Using ATPDraw</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marcio%20S.%20da%20Silva">Marcio S. da Silva</a>, <a href="https://publications.waset.org/abstracts/search?q=Jose%20Mauricio%20de%20B.%20Bezerra"> Jose Mauricio de B. Bezerra</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonio%20E.%20de%20A.%20Nogueira"> Antonio E. de A. Nogueira</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to make a performance analysis of shield wire transmission lines against atmospheric discharges when it is made the option of sectioning the shield wire and verify if the tolerability of the change. As a goal of this work, it was established to make complete modeling of a transmission line in the ATPDraw program with shield wire grounded in all the towers and in some towers. The methodology used to make the proposed evaluation was to choose an actual transmission line that served as a case study. From the choice of transmission line and verification of all its topology and materials, complete modeling of the line using the ATPDraw software was performed. Then several atmospheric discharges were simulated by striking the grounded shield wires in each tower. These simulations served to identify the behavior of the existing line against atmospheric discharges. After this first analysis, the same line was reconsidered with shield wire segmentation. The shielding wire segmentation technique aims to reduce induced losses in shield wires and is adopted in some transmission lines in Brazil. With the same conditions of atmospheric discharge the transmission line, this time with shield wire segmentation was again evaluated. The results obtained showed that it is possible to obtain similar performances against atmospheric discharges between a shield wired line in multiple towers and the same line with shield wire segmentation if some precautions are adopted as verification of the ground resistance of the wire segmented shield, adequacy of the maximum length of the segmented gap, evaluation of the separation length of the electrodes of the insulator spark, among others. As a conclusion, it is verified that since the correct assessment and adopted the correct criteria of adjustment a transmission line with shielded wire segmentation can perform very similar to the traditional use with multiple earths. This solution contributes in a very important way to the reduction of energy losses in transmission lines. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=atmospheric%20discharges" title="atmospheric discharges">atmospheric discharges</a>, <a href="https://publications.waset.org/abstracts/search?q=ATPDraw" title=" ATPDraw"> ATPDraw</a>, <a href="https://publications.waset.org/abstracts/search?q=shield%20wire" title=" shield wire"> shield wire</a>, <a href="https://publications.waset.org/abstracts/search?q=transmission%20lines" title=" transmission lines"> transmission lines</a> </p> <a href="https://publications.waset.org/abstracts/103131/simulation-and-performance-evaluation-of-transmission-lines-with-shield-wire-segmentation-against-atmospheric-discharges-using-atpdraw" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/103131.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2732</span> Improvement Image Summarization using Image Processing and Particle swarm optimization Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hooman%20Torabifard">Hooman Torabifard</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the last few years, with the progress of technology and computers and artificial intelligence entry into all kinds of scientific and industrial fields, the lifestyles of human life have changed and in general, the way of humans live on earth has many changes and development. Until now, some of the changes has occurred in the context of digital images and image processing and still continues. However, besides all the benefits, there have been disadvantages. One of these disadvantages is the multiplicity of images with high volume and data; the focus of this paper is on improving and developing a method for summarizing and enhancing the productivity of these images. The general method used for this purpose in this paper consists of a set of methods based on data obtained from image processing and using the PSO (Particle swarm optimization) algorithm. In the remainder of this paper, the method used is elaborated in detail. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20summarization" title="image summarization">image summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20swarm%20optimization" title=" particle swarm optimization"> particle swarm optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20threshold" title=" image threshold"> image threshold</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/138289/improvement-image-summarization-using-image-processing-and-particle-swarm-optimization-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">133</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2731</span> Improved Super-Resolution Using Deep Denoising Convolutional Neural Network </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pawan%20Kumar%20Mishra">Pawan Kumar Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Singh%20Bisht"> Ganesh Singh Bisht</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Super-resolution is the technique that is being used in computer vision to construct high-resolution images from a single low-resolution image. It is used to increase the frequency component, recover the lost details and removing the down sampling and noises that caused by camera during image acquisition process. High-resolution images or videos are desired part of all image processing tasks and its analysis in most of digital imaging application. The target behind super-resolution is to combine non-repetition information inside single or multiple low-resolution frames to generate a high-resolution image. Many methods have been proposed where multiple images are used as low-resolution images of same scene with different variation in transformation. This is called multi-image super resolution. And another family of methods is single image super-resolution that tries to learn redundancy that presents in image and reconstruction the lost information from a single low-resolution image. Use of deep learning is one of state of art method at present for solving reconstruction high-resolution image. In this research, we proposed Deep Denoising Super Resolution (DDSR) that is a deep neural network for effectively reconstruct the high-resolution image from low-resolution image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=resolution" title="resolution">resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=de-blurring" title=" de-blurring"> de-blurring</a> </p> <a href="https://publications.waset.org/abstracts/78802/improved-super-resolution-using-deep-denoising-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78802.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">517</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2730</span> High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalid%20A.%20Al-Afandy">Khalid A. Al-Afandy</a>, <a href="https://publications.waset.org/abstracts/search?q=El-Sayyed%20El-Rabaie"> El-Sayyed El-Rabaie</a>, <a href="https://publications.waset.org/abstracts/search?q=Osama%20Salah"> Osama Salah</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20El-Mhalaway"> Ahmed El-Mhalaway</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a high secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. The predefined certain secret coordinate crops will be extracted from the cover image. The secret text message will be divided into sections. These sections quantity is equal the image crops quantity. Each section from the secret text message will embed into an image crop with a secret sequence using LSB technique. The embedding is done using the cover image color channels. Stego image is given by reassembling the image and the stego crops. The results of the technique will be compared to the other state of art techniques. Evaluation is based on visualization to detect any degradation of stego image, the difficulty of extracting the embedded data by any unauthorized viewer, Peak Signal-to-Noise Ratio of stego image (PSNR), and the embedding algorithm CPU time. Experimental results ensure that the proposed technique is more secure compared with the other traditional techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=steganography" title="steganography">steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=stego" title=" stego"> stego</a>, <a href="https://publications.waset.org/abstracts/search?q=LSB" title=" LSB"> LSB</a>, <a href="https://publications.waset.org/abstracts/search?q=crop" title=" crop"> crop</a> </p> <a href="https://publications.waset.org/abstracts/44747/high-secure-data-hiding-using-cropping-image-and-least-significant-bit-steganography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44747.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2729</span> Secure E-Pay System Using Steganography and Visual Cryptography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Suganya%20Devi">K. Suganya Devi</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Srinivasan"> P. Srinivasan</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20Vaishnave"> M. P. Vaishnave</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Arutperumjothi"> G. Arutperumjothi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Today’s internet world is highly prone to various online attacks, of which the most harmful attack is phishing. The attackers host the fake websites which are very similar and look alike. We propose an image based authentication using steganography and visual cryptography to prevent phishing. This paper presents a secure steganographic technique for true color (RGB) images and uses Discrete Cosine Transform to compress the images. The proposed method hides the secret data inside the cover image. The use of visual cryptography is to preserve the privacy of an image by decomposing the original image into two shares. Original image can be identified only when both qualified shares are simultaneously available. Individual share does not reveal the identity of the original image. Thus, the existence of the secret message is hard to be detected by the RS steganalysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20security" title="image security">image security</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20LSB" title=" random LSB"> random LSB</a>, <a href="https://publications.waset.org/abstracts/search?q=steganography" title=" steganography"> steganography</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20cryptography" title=" visual cryptography"> visual cryptography</a> </p> <a href="https://publications.waset.org/abstracts/67554/secure-e-pay-system-using-steganography-and-visual-cryptography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67554.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">330</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2728</span> Noise Detection Algorithm for Skin Disease Image Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Minakshi%20Mainaji%20Sonawane">Minakshi Mainaji Sonawane</a>, <a href="https://publications.waset.org/abstracts/search?q=Bharti%20W.%20Gawali"> Bharti W. Gawali</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudhir%20Mendhekar"> Sudhir Mendhekar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramesh%20R.%20Manza"> Ramesh R. Manza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> People's lives and health are severely impacted by skin diseases. A new study proposes an effective method for identifying the different forms of skin diseases. Image denoising is a technique for improving image quality after it has been harmed by noise. The proposed technique is based on the usage of the wavelet transform. Wavelet transform is the best method for analyzing the image due to the ability to split the image into the sub-band, which has been used to estimate the noise ratio at the noisy image. According to experimental results, the proposed method presents the best values for MSE, PSNR, and Entropy for denoised images. we can found in Also, by using different types of wavelet transform filters is make the proposed approach can obtain the best results 23.13, 20.08, 50.7 for the image denoising process <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MSE" title="MSE">MSE</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20filter" title=" Gaussian filter"> Gaussian filter</a>, <a href="https://publications.waset.org/abstracts/search?q=DWT" title=" DWT"> DWT</a> </p> <a href="https://publications.waset.org/abstracts/142039/noise-detection-algorithm-for-skin-disease-image-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2727</span> Utilizing the Principal Component Analysis on Multispectral Aerial Imagery for Identification of Underlying Structures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marcos%20Bosques-Perez">Marcos Bosques-Perez</a>, <a href="https://publications.waset.org/abstracts/search?q=Walter%20Izquierdo"> Walter Izquierdo</a>, <a href="https://publications.waset.org/abstracts/search?q=Harold%20Martin"> Harold Martin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liangdon%20Deng"> Liangdon Deng</a>, <a href="https://publications.waset.org/abstracts/search?q=Josue%20Rodriguez"> Josue Rodriguez</a>, <a href="https://publications.waset.org/abstracts/search?q=Thony%20Yan"> Thony Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mercedes%20Cabrerizo"> Mercedes Cabrerizo</a>, <a href="https://publications.waset.org/abstracts/search?q=Armando%20Barreto"> Armando Barreto</a>, <a href="https://publications.waset.org/abstracts/search?q=Naphtali%20Rishe"> Naphtali Rishe</a>, <a href="https://publications.waset.org/abstracts/search?q=Malek%20Adjouadi"> Malek Adjouadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aerial imagery is a powerful tool when it comes to analyzing temporal changes in ecosystems and extracting valuable information from the observed scene. It allows us to identify and assess various elements such as objects, structures, textures, waterways, and shadows. To extract meaningful information, multispectral cameras capture data across different wavelength bands of the electromagnetic spectrum. In this study, the collected multispectral aerial images were subjected to principal component analysis (PCA) to identify independent and uncorrelated components or features that extend beyond the visible spectrum captured in standard RGB images. The results demonstrate that these principal components contain unique characteristics specific to certain wavebands, enabling effective object identification and image segmentation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral" title=" multispectral"> multispectral</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis" title=" principal component analysis"> principal component analysis</a> </p> <a href="https://publications.waset.org/abstracts/170875/utilizing-the-principal-component-analysis-on-multispectral-aerial-imagery-for-identification-of-underlying-structures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170875.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2726</span> Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samiah%20Alammari">Samiah Alammari</a>, <a href="https://publications.waset.org/abstracts/search?q=Nassim%20Ammour"> Nassim Ammour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When providing a massive number of tasks successively to a deep learning process, a good performance of the model requires preserving the previous tasks data to retrain the model for each upcoming classification. Otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image regions classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, and the second module tries to learn how to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on HSI dataset Indian Pines. The results confirm the capability of the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=continual%20learning" title="continual learning">continual learning</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20reconstruction" title=" data reconstruction"> data reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20image%20segmentation" title=" hyperspectral image segmentation"> hyperspectral image segmentation</a> </p> <a href="https://publications.waset.org/abstracts/150863/continual-learning-using-data-generation-for-hyperspectral-remote-sensing-scene-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150863.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2725</span> Red Green Blue Image Encryption Based on Paillier Cryptographic System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mamadou%20I.%20Wade">Mamadou I. Wade</a>, <a href="https://publications.waset.org/abstracts/search?q=Henry%20C.%20Ogworonjo"> Henry C. Ogworonjo</a>, <a href="https://publications.waset.org/abstracts/search?q=Madiha%20Gul"> Madiha Gul</a>, <a href="https://publications.waset.org/abstracts/search?q=Mandoye%20Ndoye"> Mandoye Ndoye</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Chouikha"> Mohamed Chouikha</a>, <a href="https://publications.waset.org/abstracts/search?q=Wayne%20Patterson"> Wayne Patterson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a novel application of the Paillier cryptographic system to the encryption of RGB (Red Green Blue) images. In this method, an RGB image is first separated into its constituent channel images, and the Paillier encryption function is applied to each of the channels pixel intensity values. Next, the encrypted image is combined and compressed if necessary before being transmitted through an unsecured communication channel. The transmitted image is subsequently recovered by a decryption process. We performed a series of security and performance analyses to the recovered images in order to verify their robustness to security attack. The results show that the proposed image encryption scheme produces highly secured encrypted images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20encryption" title="image encryption">image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=Paillier%20cryptographic%20system" title=" Paillier cryptographic system"> Paillier cryptographic system</a>, <a href="https://publications.waset.org/abstracts/search?q=RBG%20image%20encryption" title=" RBG image encryption"> RBG image encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=Paillier" title=" Paillier"> Paillier</a> </p> <a href="https://publications.waset.org/abstracts/79232/red-green-blue-image-encryption-based-on-paillier-cryptographic-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">238</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2724</span> An Object-Based Image Resizing Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chin-Chen%20Chang">Chin-Chen Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=I-Ta%20Lee"> I-Ta Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Tsung-Ta%20Ke"> Tsung-Ta Ke</a>, <a href="https://publications.waset.org/abstracts/search?q=Wen-Kai%20Tai"> Wen-Kai Tai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Common methods for resizing image size include scaling and cropping. However, these two approaches have some quality problems for reduced images. In this paper, we propose an image resizing algorithm by separating the main objects and the background. First, we extract two feature maps, namely, an enhanced visual saliency map and an improved gradient map from an input image. After that, we integrate these two feature maps to an importance map. Finally, we generate the target image using the importance map. The proposed approach can obtain desired results for a wide range of images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=energy%20map" title="energy map">energy map</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20saliency" title=" visual saliency"> visual saliency</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20map" title=" gradient map"> gradient map</a>, <a href="https://publications.waset.org/abstracts/search?q=seam%20carving" title=" seam carving"> seam carving</a> </p> <a href="https://publications.waset.org/abstracts/8953/an-object-based-image-resizing-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8953.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2723</span> Rough Neural Networks in Adapting Cellular Automata Rule for Reducing Image Noise</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yasser%20F.%20Hassan">Yasser F. Hassan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The reduction or removal of noise in a color image is an essential part of image processing, whether the final information is used for human perception or for an automatic inspection and analysis. This paper describes the modeling system based on the rough neural network model to adaptive cellular automata for various image processing tasks and noise remover. In this paper, we consider the problem of object processing in colored image using rough neural networks to help deriving the rules which will be used in cellular automata for noise image. The proposed method is compared with some classical and recent methods. The results demonstrate that the new model is capable of being trained to perform many different tasks, and that the quality of these results is comparable or better than established specialized algorithms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rough%20sets" title="rough sets">rough sets</a>, <a href="https://publications.waset.org/abstracts/search?q=rough%20neural%20networks" title=" rough neural networks"> rough neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=cellular%20automata" title=" cellular automata"> cellular automata</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/1516/rough-neural-networks-in-adapting-cellular-automata-rule-for-reducing-image-noise" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">439</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2722</span> Encephalon-An Implementation of a Handwritten Mathematical Expression Solver</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreeyam">Shreeyam</a>, <a href="https://publications.waset.org/abstracts/search?q=Ranjan%20Kumar%20Sah"> Ranjan Kumar Sah</a>, <a href="https://publications.waset.org/abstracts/search?q=Shivangi"> Shivangi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognizing and solving handwritten mathematical expressions can be a challenging task, particularly when certain characters are segmented and classified. This project proposes a solution that uses Convolutional Neural Network (CNN) and image processing techniques to accurately solve various types of equations, including arithmetic, quadratic, and trigonometric equations, as well as logical operations like logical AND, OR, NOT, NAND, XOR, and NOR. The proposed solution also provides a graphical solution, allowing users to visualize equations and their solutions. In addition to equation solving, the platform, called CNNCalc, offers a comprehensive learning experience for students. It provides educational content, a quiz platform, and a coding platform for practicing programming skills in different languages like C, Python, and Java. This all-in-one solution makes the learning process engaging and enjoyable for students. The proposed methodology includes horizontal compact projection analysis and survey for segmentation and binarization, as well as connected component analysis and integrated connected component analysis for character classification. The compact projection algorithm compresses the horizontal projections to remove noise and obtain a clearer image, contributing to the accuracy of character segmentation. Experimental results demonstrate the effectiveness of the proposed solution in solving a wide range of mathematical equations. CNNCalc provides a powerful and user-friendly platform for solving equations, learning, and practicing programming skills. With its comprehensive features and accurate results, CNNCalc is poised to revolutionize the way students learn and solve mathematical equations. The platform utilizes a custom-designed Convolutional Neural Network (CNN) with image processing techniques to accurately recognize and classify symbols within handwritten equations. The compact projection algorithm effectively removes noise from horizontal projections, leading to clearer images and improved character segmentation. Experimental results demonstrate the accuracy and effectiveness of the proposed solution in solving a wide range of equations, including arithmetic, quadratic, trigonometric, and logical operations. CNNCalc features a user-friendly interface with a graphical representation of equations being solved, making it an interactive and engaging learning experience for users. The platform also includes tutorials, testing capabilities, and programming features in languages such as C, Python, and Java. Users can track their progress and work towards improving their skills. CNNCalc is poised to revolutionize the way students learn and solve mathematical equations with its comprehensive features and accurate results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AL" title="AL">AL</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20written%20equation%20solver" title=" hand written equation solver"> hand written equation solver</a>, <a href="https://publications.waset.org/abstracts/search?q=maths" title=" maths"> maths</a>, <a href="https://publications.waset.org/abstracts/search?q=computer" title=" computer"> computer</a>, <a href="https://publications.waset.org/abstracts/search?q=CNNCalc" title=" CNNCalc"> CNNCalc</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/165164/encephalon-an-implementation-of-a-handwritten-mathematical-expression-solver" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165164.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2721</span> A Survey on Types of Noises and De-Noising Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amandeep%20Kaur">Amandeep Kaur</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital Image processing is a fundamental tool to perform various operations on the digital images for pattern recognition, noise removal and feature extraction. In this paper noise removal technique has been described for various types of noises. This paper comprises discussion about various noises available in the image due to different environmental, accidental factors. In this paper, various de-noising approaches have been discussed that utilize different wavelets and filters for de-noising. By analyzing various papers on image de-noising we extract that wavelet based de-noise approaches are much effective as compared to others. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=de-noising%20techniques" title="de-noising techniques">de-noising techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=edges" title=" edges"> edges</a>, <a href="https://publications.waset.org/abstracts/search?q=image" title=" image"> image</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/54155/a-survey-on-types-of-noises-and-de-noising-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54155.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=7" rel="prev">‹</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=2">2</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=7">7</a></li> <li class="page-item active"><span class="page-link">8</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=10">10</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=11">11</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=98">98</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=99">99</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=image%20segmentation&page=9" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>