Search results for: pixel normalization
class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="pixel normalization"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 412</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: pixel normalization</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">412</span> Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Dhana%20Lakshmi">M. Dhana Lakshmi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Sakthivel%20Murugan"> S. Sakthivel Murugan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As light passes from source to observer in the water medium, it is scattered by the suspended particulate matter. This scattering effect will plague the captured images with non-uniform illumination, blurring details, halo artefacts, weak edges, etc. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the considered datasets are collected on dual locations. For those images, the maxima and minima pixel intensity value is computed and normalized; then the AUM filter is applied to strengthen the blurred edges. Finally, the enhanced image is obtained with good illumination and contrast. Thus, the proposed technique removes the effect of scattering called de-hazing and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses are done on considering the standard non-reference metric called underwater image sharpness measure (UISM), and underwater image quality measure (UIQM) is used to measure color, sharpness, and contrast for both of the location images. It is observed that the proposed technique has shown overwhelming performance compared to other deep-based enhancement networks and traditional techniques in an adaptive manner. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=underwater%20drone%20imagery" title="underwater drone imagery">underwater drone imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20normalization" title=" pixel normalization"> pixel normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=thresholding" title=" thresholding"> thresholding</a>, <a href="https://publications.waset.org/abstracts/search?q=masking" title=" masking"> masking</a>, <a href="https://publications.waset.org/abstracts/search?q=unsharp%20mask%20filter" title=" unsharp mask filter"> unsharp mask filter</a> </p> <a href="https://publications.waset.org/abstracts/142413/enhancement-of-underwater-haze-image-with-edge-reveal-using-pixel-normalization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142413.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">197</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">411</span> A Supervised Face Parts Labeling Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khalil%20Khan">Khalil Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ikram%20Syed"> Ikram Syed</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Ehsan%20Mazhar"> Muhammad Ehsan Mazhar</a>, <a href="https://publications.waset.org/abstracts/search?q=Iran%20Uddin"> Iran Uddin</a>, <a href="https://publications.waset.org/abstracts/search?q=Nasir%20Ahmad"> Nasir Ahmad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face parts labeling is the process of assigning class labels to each face part. A face parts labeling method (FPL) which divides a given image into its constitutes parts is proposed in this paper. A database FaceD consisting of 564 images is labeled with hand and make publically available. A supervised learning model is built through extraction of features from the training data. The testing phase is performed with two semantic segmentation methods, i.e., pixel and super-pixel based segmentation. In pixel-based segmentation class label is provided to each pixel individually. In super-pixel based method class label is assigned to super-pixel only – as a result, the same class label is given to all pixels inside a super-pixel. Pixel labeling accuracy reported with pixel and super-pixel based methods is 97.68 % and 93.45% respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20labeling" title="face labeling">face labeling</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20segmentation" title=" face segmentation"> face segmentation</a> </p> <a href="https://publications.waset.org/abstracts/94715/a-supervised-face-parts-labeling-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94715.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">257</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">410</span> Applying Spanning Tree Graph Theory for Automatic Database Normalization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chetneti%20Srisa-an">Chetneti Srisa-an</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Knowledge and Data Engineering field, relational database is the best repository to store data in a real world. It has been using around the world more than eight decades. Normalization is the most important process for the analysis and design of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. Normalization is a major task in the design of relational databases. Despite its importance, very few algorithms have been developed to be used in the design of commercial automatic normalization tools. It is also rare technique to do it automatically rather manually. Moreover, for a large and complex database as of now, it make even harder to do it manually. This paper presents a new complete automated relational database normalization method. It produces the directed graph and spanning tree, first. It then proceeds with generating the 2NF, 3NF and also BCNF normal forms. The benefit of this new algorithm is that it can cope with a large set of complex function dependencies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=relational%20database" title="relational database">relational database</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20dependency" title=" functional dependency"> functional dependency</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20normalization" title=" automatic normalization"> automatic normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=primary%20key" title=" primary key"> primary key</a>, <a href="https://publications.waset.org/abstracts/search?q=spanning%20tree" title=" spanning tree"> spanning tree</a> </p> <a href="https://publications.waset.org/abstracts/8250/applying-spanning-tree-graph-theory-for-automatic-database-normalization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8250.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">409</span> Pose Normalization Network for Object Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Shen">Bingquan Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one have limited viewpoints of a particular object for classification, we present a pose normalization architecture to transform the object to existing viewpoints in the training dataset before classification to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification result for the 3D chairs dataset and ShapeNet airplanes dataset when given only images at limited viewpoint, as compared to a CNN baseline. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20classification" title=" object classification"> object classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20normalization" title=" pose normalization"> pose normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=viewpoint%20invariant" title=" viewpoint invariant"> viewpoint invariant</a> </p> <a href="https://publications.waset.org/abstracts/56852/pose-normalization-network-for-object-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56852.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">408</span> Sub-Pixel Mapping Based on New Mixed Interpolation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zeyu%20Zhou">Zeyu Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaojun%20Bi"> Xiaojun Bi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the limited environmental parameters and the limited resolution of the sensor, the universal existence of the mixed pixels in the process of remote sensing images restricts the spatial resolution of the remote sensing images. Sub-pixel mapping technology can effectively improve the spatial resolution. As the bilinear interpolation algorithm inevitably produces the edge blur effect, which leads to the inaccurate sub-pixel mapping results. In order to avoid the edge blur effect that affects the sub-pixel mapping results in the interpolation process, this paper presents a new edge-directed interpolation algorithm which uses the covariance adaptive interpolation algorithm on the edge of the low-resolution image and uses bilinear interpolation algorithm in the low-resolution image smooth area. By using the edge-directed interpolation algorithm, the super-resolution of the image with low resolution is obtained, and we get the percentage of each sub-pixel under a certain type of high-resolution image. Then we rely on the probability value as a soft attribute estimate and carry out sub-pixel scale under the ‘hard classification’. Finally, we get the result of sub-pixel mapping. Through the experiment, we compare the algorithm and the bilinear algorithm given in this paper to the results of the sub-pixel mapping method. It is found that the sub-pixel mapping method based on the edge-directed interpolation algorithm has better edge effect and higher mapping accuracy. The results of the paper meet our original intention of the question. At the same time, the method does not require iterative computation and training of samples, making it easier to implement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing%20images" title="remote sensing images">remote sensing images</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-pixel%20mapping" title=" sub-pixel mapping"> sub-pixel mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=bilinear%20interpolation" title=" bilinear interpolation"> bilinear interpolation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge-directed%20interpolation" title=" edge-directed interpolation"> edge-directed interpolation</a> </p> <a href="https://publications.waset.org/abstracts/77883/sub-pixel-mapping-based-on-new-mixed-interpolation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77883.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">230</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">407</span> An Approximation of Daily Rainfall by Using a Pixel Value Data Approach </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sarisa%20Pinkham">Sarisa Pinkham</a>, <a href="https://publications.waset.org/abstracts/search?q=Kanyarat%20Bussaban"> Kanyarat Bussaban</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department in period of time from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE=3.343. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=daily%20rainfall" title="daily rainfall">daily rainfall</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=approximation" title=" approximation"> approximation</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20value%20data" title=" pixel value data"> pixel value data</a> </p> <a href="https://publications.waset.org/abstracts/9889/an-approximation-of-daily-rainfall-by-using-a-pixel-value-data-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">406</span> Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shagufta%20Tabassum">Shagufta Tabassum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study of dielectric properties in a binary mixture of liquids is very useful to understand the liquid structure, molecular interaction, dynamics, and kinematics of the mixture. 

406. Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements
Authors: Shagufta Tabassum
Abstract: The study of dielectric properties in a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperative and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedure for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized.
Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique
Procedia: https://publications.waset.org/abstracts/139922/basic-calibration-and-normalization-techniques-for-time-domain-reflectometry-measurements | PDF: https://publications.waset.org/abstracts/139922.pdf | Downloads: 207

405. On Phase Based Stereo Matching and Its Related Issues
Authors: András Rövid, Takeshi Hashimoto
Abstract: The paper focuses on the problem of point correspondence matching in stereo images. The proposed matching algorithm combines simpler methods, namely the normalized sum of squared differences (NSSD), with a more complex phase-correlation-based approach, taking noise and other factors into account as well. The speed of NSSD and the precision of phase correlation together yield an efficient approach for finding the best candidate point with sub-pixel accuracy in stereo image pairs. The task of the NSSD in this case is to locate the candidate pixel roughly; the location is then refined by an enhanced phase-correlation-based method which, in contrast to the NSSD, has to run only once for each selected pixel.
Keywords: stereo matching, sub-pixel accuracy, phase correlation, SVD, NSSD
Procedia: https://publications.waset.org/abstracts/8549/on-phase-based-stereo-matching-and-its-related-issues | PDF: https://publications.waset.org/abstracts/8549.pdf | Downloads: 469
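
A hedged sketch of the two-stage matcher described in the paper above: an integer-pixel NSSD search to locate the candidate roughly, then phase correlation on the matched patches (the paper's enhanced sub-pixel refinement, including its SVD step, is not reproduced):

```python
import numpy as np

def nssd(a, b):
    """Normalized sum of squared differences between equal-size patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return np.sum((a - b) ** 2)

def coarse_match(patch, image, center, radius=8):
    """Integer-pixel NSSD search around `center` (the candidate window is
    assumed to stay fully inside `image`)."""
    h, w = patch.shape
    cy, cx = center
    best_score, best_pos = np.inf, center
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            score = nssd(patch, image[y:y + h, x:x + w])
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

def phase_corr_shift(a, b):
    """Shift estimate from the phase-correlation peak (integer here; a true
    sub-pixel result would interpolate around the peak)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    peak = np.array(np.unravel_index(r.argmax(), r.shape), dtype=float)
    for i, n in enumerate(a.shape):   # wrap to signed shifts
        if peak[i] > n / 2:
            peak[i] -= n
    return peak
```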
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=stereo%20matching" title="stereo matching">stereo matching</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-pixel%20accuracy" title=" sub-pixel accuracy"> sub-pixel accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=phase%20correlation" title=" phase correlation"> phase correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=SVD" title=" SVD"> SVD</a>, <a href="https://publications.waset.org/abstracts/search?q=NSSD" title=" NSSD"> NSSD</a> </p> <a href="https://publications.waset.org/abstracts/8549/on-phase-based-stereo-matching-and-its-related-issues" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">404</span> Design and Simulation of 3-Transistor Active Pixel Sensor Using MATLAB Simulink</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Alheeh">H. Alheeh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Alameri"> M. Alameri</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Al%20Tarabsheh"> A. Al Tarabsheh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There has been a growing interest in CMOS-based sensors technology in cameras as they afford low-power, small-size, and cost-effective imaging systems. This article describes the CMOS image sensor pixel categories and presents the design and the simulation of the 3-Transistor (3T) Active Pixel Sensor (APS) in MATLAB/Simulink tool. The analysis investigates the conversion of the light into an electrical signal for a single pixel sensing circuit, which consists of a photodiode and three NMOS transistors. The paper also proposes three modes for the pixel operation; reset, integration, and readout modes. The simulations of the electrical signals for each of the studied modes of operation show how the output electrical signals are correlated to the input light intensities. The charging/discharging speed for the photodiodes is also investigated. The output voltage for different light intensities, including in dark case, is calculated and showed its inverse proportionality with the light intensity. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=APS" title="APS">APS</a>, <a href="https://publications.waset.org/abstracts/search?q=CMOS%20image%20sensor" title=" CMOS image sensor"> CMOS image sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20intensities%20photodiode" title=" light intensities photodiode"> light intensities photodiode</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/131973/design-and-simulation-of-3-transistor-active-pixel-sensor-using-matlab-simulink" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131973.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">403</span> Automatic Detection and Update of Region of Interest in Vehicular Traffic Surveillance Videos</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Naydelis%20Brito%20Su%C3%A1rez">Naydelis Brito Suárez</a>, <a href="https://publications.waset.org/abstracts/search?q=Deni%20Librado%20Torres%20Rom%C3%A1n"> Deni Librado Torres Román</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernando%20Hermosillo%20Reynoso"> Fernando Hermosillo Reynoso</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic detection and generation of a dynamic ROI (Region of Interest) in vehicle traffic surveillance videos based on a static camera in Intelligent Transportation Systems is challenging for computer vision-based systems. The dynamic ROI, being a changing ROI, should capture any other moving object located outside of a static ROI. In this work, the video is represented by a Tensor model composed of a Background and a Foreground Tensor, which contains all moving vehicles or objects. The values of each pixel over a time interval are represented by time series, and some pixel rows were selected. This paper proposes a pixel entropy-based algorithm for automatic detection and generation of a dynamic ROI in traffic videos under the assumption of two types of theoretical pixel entropy behaviors: (1) a pixel located at the road shows a high entropy value due to disturbances in this zone by vehicle traffic, (2) a pixel located outside the road shows a relatively low entropy value. To study the statistical behavior of the selected pixels, detecting the entropy changes and consequently moving objects, Shannon, Tsallis, and Approximate entropies were employed. Although Tsallis entropy achieved very high results in real-time, Approximate entropy showed results slightly better but in greater time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convex%20hull" title="convex hull">convex hull</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20ROI%20detection" title=" dynamic ROI detection"> dynamic ROI detection</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20entropy" title=" pixel entropy"> pixel entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series" title=" time series"> time series</a>, <a href="https://publications.waset.org/abstracts/search?q=moving%20objects" title=" moving objects"> moving objects</a> </p> <a href="https://publications.waset.org/abstracts/174020/automatic-detection-and-update-of-region-of-interest-in-vehicular-traffic-surveillance-videos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">402</span> Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Levente%20Varga">Levente Varga</a>, <a href="https://publications.waset.org/abstracts/search?q=D%C3%A1vid%20Deritei"> Dávid Deritei</a>, <a href="https://publications.waset.org/abstracts/search?q=M%C3%A1ria%20Ercsey-Ravasz"> Mária Ercsey-Ravasz</a>, <a href="https://publications.waset.org/abstracts/search?q=R%C4%83zvan%20Florian"> Răzvan Florian</a>, <a href="https://publications.waset.org/abstracts/search?q=Zsolt%20I.%20L%C3%A1z%C3%A1r"> Zsolt I. Lázár</a>, <a href="https://publications.waset.org/abstracts/search?q=Istv%C3%A1n%20Papp"> István Papp</a>, <a href="https://publications.waset.org/abstracts/search?q=Ferenc%20J%C3%A1rai-Szab%C3%B3"> Ferenc Járai-Szabó</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has been long discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities, however, the large size and constant growth of these networks asks for precaution. Here we present a new tool that in order to perform cross-field normalization of scientometric indicators of individual publications relays on the structural properties of citation networks. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested with different benchmark and real-world networks. Then, by the use of this algorithm, the mechanism of the scientometric indicator normalization process is shown for a few indicators like the citation number, P-index and a local version of the PageRank indicator. The fat-tail trend of the article indicator distribution enables us to successfully perform the indicator normalization process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=citation%20networks" title="citation networks">citation networks</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-field%20normalization" title=" cross-field normalization"> cross-field normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20cluster%20detection" title=" local cluster detection"> local cluster detection</a>, <a href="https://publications.waset.org/abstracts/search?q=scientometric%20indicators" title=" scientometric indicators"> scientometric indicators</a> </p> <a href="https://publications.waset.org/abstracts/87198/normalizing-scientometric-indicators-of-individual-publications-using-local-cluster-detection-methods-on-citation-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87198.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">205</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">401</span> Vibration Imaging Method for Vibrating Objects with Translation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kohei%20Shimasaki">Kohei Shimasaki</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomoaki%20Okamura"> Tomoaki Okamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Idaku%20Ishii"> Idaku Ishii</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a vibration imaging method for high frame rate (HFR)-video-based localization of vibrating objects with large translations. When the ratio of the translation speed of a target to its vibration frequency is large, obtaining its frequency response in image intensities becomes difficult because one or no waves are observable at the same pixel. Our method can precisely localize moving objects with vibration by virtually translating multiple image sequences for pixel-level short-time Fourier transform to observe multiple waves at the same pixel. The effectiveness of the proposed method is demonstrated by analyzing several HFR videos of flying insects in real scenarios. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HFR%20video%20analysis" title="HFR video analysis">HFR video analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel-level%20vibration%20source%20localization" title=" pixel-level vibration source localization"> pixel-level vibration source localization</a>, <a href="https://publications.waset.org/abstracts/search?q=short-time%20Fourier%20transform" title=" short-time Fourier transform"> short-time Fourier transform</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20translation" title=" virtual translation"> virtual translation</a> </p> <a href="https://publications.waset.org/abstracts/160120/vibration-imaging-method-for-vibrating-objects-with-translation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160120.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">400</span> Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuhanis%20Yusof">Yuhanis Yusof</a>, <a href="https://publications.waset.org/abstracts/search?q=Zuriani%20Mustaffa"> Zuriani Mustaffa</a>, <a href="https://publications.waset.org/abstracts/search?q=Siti%20Sakira%20Kamaruddin"> Siti Sakira Kamaruddin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data mining is a fundamental technique in identifying patterns from large data sets. The extracted facts and patterns contribute in various domains such as marketing, forecasting, and medical. Prior to that, data are consolidated so that the resulting mining process may be more efficient. This study investigates the effect of different data normalization techniques, which are Min-max, Z-score, and decimal scaling, on Swarm-based forecasting models. Recent swarm intelligence algorithms employed includes the Grey Wolf Optimizer (GWO) and Artificial Bee Colony (ABC). Forecasting models are later developed to predict the daily spot price of crude oil and gasoline. Results showed that GWO works better with Z-score normalization technique while ABC produces better accuracy with the Min-Max. Nevertheless, the GWO is more superior that ABC as its model generates the highest accuracy for both crude oil and gasoline price. Such a result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20bee%20colony" title="artificial bee colony">artificial bee colony</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20normalization" title=" data normalization"> data normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=forecasting" title=" forecasting"> forecasting</a>, <a href="https://publications.waset.org/abstracts/search?q=Grey%20Wolf%20optimizer" title=" Grey Wolf optimizer"> Grey Wolf optimizer</a> </p> <a href="https://publications.waset.org/abstracts/18294/investigating-data-normalization-techniques-in-swarm-intelligence-forecasting-for-energy-commodity-spot-price" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">399</span> Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20Datta">U. Datta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main objective of this study is to find a suitable approach to monitor the land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection method is unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from series of multispectral image data over a long period of time, assuming there is no considerable change during that time period and then compare it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. Statistical composite hypothesis technique is used for estimating pixel based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect the changed pixel from probabilistic estimated model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, 8-neighborhood pixels around the pixel under test are also considered. The multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. First and foremost challenge is to get quite a large number of datasets for multivariate distribution modelling. A large number of images are always discarded due to cloud coverage. Due to imperfect modelling there will be high probability of false alarm. Overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=co-registration" title="co-registration">co-registration</a>, <a href="https://publications.waset.org/abstracts/search?q=GLRT" title=" GLRT"> GLRT</a>, <a href="https://publications.waset.org/abstracts/search?q=infrastructure%20growth" title=" infrastructure growth"> infrastructure growth</a>, <a href="https://publications.waset.org/abstracts/search?q=multispectral" title=" multispectral"> multispectral</a>, <a href="https://publications.waset.org/abstracts/search?q=multitemporal" title=" multitemporal"> multitemporal</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel-based%20change%20detection" title=" pixel-based change detection"> pixel-based change detection</a> </p> <a href="https://publications.waset.org/abstracts/117430/infrastructure-change-monitoring-using-multitemporal-multispectral-satellite-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/117430.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">398</span> Normalizing Logarithms of Realized Volatility in an ARFIMA Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=G.%20L.%20C.%20Yap">G. L. C. Yap</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modelling realized volatility with high-frequency returns is popular as it is an unbiased and efficient estimator of return volatility. A computationally simple model is fitting the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies the parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in the finite samples and there may be a need to normalize the financial series. Based on the empirical indices S&P500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that by including normalization as a pre-treatment procedure, the forecast performance outperforms the existing model in terms of statistical and economic evaluations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20process" title="Gaussian process">Gaussian process</a>, <a href="https://publications.waset.org/abstracts/search?q=long-memory" title=" long-memory"> long-memory</a>, <a href="https://publications.waset.org/abstracts/search?q=normalization" title=" normalization"> normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=value-at-risk" title=" value-at-risk"> value-at-risk</a>, <a href="https://publications.waset.org/abstracts/search?q=volatility" title=" volatility"> volatility</a>, <a href="https://publications.waset.org/abstracts/search?q=Whittle%20estimator" title=" Whittle estimator"> Whittle estimator</a> </p> <a href="https://publications.waset.org/abstracts/58573/normalizing-logarithms-of-realized-volatility-in-an-arfima-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">397</span> New Variational Approach for Contrast Enhancement of Color Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wanhyun%20Cho">Wanhyun Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Seongchae%20Seo"> Seongchae Seo</a>, <a href="https://publications.waset.org/abstracts/search?q=Soonja%20Kang"> Soonja Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we propose a variational technique for image contrast enhancement which utilizes global and local information around each pixel. The energy functional is defined by a weighted linear combination of three terms which are called on a local, a global contrast term and dispersion term. The first one is a local contrast term that can lead to improve the contrast of an input image by increasing the grey-level differences between each pixel and its neighboring to utilize contextual information around each pixel. The second one is global contrast term, which can lead to enhance a contrast of image by minimizing the difference between its empirical distribution function and a cumulative distribution function to make the probability distribution of pixel values becoming a symmetric distribution about median. The third one is a dispersion term that controls the departure between new pixel value and pixel value of original image while preserving original image characteristics as well as possible. Second, we derive the Euler-Lagrange equation for true image that can achieve the minimum of a proposed functional by using the fundamental lemma for the calculus of variations. And, we considered the procedure that this equation can be solved by using a gradient decent method, which is one of the dynamic approximation techniques. Finally, by conducting various experiments, we can demonstrate that the proposed method can enhance the contrast of colour images better than existing techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20image" title="color image">color image</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20enhancement%20technique" title=" contrast enhancement technique"> contrast enhancement technique</a>, <a href="https://publications.waset.org/abstracts/search?q=variational%20approach" title=" variational approach"> variational approach</a>, <a href="https://publications.waset.org/abstracts/search?q=Euler-Lagrang%20equation" title=" Euler-Lagrang equation"> Euler-Lagrang equation</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20approximation%20method" title=" dynamic approximation method"> dynamic approximation method</a>, <a href="https://publications.waset.org/abstracts/search?q=EME%20measure" title=" EME measure"> EME measure</a> </p> <a href="https://publications.waset.org/abstracts/10574/new-variational-approach-for-contrast-enhancement-of-color-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10574.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">450</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">396</span> A Nonlocal Means Algorithm for Poisson Denoising Based on Information Geometry</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongxu%20Chen">Dongxu Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Yipeng%20Li"> Yipeng Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an information geometry NonlocalMeans(NLM) algorithm for Poisson denoising. NLM estimates a noise-free pixel as a weighted average of image pixels, where each pixel is weighted according to the similarity between image patches in Euclidean space. In this work, every pixel is a Poisson distribution locally estimated by Maximum Likelihood (ML), all distributions consist of a statistical manifold. A NLM denoising algorithm is conducted on the statistical manifold where Fisher information matrix can be used for computing distribution geodesics referenced as the similarity between patches. This approach was demonstrated to be competitive with related state-of-the-art methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20denoising" title="image denoising">image denoising</a>, <a href="https://publications.waset.org/abstracts/search?q=Poisson%20noise" title=" Poisson noise"> Poisson noise</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20geometry" title=" information geometry"> information geometry</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlocal-means" title=" nonlocal-means"> nonlocal-means</a> </p> <a href="https://publications.waset.org/abstracts/51221/a-nonlocal-means-algorithm-for-poisson-denoising-based-on-information-geometry" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">285</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">395</span> Data Hiding in Gray Image Using ASCII Value and Scanning Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20K.%20Pateriya">R. K. Pateriya</a>, <a href="https://publications.waset.org/abstracts/search?q=Jyoti%20Bharti"> Jyoti Bharti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach for data hiding methods which provides a secret communication between sender and receiver. The data is hidden in gray-scale images and the boundary of gray-scale image is used to store the mapping information. In this an approach data is in ASCII format and the mapping is in between ASCII value of hidden message and pixel value of cover image, since pixel value of an image as well as ASCII value is in range of 0 to 255 and this mapping information is occupying only 1 bit per character of hidden message as compared to 8 bit per character thus maintaining good quality of stego image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ASCII%20value" title="ASCII value">ASCII value</a>, <a href="https://publications.waset.org/abstracts/search?q=cover%20image" title=" cover image"> cover image</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel%20value" title=" pixel value"> pixel value</a>, <a href="https://publications.waset.org/abstracts/search?q=stego%20image" title=" stego image"> stego image</a>, <a href="https://publications.waset.org/abstracts/search?q=secret%20message" title=" secret message"> secret message</a> </p> <a href="https://publications.waset.org/abstracts/50472/data-hiding-in-gray-image-using-ascii-value-and-scanning-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">417</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">394</span> Automatic Moment-Based Texture Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tudor%20Barbu">Tudor Barbu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An automatic moment-based texture segmentation approach is proposed in this paper. First, we describe the related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors. For each image pixel, a texture feature vector is computed as a sequence of area moments. Second, an automatic pixel classification approach is proposed. The feature vectors are clustered using some unsupervised classification algorithm, the optimal number of clusters being determined using a measure based on validation indexes. From the resulted pixel classes one determines easily the desired texture regions of the image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=moment-based" title=" moment-based"> moment-based</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20analysis" title=" texture analysis"> texture analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20classification" title=" automatic classification"> automatic classification</a>, <a href="https://publications.waset.org/abstracts/search?q=validation%20indexes" title=" validation indexes"> validation indexes</a> </p> <a href="https://publications.waset.org/abstracts/3065/automatic-moment-based-texture-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3065.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">417</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">393</span> Automatic Change Detection for High-Resolution Satellite Images of Urban and Suburban Areas</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Antigoni%20Panagiotopoulou">Antigoni Panagiotopoulou</a>, <a href="https://publications.waset.org/abstracts/search?q=Lemonia%20Ragia"> Lemonia Ragia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High-resolution satellite images can provide detailed information about change detection on the earth. In the present work, QuickBird images of spatial resolution 60 cm/pixel and WorldView images of resolution 30 cm/pixel are utilized to perform automatic change detection in urban and suburban areas of Crete, Greece. There is a relative time difference of 13 years among the satellite images. Multiindex scene representation is applied on the images to classify the scene into buildings, vegetation, water and ground. Then, automatic change detection is made possible by pixel-per-pixel comparison of the classified multi-temporal images. The vegetation index and the water index which have been developed in this study prove effective. Furthermore, the proposed change detection approach not only indicates whether changes have taken place or not but also provides specific information relative to the types of changes. Experimentations with other different scenes in the future could help optimize the proposed spectral indices as well as the entire change detection methodology. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title="change detection">change detection</a>, <a href="https://publications.waset.org/abstracts/search?q=multiindex%20scene%20representation" title=" multiindex scene representation"> multiindex scene representation</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20index" title=" spectral index"> spectral index</a>, <a href="https://publications.waset.org/abstracts/search?q=QuickBird" title=" QuickBird"> QuickBird</a>, <a href="https://publications.waset.org/abstracts/search?q=WorldView" title=" WorldView"> WorldView</a> </p> <a href="https://publications.waset.org/abstracts/132460/automatic-change-detection-for-high-resolution-satellite-images-of-urban-and-suburban-areas" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132460.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">392</span> Deepnic, A Method to Transform Each Variable into Image for Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nguyen%20J.%20M.">Nguyen J. M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucas%20G."> Lucas G.</a>, <a href="https://publications.waset.org/abstracts/search?q=Brunner%20M."> Brunner M.</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruan%20S."> Ruan S.</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonioli%20D."> Antonioli D.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning based on convolutional neural networks (CNN) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of 2 vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into images into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct an image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tabular%20data" title="tabular data">tabular data</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=perfect%20trees" title=" perfect trees"> perfect trees</a>, <a href="https://publications.waset.org/abstracts/search?q=NICS" title=" NICS"> NICS</a> </p> <a href="https://publications.waset.org/abstracts/152479/deepnic-a-method-to-transform-each-variable-into-image-for-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152479.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">391</span> Color Image Enhancement Using Multiscale Retinex and Image Fusion Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chang-Hsing%20Lee">Chang-Hsing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng-Chang%20Lien"> Cheng-Chang Lien</a>, <a href="https://publications.waset.org/abstracts/search?q=Chin-Chuan%20Han"> Chin-Chuan Han</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an edge-strength guided multiscale retinex (EGMSR) approach will be proposed for color image contrast enhancement. In EGMSR, the pixel-dependent weight associated with each pixel in the single scale retinex output image is computed according to the edge strength around this pixel in order to prevent from over-enhancing the noises contained in the smooth dark/bright regions. Further, by fusing together the enhanced results of EGMSR and adaptive multiscale retinex (AMSR), we can get a natural fused image having high contrast and proper tonal rendition. Experimental results on several low-contrast images have shown that our proposed approach can produce natural and appealing enhanced images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=multiscale%20retinex" title=" multiscale retinex"> multiscale retinex</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title=" image fusion"> image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=EGMSR" title=" EGMSR"> EGMSR</a> </p> <a href="https://publications.waset.org/abstracts/15139/color-image-enhancement-using-multiscale-retinex-and-image-fusion-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15139.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">390</span> Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Drew%20Commandeur">Drew Commandeur</a>, <a href="https://publications.waset.org/abstracts/search?q=Ryan%20Brodie"> Ryan Brodie</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Hundza"> Sandra Hundza</a>, <a href="https://publications.waset.org/abstracts/search?q=Marc%20Klimstra"> Marc Klimstra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak) or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could result in a misrepresentation of the absolute magnitude of force generated by the muscle and impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS) and the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs N=11) and YOUNG (26.6 yrs N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. 
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson’s correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p<0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated measures ANOVA showed a main effect for age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD differed with each method, resulting in different statistical interpretations between methods. EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, a representation of inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electromyography" title="electromyography">electromyography</a>, <a href="https://publications.waset.org/abstracts/search?q=EMG%20normalization" title=" EMG normalization"> EMG normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20EMG" title=" functional EMG"> functional EMG</a>, <a href="https://publications.waset.org/abstracts/search?q=older%20adults" title=" older adults"> older adults</a> </p> <a href="https://publications.waset.org/abstracts/155978/comparison-of-bioelectric-and-biomechanical-electromyography-normalization-techniques-in-disparate-populations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155978.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">389</span> A New Scheme for Chain Code Normalization in Arabic and Farsi Scripts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reza%20Shakoori">Reza Shakoori</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a structural correction of Arabic and Persian strokes using manipulation of their chain codes in order to improve the rate and performance of Persian and Arabic handwritten word recognition systems. 
It collects pure and effective features to represent a character with one consolidated feature vector and reduces variations in order to decrease the number of training samples and increase the chance of successful classification. Our results also show how the proposed approaches can simplify classification, and consequently recognition, by reducing variations and possible noise in the chain code while preserving the orientation of characters and their backbone structures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabic" title="Arabic">Arabic</a>, <a href="https://publications.waset.org/abstracts/search?q=chain%20code%20normalization" title=" chain code normalization"> chain code normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=OCR%20systems" title=" OCR systems"> OCR systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/27963/a-new-scheme-for-chain-code-normalization-in-arabic-and-farsi-scripts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">405</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">388</span> Sub-Pixel Level Classification Using Remote Sensing For Arecanut Crop</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Athiralakshmi">S. Athiralakshmi</a>, <a href="https://publications.waset.org/abstracts/search?q=B.E.%20Bhojaraja"> B.E. Bhojaraja</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20Pruthviraj"> U. Pruthviraj</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In agriculture, remote sensing is applied for monitoring plant development and evaluating physiological processes and growth conditions. Especially valuable are the spatio-temporal aspects of remotely sensed data in detecting crop state differences and stress situations. In this study, Hyperion imagery is used for classifying arecanut crops based on their age, so that these maps can be used for yield estimation, irrigation, fertilizer application, etc. Traditional hard classifiers assign mixed pixels to the dominant classes. The proposed method uses a sub-pixel level classifier called linear spectral unmixing, available in ENVI software. It provides the relative abundance of surface materials within a pixel, which may be a potential solution for effectively identifying the land-cover distribution. Validation is done with reference to field spectra collected using a spectroradiometer and ground control points obtained from GPS. 
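<p class="card-text">The per-pixel abundance estimate behind linear spectral unmixing can be sketched as a least-squares solve against an endmember matrix; ENVI enforces the abundance constraints properly, which the crude clipping and renormalization below only approximate:</p>
<pre><code class="language-python">
import numpy as np

def linear_unmixing(cube, endmembers):
    """cube: (rows, cols, bands) reflectance; endmembers: (bands, n_materials)
    reference spectra, e.g. field spectra of arecanut stands of different ages.
    Returns per-pixel abundance fractions of each material."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).T                    # (bands, n_pixels)
    abundances, *_ = np.linalg.lstsq(endmembers, pixels, rcond=None)
    abundances = np.clip(abundances, 0.0, None)           # crude non-negativity
    abundances /= abundances.sum(axis=0, keepdims=True) + 1e-9  # crude sum-to-one
    return abundances.T.reshape(rows, cols, -1)
</code></pre>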
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FLAASH" title="FLAASH">FLAASH</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyperspectral%20remote%20sensing" title=" Hyperspectral remote sensing"> Hyperspectral remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=Linear%20Spectral%20Unmixing" title=" Linear Spectral Unmixing"> Linear Spectral Unmixing</a>, <a href="https://publications.waset.org/abstracts/search?q=Spectral%20Angle%20Mapper%20Classifier." title=" Spectral Angle Mapper Classifier. "> Spectral Angle Mapper Classifier. </a> </p> <a href="https://publications.waset.org/abstracts/32732/sub-pixel-level-classification-using-remote-sensing-for-arecanut-crop" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32732.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">519</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">387</span> Subpixel Corner Detection for Monocular Camera Linear Model Research</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guorong%20Sui">Guorong Sui</a>, <a href="https://publications.waset.org/abstracts/search?q=Xingwei%20Jia"> Xingwei Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Fei%20Tong"> Fei Tong</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiumin%20Gao"> Xiumin Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Camera calibration is a fundamental issue of high precision noncontact measurement. And it is necessary to analyze and study the reliability and application range of its linear model which is often used in the camera calibration. According to the imaging features of monocular cameras, a camera model which is based on the image pixel coordinates and three dimensional space coordinates is built. Using our own customized template, the image pixel coordinate is obtained by the subpixel corner detection method. Without considering the aberration of the optical system, the feature extraction and linearity analysis of the line segment in the template are performed. Moreover, the experiment is repeated 11 times by constantly varying the measuring distance. At last, the linearity of the camera is achieved by fitting 11 groups of data. The camera model measurement results show that the relative error does not exceed 1%, and the repeated measurement error is not more than 0.1 mm magnitude. Meanwhile, it is found that the model has some measurement differences in the different region and object distance. The experiment results show this linear model is simple and practical, and have good linearity within a certain object distance. These experiment results provide a powerful basis for establishment of the linear model of camera. These works will have potential value to the actual engineering measurement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20linear%20model" title="camera linear model">camera linear model</a>, <a href="https://publications.waset.org/abstracts/search?q=geometric%20imaging%20relationship" title=" geometric imaging relationship"> geometric imaging relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20pixel%20coordinates" title=" image pixel coordinates"> image pixel coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=three%20dimensional%20space%20coordinates" title=" three dimensional space coordinates"> three dimensional space coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-pixel%20corner%20detection" title=" sub-pixel corner detection"> sub-pixel corner detection</a> </p> <a href="https://publications.waset.org/abstracts/77747/subpixel-corner-detection-for-monocular-camera-linear-model-research" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77747.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">386</span> Extensions of Schwarz Lemma in the Half-Plane</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nicolae%20Pascu">Nicolae Pascu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aside from being a fundamental tool in Complex analysis, Schwarz Lemma-which was finalized in its most complete form at the beginning of the last century-generated an important area of research in various fields of mathematics, which continues to advance even today. We present some properties of analytic functions in the half-plane which satisfy the conditions of the classical Schwarz Lemma (Carathéodory functions) and obtain a generalization of the well-known Aleksandrov-Sobolev Lemma for analytic functions in the half-plane (the correspondent of Schwarz-Pick Lemma from the unit disk). Using this Schwarz-type lemma, we obtain a characterization for the entire class of Carathéodory functions, which might be of independent interest. We prove two monotonicity properties for Carathéodory functions that do not depend upon their normalization at infinity (the hydrodynamic normalization). The method is based on conformal mapping arguments for analytic functions in the half-plane satisfying appropriate conditions, in the spirit of Schwarz lemma. According to the research findings in this paper, our main results give estimates for the modulus and the argument for the entire class of Carathéodory functions. As applications, we give several extensions of Julia-Wolf-Carathéodory Lemma in a half-strip and show that our results are sharp. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=schwarz%20lemma" title="schwarz lemma">schwarz lemma</a>, <a href="https://publications.waset.org/abstracts/search?q=Julia-wolf-carat%C3%A9odory%20lemma" title=" Julia-wolf-caratéodory lemma"> Julia-wolf-caratéodory lemma</a>, <a href="https://publications.waset.org/abstracts/search?q=analytic%20function" title=" analytic function"> analytic function</a>, <a href="https://publications.waset.org/abstracts/search?q=normalization%20condition" title=" normalization condition"> normalization condition</a>, <a href="https://publications.waset.org/abstracts/search?q=carat%C3%A9odory%20function" title=" caratéodory function"> caratéodory function</a> </p> <a href="https://publications.waset.org/abstracts/105458/extensions-of-schwarz-lemma-in-the-half-plane" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105458.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">227</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">385</span> Evaluating the Performance of Color Constancy Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Damanjit%20Kaur">Damanjit Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Avani%20Bhatia"> Avani Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Color constancy is significant for human vision since color is a pictorial cue that helps in solving different visions tasks such as tracking, object recognition, or categorization. Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine colors of objects independent of the color of the light source. This research work studies the most of the well-known color constancy algorithms like white point and gray world. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20constancy" title="color constancy">color constancy</a>, <a href="https://publications.waset.org/abstracts/search?q=gray%20world" title=" gray world"> gray world</a>, <a href="https://publications.waset.org/abstracts/search?q=white%20patch" title=" white patch"> white patch</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20white%20patch" title=" modified white patch "> modified white patch </a> </p> <a href="https://publications.waset.org/abstracts/4799/evaluating-the-performance-of-color-constancy-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4799.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">321</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">384</span> Human Machine Interface for Controlling a Robot Using Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ambuj%20Kumar%20Gautam">Ambuj Kumar Gautam</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Vasu"> V. Vasu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a head movement based Human Machine Interface (HMI) that uses the right and left movements of head to control a robot motion. Here we present an approach for making an effective technique for real-time face orientation information system, to control a robot which can be efficiently used for Electrical Powered Wheelchair (EPW). Basically this project aims at application related to HMI. The system (machine) identifies the orientation of the face movement with respect to the pixel values of image in a certain areas. Initially we take an image and divide that whole image into three parts on the basis of its number of columns. On the basis of orientation of face, maximum pixel value of approximate same range of (R, G, and B value of a pixel) lie in one of divided parts of image. This information we transfer to the microcontroller through serial communication port and control the motion of robot like forward motion, left and right turn and stop in real time by using head movements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electrical%20powered%20wheelchair%20%28EPW%29" title="electrical powered wheelchair (EPW)">electrical powered wheelchair (EPW)</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20machine%20interface%20%28HMI%29" title=" human machine interface (HMI)"> human machine interface (HMI)</a>, <a href="https://publications.waset.org/abstracts/search?q=robotics" title=" robotics"> robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=microcontroller" title=" microcontroller"> microcontroller</a> </p> <a href="https://publications.waset.org/abstracts/10916/human-machine-interface-for-controlling-a-robot-using-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">383</span> A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Zheng">Yuan Zheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 3D model-based vehicle matching provides a new way for vehicle recognition, localization and tracking. Its key is to construct an evaluation function, also called fitness function, to measure the degree of vehicle matching. The existing fitness functions often poorly perform when the clutter and occlusion exist in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike the existing evaluation functions, the proposed fitness function is to study the vehicle matching problem from both local and global perspectives, which exploits the pixel gradient information as well as the silhouette information. In view of the discrepancy between 3D vehicle model and real vehicle, a weighting strategy is introduced to differently treat the fitting of the model’s wireframes. Additionally, a normalization operation for the model’s projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to the cluttered background and partial occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D-2D%20matching" title="3D-2D matching">3D-2D matching</a>, <a href="https://publications.waset.org/abstracts/search?q=fitness%20function" title=" fitness function"> fitness function</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20vehicle%20model" title=" 3D vehicle model"> 3D vehicle model</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20image%20gradient" title=" local image gradient"> local image gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouette%20information" title=" silhouette information"> silhouette information</a> </p> <a href="https://publications.waset.org/abstracts/45357/a-practical-and-efficient-evaluation-function-for-3d-model-based-vehicle-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45357.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=13">13</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=14">14</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pixel%20normalization&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>