Search results for: iris recognition
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="iris recognition"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1733</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: iris recognition</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1733</span> Efficient Feature Fusion for Noise Iris in Unconstrained Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao-Hong%20Tsai">Yao-Hong Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an efficient fusion algorithm for iris images to generate stable feature for recognition in unconstrained environment. Recently, iris recognition systems are focused on real scenarios in our daily life without the subject’s cooperation. Under large variation in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noise iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris image is based on Adaboost algorithm and then local binary pattern (LBP) histogram is then applied to texture classification with the weighting scheme. Experiment showed that the generated features from the proposed fusion algorithm can improve the performance for verification system through iris recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet" title=" wavelet"> wavelet</a> </p> <a href="https://publications.waset.org/abstracts/17027/efficient-feature-fusion-for-noise-iris-in-unconstrained-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17027.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1732</span> A Weighted Approach to Unconstrained Iris Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao-Hong%20Tsai">Yao-Hong Tsai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a weighted approach to unconstrained iris recognition. Nowadays, commercial systems are usually characterized by strong acquisition constraints based on the subject’s cooperation. However, it is not always achievable for real scenarios in our daily life. Researchers have been focused on reducing these constraints and maintaining the performance of the system by new techniques at the same time. With large variation in the environment, there are two main improvements to develop the proposed iris recognition system. For solving extremely uneven lighting condition, statistic based illumination normalization is first used on eye region to increase the accuracy of iris feature. The detection of the iris image is based on Adaboost algorithm. Secondly, the weighted approach is designed by Gaussian functions according to the distance to the center of the iris. Furthermore, local binary pattern (LBP) histogram is then applied to texture classification with the weight. Experiment showed that the proposed system provided users a more flexible and feasible way to interact with the verification system through iris recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=authentication" title="authentication">authentication</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=adaboost" title=" adaboost"> adaboost</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a> </p> <a href="https://publications.waset.org/abstracts/3876/a-weighted-approach-to-unconstrained-iris-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3876.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">224</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1731</span> Size-Reduction Strategies for Iris Codes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jutta%20H%C3%A4mmerle-Uhl">Jutta Hämmerle-Uhl</a>, <a href="https://publications.waset.org/abstracts/search?q=Georg%20Penn"> Georg Penn</a>, <a href="https://publications.waset.org/abstracts/search?q=Gerhard%20P%C3%B6tzelsberger"> Gerhard Pötzelsberger</a>, <a href="https://publications.waset.org/abstracts/search?q=Andreas%20Uhl"> Andreas Uhl</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris codes contain bits with different entropy. This work investigates different strategies to reduce the size of iris code templates with the aim of reducing storage requirements and computational demand in the matching process. Besides simple sub-sampling schemes, also a binary multi-resolution representation as used in the JBIG hierarchical coding mode is assessed. We find that iris code template size can be reduced significantly while maintaining recognition accuracy. Besides, we propose a two stage identification approach, using small-sized iris code templates in a pre-selection satge, and full resolution templates for final identification, which shows promising recognition behaviour. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title="iris recognition">iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=compact%20iris%20code" title=" compact iris code"> compact iris code</a>, <a href="https://publications.waset.org/abstracts/search?q=fast%20matching" title=" fast matching"> fast matching</a>, <a href="https://publications.waset.org/abstracts/search?q=best%20bits" title=" best bits"> best bits</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-selection%20identification" title=" pre-selection identification"> pre-selection identification</a>, <a href="https://publications.waset.org/abstracts/search?q=two-stage%20identification" title=" two-stage identification"> two-stage identification</a> </p> <a href="https://publications.waset.org/abstracts/20877/size-reduction-strategies-for-iris-codes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20877.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1730</span> Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelength Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bamidele%20Samson%20Alobalorun">Bamidele Samson Alobalorun</a>, <a href="https://publications.waset.org/abstracts/search?q=Ifedotun%20Roseline%20Idowu"> Ifedotun Roseline Idowu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics technologies apply the human body parts for their unique and reliable identification based on physiological traits. The iris recognition system is a biometric–based method for identification. The human iris has some discriminating characteristics which provide efficiency to the method. In order to achieve this efficiency, there is a need for feature extraction of the distinct features from the human iris in order to generate accurate authentication of persons. In this study, an approach for an iris recognition system using 2D Gabor for feature extraction is applied to iris templates. The 2D Gabor filter formulated the patterns that were used for training and equally sent to the hamming distance matching technique for recognition. A comparison of results is presented using two iris image subjects of different matching indices of 1,2,3,4,5 filter based on the CASIA iris image database. By comparing the two subject results, the actual computational time of the developed models, which is measured in terms of training and average testing time in processing the hamming distance classifier, is found with best recognition accuracy of 96.11% after capturing the iris localization or segmentation using the Daughman’s Integro-differential, the normalization is confined to the Daugman’s rubber sheet model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daugman%20rubber%20sheet" title="Daugman rubber sheet">Daugman rubber sheet</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamming%20distance" title=" Hamming distance"> Hamming distance</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition%20system" title=" iris recognition system"> iris recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=2D%20Gabor%20wavelet%20transform" title=" 2D Gabor wavelet transform"> 2D Gabor wavelet transform</a> </p> <a href="https://publications.waset.org/abstracts/170345/iris-feature-extraction-and-recognition-based-on-two-dimensional-gabor-wavelength-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170345.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">65</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1729</span> Effects of Reversible Watermarking on Iris Recognition Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrew%20Lock">Andrew Lock</a>, <a href="https://publications.waset.org/abstracts/search?q=Alastair%20Allen"> Alastair Allen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fragile watermarking has been proposed as a means of adding additional security or functionality to biometric systems, particularly for authentication and tamper detection. In this paper we describe an experimental study on the effect of watermarking iris images with a particular class of fragile algorithm, reversible algorithms, and the ability to correctly perform iris recognition. We investigate two scenarios, matching watermarked images to unmodified images, and matching watermarked images to watermarked images. We show that different watermarking schemes give very different results for a given capacity, highlighting the importance of investigation. At high embedding rates most algorithms cause significant reduction in recognition performance. However, in many cases, for low embedding rates, recognition accuracy is improved by the watermarking process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=reversible%20watermarking" title=" reversible watermarking"> reversible watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20engineering" title=" vision engineering"> vision engineering</a> </p> <a href="https://publications.waset.org/abstracts/9663/effects-of-reversible-watermarking-on-iris-recognition-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9663.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">456</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1728</span> Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rabab%20M.%20Ramadan">Rabab M. Ramadan</a>, <a href="https://publications.waset.org/abstracts/search?q=Elaraby%20A.%20Elgallad"> Elaraby A. Elgallad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With extensive application, the performance of unimodal biometrics systems has to face a diversity of problems such as signal and background noise, distortion, and environment differences. Therefore, multimodal biometric systems are proposed to solve the above stated problems. This paper introduces a bimodal biometric recognition system based on the extracted features of the human palm print and iris. Palm print biometric is fairly a new evolving technology that is used to identify people by their palm features. The iris is a strong competitor together with face and fingerprints for presence in multimodal recognition systems. In this research, we introduced an algorithm to the combination of the palm and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous as features of different biometric modalities are used, these features will be concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature. The proposed algorithm will be applied to the Institute of Technology of Delhi (IITD) database and its performance will be compared with various iris recognition algorithms found in the literature. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title="iris recognition">iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20swarm%20optimization" title=" particle swarm optimization"> particle swarm optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=palm%20print" title=" palm print"> palm print</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20Scale%20Invariant%20Feature%20Transform%20%28SIFT%29" title=" the Scale Invariant Feature Transform (SIFT)"> the Scale Invariant Feature Transform (SIFT)</a> </p> <a href="https://publications.waset.org/abstracts/90535/implementation-of-a-multimodal-biometrics-recognition-system-with-combined-palm-print-and-iris-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90535.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">235</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1727</span> Iris Recognition Based on the Low Order Norms of Gradient Components</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iman%20A.%20Saad">Iman A. Saad</a>, <a href="https://publications.waset.org/abstracts/search?q=Loay%20E.%20George"> Loay E. George</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris pattern is an important biological feature of human body; it becomes very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition and a simple, efficient and fast method is introduced to extract a set of discriminatory features using first order gradient operator applied on grayscale images. The gradient based features are robust, up to certain extents, against the variations may occur in contrast or brightness of iris image samples; the variations are mostly occur due lightening differences and camera changes. At first, the iris region is located, after that it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it depends on making image statistical analysis, to mark the eyelash and eyelid as a noise points. In order to cover the features localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then from each block a set of different average directional gradient densities values is calculated to be used as texture features vector. The applied gradient operators are taken along the horizontal, vertical and diagonal directions. The low order norms of gradient components were used to establish the feature vector. Euclidean distance based classifier was used as a matching metric for determining the degree of similarity between the features vector extracted from the tested iris image and template features vectors stored in the database. 
1726. Developing a Secure Iris Recognition System by Using Advance Convolutional Neural Network
Authors: Kamyar Fakhr, Roozbeh Salmani
Abstract: Alphonse Bertillon developed the first biometric security system in the 1800s. Today, many governments and giant companies are considering, or have procured, biometrically enabled security schemes. The iris is a kaleidoscope of patterns and colors; each individual holds a set of irises more unique than their thumbprint. Every day, giant companies like Google and Apple experiment with reliable biometric systems. Yet, after almost 200 years of improvements, face ID does not work with masks, it grants access to fake 3D images, and there is no global use of biometric recognition systems as national identity (ID) cards. The goal of this paper is to demonstrate the advantages of iris recognition over other biometric recognition systems. It makes two contributions: first, we illustrate how a very large amount of internet fraud and cyber abuse happens due to bugs in face recognition systems, drawing on a very large dataset of 3.4M people; second, we discuss how establishing a secure global network of iris recognition devices connected to authoritative convolutional neural networks could be the safest solution to this dilemma. Another aim of this study is to provide a system that prevents infiltration caused by cyber-attacks and blocks all wireframes to the data until the main user ceases the procedure.
Keywords: biometric system, convolutional neural network, cyber-attack, secure
Procedia: https://publications.waset.org/abstracts/135501/developing-a-secure-iris-recognition-system-by-using-advance-convolutional-neural-network | PDF: https://publications.waset.org/abstracts/135501.pdf | Downloads: 218
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20system" title="biometric system">biometric system</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=cyber-attack" title=" cyber-attack"> cyber-attack</a>, <a href="https://publications.waset.org/abstracts/search?q=secure" title=" secure"> secure</a> </p> <a href="https://publications.waset.org/abstracts/135501/developing-a-secure-iris-recognition-system-by-using-advance-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135501.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">218</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1725</span> Dual Biometrics Fusion Based Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prakash">Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=Vikash%20Kumar"> Vikash Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Bansal"> Vinay Bansal</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20N.%20Das"> L. N. Das</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dual biometrics is a subpart of multimodal biometrics, which refers to the use of a variety of modalities to identify and authenticate persons rather than just one. We limit the risks of mistakes by mixing several modals, and hackers have a tiny possibility of collecting information. Our goal is to collect the precise characteristics of iris and palmprint, produce a fusion of both methodologies, and ensure that authentication is only successful when the biometrics match a particular user. After combining different modalities, we created an effective strategy with a mean DI and EER of 2.41 and 5.21, respectively. A biometric system has been proposed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=palmprint" title=" palmprint"> palmprint</a>, <a href="https://publications.waset.org/abstracts/search?q=Iris" title=" Iris"> Iris</a>, <a href="https://publications.waset.org/abstracts/search?q=EER" title=" EER"> EER</a>, <a href="https://publications.waset.org/abstracts/search?q=DI" title=" DI"> DI</a> </p> <a href="https://publications.waset.org/abstracts/149996/dual-biometrics-fusion-based-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149996.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1724</span> Biometric Recognition Techniques: A Survey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shabir%20Ahmad%20Sofi">Shabir Ahmad Sofi</a>, <a href="https://publications.waset.org/abstracts/search?q=Shubham%20Aggarwal"> Shubham Aggarwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanyam%20Singhal"> Sanyam Singhal</a>, <a href="https://publications.waset.org/abstracts/search?q=Roohie%20Naaz"> Roohie Naaz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric recognition refers to an automatic recognition of individuals based on a feature vector(s) derived from their physiological and/or behavioral characteristic. Biometric recognition systems should provide a reliable personal recognition schemes to either confirm or determine the identity of an individual. These features are used to provide an authentication for computer based security systems. Applications of such a system include computer systems security, secure electronic banking, mobile phones, credit cards, secure access to buildings, health and social services. By using biometrics a person could be identified based on 'who she/he is' rather than 'what she/he has' (card, token, key) or 'what she/he knows' (password, PIN). In this paper, a brief overview of biometric methods, both unimodal and multimodal and their advantages and disadvantages, will be presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric" title="biometric">biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=DNA" title=" DNA"> DNA</a>, <a href="https://publications.waset.org/abstracts/search?q=fingerprint" title=" fingerprint"> fingerprint</a>, <a href="https://publications.waset.org/abstracts/search?q=ear" title=" ear"> ear</a>, <a href="https://publications.waset.org/abstracts/search?q=face" title=" face"> face</a>, <a href="https://publications.waset.org/abstracts/search?q=retina%20scan" title=" retina scan"> retina scan</a>, <a href="https://publications.waset.org/abstracts/search?q=gait" title=" gait"> gait</a>, <a href="https://publications.waset.org/abstracts/search?q=iris" title=" iris"> iris</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=unimodal%20biometric" title=" unimodal biometric"> unimodal biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20biometric" title=" multimodal biometric"> multimodal biometric</a> </p> <a href="https://publications.waset.org/abstracts/15520/biometric-recognition-techniques-a-survey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">755</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1723</span> Biimodal Biometrics System Using Fusion of Iris and Fingerprint</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Attallah%20Bilal">Attallah Bilal</a>, <a href="https://publications.waset.org/abstracts/search?q=Hendel%20Fatiha"> Hendel Fatiha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes the bimodal biometrics system for identity verification iris and fingerprint, at matching score level architecture using weighted sum of score technique. The features are extracted from the pre processed images of iris and fingerprint. These features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps i.e., normalization, generation of similarity score and fusion of weighted scores. The final score is then used to declare the person as genuine or an impostor. The system is tested on CASIA database and gives an overall accuracy of 91.04% with FAR of 2.58% and FRR of 8.34%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris" title="iris">iris</a>, <a href="https://publications.waset.org/abstracts/search?q=fingerprint" title=" fingerprint"> fingerprint</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20rule" title=" sum rule"> sum rule</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a> </p> <a href="https://publications.waset.org/abstracts/18556/biimodal-biometrics-system-using-fusion-of-iris-and-fingerprint" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18556.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">368</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1722</span> Iris Cancer Detection System Using Image Processing and Neural Classifier</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdulkader%20Helwan">Abdulkader Helwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris cancer, so called intraocular melanoma is a cancer that starts in the iris; the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system since the available techniques used currently are still not efficient. The combination of the image processing and artificial neural networks has a great efficiency for the diagnosis and detection of the iris cancer. Image processing techniques improve the diagnosis of the cancer by enhancing the quality of the images, so the physicians diagnose properly. However, neural networks can help in making decision; whether the eye is cancerous or not. This paper aims to develop an intelligent system that stimulates a human visual detection of the intraocular melanoma, so called iris cancer. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using prewitt edge detection algorithm to detect the iris, sclera circles and the cancer. The principal component analysis is used to reduce the image size and for extracting features. Those features are considered then as inputs for a neural network which is capable of deciding if the eye is cancerous or not, throughout its experience adopted by many training iterations of different normal and abnormal eye images during the training phase. Normal images are obtained from a public database available on the internet, “Mile Research”, while the abnormal ones are obtained from another database which is the “eyecancer”. The experimental results for the proposed system show high accuracy 100% for detecting cancer and making the right decision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20cancer" title="iris cancer">iris cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=intraocular%20melanoma" title=" intraocular melanoma"> intraocular melanoma</a>, <a href="https://publications.waset.org/abstracts/search?q=cancerous" title=" cancerous"> cancerous</a>, <a href="https://publications.waset.org/abstracts/search?q=prewitt%20edge%20detection%20algorithm" title=" prewitt edge detection algorithm"> prewitt edge detection algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=sclera" title=" sclera"> sclera</a> </p> <a href="https://publications.waset.org/abstracts/16796/iris-cancer-detection-system-using-image-processing-and-neural-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16796.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">503</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1721</span> Comparative Evaluation of Postoperative Cosmesis, Mydriasis and Anterior Chamber Morphology after Single-Pass Four-Throw Pupilloplasty between Traumatic and Congenital Iris Defects</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20P.%20Singh">S. P. Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Shweta%20Gupta"> Shweta Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Kshama%20Dwivedi"> Kshama Dwivedi</a>, <a href="https://publications.waset.org/abstracts/search?q=Shivangi%20Singh"> Shivangi Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aim: To compare the postoperative pupil cosmesis, mydriasis, and anterior chamber depth (ACD) in traumatic and congenital iris defects after Single-Pass Four-Throw pupilloplasty (SFTP). Method: SFTP was performed along with cataract surgery in 6 patients, each of congenital and traumatic iris defects and pupil size, mydriasis, and ACD was compared after three months. Results: SFTP was successful in repairing congenital and traumatic cases except in 1 traumatic case with a large iris defect. Horizontal pupil diameter decreased while ACD increased in both groups and was comparable between the two groups. The traumatic group showed a significant decrease in pupil diameter while there was an insignificant change in the horizontal pupil diameter in the congenital group. Mydriasis was adequate for fundus examination and was comparable between the two groups. The effect of SFTP on ACD was inconclusive due to the confounding effect of cataract surgery. The incidence of iris atrophy was equal in both groups. Conclusion: SFTP results in anatomical and functional restoration in cases of iris defects with no inadvertent effect on mydriasis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anterior%20chamber%20depth" title="anterior chamber depth">anterior chamber depth</a>, <a href="https://publications.waset.org/abstracts/search?q=mydriasis" title=" mydriasis"> mydriasis</a>, <a href="https://publications.waset.org/abstracts/search?q=pupil%20cosmesis" title=" pupil cosmesis"> pupil cosmesis</a>, <a href="https://publications.waset.org/abstracts/search?q=single-pass%20four-throw%20pupilloplasty" title=" single-pass four-throw pupilloplasty"> single-pass four-throw pupilloplasty</a> </p> <a href="https://publications.waset.org/abstracts/147385/comparative-evaluation-of-postoperative-cosmesis-mydriasis-and-anterior-chamber-morphology-after-single-pass-four-throw-pupilloplasty-between-traumatic-and-congenital-iris-defects" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147385.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1720</span> Iris Detection on RGB Image for Controlling Side Mirror</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Norzalina%20Othman">Norzalina Othman</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Na%E2%80%99imy%20Wan"> Nurul Na’imy Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Azliza%20Mohd%20Rusli"> Azliza Mohd Rusli</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Noor%20Syahirah%20Meor%20Idris"> Wan Noor Syahirah Meor Idris</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris detection is a process where the position of the eyes is extracted from the face images. It is a current method used for many applications such as for security purpose and drowsiness detection. This paper proposes the use of eyes detection in controlling side mirror of motor vehicles. The eyes detection method aims to make driver easy to adjust the side mirrors automatically. The system will determine the midpoint coordinate of eyes detection on RGB (color) image and the input signal from y-coordinate will send it to controller in order to rotate the angle of side mirror on vehicle. The eye position was cropped and the coordinate of midpoint was successfully detected from the circle of iris detection using Viola Jones detection and circular Hough transform methods on RGB image. The coordinate of midpoint from the experiment are tested using controller to determine the angle of rotation on the side mirrors. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20detection" title="iris detection">iris detection</a>, <a href="https://publications.waset.org/abstracts/search?q=midpoint%20coordinates" title=" midpoint coordinates"> midpoint coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20images" title=" RGB images"> RGB images</a>, <a href="https://publications.waset.org/abstracts/search?q=side%20mirror" title=" side mirror"> side mirror</a> </p> <a href="https://publications.waset.org/abstracts/8133/iris-detection-on-rgb-image-for-controlling-side-mirror" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1719</span> Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Piyush%20Samant">Piyush Samant</a>, <a href="https://publications.waset.org/abstracts/search?q=Ravinder%20Agarwal"> Ravinder Agarwal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Complementary and Alternative Medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is more than 150 years old CAM technique which analyzes the patterns, tissue weakness, color, shape, structure, etc. for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of the diabetes. The suggested model was applied in a systemic disease with ocular effects. 200 subject data of 100 each diabetic and non-diabetic were evaluated. Complete procedure was kept very simple and free from the involvement of any iridologist. From the normalized iris, the region of interest was cropped. All 63 features were extracted using statistical, texture analysis, and two-dimensional discrete wavelet transformation. A comparison of accuracies of six different classifiers has been presented. The result shows 89.66% accuracy by the random forest classifier. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=complementary%20and%20alternative%20medicine" title="complementary and alternative medicine">complementary and alternative medicine</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=iridology" title=" iridology"> iridology</a>, <a href="https://publications.waset.org/abstracts/search?q=iris" title=" iris"> iris</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=disease%20prediction" title=" disease prediction"> disease prediction</a> </p> <a href="https://publications.waset.org/abstracts/64053/diagnosis-of-diabetes-using-computer-methods-soft-computing-methods-for-diabetes-detection-using-iris" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64053.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">407</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1718</span> Clustering Performance Analysis using New Correlation-Based Cluster Validity Indices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nathakhun%20Wiroonsri">Nathakhun Wiroonsri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are various cluster validity measures used for evaluating clustering results. One of the main objectives of using these measures is to seek the optimal unknown number of clusters. Some measures work well for clusters with different densities, sizes and shapes. Yet, one of the weaknesses that those validity measures share is that they sometimes provide only one clear optimal number of clusters. That number is actually unknown and there might be more than one potential sub-optimal option that a user may wish to choose based on different applications. We develop two new cluster validity indices based on a correlation between an actual distance between a pair of data points and a centroid distance of clusters that the two points are located in. Our proposed indices constantly yield several peaks at different numbers of clusters which overcome the weakness previously stated. Furthermore, the introduced correlation can also be used for evaluating the quality of a selected clustering result. Several experiments in different scenarios, including the well-known iris data set and a real-world marketing application, have been conducted to compare the proposed validity indices with several well-known ones. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering%20algorithm" title="clustering algorithm">clustering algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=cluster%20validity%20measure" title=" cluster validity measure"> cluster validity measure</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20partitions" title=" data partitions"> data partitions</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20data%20set" title=" iris data set"> iris data set</a>, <a href="https://publications.waset.org/abstracts/search?q=marketing" title=" marketing"> marketing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/147709/clustering-performance-analysis-using-new-correlation-based-cluster-validity-indices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147709.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1717</span> Handwriting Recognition of Gurmukhi Script: A Survey of Online and Offline Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ravneet%20Kaur">Ravneet Kaur</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Character recognition is a very interesting area of pattern recognition. From past few decades, an intensive research on character recognition for Roman, Chinese, and Japanese and Indian scripts have been reported. In this paper, a review of Handwritten Character Recognition work on Indian Script Gurmukhi is being highlighted. Most of the published papers were summarized, various methodologies were analysed and their results are reported. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gurmukhi%20character%20recognition" title="Gurmukhi character recognition">Gurmukhi character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=online" title=" online"> online</a>, <a href="https://publications.waset.org/abstracts/search?q=offline" title=" offline"> offline</a>, <a href="https://publications.waset.org/abstracts/search?q=HCR%20survey" title=" HCR survey"> HCR survey</a> </p> <a href="https://publications.waset.org/abstracts/46337/handwriting-recognition-of-gurmukhi-script-a-survey-of-online-and-offline-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1716</span> OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Bagirzade">A. R. Bagirzade</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sh.%20Najafova"> A. Sh. Najafova</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Yessirkepova"> S. M. Yessirkepova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20S.%20Albert"> E. S. Albert</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program. It describes automatic text recognition in images. OCR is necessary because optical input devices can only transmit raster graphics as a result. Text recognition describes the task of recognizing letters shown as such, to identify and assign them an assigned numerical value in accordance with the usual text encoding (ASCII, Unicode). The peculiarity of this study conducted by the authors using the example of the ABBYY FineReader, was confirmed and shown in practice, the improvement of digital text recognition platforms developed by Electronic Publication. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ABBYY%20FineReader%20system" title="ABBYY FineReader system">ABBYY FineReader system</a>, <a href="https://publications.waset.org/abstracts/search?q=algorithm%20symbol%20recognition" title=" algorithm symbol recognition"> algorithm symbol recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=OCR%2FICR%20techniques" title=" OCR/ICR techniques"> OCR/ICR techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20technologies" title=" recognition technologies"> recognition technologies</a> </p> <a href="https://publications.waset.org/abstracts/130255/ocricr-text-recognition-using-abbyy-finereader-as-an-example-text" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130255.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1715</span> An Improved OCR Algorithm on Appearance Recognition of Electronic Components Based on Self-adaptation of Multifont Template</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhu-Qing%20Jia">Zhu-Qing Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Lin"> Tao Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Tong%20Zhou"> Tong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The recognition method of Optical Character Recognition has been expensively utilized, while it is rare to be employed specifically in recognition of electronic components. This paper suggests a high-effective algorithm on appearance identification of integrated circuit components based on the existing methods of character recognition, and analyze the pros and cons. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20character%20recognition" title="optical character recognition">optical character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20page%20identification" title=" fuzzy page identification"> fuzzy page identification</a>, <a href="https://publications.waset.org/abstracts/search?q=mutual%20correlation%20matrix" title=" mutual correlation matrix"> mutual correlation matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=confidence%20self-adaptation" title=" confidence self-adaptation"> confidence self-adaptation</a> </p> <a href="https://publications.waset.org/abstracts/14322/an-improved-ocr-algorithm-on-appearance-recognition-of-electronic-components-based-on-self-adaptation-of-multifont-template" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14322.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">540</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1714</span> Facial Recognition on the Basis of Facial Fragments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk">Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Bonilla%20Meza"> Sandra Bonilla Meza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild<em>) </em>face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=labeled%20faces%20in%20the%20wild%20%28LFW%29%20database" title=" labeled faces in the wild (LFW) database"> labeled faces in the wild (LFW) database</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20local%20descriptor%20%28RLD%29" title=" random local descriptor (RLD)"> random local descriptor (RLD)</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20features" title=" random features"> random features</a> </p> <a href="https://publications.waset.org/abstracts/50117/facial-recognition-on-the-basis-of-facial-fragments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50117.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1713</span> DBN-Based Face Recognition System Using Light Field</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bing%20Gu">Bing Gu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Abstract—Most of Conventional facial recognition systems are based on image features, such as LBP, SIFT. Recently some DBN-based 2D facial recognition systems have been proposed. However, we find there are few DBN-based 3D facial recognition system and relative researches. 3D facial images include all the individual biometric information. We can use these information to build more accurate features, So we present our DBN-based face recognition system using Light Field. We can see Light Field as another presentation of 3D image, and Light Field Camera show us a way to receive a Light Field. We use the commercially available Light Field Camera to act as the collector of our face recognition system, and the system receive a state-of-art performance as convenient as conventional 2D face recognition system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DBN" title="DBN">DBN</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20field" title=" light field"> light field</a>, <a href="https://publications.waset.org/abstracts/search?q=Lytro" title=" Lytro"> Lytro</a> </p> <a href="https://publications.waset.org/abstracts/10821/dbn-based-face-recognition-system-using-light-field" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10821.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1712</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. This system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system.The idea behind designing and creating a face recognition system using deep learning with Azure ML Python's OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given in 98.46% accuracy using Fast-RCNN Performance of algorithms under different training conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1711</span> Automatic Teller Machine System Security by Using Mobile SMS Code </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Husnain%20Mushtaq">Husnain Mushtaq</a>, <a href="https://publications.waset.org/abstracts/search?q=Mary%20Anjum"> Mary Anjum</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Aleem"> Muhammad Aleem </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main objective of this paper is used to develop a high security in Automatic Teller Machine (ATM). In these system bankers will collect the mobile numbers from the customers and then provide a code on their mobile number. In most country existing ATM machine use the magnetic card reader. The customer is identifying by inserting an ATM card with magnetic card that hold unique information such as card number and some security limitations. By entering a personal identification number, first the customer is authenticated then will access bank account in order to make cash withdraw or other services provided by the bank. Cases of card fraud are another problem once the user’s bank card is missing and the password is stolen, or simply steal a customer’s card & PIN the criminal will draw all cash in very short time, which will being great financial losses in customer, this type of fraud has increase worldwide. So to resolve this problem we are going to provide the solution using “Mobile SMS code” and ATM “PIN code” in order to improve the verify the security of customers using ATM system and confidence in the banking area. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PIN" title="PIN">PIN</a>, <a href="https://publications.waset.org/abstracts/search?q=inquiry" title=" inquiry"> inquiry</a>, <a href="https://publications.waset.org/abstracts/search?q=biometric" title=" biometric"> biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetic%20strip" title=" magnetic strip"> magnetic strip</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a> </p> <a href="https://publications.waset.org/abstracts/31509/automatic-teller-machine-system-security-by-using-mobile-sms-code" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31509.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1710</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. In this paper, the results obtained for the emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison between the classifiers build from facial data only, voice data only and from the combination of both data is made here. The need for a better combination of the information from facial expression and voice data is argued. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/42384/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1709</span> Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Nhan%20Nguyen">Van Nhan Nguyen</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20Holone"> Harald Holone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators for with the aim of safety improvements, air traffic controller workload measurement and conducting analysis on large quantities controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers who are interested bringing automatic speech recognition into ATC, this paper gives an overview of possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=asr" title=" asr"> asr</a>, <a href="https://publications.waset.org/abstracts/search?q=air%20traffic%20control" title=" air traffic control"> air traffic control</a>, <a href="https://publications.waset.org/abstracts/search?q=atc" title=" atc"> atc</a> </p> <a href="https://publications.waset.org/abstracts/31004/possibilities-challenges-and-the-state-of-the-art-of-automatic-speech-recognition-in-air-traffic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1708</span> A Contribution to Human Activities Recognition Using Expert System Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malika%20Yaici">Malika Yaici</a>, <a href="https://publications.waset.org/abstracts/search?q=Soraya%20Aloui"> Soraya Aloui</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Semchaoui"> Sara Semchaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper deals with human activity recognition from sensor data. It is an active research area, and the main objective is to obtain a high recognition rate. In this work, a recognition system based on expert systems is proposed; the recognition is performed using the objects, object states, and gestures and taking into account the context (the location of the objects and of the person performing the activity, the duration of the elementary actions and the activity). The system recognizes complex activities after decomposing them into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=ubiquitous%20computing" title=" ubiquitous computing"> ubiquitous computing</a>, <a href="https://publications.waset.org/abstracts/search?q=context-awareness" title=" context-awareness"> context-awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20system" title=" expert system"> expert system</a> </p> <a href="https://publications.waset.org/abstracts/171721/a-contribution-to-human-activities-recognition-using-expert-system-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171721.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1707</span> Switching to the Latin Alphabet in Kazakhstan: A Brief Overview of Character Recognition Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ainagul%20Yermekova">Ainagul Yermekova</a>, <a href="https://publications.waset.org/abstracts/search?q=Liudmila%20Goncharenko"> Liudmila Goncharenko</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Baghirzade"> Ali Baghirzade</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergey%20Sybachin"> Sergey Sybachin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article, we address the problem of Kazakhstan's transition to the Latin alphabet. The transition process started in 2017 and is scheduled to be completed in 2025. In connection with these events, the problem of recognizing the characters of the new alphabet is raised. Well-known character recognition programs such as ABBYY FineReader, FormReader, MyScript Stylus did not recognize specific Kazakh letters that were used in Cyrillic. The author tries to give an assessment of the well-known method of character recognition that could be in demand as part of the country's transition to the Latin alphabet. Three methods of character recognition: template, structured, and feature-based, are considered through the algorithms of operation. At the end of the article, a general conclusion is made about the possibility of applying a certain method to a particular recognition process: for example, in the process of population census, recognition of typographic text in Latin, or recognition of photos of car numbers, store signs, etc. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title="text detection">text detection</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20method" title=" template method"> template method</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20algorithm" title=" recognition algorithm"> recognition algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=structured%20method" title=" structured method"> structured method</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20method" title=" feature method"> feature method</a> </p> <a href="https://publications.waset.org/abstracts/138734/switching-to-the-latin-alphabet-in-kazakhstan-a-brief-overview-of-character-recognition-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">186</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1706</span> Recognizing an Individual, Their Topic of Conversation and Cultural Background from 3D Body Movement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gheida%20J.%20Shahrour">Gheida J. Shahrour</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20J.%20Russell"> Martin J. Russell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 3D body movement signals captured during human-human conversation include clues not only to the content of people’s communication but also to their culture and personality. This paper is concerned with automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects, arranged them into groups according to their culture. We arranged each group into pairs and each pair communicated with each other about different topics. A state-of-art recognition system is applied to the problems of person, culture, and topic recognition. We borrowed modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% from the person, culture, and topic recognition systems respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition respectively. Although direct comparison among these three recognition systems is difficult, it seems that our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e. subject’s personality traits) are a major source of variation. When removing these traits from culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and the Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy from culture and topic recognition systems respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=person%20recognition" title="person recognition">person recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20recognition" title=" topic recognition"> topic recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=culture%20recognition" title=" culture recognition"> culture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20body%20movement%20signals" title=" 3D body movement signals"> 3D body movement signals</a>, <a href="https://publications.waset.org/abstracts/search?q=variability%20compensation" title=" variability compensation"> variability compensation</a> </p> <a href="https://publications.waset.org/abstracts/19473/recognizing-an-individual-their-topic-of-conversation-and-cultural-background-from-3d-body-movement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19473.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">541</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1705</span> Human Activities Recognition Based on Expert System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malika%20Yaici">Malika Yaici</a>, <a href="https://publications.waset.org/abstracts/search?q=Soraya%20Aloui"> Soraya Aloui</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Semchaoui"> Sara Semchaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of human activities from sensor data is an active research area, and the main objective is to obtain a high recognition rate. In this work, we propose a recognition system based on expert systems. The proposed system makes the recognition based on the objects, object states, and gestures, taking into account the context (the location of the objects and of the person performing the activity, the duration of the elementary actions, and the activity). This work focuses on complex activities which are decomposed into simple easy to recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=ubiquitous%20computing" title=" ubiquitous computing"> ubiquitous computing</a>, <a href="https://publications.waset.org/abstracts/search?q=context-awareness" title=" context-awareness"> context-awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20system" title=" expert system"> expert system</a> </p> <a href="https://publications.waset.org/abstracts/151943/human-activities-recognition-based-on-expert-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1704</span> Enhanced Face Recognition with Daisy Descriptors Using 1BT Based Registration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sevil%20Igit">Sevil Igit</a>, <a href="https://publications.waset.org/abstracts/search?q=Merve%20Meric"> Merve Meric</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarp%20Erturk"> Sarp Erturk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, it is proposed to improve Daisy descriptor based face recognition using a novel One-Bit Transform (1BT) based pre-registration approach. The 1BT based pre-registration procedure is fast and has low computational complexity. It is shown that the face recognition accuracy is improved with the proposed approach. The proposed approach can facilitate highly accurate face recognition using DAISY descriptor with simple matching and thereby facilitate a low-complexity approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Daisy%20descriptor" title=" Daisy descriptor"> Daisy descriptor</a>, <a href="https://publications.waset.org/abstracts/search?q=One-Bit%20Transform" title=" One-Bit Transform"> One-Bit Transform</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title=" image registration"> image registration</a> </p> <a href="https://publications.waset.org/abstracts/12593/enhanced-face-recognition-with-daisy-descriptors-using-1bt-based-registration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12593.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=57">57</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=58">58</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=iris%20recognition&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a 
href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>