CINXE.COM

Search results for: face recognition.

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: face recognition.</title> <meta name="description" content="Search results for: face recognition."> <meta name="keywords" content="face recognition."> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="face recognition." name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="face recognition."> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1297</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: face recognition.</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1297</span> Face Recognition: A Literature Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20S.%20Tolba">A. S. Tolba</a>, <a href="https://publications.waset.org/search?q=A.H.%20El-Baz"> A.H. El-Baz</a>, <a href="https://publications.waset.org/search?q=A.A.%20El-Harby"> A.A. El-Harby</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The task of face recognition has been actively researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented. Description and limitations of face databases which are used to test the performance of these face recognition algorithms are given. A brief summary of the face recognition vendor test (FRVT) 2002, a large scale evaluation of automatic face recognition technology, and its conclusions are also given. Finally, we give a summary of the research results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Combined%20classifiers" title="Combined classifiers">Combined classifiers</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=graph%20matching" title=" graph matching"> graph matching</a>, <a href="https://publications.waset.org/search?q=neural%20networks." title=" neural networks."> neural networks.</a> </p> <a href="https://publications.waset.org/7912/face-recognition-a-literature-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7912/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7912/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7912/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7912/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7912/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7912/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7912/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7912/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7912/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7912/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">7723</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1296</span> Practical Aspects of Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Vural">S. Vural</a>, <a href="https://publications.waset.org/search?q=H.%20Yamauchi"> H. Yamauchi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Current systems for face recognition techniques often use either SVM or Adaboost techniques for face detection part and use PCA for face recognition part. In this paper, we offer a novel method for not only a powerful face detection system based on Six-segment-filters (SSR) and Adaboost learning algorithms but also for a face recognition system. A new exclusive face detection algorithm has been developed and connected with the recognition algorithm. As a result of it, we obtained an overall high-system performance compared with current systems. The proposed algorithm was tested on CMU, FERET, UNIBE, MIT face databases and significant performance has obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Adaboost" title="Adaboost">Adaboost</a>, <a href="https://publications.waset.org/search?q=Face%20Detection" title=" Face Detection"> Face Detection</a>, <a href="https://publications.waset.org/search?q=Face%20recognition" title=" Face recognition"> Face recognition</a>, <a href="https://publications.waset.org/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/search?q=Gabor%20filters" title=" Gabor filters"> Gabor filters</a>, <a href="https://publications.waset.org/search?q=PCA-ICA." title=" PCA-ICA."> PCA-ICA.</a> </p> <a href="https://publications.waset.org/3670/practical-aspects-of-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3670/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3670/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3670/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3670/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3670/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3670/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3670/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3670/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3670/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3670/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1598</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1295</span> A New Biologically Inspired Pattern Recognition Spproach for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=V.%20Kabeer">V. Kabeer</a>, <a href="https://publications.waset.org/search?q=N.K.Narayanan"> N.K.Narayanan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper reports a new pattern recognition approach for face recognition. The biological model of light receptors - cones and rods in human eyes and the way they are associated with pattern vision in human vision forms the basis of this approach. The functional model is simulated using CWD and WPD. The paper also discusses the experiments performed for face recognition using the features extracted from images in the AT &amp; T face database. Artificial Neural Network and k- Nearest Neighbour classifier algorithms are employed for the recognition purpose. A feature vector is formed for each of the face images in the database and recognition accuracies are computed and compared using the classifiers. Simulation results show that the proposed method outperforms traditional way of feature extraction methods prevailing for pattern recognition in terms of recognition accuracy for face images with pose and illumination variations.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Image%20analysis" title=" Image analysis"> Image analysis</a>, <a href="https://publications.waset.org/search?q=Wavelet%20feature%20extraction" title=" Wavelet feature extraction"> Wavelet feature extraction</a>, <a href="https://publications.waset.org/search?q=Pattern%20recognition" title=" Pattern recognition"> Pattern recognition</a>, <a href="https://publications.waset.org/search?q=Classifier%20algorithms" title=" Classifier algorithms"> Classifier algorithms</a> </p> <a href="https://publications.waset.org/13389/a-new-biologically-inspired-pattern-recognition-spproach-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13389/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13389/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13389/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13389/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13389/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13389/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13389/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13389/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13389/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13389/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13389.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1677</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1294</span> Facial Recognition on the Basis of Facial Fragments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tetyana%20Baydyk">Tetyana Baydyk</a>, <a href="https://publications.waset.org/search?q=Ernst%20Kussul"> Ernst Kussul</a>, <a href="https://publications.waset.org/search?q=Sandra%20Bonilla%20Meza"> Sandra Bonilla Meza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild<em>) </em>face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Labeled%20Faces%20in%20the%20Wild%20%28LFW%29%20database" title=" Labeled Faces in the Wild (LFW) database"> Labeled Faces in the Wild (LFW) database</a>, <a href="https://publications.waset.org/search?q=Random%20Local%20Descriptor%20%28RLD%29" title=" Random Local Descriptor (RLD)"> Random Local Descriptor (RLD)</a>, <a href="https://publications.waset.org/search?q=random%20features." title=" random features."> random features.</a> </p> <a href="https://publications.waset.org/10007234/facial-recognition-on-the-basis-of-facial-fragments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10007234/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10007234/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10007234/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10007234/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10007234/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10007234/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10007234/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10007234/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10007234/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10007234/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10007234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1013</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1293</span> Face Recognition Using Eigen face Coefficients and Principal Component Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Parvinder%20S.%20Sandhu">Parvinder S. Sandhu</a>, <a href="https://publications.waset.org/search?q=Iqbaldeep%20Kaur"> Iqbaldeep Kaur</a>, <a href="https://publications.waset.org/search?q=Amit%20Verma"> Amit Verma</a>, <a href="https://publications.waset.org/search?q=Samriti%20Jindal"> Samriti Jindal</a>, <a href="https://publications.waset.org/search?q=Inderpreet%20Kaur"> Inderpreet Kaur</a>, <a href="https://publications.waset.org/search?q=Shilpi%20Kumari"> Shilpi Kumari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face Recognition is a field of multidimensional applications. A lot of work has been done, extensively on the most of details related to face recognition. This idea of face recognition using PCA is one of them. In this paper the PCA features for Feature extraction are used and matching is done for the face under consideration with the test image using Eigen face coefficients. The crux of the work lies in optimizing Euclidean distance and paving the way to test the same algorithm using Matlab which is an efficient tool having powerful user interface along with simplicity in representing complex images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Eigen%20Face" title="Eigen Face">Eigen Face</a>, <a href="https://publications.waset.org/search?q=Multidimensional" title=" Multidimensional"> Multidimensional</a>, <a href="https://publications.waset.org/search?q=Matching" title=" Matching"> Matching</a>, <a href="https://publications.waset.org/search?q=PCA." title=" PCA."> PCA.</a> </p> <a href="https://publications.waset.org/3288/face-recognition-using-eigen-face-coefficients-and-principal-component-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3288/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3288/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3288/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3288/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3288/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3288/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3288/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3288/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3288/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3288/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2870</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1292</span> Infrared Face Recognition Using Distance Transforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Moulay%20A.%20Akhloufi">Moulay A. Akhloufi</a>, <a href="https://publications.waset.org/search?q=Abdelhakim%20Bendada"> Abdelhakim Bendada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work we present an efficient approach for face recognition in the infrared spectrum. In the proposed approach physiological features are extracted from thermal images in order to build a unique thermal faceprint. Then, a distance transform is used to get an invariant representation for face recognition. The obtained physiological features are related to the distribution of blood vessels under the face skin. This blood network is unique to each individual and can be used in infrared face recognition. The obtained results are promising and show the effectiveness of the proposed scheme. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=biometrics" title=" biometrics"> biometrics</a>, <a href="https://publications.waset.org/search?q=infrared%20imaging." title=" infrared imaging."> infrared imaging.</a> </p> <a href="https://publications.waset.org/14983/infrared-face-recognition-using-distance-transforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14983/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14983/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14983/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14983/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14983/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14983/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14983/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14983/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14983/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14983/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14983.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1423</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1291</span> New Adaptive Linear Discriminante Analysis for Face Recognition with SVM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mehdi%20Ghayoumi">Mehdi Ghayoumi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We have applied new accelerated algorithm for linear discriminate analysis (LDA) in face recognition with support vector machine. The new algorithm has the advantage of optimal selection of the step size. The gradient descent method and new algorithm has been implemented in software and evaluated on the Yale face database B. The eigenfaces of these approaches have been used to training a KNN. Recognition rate with new algorithm is compared with gradient. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=lda" title="lda">lda</a>, <a href="https://publications.waset.org/search?q=adaptive" title=" adaptive"> adaptive</a>, <a href="https://publications.waset.org/search?q=svm" title=" svm"> svm</a>, <a href="https://publications.waset.org/search?q=face%20recognition." title=" face recognition."> face recognition.</a> </p> <a href="https://publications.waset.org/12509/new-adaptive-linear-discriminante-analysis-for-face-recognition-with-svm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12509/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12509/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12509/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12509/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12509/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12509/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12509/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12509/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12509/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12509/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12509.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1422</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1290</span> Probabilistic Bayesian Framework for Infrared Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Moulay%20A.%20Akhloufi">Moulay A. Akhloufi</a>, <a href="https://publications.waset.org/search?q=Abdelhakim%20Bendada"> Abdelhakim Bendada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Face recognition in the infrared spectrum has attracted a lot of interest in recent years. Many of the techniques used in infrared are based on their visible counterpart, especially linear techniques like PCA and LDA. In this work, we introduce a probabilistic Bayesian framework for face recognition in the infrared spectrum. In the infrared spectrum, variations can occur between face images of the same individual due to pose, metabolic, time changes, etc. Bayesian approaches permit to reduce intrapersonal variation, thus making them very interesting for infrared face recognition. This framework is compared with classical linear techniques. Non linear techniques we developed recently for infrared face recognition are also presented and compared to the Bayesian face recognition framework. A new approach for infrared face extraction based on SVM is introduced. Experimental results show that the Bayesian technique is promising and lead to interesting results in the infrared spectrum when a sufficient number of face images is used in an intrapersonal learning process.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=biometrics" title=" biometrics"> biometrics</a>, <a href="https://publications.waset.org/search?q=probabilistic%20imageprocessing" title=" probabilistic imageprocessing"> probabilistic imageprocessing</a>, <a href="https://publications.waset.org/search?q=infrared%20imaging." title=" infrared imaging."> infrared imaging.</a> </p> <a href="https://publications.waset.org/12837/probabilistic-bayesian-framework-for-infrared-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12837/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12837/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12837/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12837/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12837/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12837/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12837/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12837/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12837/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12837/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12837.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1877</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1289</span> Liveness Detection for Embedded Face Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hyung-Keun%20Jee">Hyung-Keun Jee</a>, <a href="https://publications.waset.org/search?q=Sung-Uk%20Jung"> Sung-Uk Jung</a>, <a href="https://publications.waset.org/search?q=Jang-Hee%20Yoo"> Jang-Hee Yoo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Liveness%20Detection" title="Liveness Detection">Liveness Detection</a>, <a href="https://publications.waset.org/search?q=Eye%20detection" title=" Eye detection"> Eye detection</a>, <a href="https://publications.waset.org/search?q=SQI." title=" SQI."> SQI.</a> </p> <a href="https://publications.waset.org/5308/liveness-detection-for-embedded-face-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5308/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5308/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5308/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5308/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5308/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5308/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5308/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5308/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5308/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5308/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5308.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3181</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1288</span> Face Recognition Based On Vector Quantization Using Fuzzy Neuro Clustering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Elizabeth%20B.%20Varghese">Elizabeth B. Varghese</a>, <a href="https://publications.waset.org/search?q=M.%20Wilscy"> M. Wilscy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. A lot of algorithms have been proposed for face recognition. Vector Quantization (VQ) based face recognition is a novel approach for face recognition. Here a new codebook generation for VQ based face recognition using Integrated Adaptive Fuzzy Clustering (IAFC) is proposed. IAFC is a fuzzy neural network which incorporates a fuzzy learning rule into a competitive neural network. The performance of proposed algorithm is demonstrated by using publicly available AT&amp;T database, Yale database, Indian Face database and a small face database, DCSKU database created in our lab. In all the databases the proposed approach got a higher recognition rate than most of the existing methods. In terms of Equal Error Rate (ERR) also the proposed codebook is better than the existing methods.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Vector%20Quantization" title=" Vector Quantization"> Vector Quantization</a>, <a href="https://publications.waset.org/search?q=Integrated%20Adaptive%20Fuzzy%20Clustering" title=" Integrated Adaptive Fuzzy Clustering"> Integrated Adaptive Fuzzy Clustering</a>, <a href="https://publications.waset.org/search?q=Self%20Organization%20Map." title=" Self Organization Map."> Self Organization Map.</a> </p> <a href="https://publications.waset.org/9997450/face-recognition-based-on-vector-quantization-using-fuzzy-neuro-clustering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9997450/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9997450/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9997450/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9997450/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9997450/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9997450/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9997450/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9997450/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9997450/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9997450/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9997450.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2241</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1287</span> A New Approach to Face Recognition Using Dual Dimension Reduction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20Almas%20Anjum">M. Almas Anjum</a>, <a href="https://publications.waset.org/search?q=M.%20Younus%20Javed"> M. Younus Javed</a>, <a href="https://publications.waset.org/search?q=A.%20Basit"> A. Basit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results and out perform common DCT technique of face recognition. In pattern recognition techniques, discriminative information of image increases with increase in resolution to a certain extent, consequently face recognition results change with change in face image resolution and provide optimal results when arriving at a certain resolution level. In the proposed model of face recognition, initially image decimation algorithm is applied on face image for dimension reduction to a certain resolution level which provides best recognition results. Due to increased computational speed and feature extraction potential of Discrete Cosine Transform (DCT), it is applied on face image. A subset of coefficients of DCT from low to mid frequencies that represent the face adequately and provides best recognition results is retained. A tradeoff between decimation factor, number of DCT coefficients retained and recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in poses and illumination level. This new model has been tested on different databases which include ORL , Yale and EME color database. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometrics" title="Biometrics">Biometrics</a>, <a href="https://publications.waset.org/search?q=DCT" title=" DCT"> DCT</a>, <a href="https://publications.waset.org/search?q=Face%20Recognition" title=" Face Recognition"> Face Recognition</a>, <a href="https://publications.waset.org/search?q=Illumination" title=" Illumination"> Illumination</a>, <a href="https://publications.waset.org/search?q=Computation" title=" Computation"> Computation</a>, <a href="https://publications.waset.org/search?q=Feature%20extraction." title=" Feature extraction."> Feature extraction.</a> </p> <a href="https://publications.waset.org/4885/a-new-approach-to-face-recognition-using-dual-dimension-reduction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4885/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4885/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4885/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4885/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4885/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4885/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4885/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4885/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4885/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4885/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4885.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1686</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1286</span> Enhanced Face Recognition with Daisy Descriptors Using 1BT Based Registration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sevil%20Igit">Sevil Igit</a>, <a href="https://publications.waset.org/search?q=Merve%20Meric"> Merve Meric</a>, <a href="https://publications.waset.org/search?q=Sarp%20Erturk"> Sarp Erturk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, it is proposed to improve Daisy Descriptor based face recognition using a novel One-Bit Transform (1BT) based pre-registration approach. The 1BT based pre-registration procedure is fast and has low computational complexity. It is shown that the face recognition accuracy is improved with the proposed approach. The proposed approach can facilitate highly accurate face recognition using DAISY descriptor with simple matching and thereby facilitate a low-complexity approach.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Daisy%20Descriptor" title=" Daisy Descriptor"> Daisy Descriptor</a>, <a href="https://publications.waset.org/search?q=One-Bit%20Transform" title=" One-Bit Transform"> One-Bit Transform</a>, <a href="https://publications.waset.org/search?q=Image%20Registration." title=" Image Registration. "> Image Registration. </a> </p> <a href="https://publications.waset.org/9998949/enhanced-face-recognition-with-daisy-descriptors-using-1bt-based-registration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998949/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998949/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998949/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998949/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998949/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998949/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998949/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998949/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998949/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998949/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998949.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1972</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1285</span> On Face Recognition using Gabor Filters </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Al-Amin%20Bhuiyan">Al-Amin Bhuiyan</a>, <a href="https://publications.waset.org/search?q=Chang%20Hong%20Liu"> Chang Hong Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gabor-based face representation has achieved enormous success in face recognition. This paper addresses a novel algorithm for face recognition using neural networks trained by Gabor features. The system is commenced on convolving a face image with a series of Gabor filter coefficients at different scales and orientations. Two novel contributions of this paper are: scaling of rms contrast and introduction of fuzzily skewed filter. The neural network employed for face recognition is based on the multilayer perceptron (MLP) architecture with backpropagation algorithm and incorporates the convolution filter response of Gabor jet. The effectiveness of the algorithm has been justified over a face database with images captured at different illumination conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Fuzzily%20skewed%20filter" title="Fuzzily skewed filter">Fuzzily skewed filter</a>, <a href="https://publications.waset.org/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/search?q=rms%20contrast" title=" rms contrast"> rms contrast</a>, <a href="https://publications.waset.org/search?q=neural%20network." title="neural network.">neural network.</a> </p> <a href="https://publications.waset.org/10902/on-face-recognition-using-gabor-filters" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10902/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10902/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10902/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10902/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10902/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10902/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10902/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10902/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10902/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10902/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10902.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3101</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1284</span> Face Recognition Using Double Dimension Reduction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20A%20Anjum">M. A Anjum</a>, <a href="https://publications.waset.org/search?q=M.%20Y.%20Javed"> M. Y. Javed</a>, <a href="https://publications.waset.org/search?q=A.%20Basit"> A. Basit </a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper a new approach to face recognition is presented that achieves double dimension reduction making the system computationally efficient with better recognition results. In pattern recognition techniques, discriminative information of image increases with increase in resolution to a certain extent, consequently face recognition results improve with increase in face image resolution and levels off when arriving at a certain resolution level. In the proposed model of face recognition, first image decimation algorithm is applied on face image for dimension reduction to a certain resolution level which provides best recognition results. Due to better computational speed and feature extraction potential of Discrete Cosine Transform (DCT) it is applied on face image. A subset of coefficients of DCT from low to mid frequencies that represent the face adequately and provides best recognition results is retained. A trade of between decimation factor, number of DCT coefficients retained and recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in poses and illumination level. This new model has been tested on different databases which include ORL database, Yale database and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is two fold: (1) dimension reduction up to an effective and suitable face image resolution (2) appropriate DCT coefficients are retained to achieve best recognition results with varying image poses, intensity and illumination level.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometrics" title="Biometrics">Biometrics</a>, <a href="https://publications.waset.org/search?q=DCT" title=" DCT"> DCT</a>, <a href="https://publications.waset.org/search?q=Face%20Recognition" title=" Face Recognition"> Face Recognition</a>, <a href="https://publications.waset.org/search?q=Feature%0D%0Aextraction." title=" Feature extraction."> Feature extraction.</a> </p> <a href="https://publications.waset.org/11707/face-recognition-using-double-dimension-reduction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11707/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11707/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11707/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11707/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11707/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11707/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11707/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11707/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11707/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11707/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11707.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1492</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1283</span> Face Recognition using Radial Basis Function Network based on LDA</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Byung-Joo%20Oh">Byung-Joo Oh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by the appearance-based statistical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). LDA features of the face image are taken as the input of the Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm has achieved a recognition rate of 93.5%</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=linear%20discriminant%20analysis" title=" linear discriminant analysis"> linear discriminant analysis</a>, <a href="https://publications.waset.org/search?q=radial%20basis%20function%20network." title=" radial basis function network."> radial basis function network.</a> </p> <a href="https://publications.waset.org/2876/face-recognition-using-radial-basis-function-network-based-on-lda" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/2876/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/2876/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/2876/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/2876/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/2876/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/2876/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/2876/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/2876/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/2876/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/2876/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/2876.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2122</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1282</span> 3D Face Recognition Using Modified PCA Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Omid%20Gervei">Omid Gervei</a>, <a href="https://publications.waset.org/search?q=Ahmad%20Ayatollahi"> Ahmad Ayatollahi</a>, <a href="https://publications.waset.org/search?q=Navid%20Gervei"> Navid Gervei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present an approach for 3D face recognition based on extracting principal components of range images by utilizing modified PCA methods namely 2DPCA and bidirectional 2DPCA also known as (2D) 2 PCA.A preprocessing stage was implemented on the images to smooth them using median and Gaussian filtering. In the normalization stage we locate the nose tip to lay it at the center of images then crop each image to a standard size of 100*100. In the face recognition stage we extract the principal component of each image using both 2DPCA and (2D) 2 PCA. Finally, we use Euclidean distance to measure the minimum distance between a given test image to the training images in the database. We also compare the result of using both methods. The best result achieved by experiments on a public face database shows that 83.3 percent is the rate of face recognition for a random facial expression. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=3D%20face%20recognition" title="3D face recognition">3D face recognition</a>, <a href="https://publications.waset.org/search?q=2DPCA" title=" 2DPCA"> 2DPCA</a>, <a href="https://publications.waset.org/search?q=%282D%29%202%20PCA" title=" (2D) 2 PCA"> (2D) 2 PCA</a>, <a href="https://publications.waset.org/search?q=Rangeimage" title=" Rangeimage"> Rangeimage</a> </p> <a href="https://publications.waset.org/5789/3d-face-recognition-using-modified-pca-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5789/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5789/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5789/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5789/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5789/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5789/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5789/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5789/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5789/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5789/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3066</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1281</span> A Structural Support Vector Machine Approach for Biometric Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vishal%20Awasthi">Vishal Awasthi</a>, <a href="https://publications.waset.org/search?q=Atul%20Kumar%20Agnihotri"> Atul Kumar Agnihotri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face is a non-intrusive strong biometrics for identification of original and dummy facial by different artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural network, content based video processing. The availability of a widespread face database is crucial to test the performance of these face recognition algorithms. The openly available face databases include face images with a wide range of poses, illumination, gestures and face occlusions but there is no dummy face database accessible in public domain. This paper presents a face detection algorithm based on the image segmentation in terms of distance from a fixed point and template matching methods. This proposed work is having the most appropriate number of nodal points resulting in most appropriate outcomes in terms of face recognition and detection. The time taken to identify and extract distinctive facial features is improved in the range of 90 to 110 sec. with the increment of efficiency by 3%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/search?q=Linear%20Discriminant%20Analysis" title=" Linear Discriminant Analysis"> Linear Discriminant Analysis</a>, <a href="https://publications.waset.org/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/search?q=Improved%20Support%0D%0AVector%20Machine" title=" Improved Support Vector Machine"> Improved Support Vector Machine</a>, <a href="https://publications.waset.org/search?q=iSVM" title=" iSVM"> iSVM</a>, <a href="https://publications.waset.org/search?q=elastic%20bunch%20mapping%20technique." title=" elastic bunch mapping technique."> elastic bunch mapping technique.</a> </p> <a href="https://publications.waset.org/10011989/a-structural-support-vector-machine-approach-for-biometric-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011989/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011989/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011989/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011989/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011989/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011989/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011989/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011989/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011989/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011989/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1280</span> Quantitative Analysis of PCA, ICA, LDA and SVM in Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Liton%20Jude%20Rozario">Liton Jude Rozario</a>, <a href="https://publications.waset.org/search?q=Mohammad%20Reduanul%20Haque"> Mohammad Reduanul Haque</a>, <a href="https://publications.waset.org/search?q=Md.%20Ziarul%20Islam"> Md. Ziarul Islam</a>, <a href="https://publications.waset.org/search?q=Mohammad%20Shorif%20Uddin"> Mohammad Shorif Uddin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Face recognition is a technique to automatically identify or verify individuals. It receives great attention in identification, authentication, security and many more applications. Diverse methods had been proposed for this purpose and also a lot of comparative studies were performed. However, researchers could not reach unified conclusion. In this paper, we are reporting an extensive quantitative accuracy analysis of four most widely used face recognition algorithms: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) using AT&amp;T, Sheffield and Bangladeshi people face databases under diverse situations such as illumination, alignment and pose variations.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=PCA" title="PCA">PCA</a>, <a href="https://publications.waset.org/search?q=ICA" title=" ICA"> ICA</a>, <a href="https://publications.waset.org/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=noise." title=" noise."> noise.</a> </p> <a href="https://publications.waset.org/9999412/quantitative-analysis-of-pca-ica-lda-and-svm-in-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9999412/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9999412/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9999412/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9999412/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9999412/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9999412/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9999412/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9999412/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9999412/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9999412/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9999412.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2431</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1279</span> Mobile to Server Face Recognition: A System Overview</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nurulhuda%20Ismail">Nurulhuda Ismail</a>, <a href="https://publications.waset.org/search?q=Mas%20Idayu%20Md.%20Sabri"> Mas Idayu Md. Sabri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents a system overview of Mobile to Server Face Recognition, which is a face recognition application developed specifically for mobile phones. Images taken from mobile phone cameras lack of quality due to the low resolution of the cameras. Thus, a prototype is developed to experiment the chosen method. However, this paper shows a result of system backbone without the face recognition functionality. The result demonstrated in this paper indicates that the interaction between mobile phones and server is successfully working. The result shown before the database is completely ready. The system testing is currently going on using real images and a mock-up database to test the functionality of the face recognition algorithm used in this system. An overview of the whole system including screenshots and system flow-chart are presented in this paper. This paper also presents the inspiration or motivation and the justification in developing this system.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Mobile%20to%20server" title="Mobile to server">Mobile to server</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=system%20overview." title=" system overview."> system overview.</a> </p> <a href="https://publications.waset.org/710/mobile-to-server-face-recognition-a-system-overview" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/710/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/710/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/710/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/710/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/710/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/710/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/710/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/710/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/710/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/710/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/710.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2426</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1278</span> Constructing of Classifier for Face Recognition on the Basis of the Conjugation Indexes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vladimir%20A.%20Fursov">Vladimir A. Fursov</a>, <a href="https://publications.waset.org/search?q=Nikita%20E.%20Kozin"> Nikita E. Kozin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work the opportunity of construction of the qualifiers for face-recognition systems based on conjugation criteria is investigated. The linkage between the bipartite conjugation, the conjugation with a subspace and the conjugation with the null-space is shown. The unified solving rule is investigated. It makes the decision on the rating of face to a class considering the linkage between conjugation values. The described recognition method can be successfully applied to the distributed systems of video control and video observation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Conjugation" title="Conjugation">Conjugation</a>, <a href="https://publications.waset.org/search?q=Eigenfaces" title=" Eigenfaces"> Eigenfaces</a>, <a href="https://publications.waset.org/search?q=Recognition." title=" Recognition."> Recognition.</a> </p> <a href="https://publications.waset.org/4368/constructing-of-classifier-for-face-recognition-on-the-basis-of-the-conjugation-indexes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/4368/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/4368/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/4368/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/4368/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/4368/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/4368/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/4368/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/4368/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/4368/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/4368/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/4368.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1467</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1277</span> Face Recognition Using Discrete Orthogonal Hahn Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Fatima%20Akhmedova">Fatima Akhmedova</a>, <a href="https://publications.waset.org/search?q=Simon%20Liao"> Simon Liao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant and non-redundant facial information. In this work we propose a set of Hahn moments as a new approach for feature description. Hahn moments have been widely used in image analysis due to their invariance, nonredundancy and the ability to extract features either globally and locally. To assess the applicability of Hahn moments to Face Recognition we conduct two experiments on the Olivetti Research Laboratory (ORL) database and University of Notre-Dame (UND) X1 biometric collection. Fusion of the global features along with the features from local facial regions are used as an input for the conventional k-NN classifier. The method reaches an accuracy of 93% of correctly recognized subjects for the ORL database and 94% for the UND database. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Hahn%20moments" title=" Hahn moments"> Hahn moments</a>, <a href="https://publications.waset.org/search?q=Recognition-by-parts" title=" Recognition-by-parts"> Recognition-by-parts</a>, <a href="https://publications.waset.org/search?q=Time-lapse." title=" Time-lapse."> Time-lapse.</a> </p> <a href="https://publications.waset.org/10002256/face-recognition-using-discrete-orthogonal-hahn-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002256/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002256/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002256/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002256/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002256/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002256/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002256/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002256/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002256/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002256/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002256.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1777</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1276</span> Face Recognition using a Kernelization of Graph Embedding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Pang%20Ying%20Han">Pang Ying Han</a>, <a href="https://publications.waset.org/search?q=Hiew%20Fu%20San"> Hiew Fu San</a>, <a href="https://publications.waset.org/search?q=Ooi%20Shih%20Yin"> Ooi Shih Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Linearization of graph embedding has been emerged as an effective dimensionality reduction technique in pattern recognition. However, it may not be optimal for nonlinearly distributed real world data, such as face, due to its linear nature. So, a kernelization of graph embedding is proposed as a dimensionality reduction technique in face recognition. In order to further boost the recognition capability of the proposed technique, the Fisher-s criterion is opted in the objective function for better data discrimination. The proposed technique is able to characterize the underlying intra-class structure as well as the inter-class separability. Experimental results on FRGC database validate the effectiveness of the proposed technique as a feature descriptor. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Fisher%20discriminant" title=" Fisher discriminant"> Fisher discriminant</a>, <a href="https://publications.waset.org/search?q=graph%0Aembedding" title=" graph embedding"> graph embedding</a>, <a href="https://publications.waset.org/search?q=kernelization." title=" kernelization."> kernelization.</a> </p> <a href="https://publications.waset.org/13302/face-recognition-using-a-kernelization-of-graph-embedding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13302/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13302/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13302/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13302/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13302/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13302/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13302/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13302/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13302/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13302/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13302.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1701</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1275</span> Low Resolution Face Recognition Using Mixture of Experts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Fatemeh%20Behjati%20Ardakani">Fatemeh Behjati Ardakani</a>, <a href="https://publications.waset.org/search?q=Fatemeh%20Khademian"> Fatemeh Khademian</a>, <a href="https://publications.waset.org/search?q=Abbas%20Nowzari%20Dalini"> Abbas Nowzari Dalini</a>, <a href="https://publications.waset.org/search?q=Reza%20Ebrahimpour"> Reza Ebrahimpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human activity is a major concern in a wide variety of applications, such as video surveillance, human computer interface and face image database management. Detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in the past years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where often, due to many reasons, only low-resolution video of faces is available. If these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low resolution face recognition system based on mixture of expert neural networks. In order to produce the low resolution input images we down-sampled the 48 脳 48 ORL images to 12 脳 12 ones using the nearest neighbor interpolation method and after that applying the bicubic interpolation method yields enhanced images which is given to the Principal Component Analysis feature extractor system. Comparison with some of the most related methods indicates that the proposed novel model yields excellent recognition rate in low resolution face recognition that is the recognition rate of 100% for the training set and 96.5% for the test set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Low%20resolution%20face%20recognition" title="Low resolution face recognition">Low resolution face recognition</a>, <a href="https://publications.waset.org/search?q=Multilayered%20neuralnetwork" title=" Multilayered neuralnetwork"> Multilayered neuralnetwork</a>, <a href="https://publications.waset.org/search?q=Mixture%20of%20experts%20neural%20network" title=" Mixture of experts neural network"> Mixture of experts neural network</a>, <a href="https://publications.waset.org/search?q=Principal%20componentanalysis" title=" Principal componentanalysis"> Principal componentanalysis</a>, <a href="https://publications.waset.org/search?q=Bicubic%20interpolation" title=" Bicubic interpolation"> Bicubic interpolation</a>, <a href="https://publications.waset.org/search?q=Nearest%20neighbor%20interpolation." title=" Nearest neighbor interpolation."> Nearest neighbor interpolation.</a> </p> <a href="https://publications.waset.org/7504/low-resolution-face-recognition-using-mixture-of-experts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7504/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7504/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7504/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7504/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7504/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7504/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7504/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7504/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7504/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7504/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7504.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1724</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1274</span> An Improved Illumination Normalization based on Anisotropic Smoothing for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sanghoon%20Kim">Sanghoon Kim</a>, <a href="https://publications.waset.org/search?q=Sun-Tae%20Chung"> Sun-Tae Chung</a>, <a href="https://publications.waset.org/search?q=Souhwan%20Jung"> Souhwan Jung</a>, <a href="https://publications.waset.org/search?q=Seongwon%20Cho"> Seongwon Cho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Robust face recognition under various illumination environments is very difficult and needs to be accomplished for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. Illumination normalization algorithm based on anisotropic smoothing is well known to be effective among illumination normalization methods but deteriorates the intensity contrast of the original image, and incurs less sharp edges. The proposed method in this paper improves the previous anisotropic smoothing-based illumination normalization method so that it increases the intensity contrast and enhances the edges while diminishing the effect of illumination variations. Due to the result of these improvements, face images preprocessed by the proposed illumination normalization method becomes to have more distinctive feature vectors (Gabor feature vectors) for face recognition. Through experiments of face recognition based on Gabor feature vector similarity, the effectiveness of the proposed illumination normalization method is verified. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Illumination%20Normalization" title="Illumination Normalization">Illumination Normalization</a>, <a href="https://publications.waset.org/search?q=Face%20Recognition" title=" Face Recognition"> Face Recognition</a>, <a href="https://publications.waset.org/search?q=Anisotropic%20smoothing" title="Anisotropic smoothing">Anisotropic smoothing</a>, <a href="https://publications.waset.org/search?q=Gabor%20feature%20vector." title=" Gabor feature vector."> Gabor feature vector.</a> </p> <a href="https://publications.waset.org/3973/an-improved-illumination-normalization-based-on-anisotropic-smoothing-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3973/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3973/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3973/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3973/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3973/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3973/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3973/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3973/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3973/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3973/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3973.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1549</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1273</span> An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Qiu%20Chen">Qiu Chen</a>, <a href="https://publications.waset.org/search?q=Koji%20Kotani"> Koji Kotani</a>, <a href="https://publications.waset.org/search?q=Feifei%20Lee"> Feifei Lee</a>, <a href="https://publications.waset.org/search?q=Tadahiro%20Ohmi"> Tadahiro Ohmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose an improved face recognition algorithm using histogram-based features in spatial and frequency domains. For adding spatial information of the face to improve recognition performance, a region-division (RD) method is utilized. The facial area is firstly divided into several regions, then feature vectors of each facial part are generated by Binary Vector Quantization (BVQ) histogram using DCT coefficients in low frequency domains, as well as Local Binary Pattern (LBP) histogram in spatial domain. Recognition results with different regions are first obtained separately and then fused by weighted averaging. Publicly available ORL database is used for the evaluation of our proposed algorithm, which is consisted of 40 subjects with 10 images per subject containing variations in lighting, posing, and expressions. It is demonstrated that face recognition using RD method can achieve much higher recognition rate.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Binary%20vector%20quantization%20%28BVQ%29" title=" Binary vector quantization (BVQ)"> Binary vector quantization (BVQ)</a>, <a href="https://publications.waset.org/search?q=Local%20Binary%20Patterns%20%28LBP%29" title=" Local Binary Patterns (LBP)"> Local Binary Patterns (LBP)</a>, <a href="https://publications.waset.org/search?q=DCT%20coefficients." title=" DCT coefficients."> DCT coefficients.</a> </p> <a href="https://publications.waset.org/10003903/an-improved-face-recognition-algorithm-using-histogram-based-features-in-spatial-and-frequency-domains" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10003903/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10003903/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10003903/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10003903/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10003903/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10003903/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10003903/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10003903/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10003903/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10003903/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10003903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1619</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1272</span> Face Localization and Recognition in Varied Expressions and Illumination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hui-Yu%20Huang">Hui-Yu Huang</a>, <a href="https://publications.waset.org/search?q=Shih-Hang%20Hsu"> Shih-Hang Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose a robust scheme to work face alignment and recognition under various influences. For face representation, illumination influence and variable expressions are the important factors, especially the accuracy of facial localization and face recognition. In order to solve those of factors, we propose a robust approach to overcome these problems. This approach consists of two phases. One phase is preprocessed for face images by means of the proposed illumination normalization method. The location of facial features can fit more efficient and fast based on the proposed image blending. On the other hand, based on template matching, we further improve the active shape models (called as IASM) to locate the face shape more precise which can gain the recognized rate in the next phase. The other phase is to process feature extraction by using principal component analysis and face recognition by using support vector machine classifiers. The results show that this proposed method can obtain good facial localization and face recognition with varied illumination and local distortion.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Gabor%20filter" title="Gabor filter">Gabor filter</a>, <a href="https://publications.waset.org/search?q=improved%20active%20shape%20model%20%28IASM%29" title=" improved active shape model (IASM)"> improved active shape model (IASM)</a>, <a href="https://publications.waset.org/search?q=principal%20component%20analysis%20%28PCA%29" title=" principal component analysis (PCA)"> principal component analysis (PCA)</a>, <a href="https://publications.waset.org/search?q=face%20alignment" title=" face alignment"> face alignment</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machine%20%28SVM%29" title=" support vector machine (SVM)"> support vector machine (SVM)</a> </p> <a href="https://publications.waset.org/5662/face-localization-and-recognition-in-varied-expressions-and-illumination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5662/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5662/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5662/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5662/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5662/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5662/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5662/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5662/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5662/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5662/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5662.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1491</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1271</span> Neural Network Based Approach for Face Detection cum Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kesari%20Verma">Kesari Verma</a>, <a href="https://publications.waset.org/search?q=Aniruddha%20S.%20Thoke"> Aniruddha S. Thoke</a>, <a href="https://publications.waset.org/search?q=Pritam%20Singh"> Pritam Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic face detection is a complex problem in image processing. Many methods exist to solve this problem such as template matching, Fisher Linear Discriminate, Neural Networks, SVM, and MRC. Success has been achieved with each method to varying degrees and complexities. In proposed algorithm we used upright, frontal faces for single gray scale images with decent resolution and under good lighting condition. In the field of face recognition technique the single face is matched with single face from the training dataset. The author proposed a neural network based face detection algorithm from the photographs as well as if any test data appears it check from the online scanned training dataset. Experimental result shows that the algorithm detected up to 95% accuracy for any image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Detection" title="Face Detection">Face Detection</a>, <a href="https://publications.waset.org/search?q=Face%20Recognition" title=" Face Recognition"> Face Recognition</a>, <a href="https://publications.waset.org/search?q=NN%20Approach" title=" NN Approach"> NN Approach</a>, <a href="https://publications.waset.org/search?q=PCA%20Algorithm." title=" PCA Algorithm."> PCA Algorithm.</a> </p> <a href="https://publications.waset.org/7329/neural-network-based-approach-for-face-detection-cum-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7329/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7329/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7329/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7329/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7329/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7329/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7329/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7329/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7329/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7329/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7329.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2301</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1270</span> Video-based Face Recognition: A Survey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Huafeng%20Wang">Huafeng Wang</a>, <a href="https://publications.waset.org/search?q=Yunhong%20Wang"> Yunhong Wang</a>, <a href="https://publications.waset.org/search?q=Yuan%20Cao"> Yuan Cao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the past several years, face recognition in video has received significant attention. Not only the wide range of commercial and law enforcement applications, but also the availability of feasible technologies after several decades of research contributes to the trend. Although current face recognition systems have reached a certain level of maturity, their development is still limited by the conditions brought about by many real applications. For example, recognition images of video sequence acquired in an open environment with changes in illumination and/or pose and/or facial occlusion and/or low resolution of acquired image remains a largely unsolved problem. In other words, current algorithms are yet to be developed. This paper provides an up-to-date survey of video-based face recognition research. To present a comprehensive survey, we categorize existing video based recognition approaches and present detailed descriptions of representative methods within each category. In addition, relevant topics such as real time detection, real time tracking for video, issues such as illumination, pose, 3D and low resolution are covered. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=video-based" title=" video-based"> video-based</a>, <a href="https://publications.waset.org/search?q=survey" title=" survey"> survey</a> </p> <a href="https://publications.waset.org/15131/video-based-face-recognition-a-survey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/15131/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/15131/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/15131/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/15131/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/15131/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/15131/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/15131/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/15131/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/15131/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/15131/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/15131.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">4121</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1269</span> An Experimental Comparison of Unsupervised Learning Techniques for Face Recognition </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Dinesh%20Kumar">Dinesh Kumar</a>, <a href="https://publications.waset.org/search?q=C.S.%20Rai"> C.S. Rai</a>, <a href="https://publications.waset.org/search?q=Shakti%20Kumar"> Shakti Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Face Recognition has always been a fascinating research area. It has drawn the attention of many researchers because of its various potential applications such as security systems, entertainment, criminal identification etc. Many supervised and unsupervised learning techniques have been reported so far. Principal Component Analysis (PCA), Self Organizing Maps (SOM) and Independent Component Analysis (ICA) are the three techniques among many others as proposed by different researchers for Face Recognition, known as the unsupervised techniques. This paper proposes integration of the two techniques, SOM and PCA, for dimensionality reduction and feature selection. Simulation results show that, though, the individual techniques SOM and PCA itself give excellent performance but the combination of these two can also be utilized for face recognition. Experimental results also indicate that for the given face database and the classifier used, SOM performs better as compared to other unsupervised learning techniques. A comparison of two proposed methodologies of SOM, Local and Global processing, shows the superiority of the later but at the cost of more computational time.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=Self%20Organizing%20Maps" title=" Self Organizing Maps"> Self Organizing Maps</a>, <a href="https://publications.waset.org/search?q=Independent%20Component%20Analysis" title=" Independent Component Analysis"> Independent Component Analysis</a> </p> <a href="https://publications.waset.org/294/an-experimental-comparison-of-unsupervised-learning-techniques-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/294/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/294/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/294/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/294/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/294/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/294/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/294/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/294/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/294/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/294/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1880</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1268</span> A New Face Recognition Method using PCA, LDA and Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20Hossein%20Sahoolizadeh">A. Hossein Sahoolizadeh</a>, <a href="https://publications.waset.org/search?q=B.%20Zargham%20Heidari"> B. Zargham Heidari</a>, <a href="https://publications.waset.org/search?q=C.%20Hamid%20Dehghani"> C. Hamid Dehghani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new face recognition method based on PCA (principal Component Analysis), LDA (Linear Discriminant Analysis) and neural networks is proposed. This method consists of four steps: i) Preprocessing, ii) Dimension reduction using PCA, iii) feature extraction using LDA and iv) classification using neural network. Combination of PCA and LDA is used for improving the capability of LDA when a few samples of images are available and neural classifier is used to reduce number misclassification caused by not-linearly separable classes. The proposed method was tested on Yale face database. Experimental results on this database demonstrated the effectiveness of the proposed method for face recognition with less misclassification in comparison with previous methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis" title="Face recognition Principal component analysis">Face recognition Principal component analysis</a>, <a href="https://publications.waset.org/search?q=Linear%20discriminant%20analysis" title=" Linear discriminant analysis"> Linear discriminant analysis</a>, <a href="https://publications.waset.org/search?q=Neural%20networks." title=" Neural networks."> Neural networks.</a> </p> <a href="https://publications.waset.org/13908/a-new-face-recognition-method-using-pca-lda-and-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13908/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13908/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13908/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13908/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13908/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13908/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13908/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13908/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13908/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13908/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3213</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=43">43</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=44">44</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=face%20recognition.&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10