CINXE.COM
Search results for: Face recognition Principal component analysis
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Face recognition Principal component analysis</title> <meta name="description" content="Search results for: Face recognition Principal component analysis"> <meta name="keywords" content="Face recognition Principal component analysis"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Face recognition Principal component analysis" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Face recognition Principal component analysis"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 10226</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Face recognition Principal component analysis</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10226</span> Non-negative Principal Component Analysis for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Zhang%20Yan">Zhang Yan</a>, <a href="https://publications.waset.org/search?q=Yu%20Bin"> Yu Bin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Principle component analysis is often combined with the state-of-art classification algorithms to recognize human faces. However, principle component analysis can only capture these features contributing to the global characteristics of data because it is a global feature selection algorithm. It misses those features contributing to the local characteristics of data because each principal component only contains some levels of global characteristics of data. In this study, we present a novel face recognition approach using non-negative principal component analysis which is added with the constraint of non-negative to improve data locality and contribute to elucidating latent data structures. Experiments are performed on the Cambridge ORL face database. We demonstrate the strong performances of the algorithm in recognizing human faces in comparison with PCA and NREMF approaches. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=non-negativeprinciple%20component%20analysis%20%28NPCA%29" title=" non-negativeprinciple component analysis (NPCA)"> non-negativeprinciple component analysis (NPCA)</a> </p> <a href="https://publications.waset.org/14158/non-negative-principal-component-analysis-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14158/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14158/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14158/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14158/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14158/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14158/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14158/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14158/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14158/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14158/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14158.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1695</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10225</span> An Experimental Comparison of Unsupervised Learning Techniques for Face Recognition </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Dinesh%20Kumar">Dinesh Kumar</a>, <a href="https://publications.waset.org/search?q=C.S.%20Rai"> C.S. Rai</a>, <a href="https://publications.waset.org/search?q=Shakti%20Kumar"> Shakti Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Face Recognition has always been a fascinating research area. It has drawn the attention of many researchers because of its various potential applications such as security systems, entertainment, criminal identification etc. Many supervised and unsupervised learning techniques have been reported so far. Principal Component Analysis (PCA), Self Organizing Maps (SOM) and Independent Component Analysis (ICA) are the three techniques among many others as proposed by different researchers for Face Recognition, known as the unsupervised techniques. This paper proposes integration of the two techniques, SOM and PCA, for dimensionality reduction and feature selection. Simulation results show that, though, the individual techniques SOM and PCA itself give excellent performance but the combination of these two can also be utilized for face recognition. Experimental results also indicate that for the given face database and the classifier used, SOM performs better as compared to other unsupervised learning techniques. A comparison of two proposed methodologies of SOM, Local and Global processing, shows the superiority of the later but at the cost of more computational time.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=Self%20Organizing%20Maps" title=" Self Organizing Maps"> Self Organizing Maps</a>, <a href="https://publications.waset.org/search?q=Independent%20Component%20Analysis" title=" Independent Component Analysis"> Independent Component Analysis</a> </p> <a href="https://publications.waset.org/294/an-experimental-comparison-of-unsupervised-learning-techniques-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/294/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/294/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/294/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/294/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/294/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/294/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/294/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/294/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/294/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/294/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1880</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10224</span> A New Face Recognition Method using PCA, LDA and Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20Hossein%20Sahoolizadeh">A. Hossein Sahoolizadeh</a>, <a href="https://publications.waset.org/search?q=B.%20Zargham%20Heidari"> B. Zargham Heidari</a>, <a href="https://publications.waset.org/search?q=C.%20Hamid%20Dehghani"> C. Hamid Dehghani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new face recognition method based on PCA (principal Component Analysis), LDA (Linear Discriminant Analysis) and neural networks is proposed. This method consists of four steps: i) Preprocessing, ii) Dimension reduction using PCA, iii) feature extraction using LDA and iv) classification using neural network. Combination of PCA and LDA is used for improving the capability of LDA when a few samples of images are available and neural classifier is used to reduce number misclassification caused by not-linearly separable classes. The proposed method was tested on Yale face database. Experimental results on this database demonstrated the effectiveness of the proposed method for face recognition with less misclassification in comparison with previous methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis" title="Face recognition Principal component analysis">Face recognition Principal component analysis</a>, <a href="https://publications.waset.org/search?q=Linear%20discriminant%20analysis" title=" Linear discriminant analysis"> Linear discriminant analysis</a>, <a href="https://publications.waset.org/search?q=Neural%20networks." title=" Neural networks."> Neural networks.</a> </p> <a href="https://publications.waset.org/13908/a-new-face-recognition-method-using-pca-lda-and-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13908/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13908/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13908/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13908/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13908/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13908/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13908/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13908/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13908/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13908/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3213</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10223</span> Quantitative Analysis of PCA, ICA, LDA and SVM in Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Liton%20Jude%20Rozario">Liton Jude Rozario</a>, <a href="https://publications.waset.org/search?q=Mohammad%20Reduanul%20Haque"> Mohammad Reduanul Haque</a>, <a href="https://publications.waset.org/search?q=Md.%20Ziarul%20Islam"> Md. Ziarul Islam</a>, <a href="https://publications.waset.org/search?q=Mohammad%20Shorif%20Uddin"> Mohammad Shorif Uddin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Face recognition is a technique to automatically identify or verify individuals. It receives great attention in identification, authentication, security and many more applications. Diverse methods had been proposed for this purpose and also a lot of comparative studies were performed. However, researchers could not reach unified conclusion. In this paper, we are reporting an extensive quantitative accuracy analysis of four most widely used face recognition algorithms: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) using AT&T, Sheffield and Bangladeshi people face databases under diverse situations such as illumination, alignment and pose variations.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=PCA" title="PCA">PCA</a>, <a href="https://publications.waset.org/search?q=ICA" title=" ICA"> ICA</a>, <a href="https://publications.waset.org/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=noise." title=" noise."> noise.</a> </p> <a href="https://publications.waset.org/9999412/quantitative-analysis-of-pca-ica-lda-and-svm-in-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9999412/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9999412/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9999412/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9999412/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9999412/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9999412/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9999412/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9999412/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9999412/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9999412/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9999412.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2431</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10222</span> Face Recognition with PCA and KPCA using Elman Neural Network and SVM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hossein%20Esbati">Hossein Esbati</a>, <a href="https://publications.waset.org/search?q=Jalil%20Shirazi"> Jalil Shirazi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, in order to categorize ORL database face pictures, principle Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) methods by using Elman neural network and Support Vector Machine (SVM) categorization methods are used. Elman network as a recurrent neural network is proposed for modeling storage systems and also it is used for reviewing the effect of using PCA numbers on system categorization precision rate and database pictures categorization time. Categorization stages are conducted with various components numbers and the obtained results of both Elman neural network categorization and support vector machine are compared. In optimum manner 97.41% recognition accuracy is obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=Kernel%20Principal%20Component%20Analysis" title=" Kernel Principal Component Analysis"> Kernel Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=Neural%20network" title=" Neural network"> Neural network</a>, <a href="https://publications.waset.org/search?q=Support%0AVector%20Machine." title=" Support Vector Machine."> Support Vector Machine.</a> </p> <a href="https://publications.waset.org/3148/face-recognition-with-pca-and-kpca-using-elman-neural-network-and-svm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3148/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3148/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3148/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3148/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3148/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3148/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3148/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3148/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3148/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3148/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3148.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1930</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10221</span> Finger Vein Recognition using PCA-based Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sepehr%20Damavandinejadmonfared">Sepehr Damavandinejadmonfared</a>, <a href="https://publications.waset.org/search?q=Ali%20Khalili%20Mobarakeh"> Ali Khalili Mobarakeh</a>, <a href="https://publications.waset.org/search?q=Mohsen%20Pashna">Mohsen Pashna</a>, <a href="https://publications.waset.org/search?q="></a>, <a href="https://publications.waset.org/search?q=Jiangping%20Gou%0D%0ASayedmehran%20Mirsafaie%20Rizi"> Jiangping Gou Sayedmehran Mirsafaie Rizi</a>, <a href="https://publications.waset.org/search?q=Saba%20Nazari"> Saba Nazari</a>, <a href="https://publications.waset.org/search?q=Shadi%20Mahmoodi%20Khaniabadi"> Shadi Mahmoodi Khaniabadi</a>, <a href="https://publications.waset.org/search?q=Mohamad%20Ali%20Bagheri"> Mohamad Ali Bagheri </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper a novel algorithm is proposed to merit the accuracy of finger vein recognition. The performances of Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), and Kernel Entropy Component Analysis (KECA) in this algorithm are validated and compared with each other in order to determine which one is the most appropriate one in terms of finger vein recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometrics" title="Biometrics">Biometrics</a>, <a href="https://publications.waset.org/search?q=finger%20vein%20recognition" title=" finger vein recognition"> finger vein recognition</a>, <a href="https://publications.waset.org/search?q=PrincipalComponent%20Analysis%20%28PCA%29" title=" PrincipalComponent Analysis (PCA)"> PrincipalComponent Analysis (PCA)</a>, <a href="https://publications.waset.org/search?q=Kernel%20Principal%20Component%20Analysis%28KPCA%29" title=" Kernel Principal Component Analysis(KPCA)"> Kernel Principal Component Analysis(KPCA)</a>, <a href="https://publications.waset.org/search?q=Kernel%20Entropy%20Component%20Analysis%20%28KPCA%29." title=" Kernel Entropy Component Analysis (KPCA)."> Kernel Entropy Component Analysis (KPCA).</a> </p> <a href="https://publications.waset.org/9030/finger-vein-recognition-using-pca-based-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9030/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9030/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9030/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9030/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9030/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9030/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9030/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9030/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9030/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9030/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9030.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2680</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10220</span> Face Recognition Using Eigen face Coefficients and Principal Component Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Parvinder%20S.%20Sandhu">Parvinder S. Sandhu</a>, <a href="https://publications.waset.org/search?q=Iqbaldeep%20Kaur"> Iqbaldeep Kaur</a>, <a href="https://publications.waset.org/search?q=Amit%20Verma"> Amit Verma</a>, <a href="https://publications.waset.org/search?q=Samriti%20Jindal"> Samriti Jindal</a>, <a href="https://publications.waset.org/search?q=Inderpreet%20Kaur"> Inderpreet Kaur</a>, <a href="https://publications.waset.org/search?q=Shilpi%20Kumari"> Shilpi Kumari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face Recognition is a field of multidimensional applications. A lot of work has been done, extensively on the most of details related to face recognition. This idea of face recognition using PCA is one of them. In this paper the PCA features for Feature extraction are used and matching is done for the face under consideration with the test image using Eigen face coefficients. The crux of the work lies in optimizing Euclidean distance and paving the way to test the same algorithm using Matlab which is an efficient tool having powerful user interface along with simplicity in representing complex images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Eigen%20Face" title="Eigen Face">Eigen Face</a>, <a href="https://publications.waset.org/search?q=Multidimensional" title=" Multidimensional"> Multidimensional</a>, <a href="https://publications.waset.org/search?q=Matching" title=" Matching"> Matching</a>, <a href="https://publications.waset.org/search?q=PCA." title=" PCA."> PCA.</a> </p> <a href="https://publications.waset.org/3288/face-recognition-using-eigen-face-coefficients-and-principal-component-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3288/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3288/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3288/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3288/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3288/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3288/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3288/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3288/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3288/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3288/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2870</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10219</span> Face Localization and Recognition in Varied Expressions and Illumination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hui-Yu%20Huang">Hui-Yu Huang</a>, <a href="https://publications.waset.org/search?q=Shih-Hang%20Hsu"> Shih-Hang Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose a robust scheme to work face alignment and recognition under various influences. For face representation, illumination influence and variable expressions are the important factors, especially the accuracy of facial localization and face recognition. In order to solve those of factors, we propose a robust approach to overcome these problems. This approach consists of two phases. One phase is preprocessed for face images by means of the proposed illumination normalization method. The location of facial features can fit more efficient and fast based on the proposed image blending. On the other hand, based on template matching, we further improve the active shape models (called as IASM) to locate the face shape more precise which can gain the recognized rate in the next phase. The other phase is to process feature extraction by using principal component analysis and face recognition by using support vector machine classifiers. The results show that this proposed method can obtain good facial localization and face recognition with varied illumination and local distortion.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Gabor%20filter" title="Gabor filter">Gabor filter</a>, <a href="https://publications.waset.org/search?q=improved%20active%20shape%20model%20%28IASM%29" title=" improved active shape model (IASM)"> improved active shape model (IASM)</a>, <a href="https://publications.waset.org/search?q=principal%20component%20analysis%20%28PCA%29" title=" principal component analysis (PCA)"> principal component analysis (PCA)</a>, <a href="https://publications.waset.org/search?q=face%20alignment" title=" face alignment"> face alignment</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machine%20%28SVM%29" title=" support vector machine (SVM)"> support vector machine (SVM)</a> </p> <a href="https://publications.waset.org/5662/face-localization-and-recognition-in-varied-expressions-and-illumination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5662/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5662/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5662/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5662/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5662/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5662/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5662/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5662/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5662/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5662/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5662.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1491</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10218</span> Face Recognition using Radial Basis Function Network based on LDA</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Byung-Joo%20Oh">Byung-Joo Oh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by the appearance-based statistical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). LDA features of the face image are taken as the input of the Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm has achieved a recognition rate of 93.5%</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=linear%20discriminant%20analysis" title=" linear discriminant analysis"> linear discriminant analysis</a>, <a href="https://publications.waset.org/search?q=radial%20basis%20function%20network." title=" radial basis function network."> radial basis function network.</a> </p> <a href="https://publications.waset.org/2876/face-recognition-using-radial-basis-function-network-based-on-lda" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/2876/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/2876/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/2876/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/2876/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/2876/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/2876/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/2876/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/2876/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/2876/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/2876/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/2876.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2122</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10217</span> Walsh-Hadamard Transform for Facial Feature Extraction in Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20Hassan">M. Hassan</a>, <a href="https://publications.waset.org/search?q=I.%20Osman"> I. Osman</a>, <a href="https://publications.waset.org/search?q=M.%20Yahia"> M. Yahia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This Paper proposes a new facial feature extraction approach, Wash-Hadamard Transform (WHT). This approach is based on correlation between local pixels of the face image. Its primary advantage is the simplicity of its computation. The paper compares the proposed approach, WHT, which was traditionally used in data compression with two other known approaches: the Principal Component Analysis (PCA) and the Discrete Cosine Transform (DCT) using the face database of Olivetti Research Laboratory (ORL). In spite of its simple computation, the proposed algorithm (WHT) gave very close results to those obtained by the PCA and DCT. This paper initiates the research into WHT and the family of frequency transforms and examines their suitability for feature extraction in face recognition applications.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Facial%20Feature%20Extraction" title=" Facial Feature Extraction"> Facial Feature Extraction</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=and%20Discrete%20Cosine%20Transform" title=" and Discrete Cosine Transform"> and Discrete Cosine Transform</a>, <a href="https://publications.waset.org/search?q=Wash-Hadamard%20Transform." title=" Wash-Hadamard Transform."> Wash-Hadamard Transform.</a> </p> <a href="https://publications.waset.org/2475/walsh-hadamard-transform-for-facial-feature-extraction-in-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/2475/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/2475/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/2475/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/2475/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/2475/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/2475/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/2475/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/2475/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/2475/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/2475/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/2475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2571</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10216</span> Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Zukisa%20Nante">Zukisa Nante</a>, <a href="https://publications.waset.org/search?q=Wang%20Zenghui"> Wang Zenghui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to bring a solution to this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset. This dataset consists of 400 different faces with 40 classes of 10 face images per class. Firstly, PCA enabled the usage of a smaller network. This reduces the training time of the CNN. Thus, we get rid of the redundancy and preserve the variance with a smaller number of coefficients. Secondly, the K-Means clustering model is trained using the compressed PCA obtained data which select the K-Means clustering centers with better characteristics. Lastly, the K-Means characteristics or features are an initial value of the CNN and act as input data. The accuracy and the performance of the proposed method were tested in comparison to other Face Recognition (FR) techniques namely PCA, Support Vector Machine (SVM), as well as K-Nearest Neighbour (kNN). During experimentation, the accuracy and the performance of our suggested method after 90 epochs achieved the highest performance: 99% accuracy F1-Score, 99% precision, and 99% recall in 463.934 seconds. It outperformed the PCA that obtained 97% and KNN with 84% during the conducted experiments. Therefore, this method proved to be efficient in identifying faces in the images.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/search?q=Convolutional%20Neural%20Network" title=" Convolutional Neural Network"> Convolutional Neural Network</a>, <a href="https://publications.waset.org/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/search?q=Rectified%20Linear%20Unit" title=" Rectified Linear Unit"> Rectified Linear Unit</a>, <a href="https://publications.waset.org/search?q=ReLU" title=" ReLU"> ReLU</a>, <a href="https://publications.waset.org/search?q=feature%20extraction." title=" feature extraction."> feature extraction.</a> </p> <a href="https://publications.waset.org/10012606/face-recognition-using-principal-component-analysis-k-means-clustering-and-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012606/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012606/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012606/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012606/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012606/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012606/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012606/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012606/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012606/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012606/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">505</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10215</span> 3D Face Recognition Using Modified PCA Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Omid%20Gervei">Omid Gervei</a>, <a href="https://publications.waset.org/search?q=Ahmad%20Ayatollahi"> Ahmad Ayatollahi</a>, <a href="https://publications.waset.org/search?q=Navid%20Gervei"> Navid Gervei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present an approach for 3D face recognition based on extracting principal components of range images by utilizing modified PCA methods namely 2DPCA and bidirectional 2DPCA also known as (2D) 2 PCA.A preprocessing stage was implemented on the images to smooth them using median and Gaussian filtering. In the normalization stage we locate the nose tip to lay it at the center of images then crop each image to a standard size of 100*100. In the face recognition stage we extract the principal component of each image using both 2DPCA and (2D) 2 PCA. Finally, we use Euclidean distance to measure the minimum distance between a given test image to the training images in the database. We also compare the result of using both methods. The best result achieved by experiments on a public face database shows that 83.3 percent is the rate of face recognition for a random facial expression. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=3D%20face%20recognition" title="3D face recognition">3D face recognition</a>, <a href="https://publications.waset.org/search?q=2DPCA" title=" 2DPCA"> 2DPCA</a>, <a href="https://publications.waset.org/search?q=%282D%29%202%20PCA" title=" (2D) 2 PCA"> (2D) 2 PCA</a>, <a href="https://publications.waset.org/search?q=Rangeimage" title=" Rangeimage"> Rangeimage</a> </p> <a href="https://publications.waset.org/5789/3d-face-recognition-using-modified-pca-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5789/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5789/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5789/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5789/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5789/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5789/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5789/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5789/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5789/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5789/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3066</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10214</span> Normalization Discriminant Independent Component Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Liew%20Yee%20Ping">Liew Yee Ping</a>, <a href="https://publications.waset.org/search?q=Pang%20Ying%20Han"> Pang Ying Han</a>, <a href="https://publications.waset.org/search?q=Lau%20Siong%20Hoe"> Lau Siong Hoe</a>, <a href="https://publications.waset.org/search?q=Ooi%20Shih%20Yin"> Ooi Shih Yin</a>, <a href="https://publications.waset.org/search?q=Housam%20Khalifa%20Bashier%20Babiker"> Housam Khalifa Bashier Babiker</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In face recognition, feature extraction techniques attempts to search for appropriate representation of the data. However, when the feature dimension is larger than the samples size, it brings performance degradation. Hence, we propose a method called Normalization Discriminant Independent Component Analysis (NDICA). The input data will be regularized to obtain the most reliable features from the data and processed using Independent Component Analysis (ICA). The proposed method is evaluated on three face databases, Olivetti Research Ltd (ORL), Face Recognition Technology (FERET) and Face Recognition Grand Challenge (FRGC). NDICA showed it effectiveness compared with other unsupervised and supervised techniques.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=small%20sample%20size" title=" small sample size"> small sample size</a>, <a href="https://publications.waset.org/search?q=regularization" title=" regularization"> regularization</a>, <a href="https://publications.waset.org/search?q=independent%20component%20analysis." title=" independent component analysis."> independent component analysis.</a> </p> <a href="https://publications.waset.org/16147/normalization-discriminant-independent-component-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/16147/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/16147/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/16147/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/16147/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/16147/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/16147/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/16147/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/16147/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/16147/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/16147/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/16147.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1954</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10213</span> Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Amir%20Hajian">Amir Hajian</a>, <a href="https://publications.waset.org/search?q=Sepehr%20Damavandinejadmonfared"> Sepehr Damavandinejadmonfared</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel Principal Component Analysis (KPCA). One aspect of KPCA is to find the most appropriate kernel function on finger vein recognition as there are several kernel functions which can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms -particularly KPCA- is investigated. The aspect of dimension of feature vector in PCA-based algorithms is of importance especially when it comes to the real-world applications and usage of such algorithms. It means that a fixed dimension of feature vector has to be set to reduce the dimension of the input and output data and extract the features from them. Then a classifier is performed to classify the data and make the final decision. We analyze KPCA (Polynomial, Gaussian, and Laplacian) in details in this paper and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometrics" title="Biometrics">Biometrics</a>, <a href="https://publications.waset.org/search?q=finger%20vein%20recognition" title=" finger vein recognition"> finger vein recognition</a>, <a href="https://publications.waset.org/search?q=Principal%0D%0AComponent%20Analysis%20%28PCA%29" title=" Principal Component Analysis (PCA)"> Principal Component Analysis (PCA)</a>, <a href="https://publications.waset.org/search?q=Kernel%20Principal%20Component%20Analysis%0D%0A%28KPCA%29." title=" Kernel Principal Component Analysis (KPCA)."> Kernel Principal Component Analysis (KPCA).</a> </p> <a href="https://publications.waset.org/9999486/optimal-feature-extraction-dimension-in-finger-vein-recognition-using-kernel-principal-component-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9999486/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9999486/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9999486/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9999486/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9999486/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9999486/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9999486/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9999486/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9999486/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9999486/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9999486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1962</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10212</span> A Structural Support Vector Machine Approach for Biometric Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vishal%20Awasthi">Vishal Awasthi</a>, <a href="https://publications.waset.org/search?q=Atul%20Kumar%20Agnihotri"> Atul Kumar Agnihotri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face is a non-intrusive strong biometrics for identification of original and dummy facial by different artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural network, content based video processing. The availability of a widespread face database is crucial to test the performance of these face recognition algorithms. The openly available face databases include face images with a wide range of poses, illumination, gestures and face occlusions but there is no dummy face database accessible in public domain. This paper presents a face detection algorithm based on the image segmentation in terms of distance from a fixed point and template matching methods. This proposed work is having the most appropriate number of nodal points resulting in most appropriate outcomes in terms of face recognition and detection. The time taken to identify and extract distinctive facial features is improved in the range of 90 to 110 sec. with the increment of efficiency by 3%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/search?q=Linear%20Discriminant%20Analysis" title=" Linear Discriminant Analysis"> Linear Discriminant Analysis</a>, <a href="https://publications.waset.org/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/search?q=Improved%20Support%0D%0AVector%20Machine" title=" Improved Support Vector Machine"> Improved Support Vector Machine</a>, <a href="https://publications.waset.org/search?q=iSVM" title=" iSVM"> iSVM</a>, <a href="https://publications.waset.org/search?q=elastic%20bunch%20mapping%20technique." title=" elastic bunch mapping technique."> elastic bunch mapping technique.</a> </p> <a href="https://publications.waset.org/10011989/a-structural-support-vector-machine-approach-for-biometric-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011989/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011989/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011989/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011989/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011989/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011989/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011989/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011989/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011989/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011989/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10211</span> Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mohammed%20Rziza">Mohammed Rziza</a>, <a href="https://publications.waset.org/search?q=Mohamed%20El%20Aroussi"> Mohamed El Aroussi</a>, <a href="https://publications.waset.org/search?q=Mohammed%20El%20Hassouni"> Mohammed El Hassouni</a>, <a href="https://publications.waset.org/search?q=Sanaa%20Ghouzali"> Sanaa Ghouzali</a>, <a href="https://publications.waset.org/search?q=Driss%20Aboutajdine"> Driss Aboutajdine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an efficient local appearance feature extraction method based the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well known Linear Discriminant Analysis(LDA) method when applied to face recognition. Each face is described by a subset of band filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with some related feature extraction methods such as Principal component analysis (PCA), as well as Linear Discriminant Analysis LDA, and independent component Analysis (ICA). Two different muti-resolution transforms, Wavelet (DWT) and Contourlet, were also compared against the Block Based Curvelet-LDA algorithm. Experimental results on ORL, YALE and FERET face databases convince us that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Curvelet" title="Curvelet">Curvelet</a>, <a href="https://publications.waset.org/search?q=Linear%20Discriminant%20Analysis%20%28LDA%29" title=" Linear Discriminant Analysis (LDA) "> Linear Discriminant Analysis (LDA) </a>, <a href="https://publications.waset.org/search?q=Contourlet" title=" Contourlet"> Contourlet</a>, <a href="https://publications.waset.org/search?q=Discreet%20Wavelet%20Transform" title="Discreet Wavelet Transform">Discreet Wavelet Transform</a>, <a href="https://publications.waset.org/search?q=DWT" title=" DWT"> DWT</a>, <a href="https://publications.waset.org/search?q=Block-based%20analysis" title=" Block-based analysis"> Block-based analysis</a>, <a href="https://publications.waset.org/search?q=face%20recognition%20%28FR%29." title="face recognition (FR).">face recognition (FR).</a> </p> <a href="https://publications.waset.org/6439/local-curvelet-based-classification-using-linear-discriminant-analysis-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6439/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6439/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6439/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6439/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6439/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6439/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6439/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6439/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6439/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6439/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6439.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1808</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10210</span> Low Resolution Face Recognition Using Mixture of Experts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Fatemeh%20Behjati%20Ardakani">Fatemeh Behjati Ardakani</a>, <a href="https://publications.waset.org/search?q=Fatemeh%20Khademian"> Fatemeh Khademian</a>, <a href="https://publications.waset.org/search?q=Abbas%20Nowzari%20Dalini"> Abbas Nowzari Dalini</a>, <a href="https://publications.waset.org/search?q=Reza%20Ebrahimpour"> Reza Ebrahimpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human activity is a major concern in a wide variety of applications, such as video surveillance, human computer interface and face image database management. Detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in the past years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where often, due to many reasons, only low-resolution video of faces is available. If these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low resolution face recognition system based on mixture of expert neural networks. In order to produce the low resolution input images we down-sampled the 48 脳 48 ORL images to 12 脳 12 ones using the nearest neighbor interpolation method and after that applying the bicubic interpolation method yields enhanced images which is given to the Principal Component Analysis feature extractor system. Comparison with some of the most related methods indicates that the proposed novel model yields excellent recognition rate in low resolution face recognition that is the recognition rate of 100% for the training set and 96.5% for the test set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Low%20resolution%20face%20recognition" title="Low resolution face recognition">Low resolution face recognition</a>, <a href="https://publications.waset.org/search?q=Multilayered%20neuralnetwork" title=" Multilayered neuralnetwork"> Multilayered neuralnetwork</a>, <a href="https://publications.waset.org/search?q=Mixture%20of%20experts%20neural%20network" title=" Mixture of experts neural network"> Mixture of experts neural network</a>, <a href="https://publications.waset.org/search?q=Principal%20componentanalysis" title=" Principal componentanalysis"> Principal componentanalysis</a>, <a href="https://publications.waset.org/search?q=Bicubic%20interpolation" title=" Bicubic interpolation"> Bicubic interpolation</a>, <a href="https://publications.waset.org/search?q=Nearest%20neighbor%20interpolation." title=" Nearest neighbor interpolation."> Nearest neighbor interpolation.</a> </p> <a href="https://publications.waset.org/7504/low-resolution-face-recognition-using-mixture-of-experts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7504/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7504/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7504/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7504/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7504/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7504/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7504/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7504/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7504/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7504/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7504.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1724</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10209</span> Face Recognition: A Literature Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=A.%20S.%20Tolba">A. S. Tolba</a>, <a href="https://publications.waset.org/search?q=A.H.%20El-Baz"> A.H. El-Baz</a>, <a href="https://publications.waset.org/search?q=A.A.%20El-Harby"> A.A. El-Harby</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The task of face recognition has been actively researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented. Description and limitations of face databases which are used to test the performance of these face recognition algorithms are given. A brief summary of the face recognition vendor test (FRVT) 2002, a large scale evaluation of automatic face recognition technology, and its conclusions are also given. Finally, we give a summary of the research results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Combined%20classifiers" title="Combined classifiers">Combined classifiers</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=graph%20matching" title=" graph matching"> graph matching</a>, <a href="https://publications.waset.org/search?q=neural%20networks." title=" neural networks."> neural networks.</a> </p> <a href="https://publications.waset.org/7912/face-recognition-a-literature-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7912/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7912/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7912/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7912/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7912/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7912/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7912/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7912/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7912/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7912/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">7723</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10208</span> A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vitomir%20Struc">Vitomir Struc</a>, <a href="https://publications.waset.org/search?q=Nikola%20Pavesic"> Nikola Pavesic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private as well as the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition, nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, not rarely with inconclusive and often with contradictory results. No consent has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assess the performance of the three dominant appearance based methods: principal component analysis, linear discriminant analysis and independent component analysis, and compares them on equal footing (i.e., with the same preprocessing procedure, with optimized parameters for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods on various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometrics" title="Biometrics">Biometrics</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=appearance%20based%20methods" title=" appearance based methods"> appearance based methods</a>, <a href="https://publications.waset.org/search?q=image%20degradations" title=" image degradations"> image degradations</a>, <a href="https://publications.waset.org/search?q=the%20XM2VTS%20database." title=" the XM2VTS database."> the XM2VTS database.</a> </p> <a href="https://publications.waset.org/8820/a-case-study-on-appearance-based-feature-extraction-techniques-and-their-susceptibility-to-image-degradations-for-the-task-of-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8820/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8820/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8820/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8820/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8820/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8820/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8820/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8820/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8820/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8820/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8820.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2284</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10207</span> Evolutionary Eigenspace Learning using CCIPCA and IPCA for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ghazy%20M.R.%20Assassa">Ghazy M.R. Assassa</a>, <a href="https://publications.waset.org/search?q=Mona%20F.%20M.%20Mursi"> Mona F. M. Mursi</a>, <a href="https://publications.waset.org/search?q=Hatim%20A.%20Aboalsamh"> Hatim A. Aboalsamh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traditional principal components analysis (PCA) techniques for face recognition are based on batch-mode training using a pre-available image set. Real world applications require that the training set be dynamic of evolving nature where within the framework of continuous learning, new training images are continuously added to the original set; this would trigger a costly continuous re-computation of the eigen space representation via repeating an entire batch-based training that includes the old and new images. Incremental PCA methods allow adding new images and updating the PCA representation. In this paper, two incremental PCA approaches, CCIPCA and IPCA, are examined and compared. Besides, different learning and testing strategies are proposed and applied to the two algorithms. The results suggest that batch PCA is inferior to both incremental approaches, and that all CCIPCAs are practically equivalent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Candid%20covariance-free%20incremental%20principal%0Acomponents%20analysis%20%28CCIPCA%29" title="Candid covariance-free incremental principal components analysis (CCIPCA)">Candid covariance-free incremental principal components analysis (CCIPCA)</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=incremental%0Aprincipal%20components%20analysis%20%28IPCA%29." title=" incremental principal components analysis (IPCA)."> incremental principal components analysis (IPCA).</a> </p> <a href="https://publications.waset.org/12212/evolutionary-eigenspace-learning-using-ccipca-and-ipca-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12212/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12212/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12212/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12212/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12212/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12212/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12212/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12212/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12212/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12212/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12212.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1822</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10206</span> Assessment of Time-Lapse in Visible and Thermal Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sajad%20Farokhi">Sajad Farokhi</a>, <a href="https://publications.waset.org/search?q=Siti%20Mariyam%20Shamsuddin"> Siti Mariyam Shamsuddin</a>, <a href="https://publications.waset.org/search?q=Jan%20Flusser"> Jan Flusser</a>, <a href="https://publications.waset.org/search?q=Usman%20Ullah%20Sheikh"> Usman Ullah Sheikh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Although face recognition seems as an easy task for human, automatic face recognition is a much more challenging task due to variations in time, illumination and pose. In this paper, the influence of time-lapse on visible and thermal images is examined. Orthogonal moment invariants are used as a feature extractor to analyze the effect of time-lapse on thermal and visible images and the results are compared with conventional Principal Component Analysis (PCA). A new triangle square ratio criterion is employed instead of Euclidean distance to enhance the performance of nearest neighbor classifier. The results of this study indicate that the ideal feature vectors can be represented with high discrimination power due to the global characteristic of orthogonal moment invariants. Moreover, the effect of time-lapse has been decreasing and enhancing the accuracy of face recognition considerably in comparison with PCA. Furthermore, our experimental results based on moment invariant and triangle square ratio criterion show that the proposed approach achieves on average 13.6% higher in recognition rate than PCA. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Infrared%20Face%20recognition" title="Infrared Face recognition">Infrared Face recognition</a>, <a href="https://publications.waset.org/search?q=Time-lapse" title=" Time-lapse"> Time-lapse</a>, <a href="https://publications.waset.org/search?q=Zernike%0Amoment%20invariants" title=" Zernike moment invariants"> Zernike moment invariants</a> </p> <a href="https://publications.waset.org/8549/assessment-of-time-lapse-in-visible-and-thermal-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8549/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8549/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8549/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8549/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8549/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8549/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8549/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8549/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8549/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8549/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1784</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10205</span> Face Recognition with Image Rotation Detection, Correction and Reinforced Decision using ANN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hemashree%20Bordoloi">Hemashree Bordoloi</a>, <a href="https://publications.waset.org/search?q=Kandarpa%20Kumar%20Sarma"> Kandarpa Kumar Sarma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rotation or tilt present in an image capture by digital means can be detected and corrected using Artificial Neural Network (ANN) for application with a Face Recognition System (FRS). Principal Component Analysis (PCA) features of faces at different angles are used to train an ANN which detects the rotation for an input image and corrected using a set of operations implemented using another system based on ANN. The work also deals with the recognition of human faces with features from the foreheads, eyes, nose and mouths as decision support entities of the system configured using a Generalized Feed Forward Artificial Neural Network (GFFANN). These features are combined to provide a reinforced decision for verification of a person-s identity despite illumination variations. The complete system performing facial image rotation detection, correction and recognition using re-enforced decision support provides a success rate in the higher 90s. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Rotation" title="Rotation">Rotation</a>, <a href="https://publications.waset.org/search?q=Face" title=" Face"> Face</a>, <a href="https://publications.waset.org/search?q=Recognition" title=" Recognition"> Recognition</a>, <a href="https://publications.waset.org/search?q=ANN." title=" ANN."> ANN.</a> </p> <a href="https://publications.waset.org/14147/face-recognition-with-image-rotation-detection-correction-and-reinforced-decision-using-ann" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14147/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14147/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14147/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14147/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14147/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14147/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14147/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14147/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14147/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14147/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14147.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2062</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10204</span> A New Implementation of PCA for Fast Face Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hazem%20M.%20El-Bakry">Hazem M. El-Bakry</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Principal Component Analysis (PCA) has many different important applications especially in pattern detection such as face detection / recognition. Therefore, for real time applications, the response time is required to be as small as possible. In this paper, new implementation of PCA for fast face detection is presented. Such new implementation is designed based on cross correlation in the frequency domain between the input image and eigenvectors (weights). Simulation results show that the proposed implementation of PCA is faster than conventional one. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Fast%20Face%20Detection" title="Fast Face Detection">Fast Face Detection</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/search?q=Cross%20Correlation" title=" Cross Correlation"> Cross Correlation</a>, <a href="https://publications.waset.org/search?q=Frequency%20Domain" title="Frequency Domain">Frequency Domain</a> </p> <a href="https://publications.waset.org/7289/a-new-implementation-of-pca-for-fast-face-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7289/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7289/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7289/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7289/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7289/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7289/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7289/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7289/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7289/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7289/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1797</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10203</span> New Adaptive Linear Discriminante Analysis for Face Recognition with SVM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Mehdi%20Ghayoumi">Mehdi Ghayoumi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We have applied new accelerated algorithm for linear discriminate analysis (LDA) in face recognition with support vector machine. The new algorithm has the advantage of optimal selection of the step size. The gradient descent method and new algorithm has been implemented in software and evaluated on the Yale face database B. The eigenfaces of these approaches have been used to training a KNN. Recognition rate with new algorithm is compared with gradient. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=lda" title="lda">lda</a>, <a href="https://publications.waset.org/search?q=adaptive" title=" adaptive"> adaptive</a>, <a href="https://publications.waset.org/search?q=svm" title=" svm"> svm</a>, <a href="https://publications.waset.org/search?q=face%20recognition." title=" face recognition."> face recognition.</a> </p> <a href="https://publications.waset.org/12509/new-adaptive-linear-discriminante-analysis-for-face-recognition-with-svm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12509/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12509/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12509/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12509/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12509/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12509/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12509/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12509/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12509/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12509/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12509.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1422</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10202</span> A New Biologically Inspired Pattern Recognition Spproach for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=V.%20Kabeer">V. Kabeer</a>, <a href="https://publications.waset.org/search?q=N.K.Narayanan"> N.K.Narayanan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper reports a new pattern recognition approach for face recognition. The biological model of light receptors - cones and rods in human eyes and the way they are associated with pattern vision in human vision forms the basis of this approach. The functional model is simulated using CWD and WPD. The paper also discusses the experiments performed for face recognition using the features extracted from images in the AT & T face database. Artificial Neural Network and k- Nearest Neighbour classifier algorithms are employed for the recognition purpose. A feature vector is formed for each of the face images in the database and recognition accuracies are computed and compared using the classifiers. Simulation results show that the proposed method outperforms traditional way of feature extraction methods prevailing for pattern recognition in terms of recognition accuracy for face images with pose and illumination variations.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Image%20analysis" title=" Image analysis"> Image analysis</a>, <a href="https://publications.waset.org/search?q=Wavelet%20feature%20extraction" title=" Wavelet feature extraction"> Wavelet feature extraction</a>, <a href="https://publications.waset.org/search?q=Pattern%20recognition" title=" Pattern recognition"> Pattern recognition</a>, <a href="https://publications.waset.org/search?q=Classifier%20algorithms" title=" Classifier algorithms"> Classifier algorithms</a> </p> <a href="https://publications.waset.org/13389/a-new-biologically-inspired-pattern-recognition-spproach-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13389/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13389/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13389/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13389/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13389/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13389/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13389/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13389/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13389/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13389/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13389.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1677</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10201</span> Practical Aspects of Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Vural">S. Vural</a>, <a href="https://publications.waset.org/search?q=H.%20Yamauchi"> H. Yamauchi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Current systems for face recognition techniques often use either SVM or Adaboost techniques for face detection part and use PCA for face recognition part. In this paper, we offer a novel method for not only a powerful face detection system based on Six-segment-filters (SSR) and Adaboost learning algorithms but also for a face recognition system. A new exclusive face detection algorithm has been developed and connected with the recognition algorithm. As a result of it, we obtained an overall high-system performance compared with current systems. The proposed algorithm was tested on CMU, FERET, UNIBE, MIT face databases and significant performance has obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Adaboost" title="Adaboost">Adaboost</a>, <a href="https://publications.waset.org/search?q=Face%20Detection" title=" Face Detection"> Face Detection</a>, <a href="https://publications.waset.org/search?q=Face%20recognition" title=" Face recognition"> Face recognition</a>, <a href="https://publications.waset.org/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/search?q=Gabor%20filters" title=" Gabor filters"> Gabor filters</a>, <a href="https://publications.waset.org/search?q=PCA-ICA." title=" PCA-ICA."> PCA-ICA.</a> </p> <a href="https://publications.waset.org/3670/practical-aspects-of-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3670/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3670/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3670/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3670/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3670/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3670/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3670/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3670/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3670/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3670/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1598</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10200</span> Liveness Detection for Embedded Face Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hyung-Keun%20Jee">Hyung-Keun Jee</a>, <a href="https://publications.waset.org/search?q=Sung-Uk%20Jung"> Sung-Uk Jung</a>, <a href="https://publications.waset.org/search?q=Jang-Hee%20Yoo"> Jang-Hee Yoo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Liveness%20Detection" title="Liveness Detection">Liveness Detection</a>, <a href="https://publications.waset.org/search?q=Eye%20detection" title=" Eye detection"> Eye detection</a>, <a href="https://publications.waset.org/search?q=SQI." title=" SQI."> SQI.</a> </p> <a href="https://publications.waset.org/5308/liveness-detection-for-embedded-face-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5308/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5308/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5308/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5308/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5308/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5308/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5308/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5308/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5308/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5308/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5308.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3181</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10199</span> Principal Component Analysis for the Characterization in the Application of Some Soil Properties</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kamolchanok%20Panishkan">Kamolchanok Panishkan</a>, <a href="https://publications.waset.org/search?q=Kanokporn%20Swangjang"> Kanokporn Swangjang</a>, <a href="https://publications.waset.org/search?q=Natdhera%20Sanmanee"> Natdhera Sanmanee</a>, <a href="https://publications.waset.org/search?q=Daoroong%20Sungthong"> Daoroong Sungthong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of this research is to study principal component analysis for classification of 67 soil samples collected from different agricultural areas in the western part of Thailand. Six soil properties were measured on the soil samples and are used as original variables. Principal component analysis is applied to reduce the number of original variables. A model based on the first two principal components accounts for 72.24% of total variance. Score plots of first two principal components were used to map with agricultural areas divided into horticulture, field crops and wetland. The results showed some relationships between soil properties and agricultural areas. PCA was shown to be a useful tool for agricultural areas classification based on soil properties. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=soil%20organic%20matter" title="soil organic matter">soil organic matter</a>, <a href="https://publications.waset.org/search?q=soil%20properties" title=" soil properties"> soil properties</a>, <a href="https://publications.waset.org/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/search?q=principal%20components" title=" principal components"> principal components</a> </p> <a href="https://publications.waset.org/2959/principal-component-analysis-for-the-characterization-in-the-application-of-some-soil-properties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/2959/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/2959/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/2959/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/2959/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/2959/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/2959/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/2959/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/2959/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/2959/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/2959/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/2959.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">4114</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10198</span> Facial Recognition on the Basis of Facial Fragments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tetyana%20Baydyk">Tetyana Baydyk</a>, <a href="https://publications.waset.org/search?q=Ernst%20Kussul"> Ernst Kussul</a>, <a href="https://publications.waset.org/search?q=Sandra%20Bonilla%20Meza"> Sandra Bonilla Meza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild<em>) </em>face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Labeled%20Faces%20in%20the%20Wild%20%28LFW%29%20database" title=" Labeled Faces in the Wild (LFW) database"> Labeled Faces in the Wild (LFW) database</a>, <a href="https://publications.waset.org/search?q=Random%20Local%20Descriptor%20%28RLD%29" title=" Random Local Descriptor (RLD)"> Random Local Descriptor (RLD)</a>, <a href="https://publications.waset.org/search?q=random%20features." title=" random features."> random features.</a> </p> <a href="https://publications.waset.org/10007234/facial-recognition-on-the-basis-of-facial-fragments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10007234/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10007234/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10007234/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10007234/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10007234/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10007234/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10007234/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10007234/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10007234/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10007234/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10007234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1013</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10197</span> Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Kherchaoui">S. Kherchaoui</a>, <a href="https://publications.waset.org/search?q=A.%20Houacine"> A. Houacine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents a facial expression recognition system. It performs identification and classification of the seven basic expressions; happy, surprise, fear, disgust, sadness, anger, and neutral states. It consists of three main parts. The first one is the detection of a face and the corresponding facial features to extract the most expressive portion of the face, followed by a normalization of the region of interest. Then calculus of curvelet coefficients is performed with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios; mouth ratio and face edge ratio to constitute the whole feature vector. The third step is the classification of the emotional state using the SVM method in the feature space.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20expression%20identification" title="Facial expression identification">Facial expression identification</a>, <a href="https://publications.waset.org/search?q=curvelet%20coefficients" title=" curvelet coefficients"> curvelet coefficients</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machine%20%28SVM%29." title=" support vector machine (SVM)."> support vector machine (SVM).</a> </p> <a href="https://publications.waset.org/9998501/curvelet-features-with-mouth-and-face-edge-ratios-for-facial-expression-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998501/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998501/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998501/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998501/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998501/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998501/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998501/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998501/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998501/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998501/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998501.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1842</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=340">340</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=341">341</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Face%20recognition%20Principal%20component%20analysis&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>